CN107945284B - Scene display method for automatic driving vehicle and intelligent glasses - Google Patents


Info

Publication number
CN107945284B
CN107945284B CN201711309717.4A
Authority
CN
China
Prior art keywords
tunnel
scene
intelligent glasses
environment
automatic driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711309717.4A
Other languages
Chinese (zh)
Other versions
CN107945284A (en
Inventor
蔡任轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Deco Cci Capital Ltd
Original Assignee
Guangzhou Deco Cci Capital Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Deco Cci Capital Ltd filed Critical Guangzhou Deco Cci Capital Ltd
Priority to CN201711309717.4A priority Critical patent/CN107945284B/en
Priority to PCT/CN2017/117678 priority patent/WO2019114013A1/en
Publication of CN107945284A publication Critical patent/CN107945284A/en
Application granted granted Critical
Publication of CN107945284B publication Critical patent/CN107945284B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The invention relates to the technical field of intelligent glasses, and in particular to a scene display method for an autonomous vehicle and intelligent glasses, wherein the method comprises the following steps: the intelligent glasses detect whether the current environment of the autonomous vehicle is a tunnel environment; if so, the intelligent glasses acquire a tunnel reality scene corresponding to the tunnel environment; the intelligent glasses mix the tunnel reality scene with a tunnel virtual scene stored in the glasses in advance to obtain a tunnel mixed scene; and the intelligent glasses output the tunnel mixed scene to the user wearing them. By implementing the embodiments of the invention, other scenes can be displayed to the occupants of an autonomous vehicle when it enters a tunnel, keeping them awake so that they can take emergency measures in time if the vehicle encounters an emergency, thereby reducing the probability of traffic accidents.

Description

Scene display method for automatic driving vehicle and intelligent glasses
Technical Field
The invention relates to the technical field of intelligent glasses, in particular to a scene display method for an automatic driving vehicle and intelligent glasses.
Background
Currently, the growing sophistication of autonomous-driving technology allows riders to do as they please inside an autonomous vehicle. In practice, however, it has been found that when the autonomous vehicle enters a tunnel, the dark, enclosed tunnel environment can make passengers drowsy and listless, so that if the vehicle encounters an emergency they cannot take emergency measures in time, which increases the probability of traffic accidents.
Disclosure of Invention
The embodiment of the invention discloses a scene display method for an autonomous vehicle and intelligent glasses, which can display other scenes to the occupants of the autonomous vehicle when it enters a tunnel, keeping the occupants awake so that they can take emergency measures in time if the vehicle encounters an emergency, thereby reducing the probability of traffic accidents.
The first aspect of the embodiment of the invention discloses a scene display method for an automatic driving vehicle, which comprises the following steps:
the intelligent glasses detect whether the current environment where the automatic driving vehicle is located is a tunnel environment;
if yes, the intelligent glasses acquire a tunnel real scene corresponding to the tunnel environment;
the intelligent glasses mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene;
the smart glasses output the tunnel hybrid scene to a wearer of the smart glasses.
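The four claimed steps can be sketched as follows. This is a minimal, illustrative stand-in: the class, the string-valued scenes, and every method name are assumptions for demonstration, not anything specified by the claims.

```python
from dataclasses import dataclass, field

@dataclass
class SmartGlasses:
    # Hypothetical in-memory stand-ins for the claimed components.
    tunnel_virtual_scene: str = "mural"            # pre-stored virtual scene
    output_log: list = field(default_factory=list)

    def detect_tunnel_environment(self, environment: str) -> bool:
        # Stand-in for the camera/sensor-based detection (first step).
        return environment == "tunnel"

    def acquire_tunnel_real_scene(self, environment: str) -> str:
        # Stand-in for fetching the reality scene, e.g. from the
        # vehicle's cloud database (second step).
        return f"real:{environment}"

    def mix(self, real_scene: str, virtual_scene: str) -> str:
        # Overlay the virtual scene on the reality scene (third step).
        return f"{real_scene}+virtual:{virtual_scene}"

    def display(self, environment: str):
        # Run the full claimed pipeline once.
        if not self.detect_tunnel_environment(environment):
            return None  # not a tunnel: nothing to display
        mixed = self.mix(self.acquire_tunnel_real_scene(environment),
                         self.tunnel_virtual_scene)
        self.output_log.append(mixed)  # stand-in for projecting to the wearer
        return mixed
```

A non-tunnel environment short-circuits the pipeline; only a detected tunnel produces and outputs a mixed scene.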
As an optional implementation manner, in a first aspect of an embodiment of the present invention, the acquiring, by the smart glasses, a tunnel reality scene corresponding to the tunnel environment includes:
the intelligent glasses are connected with a cloud database of the automatic driving vehicle; the cloud database is a database used by the automatic driving vehicle in real time in the automatic driving process;
the intelligent glasses acquire a tunnel real scene corresponding to the tunnel environment from the cloud database; the tunnel reality scene is a scene which is acquired by a scene acquisition system of the automatic driving vehicle and stored in the cloud database.
As an optional implementation manner, in a first aspect of the embodiment of the present invention, the tunnel real scene is composed of a tunnel wall real scene and a tunnel road surface real scene, and the intelligent glasses mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene, including:
the intelligent glasses acquire the position information of the tunnel environment;
the intelligent glasses acquire an administrative region to which the position information belongs according to the position information;
the intelligent glasses acquire a first virtual scene corresponding to a first scenic spot in the administrative area as a first tunnel virtual scene;
the intelligent glasses cover the first tunnel virtual scene with the tunnel wall real scene to obtain a tunnel mixed scene.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the smart glasses overlay the first tunnel virtual scene on the tunnel wall real scene to obtain a tunnel mixed scene, and before the smart glasses output the tunnel mixed scene to a wearer of the smart glasses, the method further includes:
the intelligent glasses judge whether a scene change instruction is received;
if so, the intelligent glasses judge whether the automatic driving vehicle drives away from the tunnel environment, and if not, a second virtual scene corresponding to a second scenic spot in the administrative area is obtained and used as a second tunnel virtual scene;
and the intelligent glasses cover the second tunnel virtual scene with the first tunnel virtual scene to obtain a tunnel mixed scene.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the outputting, by the smart glasses, the tunnel mixing scene to a wearer of the smart glasses includes:
the intelligent glasses determine the current daylighting degree of the eyes of the wearer according to the size of the pupils of the wearer;
the intelligent glasses perform light adjustment on the tunnel mixed scene according to the lighting degree to obtain a non-differential tunnel mixed scene;
the intelligent glasses adjust a projection device built in the intelligent glasses according to the undifferentiated tunnel mixed scene, so that the projection range of the projection device is the same as the visual range of the wearer;
the smart glasses project the undifferentiated tunnel hybrid scene into the wearer's eyes through the projection device.
A second aspect of an embodiment of the present invention discloses smart glasses, including:
the detection unit is used for detecting whether the current environment where the automatic driving vehicle is located is a tunnel environment;
the acquisition unit is used for acquiring a tunnel real scene corresponding to the tunnel environment when the detection result of the detection unit is positive;
the mixing unit is used for mixing the tunnel real scene and the tunnel virtual scene to obtain a tunnel mixed scene;
an output unit for outputting the tunnel mixing scene to a wearer of the smart glasses.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the obtaining unit includes:
a connection subunit for connecting a cloud database of the autonomous vehicle; the cloud database is a database used by the automatic driving vehicle in real time in the automatic driving process;
the first acquiring subunit is configured to acquire a tunnel reality scene corresponding to the tunnel environment from the cloud database; the tunnel reality scene is a scene which is acquired by a scene acquisition system of the automatic driving vehicle and stored in the cloud database.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the tunnel real scene is composed of a tunnel wall real scene and a tunnel road surface real scene, and the mixing unit includes:
the second acquiring subunit is used for acquiring the position information of the tunnel environment;
the third acquisition subunit is used for acquiring the administrative region to which the position information belongs according to the position information;
the fourth acquiring subunit is configured to acquire a first virtual scene corresponding to a first scenic spot in the administrative area, and use the first virtual scene as a first tunnel virtual scene;
and the covering subunit is used for covering the first tunnel virtual scene with the tunnel wall real scene to obtain a tunnel mixed scene.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the mixing unit further includes:
the first judgment subunit is used for judging whether a scene change instruction is received or not;
the second judgment subunit is used for judging whether the automatic driving vehicle drives away from the tunnel environment or not when the judgment result of the first judgment subunit is yes;
a fifth acquiring subunit, configured to acquire, when the determination result of the second determining subunit is negative, a second virtual scene corresponding to a second sight spot in the administrative area, as a second tunnel virtual scene;
the covering subunit is further configured to cover the first tunnel virtual scene with the second tunnel virtual scene, so as to obtain a tunnel mixed scene.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the output unit includes:
the determining subunit is used for determining the current daylighting degree of the eye of the wearer according to the pupil size of the wearer;
the dimming subunit is used for performing light adjustment on the tunnel mixed scene according to the light collection degree to obtain a non-differential tunnel mixed scene;
the adjusting subunit is configured to adjust a projection device built in the smart glasses according to the undifferentiated tunnel hybrid scene, so that a projection range of the projection device is the same as a visual range of the wearer;
a projection subunit for projecting the undifferentiated tunnel hybrid scene into the wearer's eye by the projection device.
A third aspect of an embodiment of the present invention discloses an intelligent glasses, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the scene display method for the automatic driving vehicle disclosed by the first aspect of the embodiment of the invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, the intelligent glasses detect whether the autonomous vehicle is in a tunnel environment; when a tunnel environment is detected, the tunnel reality scene and the tunnel virtual scene are mixed to obtain a tunnel mixed scene, which is output to the wearer of the intelligent glasses. Thus, by implementing the embodiment of the invention, the intelligent glasses can display other scenes to the occupants of the autonomous vehicle when it enters a tunnel, so that the occupants stay alert while watching those scenes and can take emergency measures in time if the vehicle encounters an emergency, thereby reducing the probability of traffic accidents.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart illustrating a method for displaying a scene of an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating another method for displaying a scene of an autonomous vehicle according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a pair of smart glasses according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another pair of smart glasses according to the present disclosure;
FIG. 5 is a schematic structural diagram of another pair of smart glasses according to the present disclosure;
fig. 6 is a schematic structural diagram of another smart glasses disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a scene display method for an autonomous vehicle and intelligent glasses, which can display other scenes to the occupants of the autonomous vehicle when it enters a tunnel, keeping the occupants awake so that they can take emergency measures in time if the vehicle encounters an emergency, thereby reducing the probability of traffic accidents.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a scene display method for an autonomous vehicle according to an embodiment of the present invention. As shown in fig. 1, the scene display method for an autonomous vehicle may include the steps of:
101. the intelligent glasses detect whether the current environment of the automatic driving vehicle is a tunnel environment, if so, step 102 is executed; if not, the flow is ended.
As an optional implementation manner, before the smart glasses detect whether the current environment where the autonomous vehicle is located is a tunnel environment, the method may further include:
the intelligent glasses sense, through an optical sensor, whether the illumination intensity changes suddenly; if a sudden change in illumination intensity is detected, the step of detecting whether the current environment of the autonomous vehicle is a tunnel environment is executed.
Implementing this implementation manner provides a precondition for the operation of the intelligent glasses, so that they do not need to stay constantly in an environment-judging state, which reduces their workload.
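A minimal sketch of the optical-sensor precondition, assuming lux readings arrive as a list and that a "sudden change" means a relative jump beyond a threshold (the 0.5 threshold is an assumption, not part of the patent):

```python
def illumination_changed_suddenly(lux_readings, ratio_threshold=0.5):
    """Return True if the latest lux reading jumped by more than
    ratio_threshold relative to the previous one."""
    if len(lux_readings) < 2:
        return False  # not enough history to judge a change
    prev, curr = lux_readings[-2], lux_readings[-1]
    if prev == 0:
        return curr > 0
    return abs(curr - prev) / prev > ratio_threshold
```

Only when this gate fires would the heavier tunnel-detection step run, matching the workload reduction described above.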
As an optional implementation, the detecting, by the smart glasses, whether the current environment in which the autonomous vehicle is located is a tunnel environment may include:
the intelligent glasses collect surrounding environment information through a built-in camera, search a database for a scene that is the same as or similar to the current environment according to that information, and judge from the label carried by the matched scene whether the current environment is a tunnel environment; if so, step 102 is executed; if not, the flow ends. The database may be a storage unit of the intelligent glasses, or an external database connected to the glasses through a network; the embodiment of the present invention does not limit the network connection mode.
Implementing this implementation manner provides the intelligent glasses with a scene matching and recognition method, so that they can quickly determine whether the current scene is a tunnel scene, which speeds up the judgment of the current environment, reduces judgment latency, and brings the user a better interactive experience.
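The database-matching approach could be sketched as a nearest-neighbour lookup over stored labelled scenes. The feature vectors, the squared-Euclidean distance, and the record layout are all illustrative assumptions:

```python
def matches_tunnel(env_features, scene_database):
    """Look up the stored scene nearest to the current environment
    and read the label it carries."""
    def sq_dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(scene_database,
                  key=lambda rec: sq_dist(rec["features"], env_features))
    return nearest["label"] == "tunnel"
```

Because only the stored label is consulted, the glasses never classify from scratch; they reuse whatever tag the matched scene already carries.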
As an optional implementation, the detecting, by the smart glasses, whether the current environment in which the autonomous vehicle is located is a tunnel environment may include:
the intelligent glasses collect surrounding environment information through a built-in camera, where the environment information includes information on the environment inside the autonomous vehicle and on the external environment visible through the vehicle's glass; the intelligent glasses derive illumination information and road information from the collected environment information and judge whether the current environment is a tunnel environment; if so, step 102 is executed; otherwise, the flow ends.
Implementing this implementation manner provides the intelligent glasses with a method for judging whether the current environment is a tunnel environment according to the actual situation, which avoids the errors that conventional scene matching is prone to and improves the accuracy of the scene judgment.
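The illumination-plus-road-information judgment can be illustrated as a simple rule. The specific features and the 200-lux cutoff are assumptions for demonstration only:

```python
def is_tunnel_environment(ambient_lux: float,
                          has_artificial_lighting: bool,
                          overhead_enclosed: bool) -> bool:
    """Judge a tunnel from illumination and road information: low
    ambient light, tunnel-style lamps, and an enclosed overhead."""
    return ambient_lux < 200 and has_artificial_lighting and overhead_enclosed
```

Requiring all three cues together is what lets this rule-based judgment avoid the mismatches a single lookup can produce.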
102. The intelligent glasses acquire a tunnel reality scene corresponding to the tunnel environment.
As an optional implementation manner, the acquiring, by the smart glasses, a tunnel reality scene corresponding to a tunnel environment may include:
the intelligent glasses perform simulation modeling of the tunnel environment in a cache space and generate a modeled tunnel reality scene that is identical to the tunnel environment.
By implementing the implementation mode, the method for acquiring the real scene of the tunnel can be provided, so that the intelligent glasses can acquire the tunnel environment by using strong computing power, the traditional step of acquiring data from a server is avoided, and the working efficiency of the intelligent glasses is improved.
Further optionally, the simulation modeling of the tunnel environment by the smart glasses in the cache space may include:
the intelligent glasses analyze the focusing range of the pupil of the user, determine the priority of the simulation modeling area according to the focusing range of the pupil, preferentially perform simulation modeling in the focusing range of the pupil of the user, and perform simulation modeling in other areas after the simulation modeling of the focusing range of the pupil.
Implementing this implementation manner provides the intelligent glasses with a hierarchical simulation modeling method for the tunnel environment, which reduces their instantaneous workload, improves their overall working efficiency, and prevents the instantaneous power of the glasses from becoming too high (avoiding both component damage from excessive instantaneous power and the effects of sudden changes in power consumption).
Still further optionally, the determining, by the smart glasses, the priority of the area for simulation modeling, and the performing simulation modeling preferentially in the focusing range of the pupil of the user may include:
the intelligent glasses determine the priority of the simulation modeling areas, preferentially perform rough simulation modeling within the focusing range of the user's pupil, and refine the model after the rough simulation modeling is completed to obtain the complete simulation model, where the fidelity of the rough model already exceeds what the human eye can resolve (that is, the human eye cannot distinguish the rough model from the real scene).
Implementing this implementation manner increases the speed at which the intelligent glasses output the tunnel reality scene, so that the glasses obtain the tunnel reality scene more quickly and their working efficiency is improved.
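The staged modeling order described above (rough pass in the pupil's focusing range first, then the remaining areas, then refinement) can be sketched as a scheduling function; the two-level rough/refine plan and the area names are assumptions, since the patent does not fix the number of passes:

```python
def schedule_modeling(areas, focus_area):
    """Build a staged modeling plan: a rough pass on the pupil-focus
    area first, then rough passes on the remaining areas, then a
    refinement pass on the focus area."""
    others = [a for a in areas if a != focus_area]
    plan = [("rough", focus_area)]          # focus range modeled first
    plan += [("rough", a) for a in others]  # remaining areas afterwards
    plan.append(("refine", focus_area))     # optimize once rough pass is done
    return plan
```

Spreading the passes out this way is what keeps the instantaneous workload low, as the preceding paragraph describes.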
103. The intelligent glasses mix the real scene of the tunnel with the virtual scene of the tunnel to obtain a mixed scene of the tunnel.
As an optional implementation manner, the mixing, by the smart glasses, the tunnel real scene and the tunnel virtual scene to obtain a tunnel mixed scene may include:
the intelligent glasses display the tunnel real scene in the cache space and cover the tunnel virtual scene in the tunnel real scene, so that the mixed effect is achieved, and the tunnel mixed scene is obtained.
Implementing this implementation manner lets the intelligent glasses obtain a tunnel mixed scene on the basis of the tunnel reality scene, without having to compare and select from the full data to obtain the mixed scene, which improves the working efficiency of the intelligent glasses.
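The covering step can be illustrated as a per-pixel overlay in which any pixel present in the virtual layer hides the corresponding real pixel; representing frames as flat lists with `None` for transparency is a deliberate simplification:

```python
def overlay(real_frame, virtual_frame):
    """Cover the virtual layer onto the reality layer: wherever the
    virtual frame has a pixel (not None), it hides the real pixel."""
    if len(real_frame) != len(virtual_frame):
        raise ValueError("frames must be the same size")
    return [v if v is not None else r
            for r, v in zip(real_frame, virtual_frame)]
```

Regions the virtual scene leaves transparent (such as the road surface) pass the reality scene through unchanged, which is how the mixed scene preserves real road and traffic information.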
104. The smart glasses output the tunnel hybrid scene to the wearer of the smart glasses.
For example, when a user wearing intelligent glasses passes through a tunnel in an autonomous vehicle, the glasses can detect that the current environment is a tunnel environment. After this determination, the glasses search a connected network for mural information and mix it, as the tunnel virtual scene, with the tunnel reality scene they have collected. The scene the user sees is then not a monotonous tunnel but a rich, colorful mural tunnel depicted inside the autonomous vehicle, so the view ahead stays bright and the user does not become tired. While the user watches the mural tunnel scene, the road and the surrounding traffic remain the real road and traffic conditions (the real information in the tunnel mixed scene is highly faithful to, or identical with, reality), so if an incident occurs around the autonomous vehicle, the user can grasp the situation in time and then control the vehicle to avoid a traffic accident.
In the method described in fig. 1, the intelligent glasses may detect that the autonomous vehicle enters the tunnel, and when the autonomous vehicle enters the tunnel, the intelligent glasses obtain a tunnel real scene corresponding to a current tunnel environment, mix the tunnel real scene with a tunnel virtual scene (the tunnel virtual scene may be a scene pre-stored in the intelligent glasses), obtain a tunnel mixed scene, and output the tunnel mixed scene to the user after obtaining the tunnel mixed scene. It can be seen that the method described in fig. 1 can display other scenes for the occupants of the autonomous vehicle when the autonomous vehicle enters the tunnel, so that the driver is kept awake, and thus when the autonomous vehicle encounters an emergency, the occupants can take emergency measures in time, and the probability of traffic accidents is reduced.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another scene display method for an autonomous vehicle according to an embodiment of the present disclosure. As shown in fig. 2, the tunnel reality scene is composed of a tunnel wall reality scene and a tunnel road surface reality scene, and the scene display method for the autonomous vehicle may include the steps of:
201. the intelligent glasses detect whether the current environment of the automatic driving vehicle is a tunnel environment, if so, the step 202 is executed; if not, the flow is ended.
As an optional implementation manner, before the smart glasses detect whether the current environment where the autonomous vehicle is located is a tunnel environment, the method may further include:
the intelligent glasses sense, through an optical sensor, whether the illumination intensity changes suddenly; if a sudden change in illumination intensity is detected, the step of detecting whether the current environment of the autonomous vehicle is a tunnel environment is executed.
Implementing this implementation manner provides a precondition for the operation of the intelligent glasses, so that they do not need to stay constantly in an environment-judging state, which reduces their workload.
As an optional implementation, the detecting, by the smart glasses, whether the current environment in which the autonomous vehicle is located is a tunnel environment may include:
the intelligent glasses collect surrounding environment information through a built-in camera, search a database for a scene that is the same as or similar to the current environment according to that information, and judge from the label carried by the matched scene whether the current environment is a tunnel environment; if so, step 202 is executed; if not, the flow ends. The database may be a storage unit of the intelligent glasses, or an external database connected to the glasses through a network; the embodiment of the present invention does not limit the network connection mode.
Implementing this implementation manner provides the intelligent glasses with a scene matching and recognition method, so that they can quickly determine whether the current scene is a tunnel scene, which speeds up the judgment of the current environment, reduces judgment latency, and brings the user a better interactive experience.
As an optional implementation, the detecting, by the smart glasses, whether the current environment in which the autonomous vehicle is located is a tunnel environment may include:
the intelligent glasses collect surrounding environment information through a built-in camera, where the environment information includes information on the environment inside the autonomous vehicle and on the external environment visible through the vehicle's glass; the intelligent glasses derive illumination information and road information from the collected environment information and judge whether the current environment is a tunnel environment; if so, step 202 is executed; otherwise, the flow ends.
Implementing this implementation manner provides the intelligent glasses with a method for judging whether the current environment is a tunnel environment according to the actual situation, which avoids the errors that conventional scene matching is prone to and improves the accuracy of the scene judgment.
202. The intelligent glasses are connected with a cloud database of the automatic driving vehicle; the cloud database is a database used by the automatic driving vehicle in real time in the automatic driving process.
In the embodiment of the invention, the automatic driving vehicle is provided with a large number of sensors, and information obtained by the large number of sensors is transmitted to the cloud database in real time, in other words, data in the cloud database is obtained and stored by the automatic driving vehicle through the large number of sensors.
203. The intelligent glasses acquire a tunnel real scene corresponding to a tunnel environment in a cloud database; the tunnel reality scene is a scene which is acquired by a scene acquisition system of the automatic driving vehicle and stored in a cloud database.
By implementing steps 202 to 203, the cloud database provides a first-level buffer platform for the smart glasses, so that the glasses can connect directly to the cloud database and perform a single acquisition operation instead of acquiring the tunnel reality scene directly through various sensors, which simplifies their working flow. Implementing steps 202 to 203 also lets the smart glasses directly acquire scene information from outside the autonomous vehicle, so that the view obstruction caused by the vehicle body can be ignored when the user watches this information, bringing a better visual experience.
204. The smart glasses acquire the position information of the tunnel environment.
As an optional implementation, acquiring the position information of the tunnel environment by the smart glasses may include:
the smart glasses detecting the current position through GPS positioning and determining the current position information.
Implementing this mode provides the smart glasses with a method for acquiring the position information of the current tunnel environment.
In the embodiment of the present invention, the position information of the tunnel environment may be obtained by GPS positioning or by networked scene recognition, which is not limited in the embodiment of the present invention.
205. The smart glasses acquire, according to the position information, the administrative region to which the position belongs.
206. The smart glasses acquire a first virtual scene corresponding to a first scenic spot in the administrative region and use it as the first tunnel virtual scene.
207. The smart glasses overlay the first tunnel virtual scene on the tunnel wall real scene to obtain a tunnel mixed scene.
By implementing steps 204 to 207, the smart glasses can output scenic-spot information of the current environment for the user, overlaying it on the tunnel wall real scene as a scenic-spot introduction to obtain the tunnel mixed scene, so that the user can view scenic-spot images of the current administrative region inside the autonomous vehicle.
In steps 204 to 207, the smart glasses can also render the scene information with a 3D effect, so that the user views a nearly real scene inside the autonomous vehicle, experiences a feeling of being personally on the scene, and enjoys an improved interactive experience.
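Steps 204 to 207 amount to a position-to-attraction lookup, sketched below under invented data: the region bounding boxes, the attraction table, and the function names are placeholders, not information from the patent.

```python
# region -> (lat_min, lat_max, lon_min, lon_max); illustrative values only
REGION_BOUNDS = {
    "Guangzhou": (22.5, 23.9, 112.9, 114.1),
}
# region -> ordered list of scenic spots; illustrative values only
REGION_ATTRACTIONS = {
    "Guangzhou": ["Canton Tower", "Baiyun Mountain"],
}


def administrative_region(lat, lon):
    """Step 205: map a GPS fix to the administrative region containing it."""
    for region, (la0, la1, lo0, lo1) in REGION_BOUNDS.items():
        if la0 <= lat <= la1 and lo0 <= lon <= lo1:
            return region
    return None


def first_tunnel_virtual_scene(lat, lon):
    """Steps 206-207: the first attraction of the region supplies the
    first tunnel virtual scene to overlay on the tunnel wall."""
    region = administrative_region(lat, lon)
    if region is None or not REGION_ATTRACTIONS.get(region):
        return None
    return {"region": region, "attraction": REGION_ATTRACTIONS[region][0]}
```

A real implementation would query a geocoding service rather than a bounding-box table; the point of the sketch is only the position, region, attraction chain.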
208. The smart glasses judge whether a scene-switching instruction has been received; if so, step 209 is executed; otherwise, the flow ends.
In the embodiment of the present invention, the scene-switching instruction may be a gesture instruction or a voice instruction, which is not limited in the embodiment of the present invention.
209. The smart glasses judge whether the autonomous vehicle has driven out of the tunnel environment; if so, the flow ends; if not, step 210 is executed.
As an alternative implementation, judging whether the autonomous vehicle has driven out of the tunnel environment may include:
the smart glasses judging, according to the illumination intensity, whether the autonomous vehicle has driven out of the tunnel environment; if it is detected that the vehicle has driven out, the flow ends; if not, step 210 is executed.
Implementing this mode allows the glasses to quickly detect whether the vehicle has driven out of the tunnel environment and to execute the subsequent steps according to the detection result, improving the working efficiency of the smart glasses.
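The illumination test of step 209 could be realized, for example, as a thresholded check over recent light-sensor readings. The threshold and window size below are assumptions; the patent states only that illumination intensity is used.

```python
OUTDOOR_LUX_THRESHOLD = 1000.0  # assumed daylight level, not from the patent


def has_left_tunnel(lux_samples, threshold=OUTDOOR_LUX_THRESHOLD, window=3):
    """Report leaving the tunnel once the last `window` light-sensor
    readings are all at outdoor brightness, which filters out brief
    flashes from in-tunnel lamps."""
    if len(lux_samples) < window:
        return False
    return all(s >= threshold for s in lux_samples[-window:])
```

Requiring several consecutive bright readings, rather than one, is a design choice to avoid ending the flow on a momentary spike.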
210. The smart glasses acquire a second virtual scene corresponding to a second scenic spot in the administrative region as the second tunnel virtual scene.
211. The smart glasses overlay the second tunnel virtual scene on the first tunnel virtual scene to obtain a new tunnel mixed scene.
By implementing steps 208 to 211, the smart glasses can switch scenes appropriately upon receiving a scene-switching instruction, meeting the user's demand for experiencing multiple scenes.
212. The smart glasses determine the current lighting degree of the wearer's eyes according to the size of the wearer's pupils.
As an optional implementation, determining the lighting degree of the wearer's eyes according to pupil size may include:
the smart glasses searching, on the network they are connected to, for the lighting degree corresponding to the wearer's pupil size, and judging whether that value has high reliability on the network; if so, taking it as the lighting degree of the wearer's eyes.
Implementing this mode enables the smart glasses to determine the lighting degree of the wearer's eyes from network data, improving the accuracy of the determination.
213. The smart glasses perform light adjustment on the tunnel mixed scene according to the lighting degree to obtain an undifferentiated tunnel mixed scene.
In the embodiment of the present invention, the undifferentiated tunnel mixed scene is a scene whose displayed illumination intensity matches the wearer's current lighting conditions.
214. The smart glasses adjust their built-in projection device according to the undifferentiated tunnel mixed scene, so that the projection range of the projection device is the same as the wearer's visual range.
Implementing step 214 avoids situations in which the wearer sees the scene incompletely or sees an edge boundary of the scene.
215. The smart glasses project the undifferentiated tunnel mixed scene into the wearer's eyes through the projection device.
By implementing steps 212 to 215, the smart glasses can determine the optical parameters of the generated scene (such as brightness and illumination intensity) according to the displayed scene and the user's light perception, so that the user can view the scene information comfortably and with a stronger sense of immersion, improving the user experience.
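Steps 212 to 215 can be sketched as a pupil-to-brightness pipeline. The linear pupil mapping and the blending rule below are illustrative assumptions; the patent specifies only that pupil size determines the lighting degree and that the scene is light-adjusted to match it.

```python
def lighting_degree(pupil_mm, min_mm=2.0, max_mm=8.0):
    """Step 212 (sketch): large pupils imply dim surroundings, so the
    lighting degree falls as pupil diameter grows. Returns 0.0 (dark)
    to 1.0 (bright); the linear mapping is an assumption."""
    pupil_mm = max(min_mm, min(max_mm, pupil_mm))
    return (max_mm - pupil_mm) / (max_mm - min_mm)


def adjust_scene_brightness(scene_brightness, degree):
    """Step 213 (sketch): pull the scene's brightness toward the
    wearer's lighting degree to produce the 'undifferentiated' scene.
    The equal-weight blend is an assumption."""
    return 0.5 * scene_brightness + 0.5 * degree
```

With these two pieces, the glasses would compute `adjust_scene_brightness(b, lighting_degree(p))` before handing the scene to the projection device of steps 214 and 215.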
In the method described in fig. 2, the smart glasses can detect that the autonomous vehicle has entered a tunnel. When it does, the glasses acquire the tunnel real scene corresponding to the current tunnel environment through the cloud database of the autonomous vehicle, acquire the current position information, determine the current administrative region, then acquire the scenic-spot information of that region and take it as the tunnel virtual scene. They mix the tunnel real scene with the tunnel virtual scene to obtain the tunnel mixed scene, then detect the current condition of the user's eyes, adjust the output details accordingly, and output the tunnel mixed scene to the user once the adjustment is complete. When the user wants to view a different scene, the smart glasses can also receive a switching instruction input by the user and switch scenes. It can be seen that the method described in fig. 2 can display famous scenic-spot scenes of the current position to the rider of the autonomous vehicle when it enters a tunnel, keeping the rider awake, and can switch to other scenes when the rider wants to watch them, thereby providing a variety of interactive experiences. When the autonomous vehicle encounters an emergency, the rider can therefore take emergency measures in time, reducing the probability of traffic accidents.
Example Three
Referring to fig. 3, fig. 3 is a schematic structural diagram of an intelligent glasses according to an embodiment of the present invention. As shown in fig. 3, the smart glasses may include:
the detection unit 301 is configured to detect whether a current environment in which the autonomous vehicle is located is a tunnel environment.
As an optional implementation manner, the smart glasses may further include a light intensity detection unit, configured to sense, through a light sensor, whether the illumination intensity changes suddenly before the detection unit 301 detects whether the current environment of the autonomous vehicle is a tunnel environment; if the illumination intensity does change suddenly, the light intensity detection unit triggers the detection unit 301 to perform its detection.
Implementing this mode provides a precondition for the work of the smart glasses, so that the glasses need not constantly be judging the environment, which reduces their workload.
As an alternative embodiment, the detection unit 301 may include:
an acquisition subunit, configured to acquire surrounding environment information through a camera built into the smart glasses;
a searching subunit, configured to search, according to the surrounding environment information, a database for a scene that is the same as or similar to the current environment; the database may be a storage unit of the smart glasses, or an external database connected to the smart glasses through a network, and the network connection mode is not limited in the embodiments of the present invention;
and an environment judgment subunit, configured to judge whether the current environment is a tunnel environment according to the label carried by the same or similar scene.
Implementing this mode provides the smart glasses with a matching-based scene recognition method, so that the glasses can quickly distinguish whether the current scene is a tunnel scene, which speeds up the judgment of the current environment, reduces judgment latency, and brings a better interactive experience to the user.
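The search and environment-judgment subunits together implement a match-then-read-label scheme, sketched below with a toy feature representation: keyword sets stand in for real image features, and the database contents are invented.

```python
# Invented labeled scene database; a real system would store image features.
SCENE_DATABASE = [
    {"label": "tunnel", "features": {"arched_wall", "lamp_row", "dark"}},
    {"label": "highway", "features": {"open_sky", "lane_marking"}},
]


def best_match(observed_features):
    """Search subunit (sketch): pick the stored scene sharing the most
    features with the observation."""
    return max(SCENE_DATABASE,
               key=lambda s: len(s["features"] & observed_features))


def is_tunnel_environment(observed_features):
    """Environment judgment subunit (sketch): judge by the label carried
    by the same-or-similar scene."""
    return best_match(observed_features)["label"] == "tunnel"
```

The judgment never inspects the raw observation directly; it trusts the label attached to the nearest stored scene, which is exactly the structure the two subunits describe.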
As an alternative embodiment, the detection unit 301 may include:
an acquisition subunit, configured to acquire surrounding environment information through a camera built into the smart glasses, the environment information including environment information inside the autonomous vehicle and external environment information visible through the glass of the autonomous vehicle;
and an environment judgment subunit, further configured to extract illumination information and road information from the acquired surrounding environment information and judge accordingly whether the current environment is a tunnel environment.
Implementing this mode provides the smart glasses with a method for judging whether the current environment is a tunnel environment, so that the glasses can judge the scene according to the actual situation, avoiding the mistakes that conventional scene matching is prone to and improving the accuracy with which the smart glasses judge the scene.
An obtaining unit 302, configured to obtain a tunnel real scene corresponding to a tunnel environment when a detection result of the detecting unit 301 is yes.
As an optional implementation manner, the obtaining unit 302 may be specifically configured to perform simulation modeling of the tunnel environment in a cache space and generate a virtual tunnel real scene identical to the tunnel environment.
Implementing this mode provides a method for acquiring the tunnel real scene, so that the smart glasses can reconstruct the tunnel environment with their own computing power, avoiding the conventional step of fetching data from a server and improving the working efficiency of the smart glasses.
Further optionally, the obtaining unit 302 may include:
an analysis modeling subunit, configured to analyze the focusing range of the user's pupils, determine the priority of the simulation modeling areas according to that focusing range, and preferentially perform simulation modeling within the focusing range of the user's pupils;
and a complete modeling subunit, configured to perform simulation modeling of the remaining areas after the analysis modeling subunit has finished its modeling.
Implementing this mode provides the smart glasses with a hierarchical simulation modeling method for the tunnel environment, which reduces the instantaneous workload of the smart glasses and improves their overall working efficiency, avoiding excessively high instantaneous power (which could damage components) and the effects of sudden changes in power consumption.
Still further optionally, the analysis modeling subunit may include:
a rough modeling module, configured to determine the priority of the simulation modeling areas and preferentially perform rough simulation modeling within the focusing range of the user's pupils;
and a complete modeling module, configured to optimize the model into a complete simulation model after the rough modeling module has finished, where even the rough simulation model exceeds the resolution of human eyes (that is, human eyes cannot distinguish the model from the real scene).
Implementing this mode increases the speed at which the smart glasses output the tunnel real scene, so that the glasses obtain the tunnel real scene more quickly and their working efficiency is improved.
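The coarse-before-complete ordering described for the rough and complete modeling modules can be sketched as a scheduling function. The region names and the two-level detail labels are illustrative assumptions.

```python
def modeling_order(regions, focus):
    """Return (region, level-of-detail) steps: coarse passes over the
    regions inside the pupil's focusing range come first, spreading the
    instantaneous load, and full-detail passes over every region follow."""
    focused = [r for r in regions if r in focus]
    rest = [r for r in regions if r not in focus]
    steps = [(r, "coarse") for r in focused]        # quick first pass
    steps += [(r, "full") for r in focused + rest]  # refinement afterwards
    return steps
```

The scheduler never does all the work at once: the cheap coarse pass gets something in front of the wearer's eyes early, and the expensive full pass is deferred, which is the instantaneous-power argument made above.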
A mixing unit 303, configured to mix the tunnel real scene acquired by the obtaining unit 302 with the tunnel virtual scene to obtain a tunnel mixed scene.
As an optional implementation manner, the mixing unit 303 may be specifically configured to display the tunnel real scene in the cache space and overlay the tunnel virtual scene on it, achieving the mixing effect and obtaining the tunnel mixed scene.
Implementing this mode enables the smart glasses to build the tunnel mixed scene directly on top of the tunnel real scene, rather than comparing and selecting from the full data, which improves the working efficiency of the smart glasses.
And an output unit 304, configured to output the tunnel mixed scene mixed by the mixing unit 303 to a wearer of the smart glasses.
For example, when a user wearing the smart glasses rides an autonomous vehicle through a tunnel, the glasses detect that the current environment is a tunnel environment. Once that determination is made, they search the network they are connected to for mural information and mix it, as a tunnel virtual scene, with the tunnel real scene they have collected. The scene the user sees is then not a monotonous tunnel but a rich, colorful mural tunnel rendered inside the autonomous vehicle, which brightens the view ahead and keeps the user from becoming weary. While the user watches the mural tunnel scene, the road and the surrounding traffic remain the real road and traffic conditions (the real information in the tunnel mixed scene is highly faithful to, or identical with, reality), so if anything happens around the autonomous vehicle the user can grasp the situation in time and take control of the vehicle to avoid a traffic accident.
It can be seen that the smart glasses described in fig. 3 can display other scenes for the occupants of the autonomous vehicle when the autonomous vehicle enters the tunnel, so that the driver is kept awake, and thus when the autonomous vehicle encounters an emergency, the occupants can take emergency measures in time, and the probability of traffic accidents is reduced.
Example Four
Referring to fig. 4, fig. 4 is a schematic structural diagram of another smart glasses according to the embodiment of the present invention. The smart glasses shown in fig. 4 are optimized from the smart glasses shown in fig. 3. Compared to the smart glasses shown in fig. 3, in the smart glasses shown in fig. 4, the obtaining unit 302 may include:
a connection subunit 3021 configured to connect the cloud database of the autonomous vehicle when the detection result of the detection unit 301 is yes; the cloud database is a database used by the automatic driving vehicle in real time in the automatic driving process.
In the embodiment of the invention, the automatic driving vehicle is provided with a large number of sensors, and information obtained by the large number of sensors is transmitted to the cloud database in real time, in other words, data in the cloud database is obtained and stored by the automatic driving vehicle through the large number of sensors.
The first acquiring subunit 3022 is configured to acquire a tunnel reality scene corresponding to a tunnel environment from the cloud database connected to the connecting subunit 3021; the tunnel reality scene is a scene which is acquired by a scene acquisition system of the automatic driving vehicle and stored in a cloud database.
In the embodiment of the invention, the cloud database can provide a primary buffer platform for the intelligent glasses, so that the intelligent glasses can be directly connected with the cloud database to execute single acquisition operation, the phenomenon that the real scene of the tunnel is directly acquired through various sensors is avoided, the current directional transmission is acquired from the original variable in the process, and the working process of the intelligent glasses is simplified. Meanwhile, the intelligent glasses can also directly acquire scene information except the automatically driven vehicle, so that when a user watches the vehicle, visual angle blockage caused by the vehicle body can be ignored, and better visual experience is brought to the user.
As an alternative implementation, in the smart glasses shown in fig. 4, the tunnel real scene is composed of a tunnel wall real scene and a tunnel road surface real scene, wherein the mixing unit 303 may include:
and a second obtaining subunit 3031, configured to obtain location information of the tunnel environment.
As an optional implementation manner, the second obtaining subunit 3031 may be specifically configured to detect a current location through GPS positioning, and determine current location information.
Implementing this mode provides the smart glasses with a method for acquiring the position information of the current tunnel environment.
In the embodiment of the present invention, the position information of the tunnel environment obtained by the smart glasses may be obtained by GPS positioning, or may be obtained by networking to identify a scene, which is not limited in the embodiment of the present invention.
And a third obtaining subunit 3032, configured to obtain, according to the position information obtained by the second obtaining subunit 3031, an administrative area to which the position information belongs.
The fourth obtaining subunit 3033 is configured to obtain a first virtual scene corresponding to the first sight spot in the administrative area obtained by the third obtaining subunit 3032, and use the first virtual scene as a first tunnel virtual scene.
And a covering subunit 3034, configured to overlay the first tunnel virtual scene acquired by the fourth acquiring subunit 3033 on the tunnel wall real scene, so as to obtain a tunnel mixed scene.
Therefore, the intelligent glasses described in fig. 4 can output the sight spot information of the current environment for the user, and cover the sight spot information as sight spot introduction in the real scene of the tunnel wall, so as to obtain a tunnel mixed scene, and enable the user to view the sight spot image of the current administrative area in the automatic driving vehicle.
As an alternative implementation, in the smart glasses shown in fig. 4, the mixing unit 303 may further include:
the first determining subunit 3035 is configured to determine whether a scene change instruction is received after the coverage subunit 3034 obtains the tunnel mixing scene.
In this embodiment of the present invention, the tunnel mixed scene obtained by the covering subunit 3034 is the result of combining the first tunnel virtual scene with the tunnel road surface real scene.
And a second judgment sub-unit 3036 configured to judge whether the autonomous vehicle is driven out of the tunnel environment when the judgment result of the first judgment sub-unit 3035 is yes.
As an alternative implementation, the second determining subunit 3036 may be specifically configured to determine whether the autonomous vehicle is driven out of the tunnel environment according to the illumination intensity.
By implementing the implementation mode, whether the user drives away from the tunnel environment or not can be quickly detected, and the subsequent steps are executed according to the detection result, so that the working efficiency of the intelligent glasses is improved.
A fifth obtaining subunit 3037, configured to, if the determination result of the second determining subunit 3036 is negative, obtain a second virtual scene corresponding to a second attraction in the administrative area, as a second tunnel virtual scene.
The covering subunit 3034 is further configured to cover the second tunnel virtual scene acquired by the fifth acquiring subunit 3037 with the first tunnel virtual scene, so as to obtain a tunnel mixed scene.
Therefore, the smart glasses described in fig. 4 can acquire the tunnel real scene through the cloud database of the autonomous vehicle, so that only a single acquisition operation is needed, avoiding the collection of the tunnel real scene directly through a variety of sensors; this turns a multi-step acquisition into a single directed transfer and simplifies the workflow of the smart glasses. Meanwhile, the smart glasses can also acquire scene information from outside the autonomous vehicle directly, so that the wearer's view is not blocked by the vehicle body, bringing a better visual experience to the user.
Therefore, the intelligent glasses described in fig. 4 can output the sight spot information of the current environment for the user, and cover the sight spot information as sight spot introduction in the real scene of the tunnel wall, so as to obtain a tunnel mixed scene, and enable the user to view the sight spot image of the current administrative area in the automatic driving vehicle.
Therefore, the intelligent glasses described in fig. 4 can appropriately switch scenes when receiving a scene switching instruction, so as to meet the experience requirements of multiple scenes of a user.
It can be seen that the smart glasses described in fig. 4 can display other scenes for the occupants of the autonomous vehicle when the autonomous vehicle enters the tunnel, so that the driver is kept awake, and thus when the autonomous vehicle encounters an emergency, the occupants can take emergency measures in time, and the probability of traffic accidents is reduced.
Example Five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another smart glasses according to the embodiment of the present invention. The smart glasses shown in fig. 5 are optimized from the smart glasses shown in fig. 4. In contrast to the smart glasses shown in fig. 4, in the smart glasses shown in fig. 5, the output unit 304 includes:
a determining subunit 3041 for determining the luminance of the eye of the current wearer according to the pupil size of the wearer.
As an optional implementation manner, the determining subunit 3041 may be specifically configured to search, according to the pupil size of the wearer, a lighting degree corresponding to the pupil size on a network connected to the smart glasses, and determine whether the lighting degree has higher reliability on the network, if so, that the lighting degree is the lighting degree of the eyes of the wearer.
By implementing the embodiment, the intelligent glasses can determine the daylighting degree of the glasses of the wearer according to the network data, so that the accuracy of the daylighting degree determination is improved.
A light adjusting subunit 3042, configured to perform light adjustment on the tunnel mixed scene obtained by the covering subunit 3034 according to the lighting degree determined by the determining subunit 3041, so as to obtain an undifferentiated tunnel mixed scene.
The adjusting subunit 3043 is configured to adjust the projection apparatus built in the smart glasses according to the undifferentiated tunnel mixed scene obtained after the light adjustment is performed by the light adjusting subunit 3042, so that the projection range of the projection apparatus is the same as the visual range of the wearer.
A projection subunit 3044, configured to project the undifferentiated tunnel hybrid scene into the eyes of the wearer through the projection apparatus adjusted by the adjustment subunit 3043.
Therefore, the intelligent glasses described in fig. 5 can output the sight spot information of the current environment for the user, and cover the sight spot information as sight spot introduction in the real scene of the tunnel wall, so as to obtain a tunnel mixed scene, and enable the user to view the sight spot image of the current administrative area in the automatic driving vehicle.
Therefore, the intelligent glasses described in fig. 5 can appropriately switch scenes when receiving an instruction for switching scenes, so as to meet the experience requirements of multiple scenes of a user.
It can be seen that the intelligent glasses described in fig. 5 can display famous scenery spot scenes of the current position for the rider of the autonomous vehicle when the autonomous vehicle enters the tunnel, so that the driver keeps a waking state, and switches other scenes for the rider when the rider wants to watch other scenes, thereby providing various interactive experiences for the rider of the autonomous vehicle, and when the autonomous vehicle encounters an emergency, the rider can take emergency measures in time, and the probability of traffic accidents is reduced.
Example Six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another smart glasses according to the embodiment of the present invention. As shown in fig. 6, the smart glasses may include:
a memory 601 in which executable program code is stored;
a processor 602 coupled to a memory 601;
the processor 602 calls the executable program code stored in the memory 601 to execute any one of the scene display methods for the autonomous vehicle shown in fig. 1 to 2.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
The scene display method for an autonomous vehicle and the smart glasses disclosed by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. A scene display method for an autonomous vehicle, the method comprising:
the intelligent glasses detect whether the current environment where the automatic driving vehicle is located is a tunnel environment;
if yes, the intelligent glasses acquire a tunnel real scene corresponding to the tunnel environment;
the intelligent glasses mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene;
the smart glasses output the tunnel hybrid scene to a wearer of the smart glasses;
the intelligent glasses acquire the tunnel reality scene corresponding to the tunnel environment, and the method comprises the following steps:
the intelligent glasses are connected with a cloud database of the automatic driving vehicle; the cloud database is a database used by the automatic driving vehicle in real time in the automatic driving process;
the intelligent glasses acquire a tunnel real scene corresponding to the tunnel environment from the cloud database; the tunnel reality scene is a scene which is acquired by a scene acquisition system of the automatic driving vehicle and stored in the cloud database;
the real scene of tunnel comprises tunnel wall real scene and tunnel road surface real scene, intelligent glasses will tunnel real scene mixes with the virtual scene of tunnel, obtains the mixed scene of tunnel, includes:
the intelligent glasses acquire the position information of the tunnel environment;
the intelligent glasses acquire an administrative region to which the position information belongs according to the position information;
the intelligent glasses acquire a first virtual scene corresponding to a first scenic spot in the administrative area as a first tunnel virtual scene;
the intelligent glasses cover the first tunnel virtual scene with the tunnel wall real scene to obtain a tunnel mixed scene.
2. The method of claim 1, wherein after the intelligent glasses overlay the first tunnel virtual scene on the tunnel wall real scene to obtain the tunnel mixed scene, and before the intelligent glasses output the tunnel mixed scene to the wearer of the intelligent glasses, the method further comprises:
the intelligent glasses judge whether a scene change instruction is received;
if so, the intelligent glasses judge whether the automatic driving vehicle has driven out of the tunnel environment, and if not, acquire a second virtual scene corresponding to a second scenic spot in the administrative region as a second tunnel virtual scene;
and the intelligent glasses overlay the second tunnel virtual scene on the first tunnel virtual scene to obtain a tunnel mixed scene.
3. The method of claim 1 or 2, wherein the smart glasses outputting the tunnel hybrid scene to a wearer of the smart glasses comprises:
the intelligent glasses determine the current lighting degree of the wearer's eyes according to the size of the wearer's pupils;
the intelligent glasses perform light adjustment on the tunnel mixed scene according to the lighting degree to obtain a non-differential tunnel mixed scene;
the intelligent glasses adjust a projection device built in the intelligent glasses according to the undifferentiated tunnel mixed scene, so that the projection range of the projection device is the same as the visual range of the wearer;
the smart glasses project the undifferentiated tunnel hybrid scene into the wearer's eyes through the projection device.
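The pupil-based light adjustment in claim 3 can be sketched as follows. The claim does not give a formula; the linear mapping over a typical 2-8 mm physiological pupil range, and the brightness-scaling rule, are assumptions made here purely for illustration.

```python
def daylighting_from_pupil(pupil_diameter_mm: float) -> float:
    """Infer the eye's current daylighting degree from pupil size: a
    larger pupil indicates a darker-adapted eye. The linear mapping over
    a typical 2-8 mm range is an assumption, not from the patent."""
    d = min(max(pupil_diameter_mm, 2.0), 8.0)  # clamp to physiological range
    return (8.0 - d) / 6.0  # 1.0 = fully bright-adapted, 0.0 = dark-adapted

def adjust_scene_brightness(luminance: float, daylighting: float) -> float:
    """Scale the mixed scene's brightness toward the eye's current
    adaptation so the projection appears 'undifferentiated' to the
    wearer (no perceptible brightness jump on entering the tunnel)."""
    return luminance * (0.5 + 0.5 * daylighting)
```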
4. Smart glasses, comprising:
a detection unit for detecting whether the current environment in which the autonomous vehicle is located is a tunnel environment;
an acquisition unit for acquiring, when the detection result of the detection unit is positive, the tunnel real scene corresponding to the tunnel environment;
a mixing unit for mixing the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene;
an output unit for outputting the tunnel mixed scene to the wearer of the smart glasses;
wherein the acquisition unit comprises:
a connection subunit for connecting to a cloud database of the autonomous vehicle, the cloud database being the database used by the autonomous vehicle in real time during autonomous driving;
a first acquiring subunit for acquiring, from the cloud database, the tunnel real scene corresponding to the tunnel environment, the tunnel real scene being a scene collected by a scene collection system of the autonomous vehicle and stored in the cloud database;
wherein the tunnel real scene comprises a tunnel wall real scene and a tunnel road surface real scene, and the mixing unit comprises:
a second acquiring subunit for acquiring position information of the tunnel environment;
a third acquiring subunit for acquiring, according to the position information, the administrative region to which the position information belongs;
a fourth acquiring subunit for acquiring a first virtual scene corresponding to a first scenic spot in the administrative region as a first tunnel virtual scene;
an overlay subunit for overlaying the first tunnel virtual scene on the tunnel wall real scene to obtain the tunnel mixed scene.
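The detect-acquire-mix-output pipeline of claim 4 can be sketched as a class composed of injected callables; a real device would wire these units to sensors, the vehicle's cloud database, and the built-in projector, so the structure below is an illustrative sketch only.

```python
class SmartGlasses:
    """Unit composition from claim 4: detection, acquisition, mixing,
    and output units, modeled here as injected callables."""
    def __init__(self, detect, acquire, mix, output):
        self.detect = detect    # detection unit
        self.acquire = acquire  # acquisition unit (cloud database)
        self.mix = mix          # mixing unit
        self.output = output    # output unit (projection)

    def run(self, environment):
        if not self.detect(environment):  # not a tunnel: nothing to do
            return None
        real = self.acquire(environment)  # real scene from the cloud DB
        mixed = self.mix(real)            # overlay the virtual scene
        return self.output(mixed)         # project to the wearer
```

The pipeline runs only when the detection unit reports a tunnel environment, matching the "when the detection result is positive" condition in the claim.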
5. The smart glasses of claim 4, wherein the mixing unit further comprises:
a first judging subunit for judging whether a scene change instruction is received;
a second judging subunit for judging, when the judgment result of the first judging subunit is yes, whether the autonomous vehicle has driven out of the tunnel environment;
a fifth acquiring subunit for acquiring, when the judgment result of the second judging subunit is negative, a second virtual scene corresponding to a second scenic spot in the administrative region as a second tunnel virtual scene;
the overlay subunit being further configured to overlay the second tunnel virtual scene on the first tunnel virtual scene to obtain the tunnel mixed scene.
6. The smart glasses of claim 4 or 5, wherein the output unit comprises:
a determining subunit for determining the current daylighting degree of the wearer's eyes according to the wearer's pupil size;
a dimming subunit for performing light adjustment on the tunnel mixed scene according to the daylighting degree to obtain an undifferentiated tunnel mixed scene;
an adjusting subunit for adjusting a projection device built into the smart glasses according to the undifferentiated tunnel mixed scene, so that the projection range of the projection device matches the wearer's visual range;
a projection subunit for projecting the undifferentiated tunnel mixed scene into the wearer's eyes through the projection device.
CN201711309717.4A 2017-12-11 2017-12-11 Scene display method for automatic driving vehicle and intelligent glasses Active CN107945284B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711309717.4A CN107945284B (en) 2017-12-11 2017-12-11 Scene display method for automatic driving vehicle and intelligent glasses
PCT/CN2017/117678 WO2019114013A1 (en) 2017-12-11 2017-12-21 Scene displaying method for self-driving vehicle and smart eyewear

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711309717.4A CN107945284B (en) 2017-12-11 2017-12-11 Scene display method for automatic driving vehicle and intelligent glasses

Publications (2)

Publication Number Publication Date
CN107945284A CN107945284A (en) 2018-04-20
CN107945284B true CN107945284B (en) 2020-03-06

Family

ID=61946524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711309717.4A Active CN107945284B (en) 2017-12-11 2017-12-11 Scene display method for automatic driving vehicle and intelligent glasses

Country Status (2)

Country Link
CN (1) CN107945284B (en)
WO (1) WO2019114013A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI811043B (en) * 2022-07-28 2023-08-01 大陸商星宸科技股份有限公司 Image processing system and image object superimposition apparatus and method thereof

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018210390B4 (en) * 2018-06-26 2023-08-03 Audi Ag Method for operating a display device in a motor vehicle and display system for a motor vehicle
CN110864913B (en) * 2019-11-28 2021-09-03 苏州智加科技有限公司 Vehicle testing method and device, computer equipment and storage medium
CN113989466B (en) * 2021-10-28 2022-09-20 江苏濠汉信息技术有限公司 Beyond-the-horizon assistant driving system based on situation cognition
CN114942532B (en) * 2022-05-25 2024-06-21 维沃移动通信有限公司 Glasses clamping piece, glasses clamping piece control method and glasses clamping piece control device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103185963A (en) * 2013-03-29 2013-07-03 南京智真电子科技有限公司 Multifunctional vehicle driving assistant glasses
CN104656257A (en) * 2015-01-23 2015-05-27 联想(北京)有限公司 Information processing method and electronic equipment
CN105045397A (en) * 2015-08-31 2015-11-11 招商局重庆交通科研设计院有限公司 Method for testing influence of illumination environment in tunnel on on-service tunnel operation safety
CN105629515A (en) * 2016-02-22 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Navigation glasses, navigation method and navigation system
CN105704886A (en) * 2014-12-16 2016-06-22 现代自动车株式会社 Vehicle lighting control system using wearable glasses and method for the same
CN106373197A (en) * 2016-09-06 2017-02-01 广州视源电子科技股份有限公司 Augmented reality method and augmented reality device
CN107219920A (en) * 2017-05-15 2017-09-29 北京小米移动软件有限公司 The recognition methods of AR glasses, device and AR glasses based on scene
WO2017166193A1 (en) * 2016-03-31 2017-10-05 深圳多哚新技术有限责任公司 Display screen driving method and device based on vr images
US10403043B2 (en) * 2016-04-14 2019-09-03 The Research Foundation For The State University Of New York System and method for generating a progressive representation associated with surjectively mapped virtual and physical reality image data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2846756A1 (en) * 2002-11-04 2004-05-07 Pechon Stephane Jean Martin Le Night vision improving method for train, car, bus drivers, involves filtering polarization of light by glasses or helmet, filtering polarized ray again to prevent ray from reaching eye
CN103905805A (en) * 2012-12-24 2014-07-02 天马微电子股份有限公司 Electronic eye glass system for vehicles
DE102015006612B4 (en) * 2015-05-21 2020-01-23 Audi Ag Method for operating data glasses in a motor vehicle and system with data glasses
CN107065183A (en) * 2017-03-21 2017-08-18 广东光阵光电科技有限公司 A kind of driving at night enhancing visibility method and Portable glasses formula device

Also Published As

Publication number Publication date
WO2019114013A1 (en) 2019-06-20
CN107945284A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107945284B (en) Scene display method for automatic driving vehicle and intelligent glasses
KR102263496B1 (en) Navigation method based on a see-through head-mounted device
KR102118438B1 (en) Head up display apparatus for vehicle and method thereof
US20180232049A1 (en) Gaze and saccade based graphical manipulation
CN109219551A (en) Condition of road surface head up display
EP3691926A1 (en) Display system in a vehicle
US20130208004A1 (en) Display control device, display control method, and program
CN105224272B (en) image display method and automobile display device
US20190064528A1 (en) Information processing device, information processing method, and program
CN111032413A (en) Method for operating a screen of a motor vehicle and motor vehicle
CN112988927B (en) Map data processing method and device, computer equipment and storage medium
CN109084748B (en) AR navigation method and electronic equipment
WO2016058449A1 (en) Smart glasses and control method for smart glasses
JPH0953917A (en) Instrument for measuring direction of line of sight for vehicle
CN206906936U (en) A kind of head-mounted display apparatus that driving behavior monitor detection is provided
US11004273B2 (en) Information processing device and information processing method
CN111872928B (en) Obstacle attribute distinguishing method and system and intelligent robot
CN110696755A (en) Vehicle intelligent service experience method, system, vehicle machine and storage medium
US20190114502A1 (en) Information processing device, information processing method, and program
CN111086518B (en) Display method and device, vehicle-mounted head-up display equipment and storage medium
CN116543266A (en) Automatic driving intelligent model training method and device guided by gazing behavior knowledge
CN111930236B (en) Equipment control method and device, storage medium and electronic equipment
TWI591593B (en) Traffic signs, traffic lights and timers for video detection and identification of voice announcers
CN113492864A (en) Driving support device and data collection system
CN109151446A (en) Control method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant