WO2023202384A1 - 一种增强现实抬头显示方法、装置、终端设备及存储介质 - Google Patents

一种增强现实抬头显示方法、装置、终端设备及存储介质 Download PDF

Info

Publication number
WO2023202384A1
WO2023202384A1 PCT/CN2023/086569 CN2023086569W WO2023202384A1 WO 2023202384 A1 WO2023202384 A1 WO 2023202384A1 CN 2023086569 W CN2023086569 W CN 2023086569W WO 2023202384 A1 WO2023202384 A1 WO 2023202384A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
vehicle
image
information
augmented reality
Prior art date
Application number
PCT/CN2023/086569
Other languages
English (en)
French (fr)
Inventor
田秀梅
李春玉
孙学龙
孙志伟
孙印飞
程思雨
Original Assignee
长城汽车股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长城汽车股份有限公司 filed Critical 长城汽车股份有限公司
Publication of WO2023202384A1 publication Critical patent/WO2023202384A1/zh

Links

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/001Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles integrated in the windows, e.g. Fresnel lenses
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0141Head-up displays characterised by optical features characterised by the informative content of the display

Definitions

  • This application relates to the field of image processing technology, and specifically to an augmented reality head-up display method, device, terminal equipment and storage medium.
  • Head Up Display also called a head-up display system
  • HUD Head Up Display
  • AR Augmented Reality
  • head-up display and augmented reality technology have been applied in automobiles.
  • Combining head-up display and augmented reality technology in cars allows drivers to see images that blend virtual images with the real environment without lowering their heads, making it easier for drivers to drive.
  • cars can only use augmented reality head-up display technology to display navigation routes, vehicle speed, etc., and the level of intelligence of the car is low.
  • One of the purposes of the embodiments of the present application is to provide an augmented reality head-up display method, device, terminal equipment and storage medium, which can solve the problem of low automobile intelligence.
  • embodiments of the present application provide an augmented reality head-up display method, including:
  • first information including at least one of visual perception information of the road where the first vehicle is located, the driving state of the first vehicle, and the terrain features of the area where the first vehicle is located; based on the first Information that determines the second target object on the road where the first vehicle is located that needs enhanced display;
  • the augmented reality image of the second target object is projected on the front windshield of the first vehicle.
  • an augmented reality head-up display device including:
  • An information acquisition module configured to acquire first information, which includes at least one of visual perception information of the road where the first vehicle is located, the driving state of the first vehicle, and the terrain features of the area where the first vehicle is located. kind;
  • a target determination module configured to determine, based on the first information, a second target object on the road where the first vehicle is located that requires enhanced display;
  • An image generation module used to generate an augmented reality image of the second target object
  • An image display module is configured to project the augmented reality image of the second target object on the front windshield of the first vehicle.
  • embodiments of the present application provide a terminal device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • the processor executes the computer program.
  • embodiments of the present application provide a computer-readable storage medium that stores a computer program.
  • the computer program is executed by a processor, any one of the above-mentioned aspects of the first aspect is implemented.
  • An augmented reality heads-up display method is implemented.
  • embodiments of the present application provide a computer program product that, when run on a terminal device, causes the terminal device to execute the augmented reality head-up display method described in any one of the above first aspects.
  • the first beneficial effect provided by the embodiments of the present application is that the present application first obtains the first information, which includes the visual perception information of the road where the first vehicle is located, the driving status of the first vehicle, and the topography of the area where the first vehicle is located. At least one of the characteristics; based on the obtained first information, determine a second target object on the road that needs to be enhanced for display, generate an augmented reality image of the second target object, and finally project the augmented reality image of the second target object in front On the windshield.
  • the first information which includes the visual perception information of the road where the first vehicle is located, the driving status of the first vehicle, and the topography of the area where the first vehicle is located. At least one of the characteristics; based on the obtained first information, determine a second target object on the road that needs to be enhanced for display, generate an augmented reality image of the second target object, and finally project the augmented reality image of the second target object in front On the windshield.
  • this application determines the second target object based on the first information, and the second target object changes with the first information, so that the content of the augmented reality heads-up display Rich and variable, it improves the intelligence of the vehicle.
  • Figure 1 is a schematic diagram of an application scenario of an augmented reality head-up display method provided by an embodiment of the present application
  • Figure 2 is a schematic flowchart of an augmented reality head-up display method provided by an embodiment of the present application
  • Figure 3 is an application schematic diagram of an augmented reality head-up display provided by an embodiment of the present application.
  • Figure 4 is a schematic flowchart of a method for determining a second target object based on visibility provided by an embodiment of the present application
  • Figure 5 is a schematic diagram of the effect of an augmented reality head-up display provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of the effect of an augmented reality head-up display provided by another embodiment of the present application.
  • Figure 7 is a schematic structural diagram of an augmented reality head-up display device provided by an embodiment of the present application.
  • Figure 8 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • HUD uses the principle of optical reflection to display conventional information such as vehicle speed and fuel level on the front windshield.
  • the driver can see the vehicle speed, fuel level and other information without lowering his head, allowing the driver to concentrate more while driving and improving driving efficiency. security.
  • information is conveyed to the driver through the HUD interface in the most direct and convenient display mode, so that the driver's human eyes, the outside environment and the AR image can reach three points and one line, realizing the integration of digital and reality. Fusion.
  • the information displayed by HUD is relatively single and the content is relatively simple, which cannot meet the entertainment needs of users.
  • This application provides an augmented reality head-up display method that can display different images or information under different circumstances, increases entertainment performance, and brings a better entertainment experience to users.
  • FIG. 1 is a schematic diagram of an application scenario of the augmented reality head-up display method provided by an embodiment of the present application.
  • the above-mentioned augmented reality head-up display method can be used to synthesize augmented reality images and perform head-up display.
  • the information collection device 10 is used to collect the first information and send the first information to the HUD device 20 .
  • the HUD device 20 After receiving the first information, the HUD device 20 generates an augmented reality image of the second target object in the external environment, and projects the augmented reality image of the second target object on the front windshield of the vehicle, so that the user can drive while driving. When driving, you can see through the front windshield an augmented reality image matching the second target object in the external environment.
  • FIG. 2 shows a schematic flow chart of the augmented reality head-up display method provided by this application. Referring to Figure 2, the method is described in detail as follows:
  • the first information includes at least one of visual perception information of the road where the first vehicle is located, the driving state of the first vehicle, and the terrain features of the area where the first vehicle is located.
  • the visual perception information includes visibility and/or light intensity
  • the driving state includes a congestion coefficient and/or a collision coefficient of the first vehicle colliding with the first target object
  • the first target object is A second vehicle and/or a first pedestrian whose detected distance from the first vehicle is less than or equal to the first distance.
  • the first vehicle is the own vehicle.
  • Visual perception information is information that can be perceived by the human eye.
  • visibility can be detected through visibility sensors.
  • Light intensity can be detected by a light intensity sensor.
  • Visibility and light intensity can also be determined based on weather forecasts.
  • the location of the first vehicle is obtained.
  • the location of the first vehicle can be determined using a Global Positioning System (GPS).
  • GPS Global Positioning System
  • the weather forecast of the area where the first vehicle is located is determined.
  • the location is determined based on the weather forecast.
  • Visibility and light intensity can be included in weather forecasts.
  • the congestion coefficient may be determined by one or more of the first vehicle's vehicle speed, acceleration, number of braking times, and travel time of the preset distance. Specifically, if the speed of the first vehicle is less than the preset speed, it is determined that the first vehicle is in a congested road section, and the congestion coefficient is determined according to the interval in which the speed of the first vehicle is located. Different intervals correspond to different congestion coefficients. The method of using acceleration to determine the congestion coefficient is the same as using vehicle speed, and will not be described again here. Detect the number of times the first vehicle brakes within a period of time or a distance. If the number of times the first vehicle brakes is greater than the preset number, it is determined that the first vehicle is in a congested road section.
  • the congestion coefficient is determined according to the braking interval where the number of braking times is located. Different braking intervals correspond to different congestion coefficients.
  • the method of using driving time to determine the congestion coefficient is similar to the method of using the number of braking times, and will not be described again here.
  • the congestion coefficient can also be calculated based on the first vehicle's speed, acceleration, number of braking times and travel time of the preset distance, as well as their corresponding weights. Specifically, the product of the vehicle speed of the first vehicle and the first weight is calculated to obtain the first value. Calculate the product of the acceleration and the second weight to obtain the second value. Calculate the product of the number of braking times and the third weight to obtain the third value. The product of the travel time and the fourth weight is calculated to obtain the fourth value. Calculate the sum of the first value, the second value, the third value and the fourth value to obtain the congestion coefficient.
  • the forward-looking camera installed in the first vehicle to collect the first image, calculate the distance of each vehicle and/or pedestrian in the first image from the first vehicle, and select the vehicles and/or vehicles whose distance from the first vehicle is less than or equal to the first distance. Or pedestrians are recorded as the first target object.
  • the first target object can also be determined through radar.
  • the first distance can be set as needed, for example, the first distance can be 10 meters, 9 meters or 8 meters, etc.
  • the driving speed, steering angle, and turn signal status of the second vehicle are obtained, and the driving speed, steering angle, and turn signal status of the second vehicle are used.
  • the traveling speed of a vehicle and the path planning of the first vehicle determine the collision coefficient between the first vehicle and the second vehicle.
  • the driving speed of the second vehicle, the steering angle of the second vehicle, the turn signal status of the second vehicle, the driving speed of the first vehicle and the path planning of the first vehicle are input into the collision coefficient calculation model to obtain the first The collision coefficient between the vehicle and the second vehicle.
  • the license plate information of the second vehicle can also be obtained, the historical driving behavior of the second vehicle can be searched from the data stored in the cloud, and the collision coefficient between the first vehicle and the second vehicle can be determined.
  • Historical driving behavior can include the number of traffic jams, overtaking times, crashes, etc.
  • the first vehicle can communicate with the cloud.
  • the walking speed and walking direction of the first pedestrian are obtained, and the collision coefficient between the first pedestrian and the first vehicle is determined based on the walking speed and walking direction of the first pedestrian.
  • facial recognition is performed on the first person and the historical pedestrian information of the first person is found from the data stored in the cloud.
  • Historical pedestrian information includes the number of red light running, number of accidents, etc.
  • the collision coefficient between the first pedestrian and the first vehicle is determined based on historical pedestrian information.
  • landform features may include deserts, mountains, forests, plains, hills, grasslands, etc.
  • the landform features can be determined based on the location of the first vehicle by searching data stored in the cloud, or from a preset map. Landform features may also be determined from images captured by a camera mounted on the first vehicle.
  • the first information may be information obtained in real time, or may be information generated by collecting the external environment of the first vehicle at preset time intervals.
  • the second target object can be determined based on the first information.
  • the second target object may be different.
  • the second target object may include lane lines, barriers, road shoulders, buildings, pedestrians, vehicles, etc.
  • Each type of first information corresponds to a group of objects that require enhanced display.
  • objects that require enhanced display corresponding to landform features include mountains, forests, etc.
  • objects that require enhanced display corresponding to congestion coefficient include buildings, pedestrians, vehicles, etc.
  • the method for determining whether the road where the first vehicle is located includes objects that require enhanced display includes:
  • Second information which includes at least one of a first image collected by a forward-looking camera installed on the first vehicle, information collected by a radar installed on the first vehicle, and a preset map. ; Based on the second information, determine whether there are objects on the road that require enhanced display.
  • Radar can detect pedestrians, vehicles, lights, buildings and other information on the road where the first vehicle is located.
  • the preset map can include lane lines, lane directions, building locations, etc.
  • the preset map and the information collected by the radar to jointly determine whether there are objects that require enhanced display on the road can make the objects that require enhanced display included on the determined road more accurate.
  • the second information used may be different. Specifically, if the visual perception information meets the preset requirements, it is determined based on the first image whether there is an object that requires enhanced display on the road where the first vehicle is located.
  • the visual perception information meeting the preset requirements includes: visibility is greater than the first threshold and/or light intensity is greater than the second threshold.
  • the visual perception information not meeting the preset requirements includes: visibility is less than or equal to the first threshold and/or light intensity is less than or equal to the second threshold.
  • the visual perception information does not meet the preset requirements, it is determined whether there are objects that require enhanced display on the road where the first vehicle is located based on the information collected by the radar installed on the first vehicle and/or the preset map.
  • the display mode is automatically matched according to the first information, and different first information is matched with different display modes.
  • the visual perception information matches the visual perception display mode
  • the landform characteristics match the geographical environment display mode
  • the driving status matches the driving mode.
  • Status display mode matches.
  • the display mode is displayed. The purpose of showing a display mode is to let the user determine whether to use that display mode.
  • objects included in the display mode are acquired.
  • the first instruction represents the user's determination to use the display mode.
  • the first instruction may be an instruction generated after the user clicks the OK button.
  • Each display mode includes objects that require enhanced display. Search for objects corresponding to the display mode on the road where the first vehicle is located. If there are objects included in the display mode on the road where the first vehicle is located, then the objects in the display mode that exist on the road where the first vehicle is located are the second target objects.
  • the objects included in the visual perception display mode that require enhanced display are lane lines and road shoulders.
  • the road where the first vehicle is on includes lane lines but does not include road shoulders, then the lane lines on the road where the first vehicle is on is the second target object.
  • Different display modes have different functions. For example, the visual perception display mode is used to reconstruct road display information, the geographical environment display mode is used to enrich road display information, and the driving status display mode is used to optimize road display information.
  • the display mode may also include user-defined display modes, such as simple mode, music mode, navigation mode, performance mode, etc.
  • Simple mode can include displaying vehicle speed and fuel consumption.
  • the music mode can include displaying the song being played, lyrics, and animation corresponding to the song, etc.
  • Navigation modes may include navigation routes, remaining time to destination, etc.
  • Performance modes can include gear information, acceleration information, etc.
  • the augmented reality image of the second target object is projected on the front windshield of the vehicle through the HUD device, so that the user can see the second target through the front windshield when driving the vehicle.
  • An augmented reality image of the second target object, and the enhanced display image of the second target object matches the second target object.
  • the projection needs to be based on the HUD's resolution, display size and other information.
  • the HUD device 20 projects the augmented reality image of the second target object on the front windshield 40 , and the user sees the second target object 60 on the road through the front windshield 40 , and the augmented reality image 50 of the second target object 60 .
  • the second target object 60 seen by the user matches the augmented reality image 50 of the second target object.
  • the first information is first obtained, and the first information includes at least one of the visual perception information of the road where the first vehicle is located, the driving state of the first vehicle, and the geomorphological features of the area where the first vehicle is located; based on the obtained
  • the first information is used to determine the second target object on the road that needs enhanced display, generate an augmented reality image of the second target object, and finally project the augmented reality image of the second target object on the front windshield.
  • this application determines the second target object based on the first information, and the second target object changes with the first information, so that the content of the augmented reality heads-up display Rich and variable, it improves the intelligence of the vehicle and increases the fun of the user experience.
  • the first information includes the visual perception information
  • the visual perception information includes the visibility
  • the first vehicle is determined
  • Secondary targets on the road that require enhanced display include:
  • the first threshold can be set as needed. Different intervals are set in advance, and each interval corresponds to a group of objects that need to be displayed enhanced.
  • the preset intervals include the first interval [0, 20], the second interval (20, 50], and the third interval (50, 100].
  • S202 Determine the first object corresponding to the visibility interval.
  • the first object existing on the road is the second target object.
  • the first object corresponding to the third interval includes one or more of lane lines, road shoulders, and road fences.
  • the first object corresponding to the second interval includes one or more of lane lines, road shoulders, and road fences, and one or more of crosswalks, green belts, telephone poles, and road lights. Various.
  • the first object corresponding to the first interval includes one or more of lane lines, road shoulders, and road fences, and one or more of crosswalks, green belts, telephone poles, and road indicators. species, and pedestrians and/or vehicles in front of the first vehicle whose distance from the first vehicle is less than or equal to the fourth distance.
  • step S102 after determining the first object corresponding to the visibility interval, it is necessary to determine whether the first object exists on the road.
  • the method of determining whether the first object exists on the road please refer to the above-mentioned step S102, and the method of determining whether the road where the first vehicle is located includes an object that requires enhanced display, which will not be described again here.
  • an augmented reality image of basic information is displayed.
  • the basic information may include vehicle speed, acceleration, navigation route, etc.
  • the second target object can be determined according to the visibility interval. Therefore, as the visibility changes, the second target object will change accordingly. The smaller the visibility, the more content will be displayed, and more content can be provided to the user. information to provide convenience for users to drive vehicles and ensure users’ driving safety.
  • the first object is reconstructed based on the geometric image (size and/or direction) of the first object on the road.
  • Image The reconstructed image of the first object is an enhanced display image of the first object.
  • Reconstructing the image of the first object is redrawing the image of the first object along the geometric image of the first object.
  • the redrawn image of the first object is the same as the geometric image of the first object.
  • the geometric image of the first object is obtained based on the preset map and information collected by the radar.
  • the visual perception information includes the illumination intensity
  • the illumination intensity is less than or equal to a second threshold
  • the second The target object includes a lane line within a target area, and the target area is an area in front of the first vehicle that is not covered by the illumination light of the headlight of the first vehicle.
  • the second threshold can be set as needed.
  • the environment where the light intensity is less than or equal to the second threshold may include dark tunnels, rainy weather, rural roads, etc.
  • the lane lines in the target area may be the lane lines of the lane where the first vehicle is located in the target area, and may also include the lane lines of the lanes on both sides of the lane where the first vehicle is located.
  • the lanes on both sides of the lane where the first vehicle is located are lanes adjacent to the lane where the first vehicle is located.
  • reconstructing the road where the headlights cannot illuminate can guide the vehicle to safely drive out of the dark area and improve the user's driving safety.
  • roads that cannot be illuminated by headlights are reconstructed based on the preset map.
  • the preset map includes information such as lane lines and direction of the road where the first vehicle is located.
  • the augmented reality image of the basic information is displayed.
  • the second target object when the first information includes the landform features, the landform features are preset landforms, and the first vehicle is in an autonomous driving state, the second target object includes a third Three targets, the third target includes at least one of mountains, forests, and deserts whose detected distance from the first vehicle is less than or equal to a second distance.
  • the driving state of the first vehicle may include an automatic driving state and a manual driving state.
  • the automatic driving state is a state in which the vehicle drives itself without human control
  • the manual driving state is a state in which human control of the vehicle is required to drive the vehicle.
  • the driving state of the first vehicle can be determined by detecting whether the automatic driving function in the first vehicle is on or off. If the automatic driving function is on, it is determined that the first vehicle is in the automatic driving state. If the automatic driving function is turned off, it is determined that the first vehicle is in a manual driving state.
  • the automatic driving function is turned on when the automatic driving start button is pressed. Determine whether the automatic driving function is turned on by detecting the status of the automatic driving enable button.
  • the method for determining the third target object is similar to the method for determining the first target object. Please refer to the method for determining the first target object, which will not be described again here.
  • the landform features are preset landforms
  • the road where the first vehicle is located is a highway
  • the first vehicle is in an autonomous driving state
  • the second target object includes a third target object
  • the third target object includes at least one of mountains, forests, and deserts where the detected distance from the first vehicle is less than or equal to the second distance.
  • the driving state includes the congestion coefficient, and the congestion coefficient is greater than or equal to a third threshold
  • the second target object A fourth target object is included, and the fourth target object is at least one of a second pedestrian, a building, and a third vehicle whose detected distance from the first vehicle is less than or equal to a third distance.
  • the driving state includes the collision coefficient
  • the collision coefficient is greater than or equal to a fourth threshold
  • the second target object Including the first target object whose collision coefficient is greater than or equal to the fourth threshold value.
  • both the third threshold and the fourth threshold can be set as needed.
  • the method for determining the fourth target object is similar to the method for determining the first target object, and will not be described again here.
  • the collision coefficient corresponding to A is a
  • the collision coefficient corresponding to B includes b
  • the collision coefficient corresponding to C is c.
  • a and b are greater than the fourth threshold
  • c is less than the fourth threshold
  • the collision coefficient when the collision coefficient is greater than or equal to the fourth threshold, the first target object whose collision coefficient is greater than or equal to the fourth threshold is enhanced and displayed, which is helpful to remind the user to pay attention to avoidance and keep a distance. This ensures users’ driving safety.
  • step S103 may include:
  • S1031 Obtain the geometric image of the second target object based on the second information.
  • the geometric image of the second target object can be determined from the first image.
  • the direction of the lane line can be obtained from the first image. Size, location, etc.
  • the second information includes information collected by the radar and a preset map
  • a geometric image of the second target object can be generated based on the information collected by the radar and the preset map.
  • a method for obtaining the rendered image of the second target object is obtained based on the determination method of the second target object. If the second target object is determined based on visual perception information, the rendering image of the second target object is obtained by: reconstructing the outline of the second target object based on the geometric image of the second target object, and the reconstructed outline of the second target object Rendered image matched to the second target.
  • the outline of the lane line is reconstructed according to the geometric image of the lane line (the geometric image includes the direction and position of the lane line, etc.).
  • the rendering image of the second target object may be obtained by: obtaining a rendering image matching the second target object from a pre-stored image.
  • the pre-stored images may include text images, animation character images, animation animal images, landscape images, mythological story character images, etc.
  • the above method may further include:
  • Step S1033 may specifically include:
  • the geometric image of the second target object is fused with the rendering image matching the second target object to obtain an augmented reality image of the second target object.
  • the geometric image of the second target object is fused with the rendering image matching the second target object.
  • the fused image is grayscaled to obtain an augmented reality image of the second target.
  • the rendered image of the mountainous area includes a flowing water image.
  • the rendered image of the forest includes a flying bird image.
  • the rendered image of the desert includes an oasis image.
  • the rendered image of the second pedestrian includes an animated character. image.
  • the rendering image of the building includes a flying saucer image or a swimming fish image.
  • the rendered image of the third vehicle includes a cartoon car. image.
  • the rendered image of the first target object includes a warning image.
  • Warning images may include animated images or text images, etc.
  • the portion marked with a triangle in Figure 5 is a warning image, which is used to remind the user to pay attention to the vehicle under the warning sign.
  • Animation character 1 and animation character 2 marked in Figure 6 add animation character images to pedestrians.
  • Feiyu Image 3 adds flying fish images between buildings, which can relieve users' driving fatigue and make vehicle driving more interesting.
  • the above method may also include:
  • the driving status of the fifth vehicle whose distance from the first vehicle is less than the preset distance, and the driving status includes the line crossing situation. If the fifth vehicle presses the line, determine the duration of the fifth vehicle's continuous line pressing. If the fifth vehicle continuously presses the line for longer than the preset time, an alarm message will be sent.
  • sequence number of each step in the above embodiment does not mean the order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
  • FIG. 7 shows a structural block diagram of the augmented reality head-up display device provided by the embodiment of the present application. For convenience of explanation, only the parts related to the embodiment of the present application are shown. part.
  • the device 300 may include: an information acquisition module 310 , a target determination module 320 , an image generation module 330 and an image display module 340 .
  • the information acquisition module 310 is used to acquire the first information, which includes the visual perception information of the section of the road where the first vehicle is located, the driving state of the first vehicle, and the topography of the area where the first vehicle is located. At least one of the characteristics, the visual perception information includes visibility and/or light intensity, the driving state includes a congestion coefficient and/or a collision coefficient of a collision between the first vehicle and the first target object, and the first The target object is a second vehicle and/or a first pedestrian whose detected distance from the first vehicle is less than or equal to the first distance;
  • the target determination module 320 is configured to determine, based on the first information, a second target object on the road where the first vehicle is located that requires enhanced display;
  • Image generation module 330 used to generate an augmented reality image of the second target object
  • the image display module 340 is configured to project the augmented reality image of the second target object on the front windshield of the first vehicle.
  • the first information includes the visual perception information
  • the visual perception information includes the visibility.
  • the target determination module 320 may be used to:
  • the first object existing on the road is the second target object.
  • the visual perception information includes the illumination intensity
  • the illumination intensity is less than or equal to a second threshold
  • the second The target object includes a lane line within a target area, and the target area is an area in front of the first vehicle that is not covered by the illumination light of the headlight of the first vehicle.
  • the second target object when the first information includes the landform features, the landform features are preset landforms, and the first vehicle is in an autonomous driving state, the second target object includes a third Three targets, the third target includes at least one of mountains, forests, and deserts whose detected distance from the first vehicle is less than or equal to a second distance.
  • the second target object when the first information includes the driving state, the driving state includes the congestion coefficient, and the congestion coefficient is greater than or equal to a third threshold, the second target object Includes a fourth target object, the fourth target object being at least one of a second pedestrian, a building and a third vehicle whose detected distance from the first vehicle is less than or equal to a third distance;
  • the driving state includes the collision coefficient
  • the collision coefficient is greater than or equal to a fourth threshold
  • the second target object includes a collision in the first target object. The first target whose coefficient is greater than or equal to the fourth threshold.
  • the target determination module 320 can be used to:
  • Obtain second information which includes at least one of a first image collected by a forward-looking camera installed on the first vehicle, information collected by a radar installed on the first vehicle, and a preset map. ;
  • the image generation module 330 can be specifically used for:
  • the geometric image of the second target object is fused with the matched rendering image of the second target object to obtain an augmented reality image of the second target object.
  • the image generation module 330 may be specifically configured to:
  • the display characteristic of the HUD device is the color display, fuse the geometric image of the second target object with the rendering image matching the second target object to obtain an augmented reality image of the second target object;
  • the display characteristic of the HUD device is the monochrome display, fuse the geometric image of the second target object with the rendering image matching the second target object;
  • the fused image is grayscaled to obtain an augmented reality image of the second target.
  • the rendered image of the mountainous area includes a flowing water image
  • the rendered image of the forest includes a flying bird image
  • the rendered image of the desert includes an oasis image
  • the rendered image of the second pedestrian includes an animation character image
  • the rendering image of the building includes a flying saucer image or a swimming fish image
  • the rendered image of the third vehicle includes a cartoon car image
  • the rendered image of the first target object includes a warning image.
  • Module completion means dividing the internal structure of the device into different functional units or modules to complete all or part of the functions described above.
  • Each functional unit and module in the embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be hardware-based. It can also be implemented in the form of software functional units.
  • the specific names of each functional unit and module are only for the convenience of distinguishing each other and are not used to limit the scope of protection of the present application.
  • For the specific working processes of the units and modules in the above system please refer to the corresponding processes in the foregoing method embodiments, and will not be described again here.
  • the terminal device 400 may include: at least one processor 410, a memory 420, and a computer stored in the memory 420 and available on the at least one processor 410.
  • the processor 410 executes the computer program, it implements the steps in any of the above method embodiments, such as steps S101 to S104 in the embodiment shown in FIG. 2 .
  • the processor 410 executes the computer program, it implements the functions of each module/unit in each of the above device embodiments, such as the functions of modules 310 to 340 shown in FIG. 7 .
  • the terminal device 400 may be a device having components such as a front windshield for display of augmented reality images, such as a car.
  • the computer program may be divided into one or more modules/units, and one or more modules/units are stored in the memory 420 and executed by the processor 410 to complete the present application.
  • the one or more modules/units may be a series of computer program segments capable of completing specific functions.
  • the program segments are used to describe the execution process of the computer program in the terminal device 400 .
  • Figure 8 is only an example of a terminal device and does not constitute a limitation on the terminal device. It may include more or fewer components than shown in the figure, or combine certain components, or different components, such as Input and output devices, network access devices, buses, etc.
  • the processor 410 may be a Central Processing Unit (CPU), or other general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or an off-the-shelf processor. Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the memory 420 can be an internal storage unit of the terminal device or an external storage device of the terminal device, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash memory card. (Flash Card) etc.
  • the memory 420 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 420 can also be used to temporarily store data that has been output or is to be output.
  • the bus can be an Industry Standard Architecture (Industry Standard Architecture, ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, etc.
  • ISA Industry Standard Architecture
  • PCI Peripheral Component Interconnect
  • EISA Extended Industry Standard Architecture
  • the bus can be divided into address bus, data bus, control bus, etc.
  • the bus in the drawings of this application is not limited to only one bus or one type of bus.
  • the augmented reality head-up display method provided by the embodiments of the present application can be applied to terminal devices such as computers, tablets, laptops, netbooks, personal digital assistants (PDAs), etc.
  • terminal devices such as computers, tablets, laptops, netbooks, personal digital assistants (PDAs), etc.
  • PDAs personal digital assistants
  • the embodiments of the present application do not make any specific types of terminal devices. Any restrictions.
  • the disclosed terminal equipment, devices and methods can be implemented in other ways.
  • the terminal device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division.
  • there may be other division methods, such as multiple units or components. can be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, indirect coupling or communication connection of devices or units, which may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments, which can also be completed by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and the computer can When the program is executed by one or more processors, the steps of each of the above method embodiments can be implemented.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments, which can also be completed by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and the computer can When the program is executed by one or more processors, the steps of each of the above method embodiments can be implemented.
  • the computer program includes computer program code, which may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording media, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory) , Random Access Memory (RAM, Random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media, etc.
  • ROM Read-Only Memory
  • RAM Random Access Memory
  • electrical carrier signals telecommunications signals
  • software distribution media etc.
  • the content contained in the computer-readable medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction.
  • the computer-readable medium Excluded are electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

本申请公开一种增强现实抬头显示方法、装置、终端设备及存储介质,该方法包括:获取第一信息,第一信息包括第一车辆所在道路的视觉感知信息、第一车辆的行驶状态和第一车辆所在区域的地貌特征中的至少一种;基于获取到的第一信息,确定道路上需要增强显示的第二目标物,生成第二目标物的增强现实图像,最后将第二目标物的增强现实图像投射在前挡风玻璃上。相较于现有技术中仅显示车速、导航线等简单信息,本申请根据第一信息确定第二目标物,第二目标物随着第一信息的不同而变化,使增强现实抬头显示的内容丰富、可变,提高了车辆的智能化程度,增加了用户的趣味性体验。

Description

一种增强现实抬头显示方法、装置、终端设备及存储介质
本申请要求于2022年4月18日在中国专利局提交的、申请号为202210404377.8、发明名称为“一种增强现实抬头显示方法、装置及终端设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,具体涉及一种增强现实抬头显示方法、装置、终端设备及存储介质。
背景技术
抬头显示(Head Up Display,HUD)又被叫做平视显示***,是把时速、导航等重要的行车信息,投影到驾驶员前面的挡风玻璃上,让驾驶员尽量做到不低头、不转头就能看到时速、导航等重要的驾驶信息。增强现实(Augmented Reality,AR)是一种将虚拟信息与真实世界巧妙融合的技术,将计算机生成的文字、图像、三维模型、音乐、视频等虚拟信息模拟仿真后,应用到真实世界中,两种信息互为补充,从而实现对真实世界的“增强”。
随着汽车的发展,抬头显示和增强现实技术已经在汽车中得到应用。将抬头显示和增强现实技术相结合应用在汽车上,可以使驾驶员在不低头的情况下,看到虚拟图像与真实环境相融合的图像,为驾驶员开车提供便利。但是,目前汽车上仅能利用增强现实抬头显示技术显示导航路线、车速等,汽车的智能化程度低。
技术问题
本申请实施例的目的之一在于:提供一种增强现实抬头显示方法、装置、终端设备及存储介质,可以解决汽车智能化程度低的问题。
技术解决方案
本申请实施例采用的技术方案是:
第一方面,本申请实施例提供了一种增强现实抬头显示方法,包括:
获取第一信息,所述第一信息包括第一车辆所在道路的视觉感知信息、所述第一车辆的行驶状态和所述第一车辆所在区域的地貌特征中的至少一种;基于所述第一信息,确定所述第一车辆所在道路上需要增强显示的第二目标物;
生成所述第二目标物的增强现实图像;
将所述第二目标物的增强现实图像投射在所述第一车辆的前挡风玻璃上。
第二方面,本申请实施例提供了一种增强现实抬头显示装置,包括:
信息获取模块,用于获取第一信息,所述第一信息包括第一车辆所在道路的视觉感知信息、所述第一车辆的行驶状态和所述第一车辆所在区域的地貌特征中的至少一种;
目标确定模块,用于基于所述第一信息,确定所述第一车辆所在道路上需要增强显示的第二目标物;
图像生成模块,用于生成所述第二目标物的增强现实图像;
图像显示模块,用于将所述第二目标物的增强现实图像投射在所述第一车辆的前挡风玻璃上。
第三方面,本申请实施例提供了一种终端设备,包括:存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述第一方面中任一项所述的增强现实抬头显示方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述第一方面中任一项所述的增强现实抬头显示方法。
第五方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项所述的增强现实抬头显示方法。
有益效果
本申请实施例提供的第一方面的有益效果在于:本申请先获取第一信息,第一信息包括第一车辆所在道路的视觉感知信息、第一车辆的行驶状态和第一车辆所在区域的地貌特征中的至少一种;基于获取到的第一信息,确定道路上需要增强显示的第二目标物,生成第二目标物的增强现实图像,最后将第二目标物的增强现实图像投射在前挡风玻璃上。相较于现有技术中仅显示车速、导航线等简单信息,本申请根据第一信息确定第二目标物,第二目标物随着第一信息的不同而变化,使增强现实抬头显示的内容丰富、可变,提高了车辆的智能化程度。
可以理解的是,上述第二方面至第五方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或示范性技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1是本申请一实施例提供的增强现实抬头显示方法的应用场景示意图;
图2是本申请一实施例提供的增强现实抬头显示方法的流程示意图;
图3是本申请一实施例提供的增强现实抬头显示的应用示意图;
图4是本申请一实施例提供的基于能见度确定第二目标物的方法的流程示意图;
图5是本申请一实施例提供的增强现实抬头显示的效果示意图;
图6是本申请另一实施例提供的增强现实抬头显示的效果示意图;
图7是本申请一实施例提供的增强现实抬头显示装置的结构示意图;
图8是本申请一实施例提供的终端设备的结构示意图。
本发明的实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本发明,并不用于限定本申请。
术语“第一”、“第二”、“第三”等仅用于便于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明技术特征的数量。“多个”的含义是两个或两个以上,除非另有明确具体的限定。
HUD是利用光学反射原理,在前挡风玻璃上显示车速、油量等常规信息,驾驶员不低头即可看到车速、油量等信息,使驾驶员在开车时精力更集中,提高了驾车的安全性。随着AR技术的发展,通过HUD界面以最直接、最便捷的显示模式将信息传达给驾驶员,使驾驶员的人眼、车外环境和AR图像达到三点一线,实现数字与现实的融合。
目前HUD显示的信息比较单一,内容比较简单,不能满足用户的娱乐需求。本申请提供一种增强现实抬头显示方法,可以在不同情况下显示不同的图像或信息,增加了娱乐性能,为用户带来更好的娱乐体验。
图1为本申请实施例提供的增强现实抬头显示方法的应用场景示意图,上述增强现实抬头显示方法可以用于合成增强现实图像并进行抬头显示。其中,信息采集设备10用于采集第一信息,并向HUD设备20发送第一信息。HUD设备20在接收到第一信息后,生成外部环境中的第二目标物的增强现实图像,并将第二目标物的增强现实图像投射在车辆的前挡风玻璃上,以使得用户在驾驶车辆时可以透过前挡风玻璃看到外部环境中第二目标物匹配的增强现实图像。
图2示出了本申请提供的增强现实抬头显示方法的示意性流程图,参照图2,对该方法的详述如下:
S101,获取第一信息,所述第一信息包括第一车辆所在道路的视觉感知信息、所述第一车辆的行驶状态和所述第一车辆所在区域的地貌特征中的至少一种。
在本实施例中,视觉感知信息包括能见度和/或光照强度,所述行驶状态包括拥堵系数和/或所述第一车辆与第一目标物发生碰撞的碰撞系数,所述第一目标物为检测到的与所述第一车辆的距离小于或等于第一距离的第二车辆和/或第一行人。第一车辆为自身车辆。
视觉感知信息为人眼所能感知的信息。其中,能见度可以通过能见度传感器进行检测。光照强度可以通过光照强度传感器进行检测。能见度和光照强度还可以根据天气预报确定。具体的,获取第一车辆的位置,第一车辆的位置可以采用全球定位***(Global Positioning System,GPS)确定,根据第一车辆的位置,确定第一车辆所在区域的天气预报,根据天气预报确定能见度和光照强度。天气预报中可以包括能见度和光照强度。
拥堵系数可以通过第一车辆的车速、加速度、制动次数和预设距离的行驶时间中的一种或多种确定。具体的,若第一车辆的车速小于预设车速,则确定第一车辆处于拥堵路段,根据第一车辆的车速所在的区间,确定拥堵系数,不同的区间对应不同的拥堵系数。利用加速度确定拥堵系数的方法与利用车速确定拥堵系数的方法相同,在此不再赘述。检测第一车辆在一段时间或一段距离内的制动次数,若制动次数大于预设次数,则确定第一车辆处于拥堵路段。再根据制动次数所在的制动区间确定拥堵系数,不同的制动区间对应不同的拥堵系数。利用行驶时间确定拥堵系数的方法与利用制动次数确定拥堵系数的方法相似,在此不再赘述。
另外,还可以根据第一车辆的车速、加速度、制动次数和预设距离的行驶时间,以及各自对应的权重计算拥堵系数。具体的,计算第一车辆的车速与第一权重的乘积,得到第一值。计算加速度与第二权重的乘积,得到第二值。计算制动次数与第三权重的乘积,得到第三值。计算行驶时间与第四权重的乘积,得到第四值。计算第一值、第二值、第三值和第四值之和,得到拥堵系数。
利用第一车辆中安装的前视摄像头采集第一图像,计算第一图像中各个车辆和/或行人距离第一车辆的距离,将与第一车辆的距离小于或等于第一距离的车辆和/或行人记为第一目标物。另外,还可以通过雷达确定第一目标物。第一距离可以根据需要进行设置,例如,第一距离可以为10米、9米或8米等。
在第一目标物包括第二车辆时,获取第二车辆的行驶速度、转向角、转向灯状态,利用第二车辆的行驶速度、第二车辆的转向角、第二车辆的转向灯状态、第一车辆的行驶速度和第一车辆的路径规划确定第一车辆与第二车辆的碰撞系数。具体的,将第二车辆的行驶速度、第二车辆的转向角、第二车辆的转向灯状态、第一车辆的行驶速度和第一车辆的路径规划输入至碰撞系数计算模型中,得到第一车辆与第二车辆的碰撞系数。可选的,还可以获取第二车辆的车牌信息,从云端存储的数据中查找第二车辆的历史驾驶行为,确定第一车辆与第二车辆的碰撞系数。历史驾驶行为可以包括加塞次数、超车次数、撞车次数等。第一车辆与云端可以进行通信。
在第一目标物包括第一行人时,获取第一行人的行走速度和行走方向,根据第一行人的行走速度和行走方向,确定第一行人与第一车辆的碰撞系数。可选的,对第一行人进行面部识别,从云端存储的数据中查找第一行人的历史行人信息。历史行人信息包括闯红灯数、出事故次数等。根据历史行人信息确定第一行人与第一车辆的碰撞系数。
在本实施例中,地貌特征可以包括荒漠、山地、森林、平原、丘陵、草原等。地貌特征可以根据第一车辆所在位置查找云端存储的数据确定,或者从预设地图中确定。地貌特征还可以通过第一车辆上安装的摄像头采集的图像确定。
在本实施例中,第一信息可以为实时获得的信息,还可以是按照预设时间间隔采集第一车辆的外部环境生成的信息。
S102,基于所述第一信息,确定所述第一车辆所在道路上需要增强显示的第二目标物。
在本实施例中,根据第一信息可以确定第二目标物,随着第一信息的不同,第二目标物可能不同。第二目标物可以包括车道线、围挡、路肩、楼宇、行人、车辆等。每种第一信息对应一组需要增强显示的物体,例如,地貌特征对应的需要增强显示的物体包括:山地、森林等,拥堵系数对应的需要增强显示的物体包括楼宇、行人、车辆等。在确定第一信息对应的增强显示的物体后,确定第一车辆所在道路上是否包括需要增强显示的物体。若第一车辆所在道路上包括需要增强显示的物体,则第一车辆所在道路上包括的需要增强显示的物体为第二目标物。
具体的,确定第一车辆所在道路上是否包括需要增强显示的物体的方法包括:
获取第二信息,所述第二信息包括所述第一车辆上安装的前视摄像头采集的第一图像、安装在所述第一车辆上的雷达采集的信息和预设地图中的至少一种;基于所述第二信息,确定所述道路上是否存在需要增强显示的物体。
雷达可以检测第一车辆所在道路上的行人、车辆、指示灯、建筑等信息。预设地图中可以包括车道线、车道走向、建筑位置等。
使用第一图像、预设地图和雷达采集的信息共同确定道路上是否存在需要增强显示的物体,可以使确定的道路上包括的需要增强显示的物体更准确。
根据不同的视觉感知信息,使用的第二信息可以不同。具体的,若在视觉感知信息满足预设要求时,根据第一图像确定第一车辆所在道路上是否存在需要增强显示的物体。视觉感知信息满足预设要求包括:能见度大于第一阈值和/或光照强度大于第二阈值。视觉感知信息不满足预设要求包括:能见度小于或等于第一阈值和/或光照强度小于或等于第二阈值。
若视觉感知信息不满足预设要求,根据安装在所述第一车辆上的雷达采集的信息和/或预设地图,确定第一车辆所在道路上是否存在需要增强显示的物体。
具体的,根据第一信息自动匹配显示模式,不同的第一信息匹配不同的显示模式,例如,视觉感知信息与视觉感知显示模式相匹配,地貌特征与地理环境显示模式相匹配,行驶状态与行驶状态显示模式相匹配。在确定显示模式后,展示显示模式。展示显示模式的作用是为了让用户确定是否使用该显示模式。在检测到第一指令后,获取显示模式包括的物体。第一指令表征用户确定使用该显示模式。第一指令可以为用户点击确定按键后生成的指令。
各个显示模式中包括需要增强显示的物体。在第一车辆所在道路上查找显示模式对应的物体,若第一车辆所在道路上存在显示模式包括的物体,则第一车辆所在道路上存在的显示模式中的物体为第二目标物。
作为举例,若视觉感知显示模式包括的需要增强显示的物体为车道线、路肩。第一车辆所在道路中包括车道线,不包括路肩,则第一车辆所在道路中的车道线为第二目标物。不同的显示模式具有不同的作用,例如,视觉感知显示模式的作用为重建道路显示信息,地理环境显示模式的作用为丰富道路显示信息,行驶状态显示模式的作用为优化道路显示信息。
在本实施例中,显示模式还可以包括用户定义的显示模式,例如,简约模式、音乐模式、导航模式和性能模式等。简约模式可以包括显示车速和耗油量等。音乐模式可以包括显示正在播放的歌曲、歌词,以及该歌曲对应的动画等。导航模式可以包括导航路线、距离目的地的剩余时间等。性能模式可以包括档位信息、加速度信息等。
S103,生成所述第二目标物的增强现实图像。
S104,将所述第二目标物的增强现实图像投射在第一车辆的前挡风玻璃上。
在本实施例中,通过HUD设备将所述第二目标物的增强现实图像投射在所述车辆的前挡风玻璃上,以使得用户在驾驶车辆时,透过前挡风玻璃可以看到第二目标物的增强现实图像,且第二目标物的增强显示图像与第二目标物相匹配。在通过HUD设备投射增强显示图像时,需要根据HUD的分辨率、显示尺寸等信息进行投射。
作为举例,如图3所示,图中HUD设备20将第二目标物的增强现实图像投射在前挡风玻璃40上,用户透过前挡风玻璃40看到道路上的第二目标物60,以及第二目标物60的增强现实图像50。用户看到的第二目标物60和第二目标物的增强现实图像50相匹配。
本申请实施例中,先获取第一信息,第一信息包括第一车辆所在道路的视觉感知信息、第一车辆的行驶状态和第一车辆所在区域的地貌特征中的至少一种;基于获取到的第一信息,确定道路上需要增强显示的第二目标物,生成第二目标物的增强现实图像,最后将第二目标物的增强现实图像投射在前挡风玻璃上。相较于现有技术中仅显示车速、导航线等简单信息,本申请根据第一信息确定第二目标物,第二目标物随着第一信息的不同而变化,使增强现实抬头显示的内容丰富、可变,提高了车辆的智能化程度,同时增加了用户体验的趣味性。
如图4所示,在一种可能的实现方式中,所述第一信息包括所述视觉感知信息,所述视觉感知信息包括所述能见度,基于所述视觉感知信息,确定所述第一车辆所在道路上需要增强显示的第二目标物,包括:
S201,在所述能见度小于或等于第一阈值时,确定所述能见度所在的区间。
在本实施例中,第一阈值可以根据需要进行设置。预先设置不同的区间,每个区间对应一组需要增强显示的物体。例如,预先设置的区间包括第一区间[0,20]、第二区间(20,50]和第三区间(50,100]。
S202,确定所述能见度所在的区间对应的第一物体,所述道路上存在所述第一物体时,所述道路上存在的所述第一物体为所述第二目标物。
在本实施例中,在能见度在第三区间内时,第三区间对应的第一物体包括车道线、路肩和道路围栏中的一种或多种。
在能见度在第二区间内时,第二区间对应的第一物体包括车道线、路肩和道路围栏中的一种或多种,以及人行横道、绿化带、电线杆、道路指示灯中的一种或多种。
在能见度在第一区间内时,第一区间对应的第一物体包括车道线、路肩和道路围栏中的一种或多种,人行横道、绿化带、电线杆、道路指示灯中的一种或多种,以及在第一车辆前方、与第一车辆的距离小于或等于第四距离的行人和/或车辆。
In this embodiment, after the first object corresponding to the interval in which the visibility falls has been determined, it is necessary to determine whether the first object exists on the road. Specifically, for the method of determining whether the first object exists on the road, refer to the method described in step S102 above for determining whether objects that need enhanced display are present on the road on which the first vehicle is located, which is not repeated here.
In this embodiment, if the visibility is greater than the first threshold, an augmented reality image of basic information is displayed; the basic information may include vehicle speed, acceleration, navigation route, and the like.
In the embodiment of the present application, the second target object can be determined from the interval in which the visibility falls; therefore, as the visibility changes, the second target object changes with it. The lower the visibility, the more content is displayed, which provides the user with more information, makes driving easier, and helps ensure driving safety.
In a possible implementation, if the second target object includes a first object on the road on which the first vehicle is located, the image of the first object is reconstructed according to the geometric image (size and/or direction) of that first object on the road. The reconstructed image of the first object is the augmented reality image of the first object. Reconstructing the image of the first object means redrawing the image of the first object along its geometric image; the redrawn image of the first object is identical to the geometric image of the first object. The geometric image of the first object is obtained from the preset map and the information collected by the radar.
In a possible implementation, when the first information includes the visual perception information, the visual perception information includes the illumination intensity, and the illumination intensity is less than or equal to a second threshold, the second target object includes lane lines within a target area, the target area being an area in front of the first vehicle that is not covered by the illumination of the headlights of the first vehicle.
In this embodiment, the second threshold can be set as required. For example, environments in which the illumination intensity is less than or equal to the second threshold may include dark tunnels, overcast or rainy weather, rural roads, and the like.
The lane lines within the target area may be the lane lines of the lane in which the first vehicle is located within the target area, and may further include the lane lines of the lanes on both sides of that lane. The lanes on both sides of the lane in which the first vehicle is located are the lanes adjacent to it.
When the illumination intensity is less than or equal to the second threshold, the road in the area not reached by the headlights is reconstructed, which can guide the vehicle safely out of the dark area and improve the user's driving safety. Specifically, the road not reached by the headlights is reconstructed based on the preset map, which includes the lane lines, direction and other information of the road on which the first vehicle is located.
In this embodiment, if the illumination intensity is greater than the second threshold, an augmented reality image of basic information is displayed.
In a possible implementation, when the first information includes the terrain features, the terrain features are preset terrain, and the first vehicle is in an autonomous driving state, the second target object includes a third target object, the third target object including at least one of detected mountains, forests and deserts whose distance from the first vehicle is less than or equal to a second distance.
The driving state of the first vehicle may include an autonomous driving state and a manual driving state; the autonomous driving state is a state in which the vehicle travels by itself without human operation, and the manual driving state is a state in which human operation is required for the vehicle to travel. The driving state of the first vehicle can be determined by detecting whether the autonomous driving function of the first vehicle is on or off: if the autonomous driving function is on, the first vehicle is in the autonomous driving state; if it is off, the first vehicle is in the manual driving state. The autonomous driving function being on may correspond to the autonomous driving start button being pressed, so whether the function is on can be determined by detecting the state of that button.
In this embodiment, the method of determining the third target object is similar to that of the first target object; refer to the method of determining the first target object, which is not repeated here.
In the embodiment of the present application, when the first vehicle is in the autonomous driving state, richer content is displayed for the user, which can relieve fatigue and improve a monotonous driving environment.
In a possible implementation, when the first information includes the terrain features, the terrain features are preset terrain, the road on which the first vehicle is located is an expressway, and the first vehicle is in the autonomous driving state, the second target object includes the third target object, the third target object including at least one of detected mountains, forests and deserts whose distance from the first vehicle is less than or equal to the second distance.
In a possible implementation, when the first information includes the traveling state, the traveling state includes the congestion coefficient, and the congestion coefficient is greater than or equal to a third threshold, the second target object includes a fourth target object, the fourth target object being at least one of a detected second pedestrian, building and third vehicle whose distance from the first vehicle is less than or equal to a third distance.
In a possible implementation, when the first information includes the traveling state, the traveling state includes the collision coefficient, and the collision coefficient is greater than or equal to a fourth threshold, the second target object includes those first target objects whose collision coefficient is greater than or equal to the fourth threshold.
In this embodiment, both the third threshold and the fourth threshold can be set as required. The method of determining the fourth target object is similar to that of the first target object and is not repeated here.
As an example, if the first target objects include A, B and C, the collision coefficient of A is a, the collision coefficient of B is b and the collision coefficient of C is c, where a and b are greater than the fourth threshold and c is less than the fourth threshold, then the second target object includes A and B.
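Expressed in Python, the thresholding in this example is a one-line filter; the names and numeric values below are illustrative only.

```python
def filter_by_collision_coefficient(first_targets: dict, fourth_threshold: float) -> list:
    """Keep only the first target objects whose collision coefficient reaches the threshold."""
    return [name for name, coeff in first_targets.items() if coeff >= fourth_threshold]

# Mirrors the example above: a and b exceed the fourth threshold, c does not
second_target = filter_by_collision_coefficient({"A": 0.8, "B": 0.7, "C": 0.2}, 0.5)  # -> ["A", "B"]
```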
In this embodiment, when the collision coefficient is greater than or equal to the fourth threshold, the first target objects whose collision coefficient is greater than or equal to the fourth threshold are displayed in an enhanced manner, which helps remind the user to take evasive action and keep a safe distance, thereby keeping the user safe while driving.
In a possible implementation, the implementation of step S103 may include:
S1031: obtaining a geometric image of the second target object according to the second information.
Specifically, if the second information includes the first image, the geometric image of the second target object can be determined from the first image; for example, when the second target object is a lane line, the direction, size and position of the lane line can be obtained from the first image.
If the second information includes the information collected by the radar and the preset map, the geometric image of the second target object can be generated from the information collected by the radar and the preset map.
S1032: obtaining a rendered image matching the second target object.
In this embodiment, how the rendered image of the second target object is obtained depends on how the second target object was determined. If the second target object was determined from the visual perception information, the rendered image is obtained by reconstructing the outline of the second target object according to its geometric image; the reconstructed outline of the second target object is the rendered image matching the second target object.
As an example, if the second target object is a lane line determined from the visual perception information, the outline of the lane line is reconstructed according to the geometric image of the lane line (the geometric image including the direction, position, etc. of the lane line).
If the second target object was determined from the traveling state and the terrain features, the rendered image of the second target object may be obtained by retrieving a matching rendered image from pre-stored images.
The pre-stored images may include text images, cartoon character images, cartoon animal images, landscape images, mythological character images, and the like.
S1033: fusing the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object.
In a possible implementation, when the geometric image of the second target object and/or the rendered image matching the second target object is a color image, before step S1033 the above method may further include:
acquiring the display characteristics of the HUD device mounted on the first vehicle, wherein the display characteristics include monochrome display or color display.
Step S1033 may then specifically include:
if the display characteristic of the HUD device is color display, fusing the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object;
if the display characteristic of the HUD device is monochrome display, fusing the geometric image of the second target object with the rendered image matching the second target object, and performing grayscale processing on the fused image to obtain the augmented reality image of the second target object.
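As an illustration of S1033 and the monochrome fallback, the following Python sketch uses OpenCV alpha blending as a stand-in for whatever fusion the HUD pipeline actually performs; the blending weight and the choice of OpenCV are assumptions made for this example.

```python
import cv2
import numpy as np

def build_ar_image(geometric_bgr: np.ndarray, rendered_bgr: np.ndarray,
                   hud_supports_color: bool, alpha: float = 0.6) -> np.ndarray:
    """Fuse the geometric image with the matching rendered image; if the HUD is
    monochrome, grayscale the fused result. Both inputs are assumed to be BGR
    images of the same size."""
    fused = cv2.addWeighted(geometric_bgr, alpha, rendered_bgr, 1.0 - alpha, 0.0)
    if hud_supports_color:
        return fused                                   # color HUD: use the fused image directly
    return cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY)     # monochrome HUD: grayscale processing
```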
In a possible implementation, when the second target object includes the third target object and the third target object includes mountains, the rendered image of the mountains includes a flowing water image.
In a possible implementation, when the second target object includes the third target object and the third target object includes the forest, the rendered image of the forest includes a flying bird image.
In a possible implementation, when the second target object includes the third target object and the third target object includes the desert, the rendered image of the desert includes an oasis image.
In a possible implementation, when the second target object includes the fourth target object and the fourth target object includes the second pedestrian, the rendered image of the second pedestrian includes a cartoon character image.
In a possible implementation, when the second target object includes the fourth target object and the fourth target object includes the building, the rendered image of the building includes a flying saucer image or a swimming fish image.
In a possible implementation, when the second target object includes the fourth target object and the fourth target object includes the third vehicle, the rendered image of the third vehicle includes a cartoon car image.
In a possible implementation, when the second target object includes the first target object, the rendered image of the first target object includes a warning image.
The warning image may include an animated image or a text image, etc.
As an example, the part marked with a triangle in Fig. 5 is a warning image, which reminds the user to pay attention to the vehicle below the warning mark.
Cartoon character 1 and cartoon character 2 marked in Fig. 6 are cartoon character images added to pedestrians, and flying fish image 3 is a flying fish image added between buildings; these can relieve the user's driving fatigue and make driving more enjoyable.
In a possible implementation, the above method may further include:
acquiring the driving status of a fifth vehicle whose distance from the first vehicle is less than a preset distance, the driving status including whether the fifth vehicle is crossing a lane line; if the fifth vehicle is crossing a lane line, determining the duration for which the fifth vehicle has been continuously crossing the lane line; and if that duration is greater than a preset duration, sending alarm information.
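A small Python sketch of this continuous-crossing timer is shown below; the use of a monotonic clock and the class layout are implementation assumptions, not requirements of this application.

```python
import time
from typing import Optional

class LaneCrossingMonitor:
    """Track how long a nearby (fifth) vehicle has been continuously crossing a
    lane line and report when a preset duration is exceeded."""

    def __init__(self, preset_duration_s: float):
        self.preset_duration_s = preset_duration_s
        self.crossing_since: Optional[float] = None

    def update(self, is_crossing: bool, now: Optional[float] = None) -> bool:
        """Feed the latest observation; return True when alarm information should be sent."""
        now = time.monotonic() if now is None else now
        if not is_crossing:
            self.crossing_since = None          # crossing interrupted: reset the timer
            return False
        if self.crossing_since is None:
            self.crossing_since = now           # crossing just started
        return (now - self.crossing_since) > self.preset_duration_s
```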
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Corresponding to the augmented reality head-up display method described in the above embodiments, Fig. 7 shows a structural block diagram of the augmented reality head-up display apparatus provided by an embodiment of the present application; for convenience of description, only the parts related to the embodiment of the present application are shown.
Referring to Fig. 7, the apparatus 300 may include an information acquisition module 310, a target determination module 320, an image generation module 330 and an image display module 340.
The information acquisition module 310 is configured to acquire first information, the first information including at least one of visual perception information of the road on which a first vehicle is located, the traveling state of the first vehicle and the terrain features of the area in which the first vehicle is located, the visual perception information including visibility and/or illumination intensity, the traveling state including a congestion coefficient and/or a collision coefficient of a collision between the first vehicle and a first target object, the first target object being a detected second vehicle and/or first pedestrian whose distance from the first vehicle is less than or equal to a first distance.
The target determination module 320 is configured to determine, based on the first information, a second target object that needs enhanced display on the road on which the first vehicle is located.
The image generation module 330 is configured to generate an augmented reality image of the second target object.
The image display module 340 is configured to project the augmented reality image of the second target object onto the front windshield of the first vehicle.
In a possible implementation, the first information includes the visual perception information, the visual perception information includes the visibility, and the target determination module 320 may be specifically configured to:
determine, when the visibility is less than or equal to a first threshold, the interval in which the visibility falls; and
determine a first object corresponding to the interval in which the visibility falls, wherein when the first object exists on the road, the first object existing on the road is the second target object.
In a possible implementation, when the first information includes the visual perception information, the visual perception information includes the illumination intensity, and the illumination intensity is less than or equal to a second threshold, the second target object includes lane lines within a target area, the target area being an area in front of the first vehicle that is not covered by the illumination of the headlights of the first vehicle.
In a possible implementation, when the first information includes the terrain features, the terrain features are preset terrain, and the first vehicle is in an autonomous driving state, the second target object includes a third target object, the third target object including at least one of detected mountains, forests and deserts whose distance from the first vehicle is less than or equal to a second distance.
In a possible implementation, when the first information includes the traveling state, the traveling state includes the congestion coefficient, and the congestion coefficient is greater than or equal to a third threshold, the second target object includes a fourth target object, the fourth target object being at least one of a detected second pedestrian, building and third vehicle whose distance from the first vehicle is less than or equal to a third distance;
when the first information includes the traveling state, the traveling state includes the collision coefficient, and the collision coefficient is greater than or equal to a fourth threshold, the second target object includes those first target objects whose collision coefficient is greater than or equal to the fourth threshold.
In a possible implementation, the target determination module 320 may be specifically configured to:
acquire second information, the second information including at least one of a first image captured by a front-view camera mounted on the first vehicle, information collected by a radar mounted on the first vehicle and a preset map; and
determine, based on the second information, whether the first object exists on the road. Correspondingly, the image generation module 330 may be specifically configured to:
obtain the geometric image of the second target object according to the second information;
obtain a rendered image matching the second target object; and
fuse the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object.
In a possible implementation, when the geometric image of the second target object and/or the rendered image matching the second target object is a color image, the image generation module 330 may be specifically configured to:
acquire the display characteristics of the HUD device mounted on the first vehicle, wherein the display characteristics include monochrome display or color display;
if the display characteristic of the HUD device is color display, fuse the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object;
if the display characteristic of the HUD device is monochrome display, fuse the geometric image of the second target object with the rendered image matching the second target object; and
perform grayscale processing on the fused image to obtain the augmented reality image of the second target object.
In a possible implementation, when the second target object includes the third target object and the third target object includes mountains, the rendered image of the mountains includes a flowing water image;
when the second target object includes the third target object and the third target object includes the forest, the rendered image of the forest includes a flying bird image;
when the second target object includes the third target object and the third target object includes the desert, the rendered image of the desert includes an oasis image;
when the second target object includes the fourth target object and the fourth target object includes the second pedestrian, the rendered image of the second pedestrian includes a cartoon character image;
when the second target object includes the fourth target object and the fourth target object includes the building, the rendered image of the building includes a flying saucer image or a swimming fish image;
when the second target object includes the fourth target object and the fourth target object includes the third vehicle, the rendered image of the third vehicle includes a cartoon car image;
when the second target object includes the first target object, the rendered image of the first target object includes a warning image.
It should be noted that, because the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules described above is only an example; in practical applications, the above functions can be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only intended to distinguish them from one another and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of the present application further provides a terminal device. Referring to Fig. 8, the terminal device 400 may include at least one processor 410, a memory 420, and a computer program stored in the memory 420 and executable on the at least one processor 410. When executing the computer program, the processor 410 implements the steps in any of the above method embodiments, for example steps S101 to S104 in the embodiment shown in Fig. 2. Alternatively, when executing the computer program, the processor 410 implements the functions of the modules/units in the above apparatus embodiments, for example the functions of modules 310 to 340 shown in Fig. 7. The terminal device 400 may be a device having a component, such as a front windshield, on which an augmented reality image can be displayed, for example an automobile.
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory 420 and executed by the processor 410 to carry out the present application. The one or more modules/units may be a series of computer program segments capable of completing specific functions, the program segments being used to describe the execution process of the computer program in the terminal device 400.
Those skilled in the art can understand that Fig. 8 is only an example of a terminal device and does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, may combine certain components, or may use different components, for example input/output devices, network access devices, buses, and the like.
The processor 410 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 420 may be an internal storage unit of the terminal device, or an external storage device of the terminal device, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card. The memory 420 is used to store the computer program and other programs and data required by the terminal device. The memory 420 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The augmented reality head-up display method provided by the embodiments of the present application can be applied to terminal devices such as computers, tablet computers, notebook computers, netbooks and personal digital assistants (PDAs); the embodiments of the present application do not impose any limitation on the specific type of terminal device.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative; for instance, the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be carried out by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by one or more processors, the computer program implements the steps of the above method embodiments.
Likewise, provided as a computer program product, when the computer program product runs on a terminal device, the terminal device is caused to implement the steps in the above method embodiments.
The computer program includes computer program code, and the computer program code may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application; they should all be included within the protection scope of the present application.

Claims (18)

  1.  An augmented reality head-up display method, characterized by comprising:
    acquiring first information, the first information comprising at least one of visual perception information of a road on which a first vehicle is located, a traveling state of the first vehicle and terrain features of an area in which the first vehicle is located;
    determining, based on the first information, a second target object that needs enhanced display on the road on which the first vehicle is located;
    generating an augmented reality image of the second target object; and
    projecting the augmented reality image of the second target object onto a front windshield of the first vehicle.
  2.  The augmented reality head-up display method according to claim 1, characterized in that the visual perception information comprises visibility and/or illumination intensity, the traveling state comprises a congestion coefficient and/or a collision coefficient of a collision between the first vehicle and a first target object, and the first target object is a detected second vehicle and/or first pedestrian whose distance from the first vehicle is less than or equal to a first distance.
  3.  The augmented reality head-up display method according to claim 2, characterized in that the first information comprises the visual perception information, the visual perception information comprises the visibility, and determining, based on the visual perception information, the second target object that needs enhanced display on the road on which the first vehicle is located comprises:
    when the visibility is less than or equal to a first threshold, determining an interval in which the visibility falls; and
    determining a first object corresponding to the interval in which the visibility falls, wherein when the first object exists on the road, the first object existing on the road is the second target object.
  4.  The augmented reality head-up display method according to claim 2, characterized in that, when the first information comprises the visual perception information, the visual perception information comprises the illumination intensity, and the illumination intensity is less than or equal to a second threshold, the second target object comprises lane lines within a target area, the target area being an area in front of the first vehicle that is not covered by illumination of headlights of the first vehicle.
  5.  The augmented reality head-up display method according to any one of claims 1 to 4, characterized in that, when the first information comprises the terrain features, the terrain features are preset terrain, and the first vehicle is in an autonomous driving state, the second target object comprises a third target object, the third target object comprising at least one of detected mountains, forests and deserts whose distance from the first vehicle is less than or equal to a second distance.
  6.  The augmented reality head-up display method according to claim 2, characterized in that, when the first information comprises the traveling state, the traveling state comprises the congestion coefficient, and the congestion coefficient is greater than or equal to a third threshold, the second target object comprises a fourth target object, the fourth target object being at least one of a detected second pedestrian, building and third vehicle whose distance from the first vehicle is less than or equal to a third distance; and when the first information comprises the traveling state, the traveling state comprises the collision coefficient, and the collision coefficient is greater than or equal to a fourth threshold, the second target object comprises those of the first target objects whose collision coefficient is greater than or equal to the fourth threshold.
  7.  The augmented reality head-up display method according to claim 3, characterized by comprising, after determining the first object corresponding to the interval in which the visibility falls:
    acquiring second information, the second information comprising at least one of a first image captured by a front-view camera mounted on the first vehicle, information collected by a radar mounted on the first vehicle and a preset map; and
    determining, based on the second information, whether the first object exists on the road;
    wherein generating the augmented reality image of the second target object comprises:
    obtaining a geometric image of the second target object according to the second information;
    obtaining a rendered image matching the second target object; and
    fusing the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object.
  8.  The augmented reality head-up display method according to claim 7, characterized in that the first information comprises the visual perception information, and determining, based on the second information, whether the first object exists on the road comprises:
    when the visual perception information meets a preset requirement, determining, according to the first image, whether the first object exists on the road on which the first vehicle is located; and
    if the visual perception information does not meet the preset requirement, determining, according to the information collected by the radar mounted on the first vehicle and/or the preset map, whether the first object exists on the road on which the first vehicle is located.
  9.  The augmented reality head-up display method according to claim 7, characterized in that, when the geometric image of the second target object and/or the rendered image matching the second target object is a color image, before fusing the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object, the method further comprises:
    acquiring display characteristics of a HUD device mounted on the first vehicle, wherein the display characteristics comprise monochrome display or color display;
    wherein fusing the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object comprises:
    if the display characteristic of the HUD device is the color display, fusing the geometric image of the second target object with the rendered image matching the second target object to obtain the augmented reality image of the second target object; and
    if the display characteristic of the HUD device is the monochrome display, fusing the geometric image of the second target object with the rendered image matching the second target object, and
    performing grayscale processing on the fused image to obtain the augmented reality image of the second target object.
  10.  The augmented reality head-up display method according to claim 6, characterized in that, when the second target object comprises the third target object and the third target object comprises mountains, the rendered image of the mountains comprises a flowing water image,
    when the second target object comprises the third target object and the third target object comprises the forest, the rendered image of the forest comprises a flying bird image,
    when the second target object comprises the third target object and the third target object comprises the desert, the rendered image of the desert comprises an oasis image,
    when the second target object comprises the fourth target object and the fourth target object comprises the second pedestrian, the rendered image of the second pedestrian comprises a cartoon character image,
    when the second target object comprises the fourth target object and the fourth target object comprises the building, the rendered image of the building comprises a flying saucer image or a swimming fish image,
    when the second target object comprises the fourth target object and the fourth target object comprises the third vehicle, the rendered image of the third vehicle comprises a cartoon car image, and
    when the second target object comprises the first target object, the rendered image of the first target object comprises a warning image.
  11.  The augmented reality head-up display method according to any one of claims 1 to 10, characterized in that the congestion coefficient is determined by one or more of the vehicle speed of the first vehicle, the acceleration of the first vehicle, the number of braking events of the first vehicle and the travel time of the first vehicle over a preset distance.
  12.  The augmented reality head-up display method according to any one of claims 1 to 10, characterized in that the method further comprises:
    when the first target object comprises a second vehicle, acquiring a traveling speed, a steering angle and a turn-signal state of the second vehicle; and
    determining the collision coefficient between the first vehicle and the second vehicle based on the traveling speed of the second vehicle, the steering angle of the second vehicle, the turn-signal state of the second vehicle, the traveling speed of the first vehicle and a path plan of the first vehicle.
  13.  The augmented reality head-up display method according to any one of claims 1 to 12, characterized in that the method further comprises:
    when the first target object comprises a first pedestrian, acquiring a walking speed and a walking direction of the first pedestrian; and
    determining the collision coefficient between the first pedestrian and the first vehicle according to the walking speed and walking direction of the first pedestrian.
  14.  The augmented reality head-up display method according to claim 1, characterized in that determining, based on the first information, the second target object that needs enhanced display on the road on which the first vehicle is located comprises:
    automatically matching a display mode according to the first information, wherein different first information matches different display modes;
    acquiring objects included in the display mode, wherein a first instruction indicates that a user confirms use of the display mode; and
    if objects included in the display mode exist on the road on which the first vehicle is located, taking the objects of the display mode that exist on the road on which the first vehicle is located as the second target object.
  15.  The augmented reality head-up display method according to any one of claims 1 to 14, characterized in that the method further comprises:
    acquiring a driving status of a fifth vehicle whose distance from the first vehicle is less than a preset distance;
    if lane-line crossing exists in the driving status of the fifth vehicle, determining a duration for which the fifth vehicle continuously crosses the lane line; and
    if the duration for which the fifth vehicle continuously crosses the lane line is greater than a preset duration, sending alarm information.
  16.  An augmented reality head-up display apparatus, characterized by comprising:
    an information acquisition module, configured to acquire first information, the first information comprising at least one of visual perception information of a road on which a first vehicle is located, a traveling state of the first vehicle and terrain features of an area in which the first vehicle is located;
    a target determination module, configured to determine, based on the first information, a second target object that needs enhanced display on the road on which the first vehicle is located;
    an image generation module, configured to generate an augmented reality image of the second target object; and
    an image display module, configured to project the augmented reality image of the second target object onto a front windshield of the first vehicle.
  17.  A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when executing the computer program, the processor implements the augmented reality head-up display method according to any one of claims 1 to 15.
  18. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the augmented reality head-up display method according to any one of claims 1 to 15 is implemented.
PCT/CN2023/086569 2022-04-18 2023-04-06 Augmented reality head-up display method and apparatus, terminal device and storage medium WO2023202384A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210404377.8A CN115220227A (zh) 2022-04-18 2022-04-18 Augmented reality head-up display method and apparatus, and terminal device
CN202210404377.8 2022-04-18

Publications (1)

Publication Number Publication Date
WO2023202384A1 true WO2023202384A1 (zh) 2023-10-26

Family

ID=83606920

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/086569 WO2023202384A1 (zh) 2022-04-18 2023-04-06 Augmented reality head-up display method and apparatus, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN115220227A (zh)
WO (1) WO2023202384A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115220227A (zh) * 2022-04-18 2022-10-21 长城汽车股份有限公司 Augmented reality head-up display method and apparatus, and terminal device
CN115540898B (zh) * 2022-12-02 2023-04-28 泽景(西安)汽车电子有限责任公司 Method and apparatus for marking a preceding vehicle, head-up display and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106915302A (zh) * 2015-12-24 2017-07-04 Lg电子株式会社 Display device for vehicle and control method thereof
CN107554422A (zh) * 2016-07-01 2018-01-09 华为终端(东莞)有限公司 Automobile safety warning device and automobile safety warning method
CN108515909A (zh) * 2018-04-04 2018-09-11 京东方科技集团股份有限公司 Automobile head-up display system and obstacle prompting method thereof
CN113063418A (zh) * 2020-01-02 2021-07-02 三星电子株式会社 Method and device for displaying 3D augmented reality navigation information
CN113401054A (zh) * 2021-07-22 2021-09-17 上汽通用五菱汽车股份有限公司 Vehicle anti-collision method, vehicle and readable storage medium
CN115220227A (zh) * 2022-04-18 2022-10-21 长城汽车股份有限公司 Augmented reality head-up display method and apparatus, and terminal device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111886154A (zh) * 2018-03-12 2020-11-03 三菱电机株式会社 Driving assistance device, driving assistance method and driving assistance program
CN113474787A (zh) * 2019-01-22 2021-10-01 阿达姆认知科技有限公司 Detection of a driver's cognitive state
CN109668575A (zh) * 2019-01-29 2019-04-23 苏州车萝卜汽车电子科技有限公司 Navigation information processing method, apparatus, device and system for an augmented reality head-up display device
CN110775063B (zh) * 2019-09-25 2021-08-13 华为技术有限公司 Information display method and apparatus for a vehicle-mounted device, and vehicle
CN113109941B (zh) * 2020-01-10 2023-02-10 未来(北京)黑科技有限公司 Head-up display system with layered imaging
CN113183758A (zh) * 2021-04-28 2021-07-30 昭通亮风台信息科技有限公司 Augmented-reality-based driving assistance method and system


Also Published As

Publication number Publication date
CN115220227A (zh) 2022-10-21

Similar Documents

Publication Publication Date Title
WO2023202384A1 (zh) Augmented reality head-up display method and apparatus, terminal device and storage medium
US11854393B2 (en) Road hazard communication
JP6796798B2 (ja) Event prediction system, event prediction method, program, and moving body
JP2021530820A (ja) Three-dimensional augmented reality including vehicles
US11996018B2 (en) Display control device and display control program product
GB2536770A (en) Virtual sensor testbed
GB2536549A (en) Virtual autonomous response testbed
CN110789533B (zh) Data presentation method and terminal device
WO2021227520A1 (zh) Display method and apparatus for a visualization interface, electronic device and storage medium
CN111477030B (zh) Vehicle cooperative risk-avoidance method, vehicle-side platform, cloud platform and storage medium
JP2008269178A (ja) Traffic information display device
US7286930B2 (en) Ghost following
CN114385005B (zh) Personalized virtual test-drive device, method and storage medium
CN110675476A (zh) Method and device for intuitively conveying autonomous driving scenario definitions
CN115298520A (zh) Display device, display method and vehicle
US11392738B2 (en) Generating a simulation scenario
JP7176098B2 (ja) Queue detection and response to queues for autonomous vehicles
JPH11272158A (ja) Road traffic system evaluation simulation device
KR20230009338A (ko) Method, apparatus and system for processing vehicle-infrastructure cooperation information
KR102625688B1 (ko) Mixed-reality-based display device and route guidance system
US20210209949A1 (en) Roadside apparatus and vehicle-side apparatus for road-to-vehicle communication, and road-to-vehicle communication system
US20230316900A1 (en) Reproduction system, reproduction method, and storage medium
Silvéria Virtual windshields: merging reality and digital content to improve the driving experience
JP7353323B2 (ja) Verification device, method and program
CN118334934A (zh) Simulated driving method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23791040

Country of ref document: EP

Kind code of ref document: A1