WO2024037363A1 - Display method, electronic device, storage medium and program product - Google Patents

Display method, electronic device, storage medium and program product

Info

Publication number
WO2024037363A1
WO2024037363A1 · PCT/CN2023/111325 · CN2023111325W
Authority
WO
WIPO (PCT)
Prior art keywords
icon
screen
pedestrian
target obstacle
display
Prior art date
Application number
PCT/CN2023/111325
Other languages
English (en)
French (fr)
Inventor
Liu Min (刘敏)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024037363A1 publication Critical patent/WO2024037363A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units

Definitions

  • the present application relates to the field of assisted driving technology, and in particular, to a display method, electronic device, storage medium and program product.
  • ADAS (Advanced Driving Assistance System) uses sensors installed on the vehicle, such as millimetre-wave radar, lidar, monocular and binocular cameras, and satellite navigation, to collect data on the surrounding environment, identify, detect and track static and dynamic objects, and combine the results with navigation map data for systematic calculation and analysis, so that drivers can be aware of possible dangers in advance, effectively increasing driving safety.
  • ADAS usually includes Lane Departure Warning System (LDWS), Adaptive Cruise Control (ACC), Forward Collision Warning (FCW), etc.
  • ADAS information can be displayed on the center console or instrument.
  • ADAS information can also be projected onto the front windshield through Head Up Display (HUD) technology or Augmented Reality Head Up Display (AR-HUD) technology, so that the image is focused on the HUD virtual screen in front of the front windshield.
  • HUD Head Up Display
  • AR-HUD Augmented Reality Head Up Display
  • Only content within the screen size can be displayed; content that exceeds the screen size, that is, content outside the screen's display range, cannot be displayed on the screen.
  • an embodiment of the present application provides a display method, which is applied to an electronic device.
  • the electronic device is a vehicle or is provided in a vehicle.
  • the method includes: obtaining the position of a first target obstacle and the display range of the screen of the electronic device, wherein the position of the first target obstacle is outside the display range of the screen; determining a first display position of a first preset warning icon on the screen according to the position of the first target obstacle and the display range of the screen, wherein the first preset warning icon is used to prompt the first target obstacle; and displaying the first preset warning icon on the screen according to the first display position.
  • in this way, the first display position of the first preset warning icon on the screen is determined based on the position of the first target obstacle and the display range of the screen, and the first preset warning icon, which is used to prompt the first target obstacle, is displayed on the screen at that position, so that an obstacle outside the display range of the screen can still be indicated on the screen.
  • the display position of the icon on the screen thus directly provides the driver with intuitive spatial position information about target obstacles outside the display range of the screen.
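The determination of the first display position can be pictured with a minimal sketch. Assuming the obstacle position and the screen's display range are both expressed in a common 2D plan-view coordinate system (the function and coordinate names below are illustrative, not taken from the application), an off-screen point is clamped to the nearest point on the screen rectangle:

```python
# Minimal sketch, assuming plan-view 2D coordinates for both the obstacle
# and the rectangular display range; all names here are illustrative.
def icon_position(obstacle_xy, screen_min, screen_max):
    """Clamp an off-screen point onto the screen rectangle's border,
    giving a display position that points toward the obstacle."""
    x = min(max(obstacle_xy[0], screen_min[0]), screen_max[0])
    y = min(max(obstacle_xy[1], screen_min[1]), screen_max[1])
    return (x, y)

# An obstacle to the left of and below the screen maps to its bottom-left corner.
print(icon_position((-3.0, -1.0), (0.0, 0.0), (8.0, 4.0)))  # (0.0, 0.0)
```

The clamped point lies on the screen border closest to the obstacle, which is what lets the icon convey the obstacle's direction.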
  • the first preset warning icon includes a first icon and a second icon.
  • the first icon is an indicator icon.
  • the indicator icon is used to indicate the direction of the first target obstacle.
  • the second icon corresponds to the first target obstacle.
  • because the first preset warning icon includes an indicator icon and its second icon corresponds to the first target obstacle, information about a first target obstacle outside the display range of the screen can be displayed on the screen, and the direction of that obstacle can be indicated in real time.
  • the display range of the screen is the size of the visual range that can be seen through the screen.
  • This application defines the display range of the screen as the size of the visual range that can be seen through the screen, so that information about a first target obstacle outside that visual range can still be displayed on the screen.
  • before determining the first display position of the first preset warning icon on the screen according to the position of the first target obstacle and the display range of the screen, the method further includes: acquiring information on a plurality of first target obstacles, the information including the distances between the first target obstacles, the movement direction of each first target obstacle, and the type of each first target obstacle; if the information on the plurality of first target obstacles meets a preset condition, determining the type of the plurality of first target obstacles to be a group type; and determining the first preset warning icon according to the group type, wherein the first preset warning icon is a preset group icon.
  • This application determines these first target obstacles as a group if the information of multiple first target obstacles meets the preset conditions, and displays the corresponding group icon.
  • the group icon is used to remind the driver of the distance relationship, movement direction and type between multiple first target obstacles as a group, and can also remind the driver of the spatial location information of this group.
  • the preset group icon is a second icon with an added subscript value, where the subscript value is the number of first target obstacles; alternatively, the preset group icon is a crowd icon.
  • the application may thus represent a group either by a second icon with a subscript value or by a crowd icon.
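The grouping decision described above can be sketched as follows. The threshold, field names, and exact condition here are assumptions for illustration; the application only states that a "preset condition" on distances, movement directions, and types is checked:

```python
# Hedged sketch of the "preset condition" grouping step: names, the
# threshold, and the exact condition are illustrative assumptions.
def is_group(obstacles, max_gap=2.0):
    """Treat obstacles as one group if they share a type and a movement
    direction and every pairwise distance is within max_gap metres."""
    if len(obstacles) < 2:
        return False
    if len({o["type"] for o in obstacles}) > 1:
        return False
    if len({o["direction"] for o in obstacles}) > 1:
        return False
    pts = [o["pos"] for o in obstacles]
    return all(
        ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= max_gap
        for i, a in enumerate(pts) for b in pts[i + 1:]
    )

peds = [
    {"type": "pedestrian", "direction": "left", "pos": (0.0, 0.0)},
    {"type": "pedestrian", "direction": "left", "pos": (1.0, 0.5)},
]
if is_group(peds):
    # preset group icon: second icon plus a subscript with the member count
    icon = {"icon": "pedestrian", "subscript": len(peds)}
print(icon)
```

When the condition holds, the group is represented by one icon with a subscript equal to the member count, matching the subscript variant of the preset group icon described above.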
  • displaying the first preset warning icon on the screen according to the first display position includes: if the plurality of first target obstacles are multiple separate individuals and the first icons of their first preset warning icons overlap, displaying on the screen, according to the first display position, a combination icon composed of the multiple first preset warning icons, wherein the combination icon includes one first icon, and the multiple second icons in the combination icon are arranged according to the actual spatial positions of the first target obstacles.
  • when the first icons of the first preset warning icons corresponding to multiple individual first target obstacles overlap, this application arranges the multiple first preset warning icons according to the actual spatial positions of the first target obstacles within a single combination icon.
  • the combination icon can thus prompt the driver with the actual spatial position information of the multiple first target obstacles while avoiding mutual interference between icons.
  • displaying the first preset warning icon on the screen according to the first display position includes: if the plurality of first target obstacles are multiple separate individuals and the first icons of their first preset warning icons do not overlap but the second icons overlap, displaying on the screen, according to the first display position, multiple first preset warning icons separated from each other according to the actual spatial positions of the first target obstacles.
  • This application can thus display multiple first preset warning icons separately when their first icons do not overlap but their second icons do, avoiding interference between icons.
  • displaying the first preset warning icon on the screen according to the first display position includes: displaying a directional first preset warning icon on the screen according to the first display position; the method further includes: when the position of the first target obstacle is within the display range of the screen, determining a second display position of the first preset warning icon on the screen based on the position of the first target obstacle, and displaying a non-directional first preset warning icon at the second display position.
  • this application uses the change in directivity to express whether the first target obstacle in real space is within or outside the display range of the screen, thereby expressing the positional relationship between the first target obstacle and the vehicle in real space.
  • displaying the first preset warning icon on the screen according to the first display position includes: when the distance between the first target obstacle and the vehicle is a first distance, displaying the first preset warning icon in a first color on the screen according to the first display position; when the distance between the first target obstacle and the vehicle is a second distance, displaying the first preset warning icon in a second color on the screen according to the first display position.
  • This application uses changes in the color of the first preset warning icon to express the distance between the first target obstacle and the vehicle in the real space.
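As a sketch of this colour rule: the thresholds and colour choices below are illustrative assumptions, since the application only speaks of a "first distance"/"first color" and a "second distance"/"second color":

```python
# Illustrative thresholds and colours; the application itself only names a
# first distance/colour and a second distance/colour without values.
def icon_color(distance_m, near=10.0, far=30.0):
    if distance_m <= near:
        return "red"      # first colour: obstacle close to the vehicle
    if distance_m <= far:
        return "yellow"   # second colour: obstacle at a medium distance
    return "white"        # beyond the warning distances

print(icon_color(8.0), icon_color(20.0), icon_color(45.0))
```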
  • displaying a first preset warning icon on the screen according to the first display position includes: when the first target obstacle is moving, displaying a dynamic first preset on the screen according to the first display position. Alert icon.
  • This application expresses the motion state of the first target obstacle in the real space through a dynamic first preset warning icon.
  • displaying the first preset warning icon on the screen according to the first display position includes: displaying, according to the first display position, a first preset warning icon with a first movement direction, where the first movement direction is determined based on the second movement direction of the first target obstacle. This application expresses the movement direction of the first target obstacle in real space through the movement direction of the first preset warning icon.
  • obtaining the position of the first target obstacle and the display range of the screen of the electronic device includes: obtaining a first plan view of the first target obstacle and the display range of the screen;
  • determining the first display position of the first preset warning icon on the screen according to the position of the first target obstacle and the display range of the screen includes: determining the first display position based on the positions of the first target obstacle and the display range of the screen in the first plan view.
  • This application obtains the position of the first target obstacle and the display range of the screen of the electronic device through a first plan view of the two, and determines the first display position of the first preset warning icon on the screen accordingly, so that the first display position can be determined for a first target obstacle that is outside the display range of the screen in the first plan view.
  • before obtaining the first plan view of the first target obstacle and the display range of the screen, the method further includes: obtaining first spatial coordinates of the first target obstacle and second spatial coordinates of the screen;
  • the first spatial coordinate is the first spatial coordinate under the human eye coordinate system
  • the second spatial coordinate is the second spatial coordinate under the human eye coordinate system
  • obtaining the first plan view of the first target obstacle and the display range of the screen includes: obtaining, according to the first spatial coordinates and the second spatial coordinates, a first plan view of the first target obstacle and the display range of the screen from the human-eye perspective.
  • This application can obtain the first plan view of the first target obstacle and the display range of the screen through the first spatial coordinates of the first target obstacle in the human eye coordinate system and the second spatial coordinates of the screen in the human eye coordinate system, so that the first display position of the first target obstacle can be further determined from the first plan view.
  • obtaining the first plan view from the human-eye perspective according to the first spatial coordinates and the second spatial coordinates includes: obtaining a first plan view of the mark of the first target obstacle and the display range of the screen, wherein, in the first plan view, the center point of the mark of the first target obstacle is outside the display range of the screen; determining the first display position of the first preset warning icon on the screen then includes: determining the first display position based on the positions, in the first plan view, of the marked center point of the first target obstacle and the display range of the screen, where the center point includes the center of gravity or the geometric center.
  • This application determines that the first target obstacle is outside the display range of the screen by checking whether the marked center of gravity or geometric center of the first target obstacle lies outside that range, and determines the first display position of the first preset warning icon from the positions of that center point and the display range of the screen in the first plan view.
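The centre-point test above can be sketched as follows, assuming the obstacle mark is a two-dimensional frame given by two corner points in the first plan view (names and coordinates are illustrative):

```python
def box_center(frame):
    """Geometric centre of a 2D frame given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = frame
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def outside_display_range(center, screen_min, screen_max):
    """True if the centre point lies outside the screen's display range."""
    x, y = center
    return not (screen_min[0] <= x <= screen_max[0]
                and screen_min[1] <= y <= screen_max[1])

c = box_center(((9.0, 1.0), (11.0, 3.0)))                 # centre (10.0, 2.0)
print(outside_display_range(c, (0.0, 0.0), (8.0, 4.0)))   # True: to the right of the screen
```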
  • the method further includes: obtaining the orientation of the second target obstacle relative to the vehicle; and determining the third display position of the second preset warning icon on the screen based on the orientation of the second target obstacle relative to the vehicle.
  • the second preset warning icon is used to prompt the second target obstacle; the second preset warning icon is displayed on the screen according to the third display position.
  • This application displays a second preset warning icon on the screen according to the position of the second target obstacle relative to the vehicle, so as to provide an early warning for the second target obstacle.
  • determining the third display position of the second preset warning icon on the screen based on the orientation of the second target obstacle relative to the vehicle includes: obtaining, based on that orientation, a second plan view of the azimuth distribution of the second target obstacle and the display range of the screen, and determining the third display position according to the second plan view.
  • This application determines the third display position of the second preset warning icon on the screen through a second plan view of the azimuth distribution of the second target obstacle and the display range of the screen, so that the third display position can be determined for a target obstacle in any direction around the vehicle.
  • the first target obstacle is located in front of the vehicle, and the second target obstacle is not located in front of the vehicle.
  • this application can provide early warning of target obstacles in front of the vehicle and not in front of the vehicle.
  • an embodiment of the present application provides an electronic device.
  • the electronic device is a vehicle or is provided in a vehicle.
  • the electronic device includes a processor and a memory.
  • the memory is used to store program instructions.
  • the processor calls the stored instructions to perform the display method of any possible embodiment of the first aspect above.
  • an embodiment of the present application provides a computer-readable storage medium, which is characterized in that the computer-readable storage medium stores a program, and the program enables the electronic device to implement the display method of any of the possible embodiments of the first aspect.
  • an embodiment of the present application provides a computer program product, which is characterized in that the computer program product includes computer-executable instructions stored in a computer-readable storage medium; at least one processor of the electronic device can read the computer-executable instructions from the computer-readable storage medium and execute them, so that the electronic device performs the display method of any possible embodiment of the first aspect above.
  • Figure 1 is a schematic diagram of the human eye coordinate system provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of an image coordinate system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of an AR-HUD architecture provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of determining the spatial coordinates of a HUD virtual screen in a human eye coordinate system according to an embodiment of the present application.
  • Figure 6 is an architecture diagram for determining the spatial coordinates of a target obstacle in the human eye coordinate system provided by an embodiment of the present application.
  • Figure 7 is a position distribution diagram of the elevation angles of the vehicle and pedestrians provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of the scene described in Figure 7 in the human eye coordinate system.
  • Figure 9 is a schematic diagram of determining whether the pedestrian shown in Figure 7 is inside or outside the HUD virtual screen provided by an embodiment of the present application.
  • Figure 10A is a schematic diagram of the center point of the two-dimensional frame provided by the embodiment of the present application located within the HUD virtual screen
  • Figure 10B is a schematic diagram of the center point of the two-dimensional frame provided by the embodiment of the present application located outside the HUD virtual screen.
  • Figure 11A is a schematic diagram of a trapezoidal mark of a vehicle provided by an embodiment of the present application
  • Figure 11B is a schematic diagram of a two-dimensional frame mark of a cyclist provided by an embodiment of the present application
  • Figure 11C is a schematic diagram of the mark of an unrecognizable object provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram of determining the position of the preset warning icon when the center point of the two-dimensional frame is located in the edge area outside the HUD virtual screen provided by the embodiment of the present application.
  • Figure 13 is a schematic diagram of determining the position of the preset warning icon when the center point of the two-dimensional frame is located in a corner area outside the HUD virtual screen provided by the embodiment of the present application.
  • Figure 14 is a schematic diagram of a preset warning icon provided by an embodiment of the present application.
  • FIG. 15A is a schematic diagram when the movement direction of the pedestrian icon is toward the left according to an embodiment of the present application
  • FIG. 15B is a schematic diagram when the movement direction of the pedestrian icon is toward the right according to an embodiment of the present application.
  • Figure 16 is a schematic diagram of a scene between pedestrians and the vehicle provided by the embodiment of the present application.
  • Figure 17 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 16.
  • Figure 18 is a schematic diagram of another scene between pedestrians and the vehicle provided by the embodiment of the present application.
  • Figure 19 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 18.
  • Figure 20 is a schematic diagram of another scene between pedestrians and the vehicle provided by the embodiment of the present application.
  • Figure 21 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 20.
  • Figure 22 is a schematic diagram of a scene between a cyclist and the vehicle provided by the embodiment of the present application.
  • Figure 23 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 22.
  • Figure 24 is a schematic diagram of the human eye perspective principle provided by an embodiment of the present application.
  • Figure 25 is a schematic diagram of another scene between a cyclist and the vehicle provided by an embodiment of the present application.
  • Figure 26 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 25.
  • Figure 27 is a schematic diagram of the two-dimensional border marking of the crowd provided by the embodiment of the present application.
  • Figure 28 is a schematic diagram of crowd icons and combination icons provided by the embodiment of the present application.
  • Figure 29A is a schematic diagram of the effect when the directional pedestrian icons are individually identified according to the embodiment of the present application, in which the pedestrian icons in the directional pedestrian icons overlap but the indicator icons do not overlap.
  • FIG. 29B is a schematic diagram of the effect when the directional pedestrian icons shown in FIG. 29A are displayed separately.
  • Figure 30 is a schematic diagram of the effect of displaying crowd icons with boxes provided by the embodiment of the present application.
  • Figure 31 is a schematic diagram of a scene between the crowd and the vehicle provided by the embodiment of the present application.
  • Figure 32 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 31.
  • Figure 33 is a schematic diagram of a scene of multiple individual pedestrians and the vehicle provided by the embodiment of the present application.
  • Figure 34 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 33.
  • Figure 35 is a schematic diagram of a scene of pedestrians, cyclists and the vehicle provided by the embodiment of the present application.
  • Figure 36 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 35.
  • Figure 37 is a schematic diagram of a scene of a crowd, multiple individual pedestrians and the vehicle provided by the embodiment of the present application.
  • Figure 38 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 37.
  • Figure 39 is a schematic diagram of a scene of crowds, individual pedestrians and the vehicle provided by the embodiment of the present application.
  • Figure 40 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 39.
  • FIG. 41 is a schematic diagram of a scene of the vehicle and the vehicle on the left rear of the vehicle provided by the embodiment of the present application.
  • Figure 42 is a plan view of the HUD virtual screen and the orientation distribution of the vehicle provided by the embodiment of the present application.
  • Figure 43 is a schematic diagram of the effect of HUD imaging in the scene described in Figure 41.
  • Figure 44 is a flow chart of a display method provided by an embodiment of the present application.
  • words such as “such as” are used to represent examples, instances or illustrations. Any embodiment or design described as “such as” in the embodiments of the application is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “such as” is intended to present the relevant concept in a concrete way.
  • the existing technology provides a display method that can project ADAS information onto the front windshield of the vehicle for display through AR-HUD technology.
  • the displayed ADAS information is reflected by the front windshield, and a virtual image can be displayed on the HUD virtual screen in front of the front windshield, where the virtual image will be superimposed on the real thing.
  • the HUD virtual screen coverage is related to the Field of View (FOV) of the HUD optical engine: the horizontal range covered by the HUD virtual screen is directly proportional to the FOV of the HUD optical engine. However, limited by the size of that FOV, the HUD virtual screen presented in front of the user can often only cover one lane, degrading the user experience.
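The proportionality between FOV and horizontal coverage follows from simple geometry: for a virtual image at distance d, the horizontal width is roughly 2·d·tan(FOV/2). The numeric values below are purely illustrative, not parameters from the application:

```python
import math

def virtual_screen_width(distance_m, fov_deg):
    """Horizontal width of a virtual image at distance_m for a given
    horizontal FOV: width = 2 * d * tan(FOV / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# e.g. a 10-degree horizontal FOV at a 7.5 m virtual-image distance:
print(round(virtual_screen_width(7.5, 10.0), 2))  # 1.31 (metres)
```

Doubling either the FOV (for small angles) or the virtual-image distance roughly doubles the covered width, which is why a small-FOV optical engine covers so little of the forward scene.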
  • FOV Field of View
  • the existing technology provides a display method that can provide early warning to cyclists.
  • the existing display method displays an early warning icon only when the cyclist is within the display range of the HUD virtual screen of the HUD optical engine.
  • the display range is the visual range that can be seen through the HUD virtual screen.
  • the display range is not the size of the display area on the screen, but the range of the real scenery in front of the vehicle that can be presented through the screen.
  • embodiments of the present application propose a display method, electronic device, storage medium and program product, which can directly provide the driver with intuitive spatial position information of target obstacles outside the screen display range.
  • Human eye coordinate system is a coordinate system established on the human eye. It is defined to describe the position of an object from the perspective of the human eye.
  • the unit is meters, and (Xe, Ye, Ze) is used to represent its coordinate value.
  • the human eye coordinate system takes the human eye as the coordinate origin, the line of sight of the human eye as the Z-axis, the horizontal direction perpendicular to the line of sight as the X-axis, and the vertical direction perpendicular to the line of sight as the Y-axis.
  • the camera coordinate system is also called the optical center coordinate system. It is a coordinate system established on the camera. It is defined to describe the position of the object from the perspective of the camera.
  • the unit is meters, and (Xc, Yc, Zc) is used to represent its coordinate value. The optical center of the camera lens is the coordinate origin, the X-axis and Y-axis are parallel to the x-axis and y-axis of the image coordinate system respectively, and the optical axis of the camera is the Z-axis.
  • the spatial coordinates of the object in the camera coordinate system can be converted into the spatial coordinates in the human eye coordinate system.
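This conversion is a rigid transform, X_eye = R·X_cam + t. In the sketch below, the rotation R and translation t are stand-in calibration values (an identity rotation and an assumed camera-to-eye offset), not values from the application:

```python
def cam_to_eye(p_cam, R, t):
    """Apply the rigid transform X_eye = R @ X_cam + t to a 3D point."""
    return tuple(
        sum(R[i][j] * p_cam[j] for j in range(3)) + t[i]
        for i in range(3)
    )

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # assumed: camera axes aligned with eye axes
t = [0.1, -0.5, 0.2]                    # assumed camera-to-eye offset in metres
print(cam_to_eye((2.0, 0.0, 10.0), R, t))  # approximately (2.1, -0.5, 10.2)
```

In practice R and t come from calibrating the camera mount against the eye-box position; with a real rotation the same function applies unchanged.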
  • Image coordinate system is a two-dimensional rectangular coordinate system on the image plane.
  • the origin O of the image coordinate system is the intersection point of the lens optical axis and the image plane (also called the principal point).
  • the x-axis and y-axis of the image coordinate system are parallel to the X-axis and Y-axis of the camera coordinate system respectively, and (x, y) is used to represent the coordinate values, as shown in Figure 2.
  • the image coordinate system expresses the position of the pixel in the image in physical units (such as millimeters).
  • the coordinates of an object in the image coordinate system can be converted into spatial coordinates in the camera coordinate system.
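That conversion follows the pinhole model: with the image origin at the principal point and coordinates in physical units, a known depth Zc recovers the camera coordinates by similar triangles. The focal length and example values below are illustrative stand-ins, not calibrated values:

```python
def image_to_camera(x_mm, y_mm, z_c, f_mm):
    """Back-project an image-plane point (physical units, origin at the
    principal point) to camera coordinates, given a known depth z_c:
        Xc = x * Zc / f,  Yc = y * Zc / f  (similar triangles)."""
    return (x_mm * z_c / f_mm, y_mm * z_c / f_mm, z_c)

# Illustrative values: 4 mm focal length, a point 1 mm right of the principal
# point on the image plane, and a 10 m depth.
print(image_to_camera(1.0, 0.0, 10.0, 4.0))  # (2.5, 0.0, 10.0)
```

The depth Zc itself must come from another source (e.g. radar, lidar, or stereo matching); a single image alone fixes only the ray, not the point on it.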
  • Eye box The eye box usually refers to the range within which the driver's eyes can see the entire displayed image.
  • the general eye box size is 130 mm × 50 mm. Because driver heights differ, the eye box needs a movement range of approximately ±50 mm in the vertical direction. In this application, the human eye can see a clear HUD virtual image within the eye box range.
  • FIG. 3 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
  • the electronic device 3 may be a head unit (vehicle machine), a vehicle-mounted computer, a vehicle, or other equipment.
  • the electronic device 3 may also be a terminal such as a mobile phone or tablet computer connected to the vehicle.
  • the electronic device 3 may include a memory 31, a processor 32 and a communication interface 33. It can be understood that the structure shown in FIG. 3 does not constitute a limitation on the electronic device 3.
  • the electronic device 3 may include more or fewer components than shown in the figure, may combine some components, may split some components, or may have a different arrangement of components.
  • the memory 31 may be used to store software programs and/or modules/units.
  • the processor 32 implements various functions of the electronic device 3 by running or executing software programs and/or modules/units stored in the memory 31 and calling data stored in the memory 31 .
  • the memory 31 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required for at least one function (such as a sound playback function, an image playback function, etc.), and the storage data area may store data created based on the use of the electronic device 3 (such as image data, etc.) and the like.
  • the memory 31 may include non-volatile computer-readable memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card), at least one disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the processor 32 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the processor 32 may be a microprocessor or any conventional processor, etc.
  • the processor 32 is the control center of the electronic device 3 and uses various interfaces and lines to connect various parts of the entire electronic device 3 .
  • a memory 31 may also be provided in the processor 32 for storing instructions and data.
  • the memory 31 in the processor 32 is a cache memory.
  • the memory 31 may store instructions or data that the processor 32 has just used or uses cyclically. If the processor 32 needs to use the instructions or data again, it can call them directly from the memory 31. This avoids repeated access and reduces the waiting time of the processor 32, thus improving the efficiency of the system.
  • the processor 32 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and/or a USB interface, etc.
  • I2C: inter-integrated circuit
  • I2S: inter-integrated circuit sound
  • PCM: pulse code modulation
  • UART: universal asynchronous receiver/transmitter
  • MIPI: mobile industry processor interface
  • GPIO: general-purpose input/output
  • SIM: subscriber identity module
  • USB: universal serial bus
  • the communication interface 33 may include a standard wired interface, a wireless interface, etc.
  • the communication interface 33 is used for the electronic device 3 to communicate with external devices, such as a camera.
  • the display method of the present application is applied to the electronic device 3 shown in FIG. 3 .
  • the display method can be applied not only to AR-HUD scenes, but also to early warning scenes displayed on in-vehicle screens such as the center console or instrument cluster, early warning scenes in terminal map applications, etc. This application does not limit this.
  • the AR-HUD scene will be explained first.
  • the AR-HUD architecture 4 includes an image projection device 41 and a front windshield 42 .
  • the image projection device 41 may be called a HUD light machine.
  • the image projection device 41 may be disposed in a center console or the like below the front windshield 42 , or may be disposed at other locations near the front windshield 42 , which is not limited in this application.
  • the image projection device 41 includes an image generation module (Picture Generation Unit, PGU) 411 and an optical lens assembly 412.
  • the PGU 411 may include an LED light source, etc., which is not limited in this application.
  • the optical lens assembly 412 may include aspherical lenses, etc., which is not limited in this application.
  • the PGU 411 is used to generate a projection image
  • the optical lens assembly 412 is used to project the projection image onto the front windshield 42. Through the reflection of the front windshield 42, a virtual image of the target obstacle can be presented on the HUD virtual screen outside the front windshield 42, and the virtual image can be superimposed on the real environment outside the vehicle. As a result, an early warning of the target obstacle can be provided, and the realism of the image in the driver's field of view can be improved.
  • the AR-HUD architecture may also include an electronic device, and the electronic device is used to control the PGU to generate a projected image, which is not limited in this application.
  • FIG. 5 is a schematic diagram of determining the spatial coordinates of a HUD virtual screen in the human eye coordinate system according to an embodiment of the present application.
  • the human eye is located within the eye box.
  • the horizontal distance between the human eye and the HUD virtual screen is the virtual image distance.
  • the virtual image distance of different vehicles can be determined by zooming. It is understandable that the virtual image distance can also be determined through other methods, which is not limited in this application.
  • the angle from the human eye as the center to the horizontal edge of the virtual image is the horizontal FOV
  • the angle from the human eye as the center to the vertical edge of the virtual image is the vertical FOV.
  • the horizontal FOV of the HUD light engine and the vertical FOV of the HUD light engine are related to the type of HUD light engine.
  • the spatial coordinates of the HUD virtual screen in the human eye coordinate system can be determined based on the human eye position, virtual image distance, horizontal FOV of the HUD light machine, and vertical FOV of the HUD light machine.
  • the spatial coordinates of the four vertex corners of the HUD virtual screen in the human eye coordinate system can be determined based on the position of the human eye, the virtual image distance, the horizontal FOV of the HUD light machine and the vertical FOV of the HUD light machine, for example, the spatial coordinates of vertex A, vertex B, vertex C, and vertex D in Figure 5.
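The vertex computation described above can be sketched with basic trigonometry: the half-width and half-height of the virtual screen follow from the virtual image distance and the half-angles of the two FOVs. The axis convention (eye at the origin, X right, Y up, Z forward) is an assumption; the patent does not fix the axes:

```python
import math

def hud_screen_corners(virtual_image_distance: float,
                       hfov_deg: float, vfov_deg: float) -> dict:
    """Corner coordinates of the HUD virtual screen in the human eye
    coordinate system (eye at origin, Z forward, X right, Y up --
    an assumed convention)."""
    half_w = virtual_image_distance * math.tan(math.radians(hfov_deg) / 2)
    half_h = virtual_image_distance * math.tan(math.radians(vfov_deg) / 2)
    d = virtual_image_distance
    return {
        "A": (-half_w,  half_h, d),   # top-left
        "B": ( half_w,  half_h, d),   # top-right
        "C": (-half_w, -half_h, d),   # bottom-left
        "D": ( half_w, -half_h, d),   # bottom-right
    }

# Example with illustrative numbers: 7.5 m virtual image distance,
# 10 degree horizontal FOV, 4 degree vertical FOV.
corners = hud_screen_corners(7.5, 10.0, 4.0)
```

The distance and FOV values are examples only; real values depend on the HUD light machine type, as noted above.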
  • the spatial coordinates of the HUD virtual screen in the human eye coordinate system can also be calculated by other systems in the electronic device, or calculated by devices other than the electronic device, or pre-stored in the electronic device; this application does not limit this.
  • ADAS 601 can use various sensors installed on the vehicle (millimeter wave radar, lidar, monocular/binocular cameras, and satellite navigation) to sense the environment around the vehicle at any time, collect environmental data, and perform technical processing such as identification, detection and tracking of static and dynamic objects, so as to estimate the movement route of each object and calculate key information such as the distance, orientation and relative speed between the vehicle and the object. This key information is used to determine whether there is a potential collision risk between the vehicle and an object ahead. When there is a potential collision risk between the vehicle and an object ahead, ADAS 601 can determine that the object is an obstacle.
  • the objects can be pedestrians, vehicles, cyclists, and unrecognizable objects.
  • the electronic device 602 may obtain the spatial coordinates of the target obstacle, which are the spatial coordinates of the target obstacle in the camera coordinate system.
  • the electronic device 602 may be a vehicle machine, a vehicle-mounted computer, a vehicle, or other equipment.
  • the electronic device 602 may also be a terminal such as a mobile phone or tablet computer connected to the vehicle.
  • the present application will be described below by taking the electronic device 602 as a vehicle machine as an example.
  • the camera 603 is an advanced camera with computing capabilities.
  • the camera 603 can directly or indirectly obtain obstacle information from ADAS, obtain the captured forward image, preprocess and feature extract the forward image, and perform target recognition based on the result of feature extraction.
  • the preprocessing includes framing, color adjustment, white balance, contrast equalization, distortion correction, etc.
  • the feature extraction is to extract feature points in the front image based on preprocessing.
  • the target recognition is based on extracted feature points and uses algorithms such as machine learning and neural networks to identify objects in the image ahead, such as pedestrians, vehicles, cyclists, unrecognizable objects, etc.
  • the camera 603 also determines, among the objects recognized in the front image, the target obstacle that matches the obstacle information, determines the coordinates of the target obstacle in the image coordinate system, and converts these coordinates into the spatial coordinates of the target obstacle in the camera coordinate system.
  • the camera 603 also sends the spatial coordinates of the target obstacle in the camera coordinate system to the vehicle machine. Therefore, the vehicle machine can obtain the spatial coordinates of the target obstacle in the camera coordinate system.
  • the camera 603 is an ordinary camera and does not have computing power.
  • the vehicle machine can obtain obstacle information from ADAS and obtain the front image captured by the camera 603 from the camera 603.
  • the vehicle-machine also detects the target obstacle in the front image that matches the obstacle information, and determines the coordinates of the target obstacle in the image coordinate system. It can be understood that before detecting the target obstacle in the forward image that matches the obstacle information, the vehicle machine can also perform operations such as preprocessing, feature extraction, and target recognition on the forward image. This application does not limit this.
  • the vehicle-machine also converts the coordinates of the target obstacle in the image coordinate system into the spatial coordinates of the target obstacle in the camera coordinate system.
  • after the vehicle-machine obtains the spatial coordinates of the target obstacle in the camera coordinate system, it also converts them into the spatial coordinates of the target obstacle in the human eye coordinate system. As a result, the vehicle-machine can determine the spatial coordinates of the target obstacle in the human eye coordinate system.
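The camera-to-eye conversion above is a rigid coordinate transform. As a minimal sketch (the rotation R and translation t are extrinsics that would come from calibrating the camera pose relative to the eye box; the patent does not specify how they are obtained):

```python
import numpy as np

def camera_to_eye(p_cam, R, t):
    """Transform a point from the camera coordinate system to the human
    eye coordinate system: p_eye = R @ p_cam + t.

    R: 3x3 rotation matrix, t: 3-vector translation (assumed known from
    calibration of the camera pose relative to the eye-box centre)."""
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)

# Illustrative numbers: identity rotation, camera mounted 1.2 m ahead of
# and 0.3 m below the eye position.
R = np.eye(3)
t = np.array([0.0, -0.3, 1.2])
p_eye = camera_to_eye([0.5, 0.0, 10.0], R, t)  # -> [0.5, -0.3, 11.2]
```

All numeric values here are assumptions for illustration.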
  • this application is not limited to determining obstacle information through ADAS and then determining the target obstacle in front of the vehicle through the vehicle machine or camera 603; the vehicle machine or camera 603 can also determine the information of the target obstacle in front of the vehicle based on the data sensed by the sensors and a first warning range, etc. This application does not limit this.
  • the spatial coordinates of the target obstacle in the human eye coordinate system can also be calculated by other systems in the vehicle, or by other devices outside the vehicle; this application does not limit this.
  • FIG. 7 is a position distribution diagram of the bird's-eye view of the user and the target obstacle provided by an embodiment of the present application.
  • the target obstacle is a pedestrian, but it can be understood that the target obstacle can also be other objects, such as vehicles, cyclists, unrecognizable objects, etc. This application does not limit this.
  • the user is located in the vehicle, the HUD virtual screen is located in front of the user, and pedestrians are also located in front of the user.
  • in the human eye coordinate system, as shown in FIG. 8, pedestrians follow the perspective pattern of appearing small in the distance and large up close. The pedestrian can be marked by a two-dimensional frame, as shown in FIG. 8.
  • a plan view of the HUD virtual screen 901 and the pedestrian mark 902 as seen by the human eye can be drawn, that is, a plan view of the HUD virtual screen and the two-dimensional frame, as shown in Figure 9.
  • the plan view in Figure 9 is converted from a three-dimensional space in the human eye coordinate system to a two-dimensional plane that can be seen by the human eye, removing the Z-axis information of the HUD virtual screen and pedestrians in the human eye coordinate system.
  • the X-axis in the plan view of Figure 9 may be the X-axis in the human eye coordinate system, and the Y-axis in the plan view may be the Y-axis in the human eye coordinate system.
  • the plan view in Figure 9 includes the plane coordinates of the HUD virtual screen and the plane coordinates of the two-dimensional frame. It can be understood that the pedestrians can also be marked by ellipses, pedestrian icons, etc., and this application does not limit this.
  • the pedestrian may be located inside the HUD virtual screen or outside the HUD virtual screen.
  • the center point P of the two-dimensional frame 902 corresponding to the pedestrian can first be determined, and then it is determined whether the center point P of the two-dimensional frame 902 is located outside the HUD virtual screen 901, so that it can be determined whether the pedestrian is located outside the HUD virtual screen 901.
  • the center point of the two-dimensional border may be the intersection point of the diagonals of the two-dimensional border, that is, the center of gravity of the two-dimensional border, such as the dot P of the two-dimensional border in Figure 8 .
  • the dotted line in Figure 8 is the diagonal line of the two-dimensional frame.
  • the plane coordinates of the center point of the two-dimensional frame can be determined based on the plane coordinates of the two-dimensional frame, and whether the center point of the two-dimensional frame is located outside the HUD virtual screen can be determined based on the plane coordinates of the center point.
  • if the plane coordinates of the center point of the two-dimensional frame are within the plane coordinate range of the HUD virtual screen, it is determined that the center point of the two-dimensional frame is located within the HUD virtual screen, thereby determining that the pedestrian is located within the HUD virtual screen.
  • the plane coordinates of the center point t1 of the two-dimensional frame C1 are (7, 8)
  • the plane coordinate range of the HUD virtual screen S1 is (Xs1, Ys1), where Xs1 ∈ [0, 10], Ys1 ∈ [0, 10].
  • the plane coordinates (7, 8) of the center point t1 of the two-dimensional frame C1 are within the plane coordinate range (Xs1, Ys1) of the HUD virtual screen S1, so it can be determined that the center point t1 of the two-dimensional frame C1 is located within the HUD virtual screen S1, and thus that the pedestrian is located within the HUD virtual screen. If the plane coordinates of the center point of the two-dimensional frame are outside the plane coordinate range of the HUD virtual screen, it is determined that the center point of the two-dimensional frame is located outside the HUD virtual screen, thereby determining that the pedestrian is located outside the HUD virtual screen.
  • the plane coordinates of the center point t2 of the two-dimensional frame C2 are (12, 8), and the plane coordinate range of the HUD virtual screen S2 is (Xs2, Ys2), where Xs2 ∈ [0, 10], Ys2 ∈ [0, 10].
  • the plane coordinates (12, 8) of the center point t2 of the two-dimensional frame C2 are outside the plane coordinate range (Xs2, Ys2) of the HUD virtual screen S2, so it can be determined that the center point t2 of the two-dimensional frame C2 is located outside the HUD virtual screen S2, and thus that the pedestrian is located outside the HUD virtual screen.
  • the center point P of the two-dimensional frame 902 is located outside the HUD virtual screen 901 , and the pedestrian is located outside the HUD virtual screen 901 .
  • the center point of the two-dimensional frame can also be the geometric center of the two-dimensional frame, etc., and this application does not limit this.
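The inside/outside test described above is a point-in-rectangle check on the plane coordinates. A minimal sketch using the example numbers from the text (the default coordinate ranges [0, 10] × [0, 10] match the S1/S2 examples):

```python
def is_outside_screen(center, x_range=(0, 10), y_range=(0, 10)) -> bool:
    """Return True if the centre point of the obstacle's two-dimensional
    frame lies outside the HUD virtual screen's plane-coordinate range."""
    x, y = center
    inside = x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]
    return not inside

# Using the numbers from the text:
is_outside_screen((7, 8))   # t1: inside the screen -> False
is_outside_screen((12, 8))  # t2: outside the screen -> True
```

The function name is illustrative; the logic is exactly the coordinate-range comparison described above.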
  • the target obstacle may be a common obstacle or an uncommon obstacle.
  • Common obstacles can be, for example, pedestrians, vehicles, cyclists, etc. Different common obstacles may have the same or different markings.
  • the vehicle can be marked by a two-dimensional frame, a trapezoid (as shown in Figure 11A), or a vehicle icon. Cyclists can be marked by two-dimensional borders (as shown in Figure 11B), trapezoids, or cyclist icons.
  • Unusual obstacles can be unrecognizable objects such as mud piles on the roadside. Mud piles on the roadside can be marked by the outline of the object (as shown in Figure 11C). The center point of the uncommon obstacle may be the marked geometric center of the uncommon obstacle.
  • the pedestrian may be located in a side area outside the HUD virtual screen, or in a corner area outside the HUD virtual screen.
  • the sub-region outside the HUD virtual screen 901 where the center point P of the two-dimensional frame 902 corresponding to the pedestrian is located can first be determined, and then it can be determined based on the sub-region whether the center point P of the two-dimensional frame 902 is located in a side area outside the HUD virtual screen 901, so as to determine whether the pedestrian is located in a side area outside the HUD virtual screen 901.
  • the coordinates of vertex A, vertex B, vertex C, and vertex D of the HUD virtual screen are (0, 10), (10, 10), (0, 0), and (10, 0) respectively. Taking the four sides of the HUD virtual screen as boundaries, the area outside the HUD virtual screen is divided into 8 sub-areas, namely the first sub-area z1, the second sub-area z2, the third sub-area z3, the fourth sub-area z4, the fifth sub-area c1, the sixth sub-area c2, the seventh sub-area c3, and the eighth sub-area c4.
  • the first sub-area z1, the second sub-area z2, the third sub-area z3, and the fourth sub-area z4 are the side areas outside the HUD virtual screen
  • the fifth sub-area c1, the sixth sub-area c2, and the seventh sub-area c3 and the eighth sub-area c4 are the corner areas outside the HUD virtual screen.
  • the X-axis coordinate X1 of the first sub-region z1 is within the interval (0, 10), and the Y-axis coordinate Y1 is greater than 10.
  • the X-axis coordinate X2 of the second sub-region z2 is greater than 10, and the Y-axis coordinate Y2 is within the interval (0, 10).
  • the X-axis coordinate X3 of the third sub-region z3 is within the interval (0, 10), and the Y-axis coordinate Y3 is less than 0.
  • the X-axis coordinate X4 of the fourth sub-region z4 is less than 0, and the Y-axis coordinate Y4 is within the interval (0, 10).
  • the X-axis coordinate X5 of the fifth sub-region c1 is greater than 10, and the Y-axis coordinate Y5 is greater than 10.
  • the X-axis coordinate X6 of the sixth sub-region c2 is greater than 10, and the Y-axis coordinate Y6 is less than 0.
  • the X-axis coordinate X7 of the seventh sub-region c3 is less than 0, and the Y-axis coordinate Y7 is less than 0.
  • the X-axis coordinate X8 of the eighth sub-region c4 is less than 0, and the Y-axis coordinate Y8 is greater than 10.
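The eight-way classification above can be sketched directly from the stated intervals (boundary points fall in the open intervals' gaps and are not defined by the text, so the corner branches are used as a fallback):

```python
def classify_subregion(x, y, xmin=0, xmax=10, ymin=0, ymax=10):
    """Map the centre point of a two-dimensional frame to one of the eight
    sub-regions outside the HUD virtual screen, following the coordinate
    intervals given above.  Returns None if the point is inside the screen.
    z1..z4 are side areas; c1..c4 are corner areas."""
    if xmin <= x <= xmax and ymin <= y <= ymax:
        return None                          # inside the HUD virtual screen
    if xmin < x < xmax:
        return "z1" if y > ymax else "z3"    # top / bottom side areas
    if ymin < y < ymax:
        return "z2" if x > xmax else "z4"    # right / left side areas
    if x > xmax:
        return "c1" if y > ymax else "c2"    # right-hand corner areas
    return "c4" if y > ymax else "c3"        # left-hand corner areas

# Matching the examples in the text:
classify_subregion(6, 12)    # O1 -> "z1" (side area)
classify_subregion(12, 12)   # O2 -> "c1" (corner area)
```

The function name is illustrative; the interval tests mirror the definitions of z1-z4 and c1-c4 above.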
  • when the target obstacle is marked by a two-dimensional frame, the target sub-area where the center point O of the two-dimensional frame is located can be queried according to the plane coordinates of the center point O, and based on the target sub-area it can be determined whether the center point O of the two-dimensional frame is located in a side area outside the HUD virtual screen or in a corner area outside the HUD virtual screen.
  • the plane coordinates of the center point O of the two-dimensional border are located within the coordinate interval of the target sub-region.
  • the plane coordinates (6, 12) of the center point O1 of the two-dimensional frame R1 are located in the (X1, Y1) interval of the first sub-region z1 of the HUD virtual screen, so the center point O1 of the two-dimensional frame R1 is located in the first sub-region z1, and it can be determined that the center point O1 of the two-dimensional frame R1 is located in the side area outside the HUD virtual screen; therefore it can be determined that the pedestrian is located in the side area outside the HUD virtual screen.
  • the plane coordinates (12, 12) of the center point O2 of the two-dimensional frame R2 are located in the (X5, Y5) interval of the fifth sub-region c1 of the HUD virtual screen, so the center point O2 of the two-dimensional frame R2 is located in the fifth sub-region c1, and it can be determined that the center point O2 of the two-dimensional frame R2 is located in the corner area outside the HUD virtual screen; therefore it can be determined that the pedestrian is located in the corner area outside the HUD virtual screen.
  • the center point P of the two-dimensional frame 902 is located in the z2 sub-area outside the HUD virtual screen 901, so it can be determined that the center point P of the two-dimensional frame 902 is located in the side area outside the HUD virtual screen 901, and thus that the pedestrian is located in a side area outside the HUD virtual screen 901.
  • This application can determine the display position of the preset warning icon in the HUD virtual screen 901 based on the above-determined area where the pedestrian is located outside the HUD virtual screen 901.
  • the center point O1 of the two-dimensional frame R1 is located in the edge area outside the HUD virtual screen.
  • the target side AB, the side closest to the z1 sub-area among the four sides of the HUD virtual screen, can be determined; then a perpendicular line to the target side AB is drawn through the center point O1 of the two-dimensional frame R1, and the perpendicular intersects the target side at the intersection point E. It can then be determined that the preset warning icon is displayed on the HUD virtual screen at or near the intersection point E.
  • the preset warning icon is a preset warning icon without directivity.
  • the preset warning icon is used to intuitively provide the driver with the spatial position information of the target obstacle inside and outside the HUD virtual screen.
  • the preset warning icon may be determined according to the type of target obstacle.
  • the types of target obstacles include pedestrians, vehicles, cyclists, unrecognizable objects, etc.
  • the preset warning icon corresponding to a pedestrian is a pedestrian icon
  • the preset warning icon corresponding to a vehicle is a vehicle icon
  • the preset warning icon corresponding to a cyclist is a cyclist icon
  • the preset warning icon corresponding to an unrecognizable object is preset information, such as a star, as shown in Figure 14.
  • the preset warning icon corresponding to an unrecognizable object may also be an exclamation mark, a warning mark, etc., and this application does not limit this.
  • the preset warning icons may be preset information, such as stars, exclamation points, warning signs, etc., and the preset warning icons corresponding to different types of target obstacles may be the same or different.
  • the position of the preset warning icon display can be adjusted in real time according to the position of the target obstacle and the HUD virtual screen.
  • the preset warning icon is a directional preset warning icon, such as a directional star as shown in FIG. 12 .
  • the directional preset warning icon may include an indicator icon.
  • the indicator icon is used to indicate the direction of the target obstacle.
  • the indication icon includes an arrow, and the arrow of the indication icon points in the direction of a center point of the target obstacle.
  • the center points of the arrow and the preset warning icon are both on the extension line of the vertical line.
  • the display position of the preset warning icon can be adjusted in real time according to the positions of the target obstacle and the HUD virtual screen, and the direction of the indicator icon's arrow can be adjusted in real time so that it always points toward the center of the target obstacle.
  • the preset warning icon can also add a motion effect (or called: motion special effect).
  • the motion effect may be that when the target obstacle is outside the HUD virtual screen, a directional preset warning icon or a non-directional preset warning icon may be displayed; when the target obstacle is within the HUD virtual screen , because the user can intuitively see the location of the target obstacle, preset warning icons without directivity can be displayed.
  • the motion effect may be to enlarge or reduce the preset warning icon on a two-dimensional plane. Among them, the preset warning icon can change according to the distance of the target obstacle from the vehicle. The closer the distance between the target obstacle and the vehicle, the larger the preset warning icon will be.
  • when the target obstacle is at a reference distance from the vehicle, the preset warning icon is displayed at its initial size.
  • as the target obstacle moves farther from the vehicle, the preset warning icon is reduced.
  • as the target obstacle moves closer to the vehicle, the preset warning icon is enlarged. For example, when a vehicle is waiting at a red traffic light and a pedestrian walks in front of the vehicle on the sidewalk, the pedestrian icon will be enlarged.
  • the preset warning icon can gradually become larger, and as the target obstacle gradually moves away from the vehicle, the preset warning icon can gradually become smaller. Therefore, the size of the preset warning icon can be used to remind pedestrians of their distance from the vehicle.
  • the motion effect may be to change the color of the preset warning icon.
  • the colors of the preset warning icons can be different when the target obstacle and the vehicle are within different distance ranges. The closer the distance between the target obstacle and the vehicle, the more eye-catching the color of the preset warning icon will be. For example, if the distance between the target obstacle and the own vehicle is within the first distance range, the color of the preset warning icon is the first color.
  • if the distance between the target obstacle and the own vehicle is within the second distance range, the color of the preset warning icon is the second color.
  • the first color may be yellow and the second color may be red, which is not limited in this application. Therefore, the color of the preset warning icon can be used to remind pedestrians of their distance from the vehicle.
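The size and color rules above can be sketched as a single mapping from distance to icon appearance. All numeric values here (reference distance, threshold, base size, the inverse-proportional scaling and its cap) are illustrative assumptions, not taken from the patent:

```python
def icon_size_and_color(distance_m: float,
                        base_size: float = 32.0,
                        ref_distance_m: float = 20.0,
                        near_threshold_m: float = 10.0):
    """Size and color of the warning icon as a function of the distance
    between the target obstacle and the own vehicle.

    Closer obstacle -> larger icon; within the near threshold the colour
    switches from the first colour (yellow) to the more eye-catching
    second colour (red).  All parameter values are illustrative."""
    # Inverse-proportional scaling relative to a reference distance,
    # capped so the icon cannot grow without bound.
    scale = min(ref_distance_m / max(distance_m, 1e-6), 4.0)
    size = base_size * scale
    color = "red" if distance_m <= near_threshold_m else "yellow"
    return size, color

icon_size_and_color(20.0)  # reference distance -> initial size, yellow
icon_size_and_color(5.0)   # close obstacle -> enlarged icon, red
```

In a real system the mapping would be tuned per vehicle and HUD; the point is only that both size and color are monotone functions of distance, as the text describes.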
  • the movement effect can also be that the movement direction of the preset warning icon is determined according to the movement direction of the target obstacle. For example, if a pedestrian faces left, the movement direction of the pedestrian icon is toward the left, as shown in Figure 15A; if the pedestrian faces right, the movement direction of the pedestrian icon is toward the right, as shown in Figure 15B.
  • when the target obstacle is moving, for example, when the target obstacle in Figure 12 is moving in front of the own vehicle and toward the own vehicle, the movement effect may be that the preset warning icon is a dynamic preset warning icon, for example, a dynamic pedestrian icon walking, a dynamic cyclist icon riding, a dynamic vehicle icon moving, etc.
  • the movement direction of the dynamic preset warning icon is the same as the movement direction of the target obstacle.
  • the movement direction of the walking pedestrian icon is the same as the walking direction of the pedestrian, that is, the direction in which the pedestrian is facing.
  • the preset warning icon may also include distance information.
  • the distance information is the distance between the target obstacle and the own vehicle. During the movement of the target obstacle or the vehicle, the displayed distance information can be adjusted in real time according to the distance between the target obstacle and the vehicle. It can be understood that the distance between the target obstacle and the vehicle can be obtained through the sensors, which is not limited in this application. The distance information can be presented as numbers, Chinese characters, etc., and can be displayed at any suitable position on the HUD virtual screen; this application does not limit this.
  • the center point P of the two-dimensional frame 902 is located in the z2 sub-area outside the HUD virtual screen 901.
  • a perpendicular line passing through the center point P of the two-dimensional frame 902 and perpendicular to the target side a1 intersects the target side a1 at the intersection point F.
  • the preset warning icon can be displayed on the HUD virtual screen at the intersection point F.
  • the position of the pedestrian can be located not only in the edge area outside the HUD virtual screen, but also in the corner area outside the HUD virtual screen.
  • otherwise, no information about the pedestrian would be displayed on the HUD virtual screen, making it impossible to provide an early warning of the pedestrian to the driver.
  • This application can determine the display position of the preset warning icon on the HUD virtual screen based on the above-mentioned determined corner area where the pedestrian is outside the HUD virtual screen.
  • the center point O2 of the two-dimensional frame R2 is located in the corner area outside the HUD virtual screen.
  • the target corner B, the corner closest to the c1 sub-area among the four corners of the HUD virtual screen, can be determined, and by drawing the connection line O2B between the center point O2 of the two-dimensional frame R2 and the target corner B, it can be determined that the preset warning icon is displayed at the target corner B on the HUD virtual screen.
  • the direction of the indicator icon (for example, the direction of the arrow) and the center point of the preset warning icon are both on the extension line of the connection line O2B.
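The corner-area case above amounts to: pick the nearest screen corner, and orient the arrow along the line from that corner toward the obstacle's centre. A minimal sketch with the screen from Figure 13 (function name and coordinate ranges are illustrative):

```python
import math

def icon_anchor_corner(center, xmin=0, xmax=10, ymin=0, ymax=10):
    """When the frame centre lies in a corner area outside the screen,
    return the nearest screen corner (where the icon is displayed) and
    the arrow direction in degrees, pointing from that corner toward the
    obstacle's centre along the connection line."""
    x, y = center
    corner = (xmax if x > xmax else xmin, ymax if y > ymax else ymin)
    angle_deg = math.degrees(math.atan2(y - corner[1], x - corner[0]))
    return corner, angle_deg

# O2 = (12, 12) lies in corner area c1; the icon is displayed at target
# corner B = (10, 10) with the arrow pointing at 45 degrees (toward O2).
icon_anchor_corner((12, 12))  # -> ((10, 10), 45.0)
```

Both the corner choice and the arrow angle lie on the extension of the connection line O2B, matching the description above.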
  • the process of displaying the preset warning icon in Figure 13 is similar to the process of displaying the preset warning icon in Figure 12, and will not be described again.
  • the vehicle machine can also control the display of the preset warning icon in the display area according to the display position of the preset warning icon on the HUD virtual screen.
  • the vehicle machine can send the display position of the preset warning icon on the HUD virtual screen to the HUD, AR-HUD or other device with display function.
  • the HUD, AR-HUD or other device with a display function can control the display of the preset warning icon in the display area according to the display position of the preset warning icon on the HUD virtual screen, so that the vehicle machine can control the HUD, AR-HUD or other device with a display function to display the preset warning icon in the display area.
  • HUD, AR-HUD or other devices with display functions can be installed above or inside the center console of the vehicle, and are mainly used to display preset warning icons in the display area. It is understandable that HUD, AR-HUD or other devices with display functions can also be installed at other locations, and this application does not limit this.
  • the display area can be the front windshield of the vehicle, or an independent transparent screen, which reflects the light of the preset warning icon emitted by the HUD, AR-HUD, or other device with a display function into the user's eyes. When the user looks out of the car through the front windshield or transparent screen, the user can see the preset warning icon at the position corresponding to the target obstacle outside the vehicle, which indicates the spatial position information of the target obstacle outside the HUD virtual screen and can improve driving safety.
  • when the vehicle is waiting for a red light at an intersection, the pedestrian may be on the far right side of the sidewalk in front of the vehicle, preparing to cross the road along the sidewalk, as shown in Figure 16.
  • the vehicle machine can obtain the spatial coordinates of the pedestrian in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and draw, based on these spatial coordinates, the plan view of the pedestrian and the HUD virtual screen as seen by the human eye, as shown in Figure 17.
  • although the sidewalk lines are shown in Figure 17, it can be understood that the sidewalk lines are only to increase the sense of reality, and the sidewalk lines can be omitted in the plan view.
  • the vehicle machine can determine from the plan view that the pedestrian is outside the HUD virtual screen. At this time, the vehicle machine can determine the motion special effects of the preset warning icon, and can determine the display position of the preset warning icon on the HUD virtual screen based on the plane coordinates of the pedestrian and the plane coordinates of the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function according to the display position, so as to display the preset warning icon in the display area with the motion special effects.
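  • When the plan view shows the pedestrian outside the HUD virtual screen, displaying the icon on the screen edge closest to the pedestrian can be sketched as a simple clamp of the pedestrian's plane coordinates to the screen rectangle. The names and bounds below are illustrative assumptions, not part of this application.

```python
# Minimal sketch: clamp the pedestrian's plane coordinates to the screen
# rectangle to obtain a display position on the nearest screen edge.
def edge_display_position(pedestrian_xy, screen_min, screen_max):
    x = min(max(pedestrian_xy[0], screen_min[0]), screen_max[0])
    y = min(max(pedestrian_xy[1], screen_min[1]), screen_max[1])
    return (x, y)

# Pedestrian to the right of a screen spanning x in [0, 8], y in [0, 4]:
pos = edge_display_position((10.0, 2.5), (0.0, 0.0), (8.0, 4.0))
print(pos)  # (8.0, 2.5) -> on the right edge, at the pedestrian's height
```

  • The clamp naturally handles edge areas; a corner area would clamp in both axes and land at the corresponding corner.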
  • therefore, a reduced, first-color, directional preset warning icon, such as a reduced, yellow, directional pedestrian icon, can be displayed on the HUD virtual screen.
  • the arrow of the directional pedestrian icon points in the direction of the center point of the pedestrian on the right side outside the HUD virtual screen. Therefore, it can indicate that there is a pedestrian at the corresponding position on the right side outside the HUD virtual screen and that the pedestrian is facing the left direction, and the spatial position information of the pedestrian on the right side outside the HUD virtual screen can be directly provided to the driver.
  • the motion special effects in Figure 17 can omit at least one of reduction, yellow, motion direction, and directivity, and this application does not limit this.
  • the preset warning icon in Figure 17 may also include distance information, which is not limited in this application.
  • the vehicle machine can obtain the spatial coordinates of the pedestrian in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and draw, based on these spatial coordinates, the plan view of the pedestrian and the HUD virtual screen as seen by the human eye, as shown in Figure 19.
  • the sidewalk lines are only for increasing the sense of reality, and the sidewalk lines may be omitted in the plan view.
  • the vehicle machine can determine from the plan view that the pedestrian is located within the HUD virtual screen. At this time, the vehicle machine can determine the motion special effects of the preset warning icon and determine the display position of the preset warning icon on the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function according to the display position, so as to display the preset warning icon in the display area with the motion special effects. Therefore, as shown in Figure 19, an enlarged, second-color, non-directional preset warning icon, such as an enlarged, red pedestrian icon, can be displayed in the HUD virtual screen.
  • the enlarged, red pedestrian icon seen by the user can overlap with the location of the pedestrian in the real world. This can prompt the driver that the pedestrian is very close to the vehicle and is facing the left direction, and can improve the reality of the icon in the driver's field of vision.
  • the motion special effects in Figure 19 can omit at least one of enlargement, red color, and movement direction, and this application does not limit this. It can be understood that before the pedestrian reaches the position shown in Figure 18, as the distance between the pedestrian and the vehicle changes, the color of the pedestrian icon in Figure 17 can also change, and this application does not limit this.
  • the pedestrian icon in Figure 19 changes with the distance between the pedestrian and the vehicle in the HUD virtual screen, moves as the pedestrian walks, and always overlaps with the pedestrian's position.
  • the vehicle machine can obtain the spatial coordinates of the pedestrian in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and draw, based on these spatial coordinates, the plan view of the pedestrian and the HUD virtual screen as seen by the human eye, as shown in Figure 21.
  • the sidewalk lines are only to increase the sense of reality, and the sidewalk lines may be omitted in the plan view.
  • the vehicle machine can determine from the plan view that the pedestrian is outside the HUD virtual screen. At this time, the vehicle machine can determine the motion special effects of the preset warning icon, and can determine the display position of the preset warning icon on the HUD virtual screen based on the plane coordinates of the pedestrian and the plane coordinates of the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function according to the display position, so as to display the preset warning icon in the display area with the motion special effects.
  • therefore, a reduced, first-color, directional preset warning icon, such as a reduced, yellow, directional pedestrian icon, can be displayed on the left edge of the HUD virtual screen close to the pedestrian.
  • the arrow of the directional pedestrian icon points in the direction of the center point of the pedestrian on the left side outside the HUD virtual screen. Therefore, it can indicate that there is a pedestrian at the corresponding position on the left side outside the HUD virtual screen and that the pedestrian is facing the left direction, thereby directly providing the driver with intuitive spatial position information of the pedestrian on the left side outside the HUD virtual screen.
  • the motion special effects in Figure 21 can omit at least one of reduction, yellow, motion direction, and directivity, and this application does not limit this.
  • as the pedestrian continues to walk away, the pedestrian icon displayed on the left edge of the HUD virtual screen near the pedestrian in Figure 21 becomes smaller and smaller due to the motion special effects, and is a walking pedestrian icon, until the pedestrian is no longer recognized as the target obstacle, at which point the pedestrian icon in Figure 21 disappears.
  • the preset warning icon in Figure 21 may also include distance information, which is not limited in this application.
  • pedestrians can move into the HUD virtual screen not only from the right side outside the HUD virtual screen, but also from the upper, lower, or left side outside the HUD virtual screen.
  • the display process in those cases is similar to the display process of moving into the HUD virtual screen from the right side outside the HUD virtual screen, and will not be described again here.
  • Figure 16-21 describes the display method by taking the target obstacle moving from outside the HUD virtual screen to inside the HUD virtual screen and then outside the HUD virtual screen as an example.
  • the target obstacle may also move outside the HUD virtual screen while remaining close to the vehicle, for example when the vehicle passes by a cyclist.
  • the following is an example of a scenario in which a vehicle passes by a cyclist.
  • the vehicle machine can obtain the spatial coordinates of the cyclist in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and draw, based on these spatial coordinates, the plan view of the cyclist and the HUD virtual screen as seen by the human eye, as shown in Figure 23.
  • although icons of highway lines and driving directions are shown in Figure 23, it can be understood that these icons are only to increase the sense of reality, and they may be omitted in the plan view.
  • the vehicle machine can determine from the plan view that the cyclist is outside the HUD virtual screen. At this time, the vehicle machine can determine the motion special effects of the preset warning icon, and determine the display position of the preset warning icon on the HUD virtual screen based on the plane coordinates of the cyclist in the plan view and the plane coordinates of the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function according to the display position, so as to display the preset warning icon in the display area with the motion special effects.
  • therefore, a reduced, first-color, directional preset warning icon, such as a reduced, yellow, directional cyclist icon, can be displayed at the upper end of the right edge of the HUD virtual screen close to the cyclist.
  • the arrow of the directional cyclist icon points in the direction of the center point of the cyclist on the upper right side outside the HUD virtual screen. Therefore, it can indicate that there is a cyclist at the corresponding position on the right side outside the HUD virtual screen and that the cyclist is facing away from the vehicle, and the spatial position information of the cyclist on the right side outside the HUD virtual screen can be directly provided to the driver.
  • the motion special effect in FIG. 23 can omit at least one of reduction, yellow, motion direction, and directivity, and this application does not limit this.
  • the vehicle may gradually get closer to the cyclist.
  • parallel lines converge at the horizon at infinity, so objects of equal height below the horizon appear farther up (closer to the horizon) the farther away they are, as shown in Figure 24.
  • cyclists thus follow the perspective rule of appearing small when far and large when near. Then, when the vehicle gradually approaches the cyclist from a distance, the cyclist visually becomes larger and moves downward, as shown in Figure 25.
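  • The perspective rule above can be illustrated with a simple pinhole-projection sketch. The focal length, camera height, and object height below are assumed example values, not values from this application.

```python
# Illustrative pinhole projection: as distance shrinks, an object of fixed
# real height projects larger on the image plane, and its base moves
# farther below the horizon line (i.e., downward in the view).
def project(object_height_m, camera_height_m, distance_m, focal=1.0):
    # Image-plane offset of the object's base below the horizon
    # (positive = lower in the view) and its apparent height.
    base_below_horizon = focal * camera_height_m / distance_m
    apparent_height = focal * object_height_m / distance_m
    return base_below_horizon, apparent_height

far = project(1.7, 1.2, 40.0)   # cyclist 40 m ahead
near = project(1.7, 1.2, 10.0)  # same cyclist 10 m ahead
print(far, near)
# The nearer cyclist appears larger and sits lower in the view:
assert near[1] > far[1] and near[0] > far[0]
```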
  • the vehicle machine can obtain the spatial coordinates of the cyclist in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and draw, based on these spatial coordinates, the plan view of the cyclist and the HUD virtual screen as seen by the human eye, as shown in Figure 26.
  • although icons of highway lines and driving directions are shown in Figure 26, it can be understood that these icons are only to increase the sense of reality and may be omitted in the plan view.
  • the vehicle machine can determine from the plan view that the cyclist is outside the HUD virtual screen. At this time, the vehicle machine can determine the motion special effects of the preset warning icon, and can determine the display position of the preset warning icon on the HUD virtual screen based on the plane coordinates of the cyclist and the plane coordinates of the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function according to the display position, so as to display the preset warning icon in the display area with the motion special effects.
  • therefore, an enlarged, first-color, directional preset warning icon, such as an enlarged, yellow, directional cyclist icon, can be displayed at the lower end of the right edge of the HUD virtual screen close to the cyclist.
  • the arrow of the directional cyclist icon points in the direction of the center point of the cyclist at the lower right end of the HUD virtual screen. Therefore, it can be indicated that there is a cyclist at the corresponding position on the right side outside the HUD virtual screen and facing away from the vehicle, thereby directly providing the driver with intuitive spatial location information of the cyclist on the right side outside the HUD virtual screen.
  • the motion special effects in Figure 26 can omit at least one of enlargement, yellow color, movement direction, and directivity, and this application does not limit this. It can be understood that before the vehicle travels to the position shown in Figure 25, as the vehicle gradually approaches the cyclist, the cyclist icon in Figure 23 gradually moves downward along the right edge of the HUD virtual screen until it reaches the position shown in Figure 26, and the cyclist icon in Figure 23 is gradually enlarged to the size of the cyclist icon shown in Figure 26. It can be understood that after the vehicle travels to the position shown in Figure 25, as the vehicle continues to travel, the cyclist icon in Figure 26 will continue to enlarge, become a dynamic cyclist icon, and continue to move along the HUD virtual screen.
  • the target obstacle may also be a vehicle or an unrecognizable object.
  • the display process of the vehicle or the unrecognizable object is similar to the above-mentioned display process of pedestrians and cyclists, and will not be described again here.
  • the display method of the present application can not only provide the driver with the spatial position information of a single target obstacle, but also provide the driver with the spatial position information of multiple target obstacles.
  • the types of the multiple target obstacles can be the same or different.
  • multiple target obstacles can be taken as a whole to form a group, such as a crowd; alternatively, multiple target obstacles may not be treated as a whole, but may form multiple separate individuals.
  • if each target obstacle among the multiple target obstacles is of the same type as the other target obstacles, and within a preset time (such as 2 seconds) the distance between every two adjacent target obstacles among the multiple target obstacles is less than a preset threshold (such as 0.5 meters) and the target obstacles move in the same direction, the multiple target obstacles can be treated as a whole to form a group, such as a crowd.
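  • The grouping criterion above (same type, neighbor distance below a preset threshold, same movement direction) can be sketched as a simple chain merge. The data fields, the one-dimensional ordering, and the merge strategy are illustrative assumptions for the sketch, not part of this application.

```python
# Illustrative sketch: merge adjacent obstacles into one group (e.g. a
# crowd) when they share a type, move in the same direction, and the gap
# to the previous obstacle is below the preset threshold.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str        # e.g. "pedestrian"
    x: float         # plane coordinate along the sidewalk (assumed 1-D)
    direction: int   # +1 / -1 movement direction over the time window

def group_obstacles(obstacles, dist_threshold=0.5):
    groups = []
    for ob in sorted(obstacles, key=lambda o: o.x):
        last = groups[-1][-1] if groups else None
        if (last is not None and ob.kind == last.kind
                and ob.x - last.x < dist_threshold
                and ob.direction == last.direction):
            groups[-1].append(ob)   # joins the current group (a crowd)
        else:
            groups.append([ob])     # starts a new individual/group
    return groups

peds = [Obstacle("pedestrian", 0.0, +1), Obstacle("pedestrian", 0.3, +1),
        Obstacle("pedestrian", 2.0, -1)]
print([len(g) for g in group_obstacles(peds)])  # [2, 1]
```

  • The first two pedestrians satisfy all three conditions and form a crowd of two; the third remains a separate individual.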
  • the type of the target obstacle may also include a crowd; crowds are common obstacles.
  • the crowd can be marked by a two-dimensional border (as shown in Figure 27), a trapezoid, or a group icon.
  • the preset warning icon corresponding to the crowd may be a crowd icon.
  • the crowd icon may be a pedestrian icon with an added subscript value, that is, an icon with a subscript.
  • the subscript value may be the number of people, as shown in Figure 28.
  • the crowd icon can also be a group icon, as shown in Figure 28.
  • multiple target obstacles may not be treated as a whole, but may form multiple separate individuals. For example, if any two adjacent target obstacles among the multiple target obstacles are of different types, or the distance between them is greater than a preset threshold (such as 0.5 meters), or their movement directions are opposite, the multiple target obstacles are not treated as a whole and form multiple separate individuals.
  • each separate individual is marked individually, for example each individual pedestrian is marked individually with a pedestrian icon. If the indicator icons of individually marked pedestrian icons overlap, the pedestrian icons whose indicator icons overlap can be combined to form a combination icon.
  • the combination icon can combine pedestrian icons according to the pedestrian's actual spatial position, for example, according to the pedestrian's movement direction, left and right position, and far and near position (front and back position).
  • the combination icon can truly reflect the distance between pedestrians through distance information. In some embodiments, the combination icon does not truly reflect the distance between pedestrians. For example, pedestrian 1 is located to the left of pedestrian 2, and pedestrian 1 is closer to the vehicle than pedestrian 2. Then pedestrian icon 1 in the combination icon is located to the left of pedestrian icon 2, and pedestrian icon 1 is closer to the driver than pedestrian icon 2, that is, the pedestrian icon 1 is in front of pedestrian icon 2.
  • the pedestrian icons in the combination icon can be stacked front to back, as shown in Figure 28. At this time, the movement direction of the pedestrians is opposite and the distance between the pedestrians is less than the preset distance (such as 0.05 meters, etc.), for example, two pedestrians pass by each other.
  • the pedestrian icons in the combination icon can be separated from each other and displayed according to the actual spatial position, as shown in Figure 28. At this time, the distance between pedestrians is any distance greater than the preset distance (such as 0.05 meters, etc.).
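  • The two combination-icon layouts described above can be sketched as a single decision: pedestrians passing very close in opposite directions get front-to-back stacked icons, while any larger gap keeps the icons separated at their actual spatial positions. The function name and thresholds below are assumed for illustration.

```python
# Illustrative sketch of the layout choice inside a combination icon.
def combine_layout(gap_m, dir_a, dir_b, min_gap=0.05):
    # Opposite movement directions with a gap below the preset distance
    # (e.g. two pedestrians passing by each other): stack front to back.
    if dir_a != dir_b and gap_m < min_gap:
        return "stacked"
    # Otherwise keep the icons apart at their actual spatial positions.
    return "separated"

print(combine_layout(0.02, +1, -1))  # stacked: pedestrians pass by closely
print(combine_layout(0.40, +1, -1))  # separated: gap above the threshold
```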
  • the combination icon may be a directional combination icon, that is, a combination icon that includes an indicator icon; alternatively, the combination icon may be a non-directional combination icon.
  • for example, if the pedestrian icons before combination are directional pedestrian icons, the combined icon can be a directional combination icon; if the pedestrian icons before combination are non-directional pedestrian icons, the combined icon can be a non-directional combination icon.
  • if the pedestrian icons among the individually marked pedestrian icons overlap but the indicator icons do not overlap, the pedestrian icons can be displayed separately from each other according to the actual spatial positions of the pedestrians, so that neither the individually marked pedestrian icons nor the indicator icons overlap.
  • the pedestrian icon 3 among the directional pedestrian icons 3 of pedestrian 3 in the HUD virtual screen overlaps with the pedestrian icon 4 among the directional pedestrian icons 4 of pedestrian 4, but the indicator icons do not overlap.
  • the directional pedestrian icon 3 and the directional pedestrian icon 4 can be separated from each other until the pedestrian icon 3 in the directional pedestrian icon 3 and the pedestrian icon 4 in the directional pedestrian icon 4 are separated.
  • the directional pedestrian icon 3 and the directional pedestrian icon 4 that have been separated from each other can then be displayed with neither the pedestrian icons nor the indicator icons overlapping, as shown in Figure 29B, thereby making the display clearer and indicating the spatial positions of the pedestrians within the HUD virtual screen.
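  • The separation of pedestrian icon 3 and pedestrian icon 4 can be sketched as pushing the two icon centers apart along the line between them until the icon rectangles no longer overlap. The one-dimensional simplification and the symmetric half-shift are assumptions for illustration only.

```python
# Illustrative sketch: move two overlapping icons (of width w) apart
# symmetrically until their center distance equals one icon width.
def separate(x3, x4, w):
    overlap = w - abs(x4 - x3)       # > 0 means the pedestrian icons overlap
    if overlap > 0:
        shift = overlap / 2.0        # each icon moves half the overlap
        if x3 <= x4:
            x3, x4 = x3 - shift, x4 + shift
        else:
            x3, x4 = x3 + shift, x4 - shift
    return x3, x4

a, b = separate(4.0, 4.4, 1.0)       # icons of width 1.0, centers 0.4 apart
print(a, b)                          # centers now one icon-width apart
```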
  • the case where the pedestrian icons overlap but the indicator icons do not overlap may be a case where the pedestrian icons of two directional pedestrian icons overlap, or a case where a directional pedestrian icon and a non-directional pedestrian icon overlap. Similarly, in some embodiments, the case where the pedestrian icons do not overlap and the indicator icons do not overlap may involve two directional pedestrian icons, or a directional pedestrian icon and a non-directional pedestrian icon.
  • the pedestrian icons can be stacked one after another according to the actual spatial position of the pedestrian.
  • a plurality of target obstacles may be located within the HUD virtual screen, or outside the HUD virtual screen. If multiple target obstacles form a group, first determine the center point of the group's mark, such as the center point of the crowd's two-dimensional frame, and then determine whether that center point is located within the HUD virtual screen. If the center point of the two-dimensional frame of the crowd is located within the HUD virtual screen, the crowd is located within the HUD virtual screen; if the center point is outside the HUD virtual screen, the crowd is located outside the HUD virtual screen.
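  • The inside/outside decision above reduces to a point-in-rectangle test on the center point of the group's two-dimensional frame. The coordinate names and screen bounds below are assumptions for illustration.

```python
# Illustrative sketch: a crowd is "in" the HUD virtual screen exactly when
# the center point of its two-dimensional frame lies inside the screen rect.
def crowd_in_screen(frame_center, screen_min, screen_max):
    cx, cy = frame_center
    return (screen_min[0] <= cx <= screen_max[0]
            and screen_min[1] <= cy <= screen_max[1])

print(crowd_in_screen((3.0, 2.0), (0.0, 0.0), (8.0, 4.0)))  # True
print(crowd_in_screen((9.5, 2.0), (0.0, 0.0), (8.0, 4.0)))  # False
```

  • The same test applies per individual when the obstacles are separate, using the center point of each pedestrian's two-dimensional frame.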
  • the display position of the crowd icon corresponding to the crowd on the HUD virtual screen is determined based on the center point of the two-dimensional frame of the crowd, in a manner similar to determining the display position of the pedestrian icon on the HUD virtual screen based on the center point of the pedestrian's two-dimensional frame in the above single-target-obstacle scenario, and will not be described again here.
  • if the multiple target obstacles are separate individuals, the center point of each individual's mark can be determined, such as the center point of each pedestrian's two-dimensional frame, and it can then be determined whether the center point of each pedestrian's two-dimensional frame is within the HUD virtual screen. If the center point of a pedestrian's two-dimensional frame is located within the HUD virtual screen, the pedestrian is located within the HUD virtual screen; if it is located outside the HUD virtual screen, the pedestrian is located outside the HUD virtual screen.
  • the display position of the pedestrian icon corresponding to each pedestrian on the HUD virtual screen is determined based on the center point of that pedestrian's two-dimensional frame, in a manner similar to the above single-target-obstacle scenario, and will not be described again here.
  • the crowd icon corresponding to the crowd also has a movement effect, and the movement effect of the crowd icon is similar to the movement effect of the preset warning icon corresponding to a single target obstacle.
  • when the crowd is outside the HUD virtual screen, the crowd icon is a directional crowd icon or a non-directional crowd icon.
  • when the crowd is inside the HUD virtual screen, the crowd icon is a non-directional crowd icon.
  • when the crowd is inside the HUD virtual screen, the crowd icon can also be a non-directional crowd icon with a box, as shown in Figure 30. The box selects the crowd as a whole, and the box can indicate the actual size of the area where the crowd as a whole is located.
  • the display method can also display a crowd icon and a pedestrian icon separately from each other according to the actual spatial positions of the pedestrians; likewise, it can display one crowd icon and another crowd icon, a combination icon and a pedestrian icon, or one combination icon and another combination icon separately from each other according to the actual spatial positions of the pedestrians. This application does not limit this.
  • the group can also be a group of cyclists, a group of vehicles, etc.
  • the type of the target obstacle can also include a group of cyclists, a group of vehicles, etc., which is not limited in this application.
  • icons in the combined icons may include different types of icons, such as pedestrian icons and cyclist icons, which is not limited in this application.
  • the following will first introduce the display method in a scene where multiple target obstacles are a group.
  • pedestrian 5 and pedestrian 6 may walk to the far right side of the sidewalk, with the distance between them less than a preset threshold (for example, 0.5 meters) within a preset time (for example, 2 seconds) and moving in the same direction, and may be preparing to cross the road along the sidewalk from the rightmost side of the sidewalk; pedestrian 7 and pedestrian 8 may move from the far right side of the sidewalk to the middle of the sidewalk, with the distance between them less than the preset threshold (for example, 0.5 meters) within a preset time (for example, 2 seconds) and moving in the same direction.
  • the vehicle machine can obtain the spatial coordinates of pedestrian 5, pedestrian 6, pedestrian 7, and pedestrian 8 in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and draw, based on these spatial coordinates, the plan view of pedestrian 5, pedestrian 6, pedestrian 7, pedestrian 8, and the HUD virtual screen as seen by the human eye, as shown in Figure 32.
  • the sidewalk lines are shown in FIG. 32 , it can be understood that the sidewalk lines are only to increase the sense of reality, and the sidewalk lines may be omitted in the plan view.
  • the vehicle can determine that pedestrians 5 and 6 can be used as a whole to form crowd 1, and that pedestrians 7 and 8 can be used as a whole to form crowd 2.
  • the vehicle machine can also determine from the plan view that crowd 1 is outside the HUD virtual screen, and that crowd 2 is inside the HUD virtual screen.
  • the vehicle machine can determine the motion special effects of crowd icon 1 corresponding to crowd 1 and the motion special effects of crowd icon 2 corresponding to crowd 2, and can determine the display positions of crowd icon 1 and crowd icon 2 on the HUD virtual screen based on the plane coordinates of crowd 1, the plane coordinates of crowd 2, and the plane coordinates of the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function according to the display positions, so as to display crowd icon 1 and crowd icon 2 in the display area with the motion special effects.
  • therefore, as shown in Figure 32, a reduced, first-color, directional crowd icon 1, such as a reduced, yellow, directional crowd icon 1, can be displayed on the right edge of the HUD virtual screen close to the pedestrians; and an enlarged, second-color, non-directional, boxed crowd icon 2 that overlaps with the position of crowd 2 in the real world, such as an enlarged, red, boxed crowd icon 2, can be displayed in the HUD virtual screen.
  • therefore, the driver can be prompted that there is crowd 1 at the corresponding position on the right side outside the HUD virtual screen and that crowd 1 is facing the left direction, and that crowd 2 is very close to the vehicle and facing the left direction, which can directly provide the driver with intuitive spatial position information of the crowds inside and outside the HUD virtual screen.
  • in Figure 32, crowd icon 1 and crowd icon 2 are each a pedestrian icon with an added subscript value of 2. It is understandable that crowd icon 1 and crowd icon 2 can also be group icons, and this application does not limit this.
  • the motion special effects of crowd icon 1 in Figure 32 can omit at least one of reduction, yellow color, movement direction, and directivity, and the motion special effects of crowd icon 2 can omit at least one of enlargement, red color, movement direction, and the box; this application does not limit this. It can be understood that crowd icon 1 and crowd icon 2 in Figure 32 may also each include distance information, and this application does not limit this.
  • the crowd can also be located on the left side of the HUD virtual screen and walk left.
  • the crowd can also be located on the left side of the HUD virtual screen and walk right.
  • the crowd can also be located on the right side of the HUD virtual screen and walk right.
  • the crowd can also be located within the HUD virtual screen and walk right; this application does not limit this.
  • the number of pedestrians in the crowd is not limited to two, but can also be other numbers, such as three, four, etc.; the number of pedestrians in different crowds can be the same or different, and this application does not limit this.
  • the group can also be a group of cyclists, a group of vehicles, etc., and this application does not limit this.
  • pedestrian 9 and pedestrian 10 may be moving in opposite directions on the rightmost side of the sidewalk, with the distance between them greater than a preset threshold (such as 0.5 meters).
  • pedestrian 11 and pedestrian 12 may be moving in opposite directions from the leftmost side and the rightmost side of the sidewalk toward the middle of the sidewalk, with the distance between them less than the preset distance (such as 0.05 meters).
  • pedestrian 13 and pedestrian 14 may be moving in opposite directions on the leftmost side of the sidewalk, with the distance between them less than the preset distance (such as 0.05 meters).
  • the vehicle machine can obtain the spatial coordinates of pedestrian 9, pedestrian 10, pedestrian 11, pedestrian 12, pedestrian 13, and pedestrian 14 in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and, based on these spatial coordinates, draw the plan view of pedestrian 9, pedestrian 10, pedestrian 11, pedestrian 12, pedestrian 13, pedestrian 14, and the HUD virtual screen as seen by the human eye, as shown in Figure 34.
  • the sidewalk lines are shown in Figure 34; it can be understood that the sidewalk lines are only to increase the sense of realism, and they may be omitted in the plan view.
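The plan view above is drawn from coordinates in the human eye coordinate system. As an illustrative sketch only (the patent does not specify its projection method), a simple pinhole projection can map a point given in the eye coordinate system onto the plane of the HUD virtual screen; the function name and the assumption that the screen is perpendicular to the viewing axis are ours:

```python
def project_to_screen_plane(point_eye, screen_depth):
    """Project a 3D point (x, y, z) in the human eye coordinate system
    onto the HUD virtual-screen plane, assumed perpendicular to the
    viewing axis at depth screen_depth. Pinhole model: scale the
    lateral offsets by screen_depth / z."""
    x, y, z = point_eye
    if z <= 0:
        raise ValueError("point must be in front of the eye")
    s = screen_depth / z
    return (x * s, y * s)

# A pedestrian 10 m ahead and 2 m to the right, projected onto a
# virtual screen 2.5 m from the eye, lands 0.5 m right of center.
print(project_to_screen_plane((2.0, -0.5, 10.0), 2.5))
```

Repeating this projection for each pedestrian and for the corners of the HUD virtual screen yields the plane coordinates used in the plan view.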
  • the vehicle machine can determine that pedestrian 9, pedestrian 10, pedestrian 11, pedestrian 12, pedestrian 13, and pedestrian 14 are multiple separate individuals.
  • the vehicle machine can also determine, based on the plan view, that pedestrians 9, 10, 13, and 14 are outside the HUD virtual screen, and that pedestrians 11 and 12 are inside the HUD virtual screen.
  • the vehicle machine can determine the movement special effects of the pedestrian icon corresponding to each of pedestrian 9, pedestrian 10, pedestrian 11, pedestrian 12, pedestrian 13, and pedestrian 14, and can determine, according to the plane coordinates of each of these pedestrians and the plane coordinates of the HUD virtual screen, the display position of each pedestrian's icon on the HUD virtual screen.
  • the vehicle machine can also determine, based on the movement special effects, that the indicator icon of pedestrian icon 9 of pedestrian 9 overlaps the indicator icon of pedestrian icon 10 of pedestrian 10, that pedestrian icon 11 of pedestrian 11 overlaps pedestrian icon 12 of pedestrian 12, and that the indicator icon of pedestrian icon 13 of pedestrian 13 overlaps the indicator icon of pedestrian icon 14 of pedestrian 14. Pedestrian icon 9 and pedestrian icon 10 can therefore be combined to form combination icon 1, pedestrian icon 11 and pedestrian icon 12 are stacked front and back according to the actual spatial positions of the pedestrians, and pedestrian icon 13 and pedestrian icon 14 are combined to form combination icon 2. Pedestrian icon 9 and pedestrian icon 10 in combination icon 1 are separated from each other and displayed according to their actual spatial positions, and pedestrian icon 13 and pedestrian icon 14 in combination icon 2 are stacked front and back.
  • the vehicle machine can also determine the display position of combination icon 1 on the HUD virtual screen, the display position of the front-and-back stacked pedestrian icons 11 and 12 on the HUD virtual screen, and the display position of combination icon 2 on the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function to display, according to the respective display positions, combination icon 1, the front-and-back stacked pedestrian icons 11 and 12, and combination icon 2 in the display area. Therefore, as shown in Figure 34, a reduced, first-color, directional combination icon 1, for example a reduced, yellow, directional combination icon 1, can be displayed on the right edge of the HUD virtual screen close to pedestrians 9 and 10, with the arrows of directional combination icon 1 pointing to pedestrian 9 and pedestrian 10; enlarged, second-color pedestrian icons 11 and 12 that overlap the positions of pedestrians 11 and 12 in the real world, for example enlarged, red, front-and-back stacked pedestrian icons 11 and 12, can be displayed on the HUD virtual screen; and a reduced, first-color, directional combination icon 2, for example a reduced, yellow, directional combination icon 2, can be displayed on the left edge of the HUD virtual screen close to pedestrians 13 and 14, with the arrows of directional combination icon 2 pointing to pedestrian 13 and pedestrian 14. This indicates to the driver that there are pedestrians 9 and 10 at the corresponding positions on the right side outside the HUD virtual screen, with pedestrian 9 facing right and pedestrian 10 facing left, and at the same time reminds the driver that pedestrians 11 and 12 are very close to the vehicle.
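The decision to merge icons whose indicator icons overlap into a combination icon can be sketched as a plain axis-aligned rectangle overlap test followed by greedy grouping. This is an illustrative reconstruction; the rectangle representation, the function names, and the greedy (non-transitive) grouping are our assumptions, not the patent's algorithm:

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test; each rect is (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def combine_if_overlapping(icons):
    """Greedily merge icons whose indicator rectangles overlap into
    combination groups; icons is a list of (icon_id, rect) pairs and
    the result is a list of lists of icon ids."""
    groups = []
    for icon_id, rect in icons:
        for group in groups:
            if any(rects_overlap(rect, r) for _, r in group):
                group.append((icon_id, rect))
                break
        else:
            groups.append([(icon_id, rect)])
    return [[icon_id for icon_id, _ in g] for g in groups]
```

With indicator rectangles for pedestrian icons 9 and 10 overlapping on the right edge and pedestrian icon 13 far away, the first two would form one combination group and icon 13 would remain separate.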
  • the movement special effects of pedestrian icon 9 and pedestrian icon 10 in combination icon 1 in Figure 34 can omit at least one of the reduction, the yellow color, and the directivity; the movement special effects of the front-and-back stacked pedestrian icons 11 and 12 can omit at least one of the enlargement and the red color; and the movement special effects of pedestrian icon 13 and pedestrian icon 14 in combination icon 2 in Figure 34 can omit at least one of the reduction, the yellow color, and the directivity; this application does not limit this.
  • pedestrian icon 9 and pedestrian icon 10 in combination icon 1 in Figure 34, and pedestrian icon 13 and pedestrian icon 14 in combination icon 2, may also each include distance information, which is not limited in this application.
  • the pedestrian icons corresponding to multiple separate pedestrians can also overlap outside the HUD virtual screen while their indicator icons do not overlap; the pedestrian icons corresponding to multiple separate pedestrians can also be within the HUD virtual screen with the pedestrian icons overlapping but the indicator icons not overlapping; in these cases the multiple separate pedestrian icons need to be displayed separated from each other according to the actual spatial positions of the pedestrians. This application does not limit this.
  • when the vehicle is waiting for a red light at an intersection, pedestrian 15 and cyclist 1 may be on the sidewalk in front of the vehicle, as shown in Figure 35. In Figure 35, pedestrian 15 and cyclist 1 may be moving in opposite directions on the leftmost side of the sidewalk, with the distance between them less than a preset threshold (for example, 0.5 meters).
  • the vehicle machine can obtain the spatial coordinates of pedestrian 15 and cyclist 1 in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and, based on these spatial coordinates, draw the plan view of pedestrian 15, cyclist 1, and the HUD virtual screen as seen by the human eye, as shown in Figure 36.
  • the sidewalk lines are shown in Figure 36; it can be understood that the sidewalk lines are only to increase the sense of realism, and they may be omitted in the plan view.
  • the vehicle machine can determine that pedestrian 15 and cyclist 1 are multiple separate individuals.
  • the vehicle machine can also determine, based on the plan view, that pedestrian 15 and cyclist 1 are located outside the HUD virtual screen. At this time, the vehicle machine can determine the movement special effects of pedestrian icon 15 corresponding to pedestrian 15 and of cyclist icon 1 corresponding to cyclist 1, and can determine, according to the plane coordinates of pedestrian 15 and cyclist 1 and the plane coordinates of the HUD virtual screen, the display positions of pedestrian icon 15 and cyclist icon 1 on the HUD virtual screen. The vehicle machine can also determine, based on the movement special effects, that the indicator icons of pedestrian icon 15 and cyclist icon 1 overlap, and can combine pedestrian icon 15 and cyclist icon 1 to form combination icon 3.
  • the pedestrian icon 15 and the cyclist icon 1 in the combination icon 3 are separated from each other and displayed according to their actual spatial positions.
  • the vehicle machine can also determine the display position of combination icon 3 on the HUD virtual screen, and can control the HUD, AR-HUD, or other device with a display function to display combination icon 3 in the display area according to that display position. Therefore, as shown in Figure 36, a reduced, first-color, directional combination icon 3, such as a reduced, yellow, directional combination icon 3, can be displayed on the left edge of the HUD virtual screen close to pedestrian 15 and cyclist 1.
  • the movement special effects of pedestrian icon 15 and cyclist icon 1 in combination icon 3 in Figure 36 can omit at least one of the reduction, the yellow color, and the directivity, and this application does not limit this. It can be understood that pedestrian icon 15 and cyclist icon 1 in combination icon 3 in Figure 36 may also each include distance information, which is not limited by this application. It can be understood that the icons in a combination icon can be a pedestrian icon and a vehicle icon, or a cyclist icon and a vehicle icon, which is not limited in this application.
  • the walking speeds of different pedestrians may be different, and the multiple pedestrians may walk to the positions shown in Figure 37.
  • pedestrian 5 and pedestrian 6 may move along the rightmost side of the sidewalk within a preset time (for example, 2 seconds), with the distance between them less than a preset threshold (for example, 0.5 meters) and their movement directions the same.
  • pedestrian 7 and pedestrian 8 may walk along the sidewalk from the middle to the leftmost side of the sidewalk, with the distance between them greater than the preset threshold (for example, 0.5 meters).
  • the vehicle machine can obtain the spatial coordinates of pedestrian 5, pedestrian 6, pedestrian 7, and pedestrian 8 in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and, based on these spatial coordinates, draw the plan view of pedestrian 5, pedestrian 6, pedestrian 7, pedestrian 8, and the HUD virtual screen as seen by the human eye, as shown in Figure 38.
  • the sidewalk lines are shown in Figure 38; it can be understood that the sidewalk lines are only to increase the sense of realism, and they may be omitted in the plan view.
  • the vehicle machine can determine that pedestrian 5 and pedestrian 6 can be treated as a whole to form crowd 3, and determine that pedestrian 7 and pedestrian 8 are two separate individuals.
  • the vehicle machine can also determine, based on the plan view, that crowd 3 is located within the HUD virtual screen, and determine that pedestrians 7 and 8 are located outside the HUD virtual screen.
  • the vehicle machine can determine the movement special effects of crowd icon 3 corresponding to crowd 3, of pedestrian icon 7 corresponding to pedestrian 7, and of pedestrian icon 8 corresponding to pedestrian 8, and can determine, according to the plane coordinates of crowd 3, pedestrian 7, and pedestrian 8 and the plane coordinates of the HUD virtual screen, the respective display positions of crowd icon 3, pedestrian icon 7, and pedestrian icon 8 on the HUD virtual screen.
  • the vehicle machine can also determine, based on the movement special effects, that the indicator icon of pedestrian icon 7 of pedestrian 7 overlaps the indicator icon of pedestrian icon 8 of pedestrian 8, combine pedestrian icon 7 and pedestrian icon 8 to form combination icon 4, and determine the display position of combination icon 4 on the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function to display crowd icon 3 in the display area with its movement special effects according to the display position of crowd icon 3 on the HUD virtual screen, and to display combination icon 4 in the display area according to the display position of combination icon 4 on the HUD virtual screen. Therefore, as shown in Figure 38, an enlarged, second-color, non-directional, boxed crowd icon 3 that overlaps the position of crowd 3 in the real world, for example an enlarged, red, boxed crowd icon 3, can be displayed on the HUD virtual screen; and a reduced, first-color, directional combination icon 4, for example a reduced, yellow, directional combination icon 4, can be displayed on the left edge of the HUD virtual screen near pedestrians 7 and 8.
  • crowd icon 3 is a pedestrian icon with an added subscript value of 2. It can be understood that crowd icon 3 can also be a group icon, and this application does not limit this.
  • the movement special effects of crowd icon 3 in Figure 38 can omit at least one of the enlargement, the red color, and the box, and the movement special effects of pedestrian icons 7 and 8 in combination icon 4 can omit at least one of the reduction, the yellow color, and the directivity; this application does not limit this.
  • crowd icon 3 in Figure 38, and pedestrian icon 7 and pedestrian icon 8 in combination icon 4, may also each include distance information, and this application does not limit this.
  • switching from a group scene to a scene including multiple separate individuals may also occur in other ways; for example, as the multiple pedestrians in Figure 31 walk along the sidewalk, the walking direction of a pedestrian in the crowd may change to the opposite direction. This application does not limit this.
  • multiple target obstacles can also switch from a scene of multiple separate individuals of the same type to a scene including a group.
  • the following will take the scenario of multiple separate individuals of the same type shown in Figure 33 as an example for explanation.
  • the walking speed and walking direction of the pedestrians may change; for example, pedestrian 9 speeds up, pedestrian 11 changes walking direction, and pedestrian 14 speeds up, so that the multiple pedestrians may walk to the positions shown in Figure 39.
  • pedestrian 9, pedestrian 11, and pedestrian 12 may walk along the sidewalk within a preset time (for example, 2 seconds), with the distances between them less than a preset threshold (for example, 0.5 meters) and their movement directions the same.
  • the pedestrian 14 may walk along the sidewalk from the leftmost side to the rightmost side of the sidewalk.
  • the vehicle machine can obtain the spatial coordinates of pedestrian 9, pedestrian 11, pedestrian 12, and pedestrian 14 in the human eye coordinate system and the spatial coordinates of the HUD virtual screen in the human eye coordinate system, and, based on these spatial coordinates, draw the plan view of pedestrian 9, pedestrian 11, pedestrian 12, pedestrian 14, and the HUD virtual screen as seen by the human eye, as shown in Figure 40.
  • the sidewalk lines are shown in Figure 40; it can be understood that the sidewalk lines are only to increase the sense of realism, and they may be omitted in the plan view.
  • the vehicle machine can determine that pedestrian 9, pedestrian 11, and pedestrian 12 can be treated as a whole to form crowd 4, and determine that pedestrian 14 is a separate individual.
  • the vehicle machine can also determine, based on the plan view, that crowd 4 is outside the HUD virtual screen, and determine that pedestrian 14 is outside the HUD virtual screen.
  • the vehicle machine can determine the movement special effects of crowd icon 4 corresponding to crowd 4 and of pedestrian icon 14 corresponding to pedestrian 14, and can determine, according to the plane coordinates of crowd 4 and pedestrian 14 and the plane coordinates of the HUD virtual screen, the respective display positions of crowd icon 4 and pedestrian icon 14 on the HUD virtual screen.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function, according to the display positions of crowd icon 4 and pedestrian icon 14 on the HUD virtual screen, to display crowd icon 4 and pedestrian icon 14 in the display area with their movement special effects. Therefore, as shown in Figure 40, a reduced, first-color, directional crowd icon 4, such as a reduced, yellow, directional crowd icon 4, can be displayed on the left edge of the HUD virtual screen close to crowd 4; and a reduced, first-color, directional pedestrian icon 14, such as a reduced, yellow, directional pedestrian icon 14, can be displayed on the right edge of the HUD virtual screen close to pedestrian 14, where the arrow of directional crowd icon 4 points to crowd 4 and the arrow of directional pedestrian icon 14 points to pedestrian 14.
  • the driver can thus be informed that there is a crowd 4 at the corresponding position on the left side outside the HUD virtual screen moving away from the vehicle to the left, and that there is a pedestrian 14 at the corresponding position on the right side outside the HUD virtual screen moving away from the vehicle to the right. Therefore, the spatial position information of a crowd outside the HUD virtual screen can be provided to the driver directly and intuitively.
  • crowd icon 4 is a pedestrian icon with an added subscript value of 3. It can be understood that crowd icon 4 can also be a group icon, and this application does not limit this.
  • the movement special effects of crowd icon 4 in Figure 40 can omit at least one of the reduction, the yellow color, the movement direction, and the directivity, and the movement special effects of pedestrian icon 14 can omit at least one of the reduction, the yellow color, the movement direction, and the directivity; this application does not limit this. It can be understood that crowd icon 4 and pedestrian icon 14 in Figure 40 may also each include distance information, and this application does not limit this.
  • the preset warning icon can be a dynamic preset warning icon, and this application does not limit this.
  • the display method of this application can provide early warning not only for target obstacles in front of the vehicle but also for target obstacles not in front of the vehicle, for example target obstacles behind or to the side of the vehicle. Specifically:
  • the vehicle may be driving on the road, and there may be a moving vehicle behind and to the left of the vehicle, as shown in Figure 41.
  • in Figure 41 the object at the left rear of the vehicle is a vehicle, but it can be understood that the object at the left rear can also be another object, such as a pedestrian or a cyclist; the other vehicle can also be located to the left of, directly behind, at the right rear of, or to the right of the own vehicle, and this application does not restrict this.
  • the vehicle machine can determine, based on the data sensed by the sensors and the second warning range, that the target obstacle not in front of the vehicle is a vehicle, and determine that the position of the target obstacle vehicle relative to the own vehicle is the left rear.
  • the vehicle machine can draw, based on the orientation of the target obstacle vehicle relative to the own vehicle, a plan view of the orientation distribution of the HUD virtual screen and the mark of the target obstacle vehicle, as shown in Figure 42.
  • the orientation between the mark 4201 of the target obstacle vehicle and the HUD virtual screen 4202 is the same as the orientation of the target obstacle vehicle relative to the host vehicle.
  • the vehicle machine can also determine, based on the orientation of the target obstacle vehicle relative to the own vehicle, that in the plan view the target obstacle vehicle is located in the lower-left corner area outside the own vehicle.
  • the vehicle machine can also determine the display position of the preset warning icon on the HUD virtual screen based on the lower-left corner area in which the target obstacle vehicle is located in the plan view.
  • the process of determining the display position of the preset warning icon on the HUD virtual screen based on the lower-left corner area in the plan view is similar to the process of determining the display position of the preset warning icon on the HUD virtual screen described above for Figure 13, and this application does not limit it.
  • the vehicle machine can also control the HUD, AR-HUD, or other device with a display function to display the preset warning icon in the display area based on the display position. Therefore, as shown in Figure 43, a preset warning icon, such as an exclamation mark icon, can be displayed in the lower left corner of the HUD virtual screen, which indicates to the driver that there is a vehicle at the left rear of the own vehicle.
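The mapping from the relative orientation of an obstacle that is not in front of the vehicle to a screen region (lower left corner for a vehicle at the left rear, as in Figure 43) can be sketched as below. The coordinate convention (x positive to the right, y positive toward the front) and the function name are illustrative assumptions, not the patent's stated method:

```python
def warning_corner(dx, dy):
    """Map the obstacle's offset relative to the own vehicle
    (dx: + right / - left, dy: + front / - rear) to the HUD screen
    corner in which the preset warning icon (e.g. an exclamation
    mark) is displayed."""
    horizontal = "right" if dx > 0 else "left"
    vertical = "lower" if dy < 0 else "upper"
    return f"{vertical} {horizontal}"

# A vehicle 3 m to the left and 8 m behind maps to the lower left corner.
print(warning_corner(-3.0, -8.0))
```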
  • Figure 44 is a flow chart of a display method provided by an embodiment of the present application. The method includes:
  • S4401 Obtain the position of the first target obstacle and the display range of the screen of the electronic device; wherein the position of the first target obstacle is located outside the display range of the screen.
  • S4402 Determine the first display position of the first preset warning icon on the screen according to the position of the first target obstacle and the display range of the screen.
  • the first preset warning icon is used to prompt the first target obstacle.
  • S4403 Display the first preset warning icon on the screen according to the first display position.
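Steps S4401–S4403 can be sketched end to end: obtain the target position and the screen display range, derive the first display position from them, and display the icon there. Clamping the off-screen position onto the nearest screen edge is our illustrative reading of how an edge position close to the target is chosen; the names and data shapes are assumptions:

```python
def first_display_position(target_xy, screen_range):
    """S4402 sketch: clamp an off-screen target position onto the
    screen display range (x, y, width, height) so the warning icon
    appears on the edge nearest the target."""
    tx, ty = target_xy
    x, y, w, h = screen_range
    return (min(max(tx, x), x + w), min(max(ty, y), y + h))

# S4401: target at (12, 3) lies outside a 10 x 6 screen at the origin.
# S4403: the icon would then be drawn at the clamped edge position.
print(first_display_position((12.0, 3.0), (0.0, 0.0, 10.0, 6.0)))
```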
  • the first preset warning icon includes a first icon and a second icon; the first icon is an indicator icon used to indicate the direction of the first target obstacle, and the second icon corresponds to the first target obstacle.
  • the display range of the screen is the size of the visual range that can be seen through the screen.
  • before determining the first display position of the first preset warning icon on the screen based on the position of the first target obstacle and the display range of the screen, the method further includes: obtaining information on multiple first target obstacles, the information including the distances between the multiple first target obstacles, the movement direction of each first target obstacle, and the type of each first target obstacle; if the information on the multiple first target obstacles satisfies preset conditions, determining the types of the multiple first target obstacles as a group type; and determining the first preset warning icon according to the group type, where the first preset warning icon is a preset group icon.
  • the preset group icon is a second icon with an added index value, where the index value is the number of the first target obstacles; or the preset group icon is a group icon.
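The "preset conditions" for treating several same-type obstacles as a group are described above in terms of pairwise distances and movement directions. A minimal sketch, assuming the condition is "every pairwise distance below a threshold and identical movement directions" (the exact condition, threshold value, and names are our assumptions):

```python
def is_group(positions, directions, threshold=0.5):
    """Return True when same-type target obstacles form a group:
    all movement directions agree and every pairwise distance is
    below `threshold` (in meters). Illustrative only."""
    if len(set(directions)) != 1:
        return False
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            xi, yi = positions[i]
            xj, yj = positions[j]
            if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 >= threshold:
                return False
    return True
```

Under this sketch, pedestrians 0.3 m apart walking the same way form a crowd, while pedestrians 1 m apart, or walking opposite ways, remain separate individuals.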
  • displaying the first preset warning icon on the screen according to the first display position includes: if the multiple first target obstacles are multiple separate individuals and the first icons of the multiple first preset warning icons overlap, displaying on the screen, according to the first display position, a combined icon composed of the multiple first preset warning icons, where the combined icon includes the first icon, and the multiple second icons in the combined icon are arranged according to the actual spatial positions of the first target obstacles.
  • displaying the first preset warning icon on the screen according to the first display position includes: if the multiple first target obstacles are multiple separate individuals and the first icons of the multiple first preset warning icons do not overlap but the second icons overlap, displaying the multiple first preset warning icons on the screen according to the first display position, spaced apart from each other according to the actual spatial positions of the first target obstacles.
  • displaying the first preset warning icon on the screen according to the first display position includes: displaying the directional first preset warning icon on the screen according to the first display position; the method further includes: when the position of the first target obstacle is within the display range of the screen, determining a second display position of the first preset warning icon on the screen according to the position of the first target obstacle, and displaying the non-directional first preset warning icon at the second display position on the screen.
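The choice between the directional icon (target outside the display range) and the non-directional icon (target inside) reduces to a containment test. A sketch with illustrative names and data shapes:

```python
def icon_variant(target_xy, screen_range):
    """Return 'directional' when the target lies outside the screen
    display range (x, y, width, height), else 'non-directional'."""
    tx, ty = target_xy
    x, y, w, h = screen_range
    inside = x <= tx <= x + w and y <= ty <= y + h
    return "non-directional" if inside else "directional"
```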
  • displaying the first preset warning icon on the screen according to the first display position includes: when the distance between the first target obstacle and the vehicle is a first distance, displaying the first preset warning icon at a first size on the screen according to the first display position; when the distance between the first target obstacle and the vehicle is a second distance, displaying the first preset warning icon at a second size on the screen according to the first display position; where the first distance is greater than the second distance and the first size is smaller than the second size.
  • displaying the first preset warning icon on the screen according to the first display position includes: when the distance between the first target obstacle and the vehicle is a first distance, displaying the first preset warning icon in a first color on the screen according to the first display position; when the distance between the first target obstacle and the vehicle is a second distance, displaying the first preset warning icon in a second color on the screen according to the first display position; where the first distance is greater than the second distance.
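The size and color rules above (a greater first distance giving a reduced first-color icon, a smaller second distance giving an enlarged second-color icon) can be sketched with a single threshold. The threshold value and the concrete colors are illustrative examples taken from the figures (yellow/red), not values fixed by the claims:

```python
def icon_style(distance_m, near_threshold_m=5.0):
    """Map the obstacle's distance to icon size and color: nearer
    obstacles get an enlarged second-color (red) icon, farther ones
    a reduced first-color (yellow) icon. Threshold is illustrative."""
    if distance_m <= near_threshold_m:
        return {"size": "enlarged", "color": "red"}
    return {"size": "reduced", "color": "yellow"}
```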
  • displaying the first preset warning icon on the screen according to the first display position includes: when the first target obstacle is moving, displaying the dynamic first preset warning icon on the screen according to the first display position.
  • displaying the first preset warning icon on the screen according to the first display position includes: displaying on the screen, according to the first display position, the first preset warning icon with a first movement direction, the first movement direction being determined according to a second movement direction of the first target obstacle.
  • obtaining the position of the first target obstacle and the display range of the screen of the electronic device includes: obtaining a first plan view of the first target obstacle and the display range of the screen; determining the first display position of the first preset warning icon on the screen according to the position of the first target obstacle and the display range of the screen includes: determining the first display position of the first preset warning icon on the screen according to the positions of the first target obstacle and the display range of the screen in the first plan view.
  • before acquiring the first plan view of the first target obstacle and the display range of the screen, the method further includes: acquiring first spatial coordinates of the first target obstacle and second spatial coordinates of the screen, the first spatial coordinates and the second spatial coordinates both being coordinates in the human eye coordinate system; acquiring the first plan view of the first target obstacle and the display range of the screen includes: obtaining, according to the first spatial coordinates and the second spatial coordinates, the first plan view of the first target obstacle and the display range of the screen from the human eye perspective.
  • obtaining the first plan view of the first target obstacle and the display range of the screen from the human eye perspective based on the first spatial coordinates and the second spatial coordinates includes: obtaining, based on the first spatial coordinates and the second spatial coordinates, the first plan view of the mark of the first target obstacle and the display range of the screen from the human eye perspective, where in the first plan view the center point of the mark of the first target obstacle is located outside the display range of the screen; determining the first display position of the first preset warning icon on the screen based on the positions of the first target obstacle and the display range of the screen in the first plan view includes: determining the first display position of the first preset warning icon on the screen according to the center point of the mark of the first target obstacle and the position of the display range of the screen in the first plan view; the center point includes the center of gravity or the geometric center.
  • the method further includes: obtaining the orientation of the second target obstacle relative to the vehicle; determining a third display position of the second preset warning icon on the screen based on the orientation of the second target obstacle relative to the vehicle, where the second preset warning icon is used to prompt the second target obstacle; and displaying the second preset warning icon on the screen according to the third display position.
  • determining the third display position of the second preset warning icon on the screen based on the orientation of the second target obstacle relative to the vehicle includes: obtaining, based on the orientation of the second target obstacle relative to the vehicle, a second plan view of the orientation distribution of the second target obstacle and the display range of the screen; and determining the third display position of the second preset warning icon on the screen based on the positions of the second target obstacle and the display range of the screen in the second plan view.
  • the first target obstacle is located in front of the vehicle, and the second target obstacle is located in a direction other than the front of the vehicle.
  • This application obtains the position of the first target obstacle and the display range of the screen, where the position of the first target obstacle lies outside the display range of the screen.
  • This application also determines the first display position of the first preset warning icon on the screen according to the display range of the screen and the position of the first target obstacle, and displays the first preset warning icon on the screen at the first display position, so that
  • intuitive spatial position information about target obstacles outside the display range can be provided to the driver directly on the screen.
  • the present application can be implemented in software plus the necessary general-purpose hardware, or in dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, special-purpose components, and the like. In general, any function performed by a computer program can also be implemented in corresponding hardware, and the specific hardware structures used to implement the same function can vary: analog circuits, digital circuits, special-purpose circuits, etc. For this application, however, a software implementation is the better choice in most cases. On this understanding, the technical solution of this application, or the part of it that contributes over the prior art, can be embodied in the form of a software product.
  • the computer software product is stored in a readable storage medium, such as a floppy disk, USB flash drive, removable hard disk, ROM, RAM, magnetic disk, or optical disk, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of this application.
  • a computer device which can be a personal computer, server, or network device, etc.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • wired, such as coaxial cable, optical fiber, or digital subscriber line (DSL)
  • wireless, such as infrared, radio, or microwave
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, tape), optical media (e.g., DVD), semiconductor media (e.g., a Solid State Disk (SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

This application discloses a display method, an electronic device, a storage medium, and a program product, capable of displaying, within a screen, content that lies outside the screen's display range. A display method is applied to an electronic device, the electronic device being a vehicle or being arranged in the vehicle, and the method includes: obtaining the position of a first target obstacle and the display range of the screen of the electronic device, wherein the position of the first target obstacle lies outside the display range of the screen; determining, according to the position of the first target obstacle and the display range of the screen, a first display position of a first preset warning icon on the screen, the first preset warning icon being used to prompt the first target obstacle; and displaying the first preset warning icon on the screen at the first display position.

Description

Display method, electronic device, storage medium, and program product
This application claims priority to the Chinese patent application No. 202210987609.7, entitled "Display method, electronic device, storage medium and program product", filed with the China National Intellectual Property Administration on August 17, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of driving assistance, and in particular to a display method, an electronic device, a storage medium, and a program product.
Background
An Advanced Driving Assistance System (ADAS) uses a variety of sensors mounted on the vehicle (millimeter-wave radar, lidar, mono/stereo cameras, and satellite navigation) to sense the surroundings at any time while the vehicle is moving, collect data, identify, detect, and track static and dynamic objects, and, combined with navigation map data, perform systematic computation and analysis, so that the driver becomes aware of possible dangers in advance, effectively increasing driving safety. ADAS typically includes a Lane Departure Warning System (LDWS), Adaptive Cruise Control (ACC), Forward Collision Warning (FCW), and the like. ADAS information can be displayed on the center console or on the instrument cluster. It can also be projected onto the front windshield by Head Up Display (HUD) or Augmented Reality Head Up Display (AR-HUD) technology and thus focused on a HUD virtual screen in front of the windshield. However, limited by the size of the center-console screen, the instrument screen, or the HUD virtual screen, only content fitting the screen size can be shown; content beyond the screen size, i.e., content outside the screen's display range, cannot be displayed within the screen.
Summary
In view of the above, it is necessary to provide a display method, an electronic device, a storage medium, and a program product capable of displaying, within a screen, content outside the screen's display range.
In a first aspect, an embodiment of this application provides a display method applied to an electronic device, the electronic device being a vehicle or being arranged in the vehicle. The method includes: obtaining the position of a first target obstacle and the display range of the screen of the electronic device, wherein the position of the first target obstacle lies outside the display range of the screen; determining, according to the position of the first target obstacle and the display range of the screen, a first display position of a first preset warning icon on the screen, the first preset warning icon being used to prompt the first target obstacle; and displaying the first preset warning icon on the screen at the first display position.
In the first aspect of this application, when the position of the first target obstacle is outside the display range of the screen, the first display position of the first preset warning icon on the screen is determined according to the position of the first target obstacle and the display range of the screen, and the first preset warning icon, used to prompt the first target obstacle, is displayed at that position. Information about a first target obstacle outside the display range can therefore be shown on the screen; and since the first display position is determined from the obstacle's position and the display range, the on-screen display position of an out-of-range first target obstacle can be adjusted in real time as its position changes, directly providing the driver with intuitive spatial position information about target obstacles outside the screen's display range.
According to some embodiments of this application, the first preset warning icon includes a first icon and a second icon; the first icon is an indicator icon used to indicate the direction of the first target obstacle, and the second icon corresponds to the first target obstacle. Because the first preset warning icon includes an indicator icon and its second icon corresponds to the first target obstacle, information about a first target obstacle outside the display range can be shown on the screen, and the direction of that obstacle can be indicated in real time.
According to some embodiments of this application, the display range of the screen is the size of the visible range that can be seen through the screen. Defining the display range this way allows information about a first target obstacle outside that visible range to be displayed on the screen.
According to some embodiments of this application, before determining the first display position, the method further includes: obtaining information about multiple first target obstacles, including the distances between them, the movement direction of each, and the type of each; if this information satisfies a preset condition, determining the type of the multiple first target obstacles to be a group type; and determining the first preset warning icon according to the group type, the first preset warning icon being a preset group icon. When the information about multiple first target obstacles satisfies the preset condition, they are determined to be a group and the corresponding group icon is displayed, prompting the driver about the distance relationships, movement direction, and type of the obstacles forming the group, as well as the group's spatial position.
According to some embodiments of this application, the preset group icon is a second icon to which a corner-badge value is added, the badge value being the number of first target obstacles; alternatively, the preset group icon is a crowd icon. A group can thus be represented by a badge-annotated second icon or by a crowd icon.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes: if the multiple first target obstacles are separate individuals and the first icons of their first preset warning icons overlap, displaying at the first display position a combined icon formed from the multiple first preset warning icons; the combined icon includes a first icon, and its multiple second icons are arranged according to the actual spatial positions of the first target obstacles. Combining overlapping icons of separate individuals into a directional combined icon arranged by actual spatial position prompts the driver about the actual spatial relationships between the obstacles while avoiding mutual interference between icons.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes: if the multiple first target obstacles are separate individuals, and their first icons do not overlap but their second icons do, displaying the multiple first preset warning icons moved apart from one another according to the actual spatial positions of the first target obstacles, again avoiding mutual interference between icons.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes displaying a directional first preset warning icon; the method further includes: when the position of the first target obstacle is within the display range of the screen, determining a second display position of the first preset warning icon on the screen according to the obstacle's position, and displaying a non-directional first preset warning icon at the second display position. The change of directionality expresses whether, in real space, the first target obstacle is inside or outside the display range, and hence how near or far the obstacle is from the vehicle.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes: when the distance between the first target obstacle and the vehicle is a first distance, displaying the icon at a first size; when the distance is a second distance, displaying the icon at a second size; wherein the first distance is greater than the second distance, and the first size is greater than the second size. The change in the icon's size expresses how far the first target obstacle is from the vehicle in real space.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes: when the distance between the first target obstacle and the vehicle is a first distance, displaying the icon in a first color; when the distance is a second distance, displaying it in a second color. The change in the icon's color expresses how far the first target obstacle is from the vehicle in real space.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes: when the first target obstacle is moving, displaying a dynamic (animated) first preset warning icon at the first display position, expressing the obstacle's motion state in real space.
According to some embodiments of this application, displaying the first preset warning icon at the first display position includes: displaying a first preset warning icon carrying a first movement direction, the first movement direction being determined from the second movement direction of the first target obstacle. The icon's movement direction thus expresses the obstacle's movement direction in real space.
According to some embodiments of this application, obtaining the position of the first target obstacle and the display range of the screen of the electronic device includes: obtaining a first plan view of the first target obstacle and the display range of the screen; determining the first display position then includes: determining the first display position of the first preset warning icon on the screen from the positions of the first target obstacle and of the display range of the screen in the first plan view. Using the first plan view, the first display position can be determined for a first target obstacle that lies outside the display range in the plan view.
According to some embodiments of this application, before obtaining the first plan view, the method further includes: obtaining first spatial coordinates of the first target obstacle and second spatial coordinates of the screen, both in the human-eye coordinate system; obtaining the first plan view then includes: obtaining, from the first and second spatial coordinates, a first plan view of the first target obstacle and the display range of the screen from the human-eye perspective, from which the first display position can further be determined.
According to some embodiments of this application, obtaining the first plan view from the first and second spatial coordinates includes: obtaining a first plan view of the marker of the first target obstacle and the display range of the screen from the human-eye perspective, in which the center point of the marker lies outside the display range of the screen; determining the first display position then includes: determining it from the positions, in the first plan view, of the marker's center point and of the display range of the screen; the center point includes the center of gravity or the geometric center. Whether the first target obstacle is outside the display range, and the first display position of the first preset warning icon on the screen, can thus both be determined via the center of gravity or geometric center of the obstacle's marker.
According to some embodiments of this application, the method further includes: obtaining the orientation of a second target obstacle relative to the vehicle; determining, from that orientation, a third display position of a second preset warning icon on the screen, the second preset warning icon being used to prompt the second target obstacle; and displaying the second preset warning icon at the third display position. Displaying the second preset warning icon according to the second target obstacle's orientation relative to the vehicle provides a warning about the second target obstacle.
According to some embodiments of this application, determining the third display position includes: obtaining, from the orientation of the second target obstacle relative to the vehicle, a second plan view of the azimuth distribution of the second target obstacle and the display range of the screen; and determining the third display position from the positions of the second target obstacle and of the display range of the screen in the second plan view, so that the display position can be determined for a second target obstacle lying in a given direction from the vehicle.
According to some embodiments of this application, the first target obstacle is located in front of the vehicle, and the second target obstacle is located in a direction other than the front of the vehicle, so that target obstacles both in front of the vehicle and elsewhere around it can be warned about.
In a second aspect, an embodiment of this application provides an electronic device, the electronic device being a vehicle or being arranged in a vehicle, and including a processor and a memory; the memory is used to store program instructions, and when the processor calls them, the display method of any possible embodiment of the first aspect is implemented.
In a third aspect, an embodiment of this application provides a computer-readable storage medium storing a program that causes an electronic device to implement the display method of any possible embodiment of the first aspect.
In a fourth aspect, an embodiment of this application provides a computer program product including computer-executable instructions stored in a computer-readable storage medium; at least one processor of an electronic device can read the computer-executable instructions from the medium, and execution of those instructions by the at least one processor causes the electronic device to perform the display method of any possible embodiment of the first aspect.
For the beneficial effects of the second to fourth aspects and their various implementations, reference may be made to the first aspect and its implementations and the analysis of beneficial effects there, which is not repeated here.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the human-eye coordinate system provided by an embodiment of this application.
Fig. 2 is a schematic diagram of the image coordinate system provided by an embodiment of this application.
Fig. 3 is a schematic diagram of the hardware structure of the electronic device provided by an embodiment of this application.
Fig. 4 is a schematic diagram of an AR-HUD architecture provided by an embodiment of this application.
Fig. 5 is a schematic diagram of determining the spatial coordinates of the HUD virtual screen in the human-eye coordinate system, provided by an embodiment of this application.
Fig. 6 is an architecture diagram for determining the spatial coordinates of a target obstacle in the human-eye coordinate system, provided by an embodiment of this application.
Fig. 7 is a top-view position distribution of the ego vehicle and a pedestrian provided by an embodiment of this application.
Fig. 8 is a schematic diagram of the scene of Fig. 7 in the human-eye coordinate system.
Fig. 9 is a schematic diagram of determining whether the pedestrian of Fig. 7 is inside or outside the HUD virtual screen, provided by an embodiment of this application.
Fig. 10A is a schematic diagram of the center point of a 2D bounding box lying inside the HUD virtual screen; Fig. 10B is a schematic diagram of the center point of a 2D bounding box lying outside the HUD virtual screen.
Fig. 11A is a schematic diagram of the trapezoid marker of a vehicle; Fig. 11B is a schematic diagram of the 2D bounding-box marker of a cyclist; Fig. 11C is a schematic diagram of the marker of an unrecognizable object.
Fig. 12 is a schematic diagram of determining the position of the preset warning icon when the center point of the 2D bounding box lies in an edge region outside the HUD virtual screen.
Fig. 13 is a schematic diagram of determining the position of the preset warning icon when the center point of the 2D bounding box lies in a corner region outside the HUD virtual screen.
Fig. 14 is a schematic diagram of preset warning icons provided by an embodiment of this application.
Fig. 15A is a schematic diagram of a pedestrian icon whose movement direction is leftward; Fig. 15B is a schematic diagram of a pedestrian icon whose movement direction is rightward.
Fig. 16 is a schematic diagram of a scene with a pedestrian and the ego vehicle, provided by an embodiment of this application.
Fig. 17 is a schematic diagram of the HUD imaging effect in the scene of Fig. 16.
Fig. 18 is a schematic diagram of another scene with a pedestrian and the ego vehicle.
Fig. 19 is a schematic diagram of the HUD imaging effect in the scene of Fig. 18.
Fig. 20 is a schematic diagram of another scene with a pedestrian and the ego vehicle.
Fig. 21 is a schematic diagram of the HUD imaging effect in the scene of Fig. 20.
Fig. 22 is a schematic diagram of a scene with a cyclist and the ego vehicle.
Fig. 23 is a schematic diagram of the HUD imaging effect in the scene of Fig. 22.
Fig. 24 is a schematic diagram of the principle of human-eye perspective, provided by an embodiment of this application.
Fig. 25 is a schematic diagram of another scene with a cyclist and the ego vehicle.
Fig. 26 is a schematic diagram of the HUD imaging effect in the scene of Fig. 25.
Fig. 27 is a schematic diagram of the 2D bounding-box marker of a crowd, provided by an embodiment of this application.
Fig. 28 is a schematic diagram of crowd icons and combined icons, provided by an embodiment of this application.
Fig. 29A is a schematic diagram of the effect of marking directional pedestrian icons individually, in which the pedestrian icons overlap but the indicator icons do not.
Fig. 29B is a schematic diagram of the effect of displaying the directional pedestrian icons of Fig. 29A apart.
Fig. 30 is a schematic diagram of the effect of displaying a crowd icon with a box, provided by an embodiment of this application.
Fig. 31 is a schematic diagram of a scene with a crowd and the ego vehicle.
Fig. 32 is a schematic diagram of the HUD imaging effect in the scene of Fig. 31.
Fig. 33 is a schematic diagram of a scene with several separate pedestrians and the ego vehicle.
Fig. 34 is a schematic diagram of the HUD imaging effect in the scene of Fig. 33.
Fig. 35 is a schematic diagram of a scene with a pedestrian, a cyclist, and the ego vehicle.
Fig. 36 is a schematic diagram of the HUD imaging effect in the scene of Fig. 35.
Fig. 37 is a schematic diagram of a scene with a crowd, several separate pedestrians, and the ego vehicle.
Fig. 38 is a schematic diagram of the HUD imaging effect in the scene of Fig. 37.
Fig. 39 is a schematic diagram of a scene with a crowd, a separate pedestrian, and the ego vehicle.
Fig. 40 is a schematic diagram of the HUD imaging effect in the scene of Fig. 39.
Fig. 41 is a schematic diagram of a scene with the ego vehicle and a vehicle at its left rear.
Fig. 42 is a plan view of the azimuth distribution of the HUD virtual screen and the vehicle, provided by an embodiment of this application.
Fig. 43 is a schematic diagram of the HUD imaging effect in the scene of Fig. 41.
Fig. 44 is a flowchart of the display method provided by an embodiment of this application.
Detailed Description
In the description of the embodiments of this application, words such as "for example" are used to present examples, illustrations, or explanations. Any embodiment or design described with "for example" should not be construed as preferred over, or more advantageous than, other embodiments or designs; rather, such words are intended to present the relevant concepts in a concrete manner.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used in the specification are only for describing specific embodiments and are not intended to limit the application. Unless otherwise stated, "multiple" means two or more.
The prior art provides a display method in which ADAS information is projected onto the vehicle's front windshield by AR-HUD technology. The displayed ADAS information is reflected by the windshield and forms a virtual image on the HUD virtual screen in front of the windshield, the virtual image being superimposed on real objects. The coverage of the HUD virtual screen depends on the Field of View (FOV) of the HUD projector: the horizontal range covered by the HUD virtual screen is proportional to the projector's FOV. Limited by the FOV, however, the HUD virtual screen presented in front of the user often covers only one lane, degrading the user experience.
The prior art also provides a display method that warns about cyclists. However, the existing method displays a warning icon only while the cyclist is within the display range of the HUD virtual screen; when the cyclist is outside that range, no warning is given. The warning icon therefore appears only once the cyclist has entered the display range, which is too late for an advance warning to be of much use. The display range here is the size of the visible range that can be seen through the HUD virtual screen; it is not the size of the on-screen display area but the range of the real scene in front of the vehicle that can be presented through the screen.
In view of this, embodiments of this application propose a display method, an electronic device, a storage medium, and a program product that can directly provide the driver with intuitive spatial position information about target obstacles outside the screen's display range.
For a better understanding of this application, some terms and concepts involved are introduced first.
Human-eye coordinate system: a coordinate system established at the human eye, defined to describe object positions from the eye's point of view; the unit is meters, and coordinates are written (Xe, Ye, Ze). As shown in Fig. 1, the origin is the human eye, the Z axis is the line of sight, the X axis is the horizontal direction perpendicular to the line of sight, and the Y axis is the vertical direction perpendicular to the line of sight.
Camera coordinate system: also called the optical-center coordinate system, a coordinate system established at the camera, defined to describe object positions from the camera's point of view; the unit is meters, and coordinates are written (Xc, Yc, Zc). The origin is the optical center of the camera lens, the X and Y axes are parallel to the x and y axes of the image coordinate system, and the camera's optical axis is the Z axis. An object's spatial coordinates in the camera coordinate system can be converted into spatial coordinates in the human-eye coordinate system.
Image coordinate system: a two-dimensional rectangular coordinate system on the image plane. Its origin O is the intersection of the lens's optical axis with the image plane (also called the principal point), and its x and y axes are parallel to the X and Y axes of the camera coordinate system. Coordinates are written (x, y), as shown in Fig. 2, and are expressed in physical units (e.g., millimeters) giving a pixel's position in the image. An object's coordinates in the image coordinate system can be converted into spatial coordinates in the camera coordinate system.
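The chain of conversions above (image coordinates → camera coordinates → human-eye coordinates) can be sketched as follows. This is a minimal illustration assuming a pinhole camera model with known intrinsics and a calibrated rigid transform (rotation R, translation t) between the camera and eye frames; the function names and parameter values are illustrative, not taken from the patent.

```python
def image_to_camera(x, y, depth, fx, fy, cx, cy):
    """Back-project a pixel (x, y) with a known depth (meters) into camera coordinates.

    fx, fy, cx, cy are pinhole intrinsics: focal lengths and principal point."""
    Xc = (x - cx) * depth / fx
    Yc = (y - cy) * depth / fy
    return (Xc, Yc, depth)


def camera_to_eye(p_cam, R, t):
    """Rigid transform camera frame -> eye frame: p_eye = R @ p_cam + t.

    R is a 3x3 rotation given as nested lists, t a translation tuple; both are
    assumed to come from an offline calibration of the eye position."""
    return tuple(sum(R[i][j] * p_cam[j] for j in range(3)) + t[i] for i in range(3))
```

A pixel at the principal point back-projects onto the optical axis; a pure translation between frames then just shifts the point.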
Eyebox: the range within which the driver's eyes can see the entire displayed image. A typical eyebox is 130 mm x 50 mm; to accommodate drivers of different heights, the eyebox needs roughly ±50 mm of travel in the vertical direction. In this application, the eyebox is the region within which the eye can see a clear HUD virtual image.
Refer to Fig. 3, a schematic diagram of the hardware structure of the electronic device provided by an embodiment of this application. The electronic device 3 may be a head unit, an in-vehicle computer, the vehicle itself, or similar; it may also be a terminal connected to the vehicle, such as a mobile phone or tablet.
The electronic device 3 may include a memory 31, a processor 32, and a communication interface 33. It will be understood that the structure shown in Fig. 3 does not limit the electronic device 3, which may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently.
The memory 31 may be used to store software programs and/or modules/units. The processor 32 implements the various functions of the electronic device 3 by running or executing the software programs and/or modules/units stored in the memory 31 and calling the data stored there. The memory 31 may mainly include a program storage area, storing an operating system and the applications needed for at least one function (such as sound playback or image playback), and a data storage area, storing data created through use of the electronic device 3 (such as image data). In addition, the memory 31 may include non-volatile computer-readable memory, such as a hard disk, RAM, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 32 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor 32 may be a microprocessor or any conventional processor; it is the control center of the electronic device 3, connecting all parts of the device through various interfaces and lines.
The processor 32 may also contain a memory 31 for storing instructions and data. In some embodiments, the memory 31 in the processor 32 is a cache, holding instructions or data the processor 32 has just used or uses cyclically. If the processor 32 needs them again, it can call them directly from this memory, avoiding repeated accesses, reducing the processor's wait time, and thus improving system efficiency.
In some embodiments, the processor 32 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and/or a USB interface.
The communication interface 33 may include standard wired and wireless interfaces and is used for the electronic device 3 to communicate with external devices, for example with a camera.
The display method of this application is applied to the electronic device 3 shown in Fig. 3. The method is applicable not only to AR-HUD scenarios but also to warning scenarios shown on in-vehicle screens such as the center console or instrument cluster, warning scenarios in a terminal's map application, and so on; this application places no limitation on this. For convenience of description, the application is described in detail below taking the AR-HUD scenario as an example.
The AR-HUD scenario is described first.
Refer to Fig. 4, a schematic diagram of an AR-HUD architecture provided by an embodiment of this application. The AR-HUD architecture 4 includes an image projection device 41 and a front windshield 42. The image projection device 41 may be called the HUD projector; it may be arranged inside the center console below the windshield 42 or at other positions near the windshield, without limitation. The image projection device 41 includes a Picture Generation Unit (PGU) 411 and an optical lens group 412. The PGU 411 may include an LED light source or the like, and the lens group 412 may include an aspheric mirror or the like; this application places no limitation on either. The PGU 411 generates the projected image, and the lens group 412 projects it onto the windshield 42. After reflection by the windshield 42, a virtual image of the target obstacle appears on the HUD virtual screen on the outer side of the windshield, superimposed on the real environment outside the vehicle. The target obstacle can thus be warned about, and the realism of the image in the driver's field of view is improved. It will be understood that the AR-HUD architecture may also include an electronic device that controls the PGU to generate the projected image, without limitation.
Refer to Fig. 5, a schematic diagram of determining the spatial coordinates of the HUD virtual screen in the human-eye coordinate system, provided by an embodiment of this application. In Fig. 5, the eye lies within the eyebox. The horizontal distance between the eye and the HUD virtual screen is the virtual image distance; for different vehicles it can be determined by zooming (or in other ways, without limitation). In Fig. 5, the angle subtended at the eye by the horizontal edges of the virtual image is the horizontal FOV, and by the vertical edges, the vertical FOV; both depend on the type of HUD projector. The spatial coordinates of the HUD virtual screen in the human-eye coordinate system can be determined from the eye position, the virtual image distance, and the projector's horizontal and vertical FOV. Specifically, these quantities determine the spatial coordinates of the screen's four corners, e.g., the coordinates of corners A, B, C, and D in Fig. 5.
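The corner computation described above can be sketched as follows, assuming the gaze runs along the eye frame's Z axis and the virtual screen is perpendicular to it at the virtual image distance; the half-width and half-height of the screen then follow from the tangents of the half-FOV angles. The function name and sample values are illustrative.

```python
import math

def hud_screen_corners(eye, virtual_image_distance, fov_h_deg, fov_v_deg):
    """Corners A, B, C, D of the HUD virtual screen in the human-eye frame.

    eye: (x, y, z) eye position; the screen is assumed perpendicular to the
    gaze (Z) axis at virtual_image_distance in front of the eye."""
    half_w = virtual_image_distance * math.tan(math.radians(fov_h_deg) / 2)
    half_h = virtual_image_distance * math.tan(math.radians(fov_v_deg) / 2)
    ex, ey, ez = eye
    z = ez + virtual_image_distance
    return {
        "A": (ex - half_w, ey + half_h, z),  # top-left
        "B": (ex + half_w, ey + half_h, z),  # top-right
        "C": (ex - half_w, ey - half_h, z),  # bottom-left
        "D": (ex + half_w, ey - half_h, z),  # bottom-right
    }
```

With the eye at the origin, the four corners are symmetric about the gaze axis and share the same depth, the virtual image distance.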
It will be understood that the spatial coordinates of the HUD virtual screen in the human-eye coordinate system may also be computed by other components within the electronic device, or by devices outside the electronic device, or be pre-stored in the electronic device; this application places no limitation on this.
To present the virtual image of a target obstacle on the HUD virtual screen, the target obstacle must first be determined, along with its spatial coordinates in the human-eye coordinate system.
Illustratively, referring to Fig. 6, the ADAS 601 can use the various sensors mounted on the ego vehicle (millimeter-wave radar, lidar, mono/stereo cameras, and satellite navigation) to sense the vehicle's surroundings at any time, collect environment data, and perform processing such as identification, detection, and tracking of static and dynamic objects. It can thus estimate an object's path while measuring key information such as the distance, bearing, and relative speed between the ego vehicle and the object, and by evaluating this key information judge whether a potential collision risk exists between the ego vehicle and an object ahead. When such a risk exists, the ADAS 601 can determine the object to be an obstacle. The object may be a pedestrian, a vehicle, a cyclist, an unrecognizable object, etc.
The electronic device 602 can obtain the target obstacle's spatial coordinates, i.e., its spatial coordinates in the camera coordinate system. The electronic device 602 may be a head unit, an in-vehicle computer, the vehicle, etc., or a terminal connected to the vehicle such as a mobile phone or tablet. For convenience of description, the head unit is taken as the example of the electronic device 602 below.
The head unit can obtain the target obstacle's spatial coordinates in several ways; two are introduced below by way of example.
In the first way, the camera 603 is an advanced camera with computing capability. The camera 603 can obtain obstacle information directly or indirectly from the ADAS, capture the forward image, preprocess it, extract features, and perform target recognition based on the feature-extraction results. Preprocessing includes framing, color adjustment, white balance, contrast equalization, image rectification, and so on. Feature extraction extracts feature points from the preprocessed forward image. Target recognition applies machine learning, neural networks, and similar algorithms to the extracted feature points to recognize objects in the forward image, e.g., as pedestrians, vehicles, cyclists, or unrecognizable objects.
The camera 603 further determines, among the objects recognized in the forward image, the target obstacle matching the obstacle information, determines the target obstacle's coordinates in the image coordinate system, and converts them into spatial coordinates in the camera coordinate system. The camera 603 then sends these camera-frame spatial coordinates to the head unit, which thereby obtains the target obstacle's spatial coordinates in the camera coordinate system.
In the second way, the camera 603 is an ordinary camera without computing capability. The head unit obtains obstacle information from the ADAS and obtains the forward image captured by the camera 603. The head unit detects the target obstacle matching the obstacle information in the forward image and determines its coordinates in the image coordinate system; before doing so, it may also preprocess the image, extract features, and perform target recognition, without limitation. The head unit then converts the target obstacle's image-frame coordinates into spatial coordinates in the camera coordinate system.
After obtaining the target obstacle's camera-frame spatial coordinates, the head unit converts them into spatial coordinates in the human-eye coordinate system, thereby determining the target obstacle's coordinates in that frame.
It will be understood that this application is not limited to determining obstacle information via the ADAS and then determining the target obstacle ahead of the ego vehicle via the head unit or camera 603; the head unit or camera 603 may also determine the target-obstacle information ahead of the ego vehicle from sensor data and a first warning range, and so on, without limitation.
It will also be understood that the target obstacle's spatial coordinates in the human-eye coordinate system may be computed by other components within the head unit, or by devices outside it, without limitation.
Refer to Fig. 7, a top-view position distribution of the user and a target obstacle provided by an embodiment of this application. In Fig. 7 the target obstacle is a pedestrian, though it will be understood that it may also be another object such as a vehicle, a cyclist, or an unrecognizable object, without limitation. In Fig. 7, the user is in the ego vehicle, the HUD virtual screen is in front of the user, and the pedestrian is also in front of the user. In the human-eye coordinate system, as shown in Fig. 8, the pedestrian follows the far-small/near-large perspective rule and can be marked by a 2D bounding box 8. From the HUD virtual screen's spatial coordinates in the human-eye coordinate system and the target obstacle's (pedestrian's) spatial coordinates in the same frame, the plan view 9 of the HUD virtual screen 901 and the pedestrian's marker 902 as seen by the eye, i.e., a plan view of the HUD virtual screen and the 2D bounding box, can be drawn, as in Fig. 9. The plan view of Fig. 9 converts the three-dimensional eye-frame space into the two-dimensional plane the eye sees, discarding the Z-axis information of the HUD virtual screen and the pedestrian; its X and Y axes may be the X and Y axes of the human-eye coordinate system. The plan view of Fig. 9 includes the planar coordinates of the HUD virtual screen and of the 2D bounding box. It will be understood that the pedestrian may also be marked by an ellipse, a pedestrian icon, etc., without limitation.
The pedestrian may lie inside or outside the HUD virtual screen. For the pedestrian in Fig. 9, the center point P of the corresponding 2D bounding box 902 is determined first, and it is then determined whether P lies outside the HUD virtual screen 901, which in turn determines whether the pedestrian lies outside the screen.
In some embodiments, the bounding box's center point may be the intersection of its diagonals, i.e., its center of gravity, such as point P of the bounding box in Fig. 8, where the dashed lines are the diagonals. In some embodiments, the center point's planar coordinates are determined from the bounding box's planar coordinates, and it is then checked whether they fall within the planar coordinate range of the HUD virtual screen. If they do, the center point, and hence the pedestrian, is inside the screen. For example, in Fig. 10A, the center t1 of bounding box C1 is at (7, 8) and the planar coordinate range of the HUD virtual screen S1 is (Xs1, Ys1) with Xs1 ∈ [0, 10] and Ys1 ∈ [0, 10]; since (7, 8) lies within that range, t1 is inside S1 and the pedestrian is inside the screen. If the center point's planar coordinates fall outside the screen's planar range, the center point, and hence the pedestrian, is outside the screen. For example, in Fig. 10B, the center t2 of bounding box C2 is at (12, 8) against the planar range (Xs2, Ys2) of screen S2, with Xs2 ∈ [0, 10] and Ys2 ∈ [0, 10]; since (12, 8) lies outside that range, t2, and therefore the pedestrian, is outside the screen.
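The inside/outside test above can be sketched with the example numbers of Figs. 10A-10B (a screen planar range of [0, 10] x [0, 10], marker centers (7, 8) and (12, 8)); the function names and the rectangle encoding are illustrative.

```python
def marker_center(bbox):
    """Center point (intersection of the diagonals) of a 2D bounding-box marker.

    bbox is ((x_min, y_min), (x_max, y_max)) in the plan view's coordinates."""
    (x0, y0), (x1, y1) = bbox
    return ((x0 + x1) / 2, (y0 + y1) / 2)


def inside_screen(point, screen):
    """True if the point falls within the screen's planar coordinate range."""
    x, y = point
    (sx0, sy0), (sx1, sy1) = screen
    return sx0 <= x <= sx1 and sy0 <= y <= sy1
```

With the screen at ((0, 0), (10, 10)), the center (7, 8) of Fig. 10A tests inside, while (12, 8) of Fig. 10B tests outside.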
Thus, in Fig. 9, the center point P of bounding box 902 lies outside the HUD virtual screen 901, so the pedestrian lies outside the screen 901. It will be understood that the bounding box's center point may also be its geometric center, etc., without limitation.
It will be understood that the target obstacle may be a common or an uncommon obstacle. Common obstacles include, for example, pedestrians, vehicles, and cyclists, and different common obstacles may use the same or different markers: a vehicle may be marked by a 2D bounding box, a trapezoid (as in Fig. 11A), or a vehicle icon; a cyclist by a 2D bounding box (as in Fig. 11B), a trapezoid, or a cyclist icon. Uncommon obstacles are, for example, unrecognizable objects such as a mud pile at the roadside, which may be marked by the object's outline (as in Fig. 11C). The center point of an uncommon obstacle may be the geometric center of its marker.
The pedestrian may lie in an edge region or in a corner region outside the HUD virtual screen. For the pedestrian in Fig. 9, the sub-region outside the screen 901 containing the center point P of bounding box 902 is determined first, and from that sub-region it is determined whether P lies in an edge region outside the screen 901, which in turn determines whether the pedestrian lies in an edge region outside the screen.
In some embodiments, as shown in Figs. 12-13, the corners A, B, C, and D of the HUD virtual screen are at (0, 10), (10, 10), (0, 0), and (10, 0) respectively. With the screen's four edges as boundaries, the region outside the screen is divided into eight sub-regions: the first through fourth sub-regions z1, z2, z3, and z4, which are the edge regions, and the fifth through eighth sub-regions c1, c2, c3, and c4, which are the corner regions. For z1, the X coordinate X1 lies in the interval (0, 10) and the Y coordinate Y1 is greater than 10; for z2, X2 > 10 and Y2 ∈ (0, 10); for z3, X3 ∈ (0, 10) and Y3 < 0; for z4, X4 < 0 and Y4 ∈ (0, 10); for c1, X5 > 10 and Y5 > 10; for c2, X6 > 10 and Y6 < 0; for c3, X7 < 0 and Y7 < 0; and for c4, X8 < 0 and Y8 > 10. In some embodiments, when the target obstacle's marker is a 2D bounding box, the target sub-region containing the bounding box's center point O can be looked up from O's planar coordinates, which lie within the target sub-region's coordinate intervals, and from the target sub-region it can be determined whether O lies in an edge region or in a corner region outside the HUD virtual screen.
For example, as shown in Fig. 12, the center O1 of bounding box R1, at planar coordinates (6, 12), lies within the (X1, Y1) intervals of the first sub-region z1; O1 is therefore in z1, an edge region outside the HUD virtual screen, so the pedestrian lies in an edge region outside the screen. As shown in Fig. 13, the center O2 of bounding box R2, at planar coordinates (12, 12), lies within the (X5, Y5) intervals of the fifth sub-region c1; O2 is therefore in c1, a corner region outside the screen, so the pedestrian lies in a corner region outside the screen.
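The eight-sub-region partition above can be sketched as a simple lookup, using the example screen coordinates (corners at (0, 0) and (10, 10)). The sub-region names follow the document's z1-z4 (edge) and c1-c4 (corner) labels; the function assumes its input already lies outside the screen.

```python
def classify_region(point, screen=((0, 0), (10, 10))):
    """Map a point outside the screen to one of the eight sub-regions.

    z1..z4 are the edge regions (above, right, below, left of the screen);
    c1..c4 are the corner regions. The point is assumed to be outside the screen."""
    x, y = point
    (x0, y0), (x1, y1) = screen
    if x0 < x < x1 and y > y1:
        return ("z1", "edge")
    if x > x1 and y0 < y < y1:
        return ("z2", "edge")
    if x0 < x < x1 and y < y0:
        return ("z3", "edge")
    if x < x0 and y0 < y < y1:
        return ("z4", "edge")
    if x > x1 and y > y1:
        return ("c1", "corner")
    if x > x1 and y < y0:
        return ("c2", "corner")
    if x < x0 and y < y0:
        return ("c3", "corner")
    return ("c4", "corner")  # remaining case: x < x0 and y > y1
```

The examples of Figs. 12-13 map as expected: (6, 12) falls in edge region z1 and (12, 12) in corner region c1.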
Thus, in Fig. 9, the center point P of bounding box 902 is in sub-region z2 outside the HUD virtual screen 901, so P, and hence the pedestrian, lies in an edge region outside the screen. With existing display methods, the HUD virtual screen would show no information about this pedestrian at all, so the driver could not be warned about the pedestrian in advance. This application can use the above determination that the pedestrian is in an edge region outside the screen to determine the display position of the preset warning icon within the HUD virtual screen.
Illustratively, as shown in Fig. 12, the center O1 of bounding box R1 lies in an edge region outside the HUD virtual screen. The target edge AB, the one of the screen's four edges closest to sub-region z1, is determined; a perpendicular drawn from O1 to the target edge AB meets it at the intersection point E; and it can then be determined that the preset warning icon is displayed inside the HUD virtual screen at or near point E.
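Choosing the icon's display position as described above, the foot of the perpendicular (point E, and point F in Fig. 9) for edge regions and the nearest corner (e.g., corner B in Fig. 13) for corner regions, reduces to clamping the marker's center point to the screen rectangle. This reduction is a simplification of this sketch, consistent with Figs. 12-13 but not stated in the patent; names and coordinates are illustrative.

```python
def icon_anchor(center, screen=((0, 0), (10, 10))):
    """Anchor point on the screen boundary for the warning icon.

    For a center in an edge region this is the foot of the perpendicular to the
    nearest edge; for a corner region it degenerates to the nearest corner."""
    x, y = center
    (x0, y0), (x1, y1) = screen
    return (min(max(x, x0), x1), min(max(y, y0), y1))
```

For the center (6, 12) of Fig. 12 this yields (6, 10), the foot E on edge AB; for the center (12, 12) of Fig. 13 it yields (10, 10), corner B.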
In some embodiments, the preset warning icon is a non-directional preset warning icon, used to give the driver, directly and intuitively, the spatial position information of target obstacles inside and outside the HUD virtual screen. In some embodiments, the preset warning icon may be determined by the target obstacle's type, which includes pedestrian, vehicle, cyclist, unrecognizable object, etc.: a pedestrian maps to a pedestrian icon, a vehicle to a vehicle icon, a cyclist to a cyclist icon, and an unrecognizable object to preset information such as a star, as shown in Fig. 14. It will be understood that an unrecognizable object may also map to an exclamation mark, a warning sign, etc., without limitation. In some embodiments, the preset warning icon may itself be preset information such as a star, exclamation mark, or warning sign, and the icons for different obstacle types may be the same or different. In some embodiments, as the target obstacle or the head unit moves, the icon's display position can be adjusted in real time according to the positions of the obstacle and the HUD virtual screen.
In some embodiments, the preset warning icon is a directional preset warning icon, like the directional star shown in Fig. 12. The directional icon may include an indicator icon, used to indicate the direction of the target obstacle; for example, the indicator icon includes an arrow pointing toward the obstacle's center point. In some embodiments, the arrow and the icon's center point both lie on the extension of the perpendicular. In some embodiments, as the obstacle or the vehicle moves, the icon's display position and the arrow's direction can be adjusted in real time according to the positions of the obstacle and the HUD virtual screen, so that the arrow always points toward the center of the obstacle.
In some embodiments, the preset warning icon can further carry motion effects. One effect may be: while the target obstacle is outside the HUD virtual screen, showing a directional or a non-directional icon; once the obstacle is inside the screen, showing a non-directional icon, since the user can then see the obstacle's position directly. Another effect may be enlarging or shrinking the icon on the 2D plane, with the icon changing as the obstacle's distance from the ego vehicle changes: the closer the obstacle, the larger the icon; the farther, the smaller. Illustratively, when the obstacle is at a preset distance, the icon has its initial size; beyond the preset distance the icon is reduced (for example, while the vehicle waits at a red light, a pedestrian at the left or right edge of the crosswalk gets a reduced pedestrian icon), and within the preset distance the icon is enlarged (for example, a pedestrian who has walked to just in front of the vehicle gets an enlarged icon). Moreover, as the obstacle gradually approaches the vehicle the icon can gradually grow, and as it moves away the icon can gradually shrink, so the icon's size can prompt how near or far the obstacle is. Another effect may be changing the icon's color: the color can differ for different ranges of obstacle-vehicle distance, becoming more conspicuous the closer the obstacle. For example, in a first distance range the icon's color is a first color, and in a second, smaller range it is a second color; when the first range is farther than the second, the first color may be yellow and the second red, without limitation, so the color prompts the distance. Another effect may be that the icon's movement direction is determined from the obstacle's: a pedestrian facing left gets a pedestrian icon moving leftward, as in Fig. 15A; a pedestrian facing right, an icon moving rightward, as in Fig. 15B. In some embodiments, while the obstacle is moving, for example the obstacle of Fig. 12 approaching the ego vehicle from ahead, the effect may be an animated icon: a walking pedestrian icon, a riding cyclist icon, a driving vehicle icon, etc. The animated icon's movement direction matches the obstacle's: a walking pedestrian icon moves in the pedestrian's walking direction, i.e., the direction the pedestrian is facing.
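The size and color motion effects above can be sketched as follows. The reference distance, the scale cap, and the 5 m color threshold are illustrative values assumed for the sketch; the text only fixes the qualitative rule (closer → larger icon, closer → more conspicuous color such as red versus yellow).

```python
def icon_style(distance_m, base_size=1.0, ref_distance_m=10.0, near_threshold_m=5.0):
    """Size and color of the warning icon as a function of obstacle distance.

    Size grows as distance shrinks (capped at 2x base size to avoid blow-up near
    zero); color switches from the far color (yellow) to the near color (red)
    below near_threshold_m. All numeric defaults are assumptions of this sketch."""
    size = base_size * min(ref_distance_m / max(distance_m, 1e-6), 2.0)
    color = "red" if distance_m < near_threshold_m else "yellow"
    return size, color
```

A nearby obstacle thus gets a larger, red icon and a distant one a smaller, yellow icon, matching the qualitative behavior described for Figs. 17 and 19.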
In some embodiments, the preset warning icon may further include distance information, i.e., the distance between the target obstacle and the ego vehicle. As the obstacle or the head unit moves, the displayed distance information can be adjusted in real time according to the obstacle-vehicle distance. It will be understood that this distance may be obtained through the sensors; the distance information may be digits, characters, etc., and may be displayed at any suitable position on the HUD virtual screen, without limitation.
As shown in Fig. 9, the center point P of bounding box 902 is in sub-region z2 outside the HUD virtual screen 901; the perpendicular through P to the target edge a1 meets that edge at the intersection point F, so the preset warning icon can be displayed inside the screen at point F.
It will be understood that the pedestrian may lie not only in an edge region but also in a corner region outside the HUD virtual screen. Again, with existing display methods the screen would show no information about the pedestrian and no advance warning could be given. This application can use the determination that the pedestrian is in a corner region outside the screen to determine the warning icon's display position on the screen.
Illustratively, as shown in Fig. 13, the center O2 of bounding box R2 lies in a corner region outside the HUD virtual screen. The target corner B, the one of the screen's four corners closest to sub-region c1, is determined, and the line O2B from O2 to corner B is drawn; it can then be determined that the preset warning icon is displayed inside the screen at the target corner B. In some embodiments, the indicator icon's direction (e.g., the arrow's direction) and the icon's center point both lie on the extension of line O2B. The display process in Fig. 13 is otherwise similar to that in Fig. 12 and is not repeated here.
In some embodiments, the head unit can control display of the preset warning icon in the display area according to the icon's position on the HUD virtual screen. In some embodiments, the head unit can send that position to a HUD, AR-HUD, or other display-capable device, which then displays the icon in the display area according to the position, so that the head unit controls the device's display. The HUD, AR-HUD, or other display-capable device may be mounted above or inside the vehicle's center console (or elsewhere, without limitation) and is mainly used to display the preset warning icon in the display area. The display area may be the vehicle's front windshield, or a separately arranged transparent screen, which reflects the light of the icon emitted by the display device into the user's eyes, so that when the user looks out through the windshield or transparent screen, the icon is seen to correspond to the position of the target obstacle outside the vehicle, indicating the spatial position information of obstacles outside the HUD virtual screen and improving driving safety.
The display method of this application is described in detail below in connection with several scenarios.
When the ego vehicle is waiting at a red light at an intersection, a pedestrian may be at the far right of the crosswalk in front of the vehicle, about to cross along the crosswalk, as shown in Fig. 16. The head unit can obtain the spatial coordinates of the pedestrian and of the HUD virtual screen in the human-eye coordinate system, and from them draw the plan view of the pedestrian and the screen as seen by the eye, as in Fig. 17. Although Fig. 17 shows crosswalk lines, these are only for realism and may be omitted from the plan view. From the plan view the head unit determines that the pedestrian is outside the HUD virtual screen; it then determines the motion effect of the preset warning icon and, from the planar coordinates of the pedestrian and of the screen, the icon's display position on the screen. The head unit also controls the HUD, AR-HUD, or other display-capable device to display the icon in the display area with that motion effect. Thus, as shown in Fig. 17, a reduced, first-color, directional preset warning icon, e.g., a reduced, yellow, directional pedestrian icon, can be shown near the pedestrian at the right edge of the HUD virtual screen, with the directional icon's arrow pointing toward the center point of the pedestrian outside the right of the screen. This indicates that there is a pedestrian at the corresponding position outside the right of the screen, moving leftward, and directly gives the driver intuitive spatial position information about that pedestrian. It will be understood that the motion effect in Fig. 17 may omit at least one of the reduction, the yellow color, the movement direction, and the directionality, and that the icon in Fig. 17 may also include distance information, without limitation.
As the pedestrian walks along the crosswalk, the distance to the vehicle shrinks, and the pedestrian icon near the right edge in Fig. 17 can grow according to the motion effect, shown as a walking animated pedestrian icon, until the pedestrian reaches the area in front of the vehicle, as shown in Fig. 18. The head unit again obtains the eye-frame spatial coordinates of the pedestrian and the HUD virtual screen and draws the plan view, Fig. 19 (crosswalk lines again only for realism and omissible). It determines from the plan view that the pedestrian is now inside the HUD virtual screen, determines the icon's motion effect and display position, and controls the display device accordingly. Thus, as shown in Fig. 19, an enlarged, second-color, non-directional preset warning icon, e.g., an enlarged red pedestrian icon, can be displayed inside the screen so that it overlaps the real pedestrian's position. This prompts the driver that the pedestrian is very close to the vehicle and moving leftward, while increasing the realism of the icon in the driver's field of view. It will be understood that the motion effect in Fig. 19 may omit at least one of the enlargement, the red color, and the movement direction; and that before the pedestrian reaches the position of Fig. 18, the color of the icon in Fig. 17 may also change with the pedestrian-vehicle distance, without limitation.
As the pedestrian continues along the crosswalk, the icon of Fig. 19 changes within the screen as the pedestrian-vehicle distance changes, moves with the pedestrian, and always overlaps the pedestrian's position, until the pedestrian reaches the left side of the crosswalk in front of the vehicle, as shown in Fig. 20. The head unit draws the plan view, Fig. 21 (crosswalk lines only for realism and omissible), determines that the pedestrian is again outside the HUD virtual screen, determines the icon's motion effect and display position from the planar coordinates, and controls the display device accordingly. Thus, as shown in Fig. 21, a reduced, first-color, directional icon, e.g., a reduced, yellow, directional pedestrian icon, can be shown near the pedestrian at the left edge of the screen, its arrow pointing toward the center point of the pedestrian outside the left of the screen, indicating a pedestrian at the corresponding position outside the left of the screen moving leftward and directly giving the driver that pedestrian's spatial position information. It will be understood that the motion effect in Fig. 21 may omit at least one of the reduction, the yellow color, the movement direction, and the directionality, and that the icon may include distance information. After the pedestrian passes the position of Fig. 20, as the pedestrian-vehicle distance grows, the icon near the left edge in Fig. 21 shrinks according to the motion effect, shown as a walking pedestrian icon, until the pedestrian is no longer recognized as a target obstacle, at which point the icon in Fig. 21 disappears.
It will be understood that the pedestrian may enter the HUD virtual screen not only from outside its right but also from outside its top, bottom, or left; the display process is similar to that for entering from outside the right and is not repeated here.
Figs. 16-21 describe the display method taking as an example a target obstacle that moves from outside the HUD virtual screen into it and then back out. In real scenes, the target obstacle may also approach the vehicle while remaining outside the screen, for example when the vehicle passes a riding cyclist; this scenario is described below.
When the ego vehicle is driving on a road, a cyclist may be far from the vehicle, riding in a bicycle lane to its right, as shown in Fig. 22. The head unit can obtain the eye-frame spatial coordinates of the cyclist and of the HUD virtual screen and draw the plan view seen by the eye, Fig. 23. Although Fig. 23 shows road lines and driving-direction icons, these are only for realism and may be omitted. The head unit determines from the plan view that the cyclist is outside the HUD virtual screen, determines the icon's motion effect and, from the planar coordinates of the cyclist and the screen, its display position, and controls the HUD, AR-HUD, or other display-capable device to display it with that motion effect. Thus, as shown in Fig. 23, a reduced, first-color, directional cyclist icon, e.g., reduced, yellow, and directional, can be shown near the cyclist at the upper end of the screen's right edge, its arrow pointing toward the center point of the cyclist outside the upper right of the screen. This indicates a cyclist at the corresponding position outside the right of the screen, facing away from the vehicle, and directly gives the driver the cyclist's spatial position information. It will be understood that the motion effect in Fig. 23 may omit at least one of the reduction, the yellow color, the movement direction, and the directionality, without limitation.
As the vehicle drives on, it may gradually approach the cyclist. In the human-eye coordinate system, the eye-level line meets the horizon at infinity, so equal-height objects below eye level appear higher the farther away they are, as shown in Fig. 24; at the same time, the cyclist follows the far-small/near-large perspective rule. As the vehicle approaches from afar, the cyclist therefore visually grows and moves downward, as shown in Fig. 25. The head unit again obtains the eye-frame coordinates and draws the plan view, Fig. 26 (road lines and direction icons only for realism and omissible). Comparing Fig. 26 with Fig. 23, as the vehicle nears the cyclist, the cyclist moves downward relative to the HUD virtual screen and enlarges. The head unit determines from the plan view that the cyclist is still outside the screen, determines the icon's motion effect and display position, and controls the display device accordingly. Thus, as shown in Fig. 26, an enlarged, first-color, directional cyclist icon, e.g., enlarged, yellow, and directional, can be shown near the cyclist at the lower end of the screen's right edge, its arrow pointing toward the center point of the cyclist outside the lower right of the screen, again indicating a cyclist at the corresponding position outside the right of the screen, facing away from the vehicle. It will be understood that before the vehicle reaches the position of Fig. 25, the cyclist icon of Fig. 23 gradually moves down along the screen's right edge to the position of Fig. 26 and gradually enlarges to the size shown there; and that after the vehicle passes the position of Fig. 25, the icon of Fig. 26 keeps enlarging as an animated cyclist icon and keeps moving down along the right edge until the vehicle catches up with or passes the cyclist, at which point the icon disappears. The motion effect in Fig. 26 may omit at least one of the enlargement, the yellow color, the movement direction, and the directionality; and as the vehicle approaches the cyclist, the color of the icon shown on the screen may also change, without limitation.
It will be understood that the target obstacle may also be a vehicle or an unrecognizable object, whose display process is similar to that of the pedestrian and the cyclist above and is not repeated here.
The display method of this application can provide the driver with spatial position information not only for a single target obstacle but also for multiple target obstacles, whose types may be the same or different. In a multiple-obstacle scenario, the obstacles may act as a whole and form a group, for example a crowd; or they may not act as a whole, remaining multiple separate individuals.
For example, if every one of the multiple target obstacles has the same type as the others, and within a preset time (e.g., 2 seconds) the distance between every two adjacent obstacles is below a preset threshold (e.g., 0.5 meters) and they move in the same direction, the obstacles may act as a whole and form a group, e.g., a crowd. The same holds if, within the preset time (e.g., 2 seconds), every two adjacent obstacles of the same type are closer than the preset threshold (e.g., 0.5 meters) and all are stationary. The target-obstacle types may therefore also include a crowd. A crowd is a common obstacle and may be marked by a 2D bounding box (as in Fig. 27), a trapezoid, a group marker, etc. The preset warning icon corresponding to a crowd may be a crowd icon: a pedestrian icon to which a corner-badge value is added, the badge value being the number of people in the crowd, as shown in Fig. 28; or a group icon, also as shown in Fig. 28.
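The crowd condition above (same type; every adjacent pair closer than the threshold; all moving the same way, or all stationary) can be sketched for obstacles arranged along one axis. The 0.5 m threshold is the document's example value; the tuple layout `(type, position, heading)` with `heading=None` meaning stationary is an assumption of this sketch.

```python
def is_crowd(obstacles, dist_thresh=0.5):
    """Check the crowd condition for a list of (type, position, heading) tuples.

    True when there are at least two obstacles, all of one type, all sharing one
    heading (or all stationary, heading None), and every adjacent pair along the
    position axis is closer than dist_thresh meters."""
    if len(obstacles) < 2:
        return False
    types = {o[0] for o in obstacles}
    headings = {o[2] for o in obstacles}
    if len(types) != 1 or len(headings) != 1:
        return False
    xs = sorted(o[1] for o in obstacles)
    return all(b - a < dist_thresh for a, b in zip(xs, xs[1:]))
```

In a full implementation this check would be evaluated over the preset time window (e.g., 2 seconds) rather than a single frame.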
As another example, the multiple target obstacles may not act as a whole, remaining multiple separate individuals. Illustratively, if any two adjacent obstacles differ in type, are farther apart than the preset threshold (e.g., 0.5 meters), or move in opposite directions, the obstacles remain separate individuals, each marked on its own, e.g., each separate pedestrian with its own pedestrian icon. If the indicator icons of individually marked pedestrian icons overlap, the overlapping icons can be merged into a combined icon, in which the pedestrian icons are arranged according to the pedestrians' actual spatial positions, e.g., combined by movement direction, left-right position, and near-far (front-back) position. In some embodiments the combined icon can truly reflect the distances between pedestrians through distance information; in other embodiments it cannot. For example, if pedestrian 1 is to the left of pedestrian 2 and closer to the vehicle, then in the combined icon pedestrian icon 1 is to the left of pedestrian icon 2 and closer to the driver, i.e., in front of pedestrian icon 2. The pedestrian icons in a combined icon may be stacked front-to-back, as in Fig. 28, when the pedestrians move in opposite directions and are closer than a preset distance (e.g., 0.05 meters), for example brushing past each other; or they may be shown separated according to their actual spatial positions, as in Fig. 28, when their distance exceeds that preset distance. In some embodiments the combined icon is directional, i.e., includes one indicator icon; in others it is non-directional. For example, if the pedestrian icons before combination were directional, the combined icon can be directional; if they were non-directional, the combined icon can be non-directional.
If individually marked directional pedestrian icons overlap in their pedestrian icons but not in their indicator icons, the icons can be moved apart from one another according to the pedestrians' actual spatial positions until neither the pedestrian icons nor the indicator icons overlap. For example, as shown in Fig. 29A, pedestrian icon 3 within the directional icon for pedestrian 3 on the HUD virtual screen overlaps pedestrian icon 4 within the directional icon for pedestrian 4, while their indicator icons do not overlap. For actual display, the two directional icons are moved apart until neither the pedestrian icons nor the indicator icons overlap, and they are then displayed as moved apart, as shown in Fig. 29B, making the display clearer while still indicating the spatial positions of pedestrians 3 and 4 outside the HUD virtual screen.
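Separating two overlapping icons while preserving their left/right order, as described above, can be sketched in one dimension. Pushing both icons apart symmetrically is a design choice of this sketch; the icon width and margin values are illustrative.

```python
def separate_icons(x_a, x_b, icon_width, gap=0.1):
    """Push two icon centers apart along x until the icons no longer overlap.

    x_a <= x_b are the centers in plan-view coordinates; icons stop overlapping
    once the centers are at least icon_width + gap apart. The left/right order
    (pedestrians' actual spatial positions) is preserved."""
    needed = icon_width + gap
    overlap = needed - (x_b - x_a)
    if overlap > 0:
        x_a -= overlap / 2
        x_b += overlap / 2
    return x_a, x_b
```

Icons already far enough apart are left untouched, so separation only triggers in the overlapping case of Fig. 29A.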
In some embodiments, the case of pedestrian icons overlapping while indicator icons do not may occur between directional pedestrian icons, or between a directional and a non-directional pedestrian icon. Likewise, the case of neither the pedestrian icons nor the indicator icons overlapping may hold between directional pedestrian icons, or between a directional and a non-directional pedestrian icon.
In some embodiments, if the individually marked pedestrian icons are all non-directional and the pedestrian icons overlap, the icons can be stacked front-to-back according to the pedestrians' actual spatial positions.
Multiple target obstacles may lie inside or outside the HUD virtual screen. If they form a group, the center point of the group's marker is determined first, e.g., the center point of the crowd's 2D bounding box, and it is then determined whether that center point lies inside the HUD virtual screen: if inside, the crowd is inside the screen; if outside, the crowd is outside. The subsequent determination, in the multiple-obstacle scenario, of the crowd icon's display position on the screen from the center point of the crowd's bounding box is similar to the determination, in the single-obstacle scenario above, of the pedestrian icon's display position from the pedestrian's bounding-box center, and is not repeated here.
If the multiple target obstacles are separate individuals, the marker center of each individual can be judged separately, e.g., the 2D bounding-box center of each pedestrian, and it is then determined whether each center lies inside the HUD virtual screen: a pedestrian whose bounding-box center is inside the screen is inside the screen, and one whose center is outside is outside. The subsequent determination of each icon's display position from the pedestrian's bounding-box center in the multiple-obstacle scenario is again similar to the single-obstacle case above and is not repeated here.
It will be understood that, since the target-obstacle types may also include crowds, the crowd icon likewise has motion effects, similar to those of the preset warning icon of a single target obstacle. Illustratively, a crowd outside the HUD virtual screen is shown with a directional or a non-directional crowd icon, and a crowd inside the screen with a non-directional crowd icon. Optionally, a crowd inside the screen may be shown with a non-directional crowd icon carrying a box, as shown in Fig. 30; the box frames the crowd as a whole and can indicate the actual size of the area the crowd occupies.
It will be understood that the display method may also display, moved apart from one another according to the pedestrians' actual spatial positions: a crowd icon and a pedestrian icon, a crowd icon and another crowd icon, a combined icon and a pedestrian icon, or one combined icon and another combined icon, without limitation.
It will be understood that the group may also be a cyclist group, a vehicle group, etc., so the target-obstacle types may also include a cyclist group, a vehicle group, etc., without limitation.
It will be understood that the icons in a combined icon may include icons of different types, e.g., a pedestrian icon and a cyclist icon, without limitation.
The display method is first described for the scenario in which multiple target obstacles form a group.
When the ego vehicle is waiting at a red light at an intersection, several pedestrians may be on the crosswalk in front of the vehicle, as shown in Fig. 31. In Fig. 31, pedestrians 5 and 6 may, within the preset time (e.g., 2 seconds), have walked to the far right of the crosswalk while staying closer than the preset threshold (e.g., 0.5 meters) and moving in the same direction, and be about to cross from there; pedestrians 7 and 8 may, under the same conditions, have walked from the far right of the crosswalk to its middle.
The head unit can obtain the eye-frame spatial coordinates of pedestrians 5, 6, 7, and 8 and of the HUD virtual screen, and from them draw the plan view of the pedestrians and the screen as seen by the eye, Fig. 32 (the crosswalk lines there are only for realism and may be omitted). The head unit determines that pedestrians 5 and 6 act as a whole, forming crowd 1, and that pedestrians 7 and 8 act as a whole, forming crowd 2; from the plan view it determines that crowd 1 is outside the HUD virtual screen and crowd 2 is inside. It then determines the motion effects of crowd icon 1 (for crowd 1) and crowd icon 2 (for crowd 2) and, from the planar coordinates of the crowds and the screen, their display positions, and controls the HUD, AR-HUD, or other display-capable device to display them with those motion effects. Thus, as shown in Fig. 32, a reduced, first-color, directional crowd icon 1, e.g., reduced, yellow, and directional, can be shown near the pedestrians at the right edge of the screen; and an enlarged, second-color, non-directional, boxed crowd icon 2 overlapping the real crowd's position, e.g., enlarged, red, boxed, and overlapping crowd 2, can be shown inside the screen. This indicates to the driver that crowd 1 is at the corresponding position outside the right of the screen, moving leftward, and prompts the driver that crowd 2 is very close to the vehicle and moving leftward, directly providing intuitive spatial position information about the crowds inside and outside the HUD virtual screen.
In Fig. 32, crowd icons 1 and 2 are pedestrian icons with a corner-badge value of 2; it will be understood that they could also be group icons, without limitation.
It will be understood that the motion effect of crowd icon 1 in Fig. 32 may omit at least one of the reduction, the yellow color, the movement direction, and the directionality, and that of crowd icon 2 at least one of the enlargement, the red color, the movement direction, and the box; both icons may also include distance information, without limitation.
It will be understood that a crowd may also be outside the left of the screen walking left or right, outside the right walking right, or inside the screen walking right, without limitation.
It will be understood that the number of pedestrians in a crowd is not limited to two and may be three, four, or another number, and that the numbers in different crowds may be the same or different, without limitation.
It will be understood that the group may also be a cyclist group, a vehicle group, etc., without limitation.
下面接着详细介绍多个目标障碍物为同类型的多个单独的个体的场景时的显示方法。
当本车辆正在交叉路口等待红路灯时,多个行人可能位于本车辆前方的人行道上,如图33所示。在图33中,行人9和行人10可能为反方向运动且处于人行道的最右侧,且两者之间的距离大于预设阈值(例如0.5米等),行人11和行人12可能为分别从人行道的最左侧和人行道的最右侧反方向运动至人行道的中间,且两者之间的距离小于预设距离(例如0.05米等),行人13和行人14可能为反方向运动且处于人行道的最左侧,且两者之间的距离小于预设距离(例如0.05米等)。
此时,车机可获取行人9、行人10、行人11、行人12、行人13、行人14在人眼坐标系下的空间坐标,和HUD虚拟屏幕在人眼坐标系下的空间坐标,并根据行人9、行人10、行人11、行人12、行人13、行人14在人眼坐标系下的空间坐标和HUD虚拟屏幕在人眼坐标系下的空间坐标绘制人眼所看到的行人9、行人10、行人11、行人12、行人13、行人14和HUD虚拟屏幕的平面图,如图34所示。图34中虽然示出了人行道线,但是可理解,所述人行道线仅仅是为了增加真实感,所述平面图中可省略所述人行道线。车机可确定行人9、行人10、行人11、行人12、行人13和行人14为多个单独的个体。车机还可根据平面图确定行人9、行人10、行人13和行人14位于HUD虚拟屏幕外,并确定行人11和行人12位于HUD虚拟屏幕内。此时,车机可确定行人9、行人10、行人11、行人12、行人13和行人14中每个行人对应的行人图标的运动特效,并可根据行人9、行人10、行人11、行人12、行人13和行人14中每个行人的平面坐标和HUD虚拟屏幕的平面坐标确定行人9、行人10、行人11、行人12、行人13和行人14的各自的行人图标在HUD虚拟屏幕上的显示位置。车机还可根据运动特效确定行人9的行人图标9和行人10的行人图标10的指示图标重叠,行人11的行人图标11和行人12的行人图标12重叠,行人13的行人图标13和行人14的行人图标14的指示图标重叠,并可将行人图标9和行人图标10组合形成组合图标1,将行人图标11和行人图标12按照行人的实际空间位置前后叠放,将行人图标13和行人图标14组合形成组合图标2。其中,组合图标1中行人图标9和行人图标10相互分开且按照实际空间位置显示,组合图标2中行人图标13和行人图标14前后叠放。车机还可确定组合图标1在HUD虚拟屏幕上的显示位置,前后叠放的行人图标11和行人图标12在HUD虚拟屏幕上的显示位置,及组合图标2在HUD虚拟屏幕上的显示位置。车机还可根据组合图标1的显示位置,前后叠放的行人图标11和行人图标12的显示位置,及组合图标2的显示位置控制HUD、AR-HUD或其他具有显示功能的设备将组合图标1、前后叠放的行人图标11和行人图标12、和组合图标2在显示区域进行显示。从而,如图34所示,可在HUD虚拟屏幕上右侧边缘靠近行人9和行人10处显示缩小的、第一颜色的指向性的组合图标1,例如缩小的、黄色的指向性的组合图标1,其中,指向性的组合图标1的箭头指向行人9和行人10;在HUD虚拟屏幕上显示放大的、第二颜色的且与真实世界的行人11和行人12的位置重叠的前后叠放的行人图标11和行人图标12,例如放大的、红色的且与真实世界的行人11和行人12的位置重叠的前后叠放的行人图标11和行人图标12;并在HUD虚拟屏幕上左侧边缘靠近行人11和行人12处显示缩小的、第一颜色的指向性的组合图标2,例如缩小的、黄色的指向性的组合图标2,其中,指向性的组合图标2的箭头指向行人11和行人12。从而,可指示驾驶员HUD虚拟屏幕外右侧对应位置有行人9和行人10,且行人9正朝向右方向,行人10正朝向左方向,同时可提示驾驶员行人11和行人12距离车辆很近,且行人11正朝向右方向,行人12正朝向左方向。此外,还可指示驾驶员HUD虚拟屏幕外左侧对应位置有行人13和行人14,且行人13正朝向左方向,行人14正朝向右方向。从而可直接提供给驾驶员直观的HUD虚拟屏幕内外的人群的空间位置信息。
可理解,图34中的组合图标1中的行人图标9和行人图标10的运动特效可省略缩小的、黄色、和指向性中的至少一种,前后叠放的行人图标11和行人图标12的运动特效可省略放大的、红色中的至少一种,图34中的组合图标2中的行人图标13和行人图标14的运动特效可省略缩小的、黄色、和指向性中的至少一种,本申请对此不作限制。可理解,图34中的组合图标1中的行人图标9和行人图标10,以及组合图标2中的行人图标13和行人图标14还可分别包括距离信息,本申请对此不作限制。
可理解,多个单独的行人对应的行人图标还可为在HUD虚拟屏幕外行人图标重叠但指示图标不重叠,多个单独的行人对应的行人图标还可为在HUD虚拟屏幕内和HUD虚拟屏幕外行人图标重叠但指示图标不重叠,此时需要将多个单独的行人图标按照行人的实际空间位置相互远离分开显示,本申请对此不作限制。
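可理解，上述"指示图标重叠时组合为组合图标，组合图标内部按实际空间位置排布"的逻辑可用如下示意性的Python代码概括。其中的数据结构、字段名与数值均为本申请之外的假设，仅为在一组假设下的草图，并非本申请限定的实现方式：

```python
# 示意：当多个行人图标的指示图标(包围盒)两两重叠时,
# 将其组合为一个组合图标,内部按行人的实际空间位置从左到右排布。

def rects_overlap(a, b):
    """判断两个包围盒 (left, top, right, bottom) 是否重叠。"""
    return not (a[2] <= b[0] or b[2] <= a[0] or
                a[3] <= b[1] or b[3] <= a[1])

def combine_icons(icons):
    """icons: [{"name": 图标名, "rect": 指示图标包围盒, "x": 实际空间横坐标}, ...]
    若所有指示图标两两重叠,则组合为组合图标并按实际空间位置排序;
    否则返回 None,表示各图标分开显示。"""
    n = len(icons)
    for i in range(n):
        for j in range(i + 1, n):
            if not rects_overlap(icons[i]["rect"], icons[j]["rect"]):
                return None
    ordered = sorted(icons, key=lambda icon: icon["x"])
    return [icon["name"] for icon in ordered]

icon9 = {"name": "行人图标9", "rect": (0, 0, 2, 2), "x": 5.0}
icon10 = {"name": "行人图标10", "rect": (1, 0, 3, 2), "x": 3.0}
print(combine_icons([icon9, icon10]))  # ['行人图标10', '行人图标9']
```

返回 None 时对应文中"按照行人的实际空间位置相互远离分开显示"的分支。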
下面再详细介绍多个目标障碍物为类型不同的多个单独的个体的场景下的显示方法。
当本车辆正在交叉路口等待红绿灯时，行人15和骑行人员1可能位于本车辆前方的人行道上，如图35所示。在图35中，行人15和骑行人员1可能为反方向运动且处于人行道的最左侧，且两者之间的距离小于预设阈值(例如0.5米等)。
此时,车机可获取行人15和骑行人员1在人眼坐标系下的空间坐标,和HUD虚拟屏幕在人眼坐标系下的空间坐标,并根据行人15和骑行人员1在人眼坐标系下的空间坐标和HUD虚拟屏幕在人眼坐标系下的空间坐标绘制人眼所看到的行人15、骑行人员1和HUD虚拟屏幕的平面图,如图36所示。图36中虽然示出了人行道线,但是可理解,所述人行道线仅仅是为了增加真实感,所述平面图中可省略所述人行道线。车机可确定行人15和骑行人员1为多个单独的个体。车机还可根据平面图确定行人15和骑行人员1位于HUD虚拟屏幕外。此时,车机可确定行人15对应的行人图标15和骑行人员1对应的骑行人员图标1的运动特效,并可根据行人15和骑行人员1的平面坐标和HUD虚拟屏幕的平面坐标确定行人15的行人图标15和骑行人员1的骑行人员图标1在HUD虚拟屏幕上的显示位置。车机还可根据运动特效确定行人图标15和骑行人员图标1的指示图标重叠,并可将行人图标15和骑行人员图标1组合形成组合图标3。其中,组合图标3中行人图标15和骑行人员图标1相互分开且按照实际空间位置显示。车机还可确定组合图标3在HUD虚拟屏幕上的显示位置,并可根据组合图标3的显示位置控制HUD、AR-HUD或其他具有显示功能的设备将组合图标3在显示区域进行显示。从而,如图36所示,可在HUD虚拟屏幕上左侧边缘靠近行人15和骑行人员1处显示缩小的、第一颜色的指向性的组合图标3,例如缩小的、黄色的指向性的组合图标3,其中,指向性的组合图标3的箭头指向行人15和骑行人员1。从而,可指示驾驶员HUD虚拟屏幕外左侧对应位置有行人15和骑行人员1,且行人15正朝向右方向,骑行人员1正朝向左方向。从而可直接提供给驾驶员直观的HUD虚拟屏幕外的行人和骑行人员的空间位置信息。
可理解，图36中的组合图标3中的行人图标15和骑行人员图标1的运动特效可省略缩小的、黄色、和指向性中的至少一种，本申请对此不作限制。可理解，图36中的组合图标3中的行人图标15和骑行人员图标1还可分别包括距离信息，本申请对此不作限制。可理解，所述组合图标中的图标可为行人图标和车辆图标，所述组合图标中的图标还可为骑行人员图标和车辆图标，本申请对此不作限制。
可理解,在多个单独的个体的场景下也可包括多个单独的预设预警图标,本申请对此不作限制。
可理解,上述的多个目标障碍物的场景可组合在一起,本申请对此不作限制。
可理解,可能存在相邻的两个目标障碍物不能形成一群体,但是相互之间隔着一个目标障碍物的两个目标障碍物可形成一群体,本申请对此不作限制。
可理解,多个目标障碍物还可从群体的场景切换至包括多个单独的个体的场景。下面将以图31所示的群体的场景为例进行说明。
随着图31中的多个行人沿着人行道的行走，不同行人的行走速度可能不相同，多个行人可能行走至图37所示的位置。在图37中，行人5和行人6可能在预设时间(例如2秒等)内以两者之间的距离小于预设阈值(例如0.5米等)且为同方向运动从人行道的最右侧沿着人行道走到人行道的中间，行人7和行人8可能从人行道的中间沿着人行道走到人行道的最左侧，且两者之间的距离大于预设阈值(例如0.5米等)。
此时,车机可获取行人5、行人6、行人7、行人8在人眼坐标系下的空间坐标,和HUD虚拟屏幕在人眼坐标系下的空间坐标,并根据行人5、行人6、行人7、行人8在人眼坐标系下的空间坐标和HUD虚拟屏幕在人眼坐标系下的空间坐标绘制人眼所看到的行人5、行人6、行人7、行人8和HUD虚拟屏幕的平面图,如图38所示。图38中虽然示出了人行道线,但是可理解,所述人行道线仅仅是为了增加真实感, 所述平面图中可省略所述人行道线。车机可确定行人5和行人6可作为一个整体,形成人群3,并确定行人7和行人8为两个单独的个体。车机还可根据平面图确定人群3位于HUD虚拟屏幕内,并确定行人7和行人8位于HUD虚拟屏幕外。此时,车机可确定人群3对应的人群图标3的运动特效、行人7对应的行人图标7的运动特效和行人8对应的行人图标8的运动特效,并可根据人群3的平面坐标、行人7的平面坐标、行人8的平面坐标和HUD虚拟屏幕的平面坐标分别确定人群3对应的人群图标3、行人7对应的行人图标7和行人8对应的行人图标8在HUD虚拟屏幕上的显示位置。车机还可根据运动特效确定行人7的行人图标7和行人8的行人图标8的指示图标重叠,将行人图标7和行人图标8组合形成组合图标4,并可确定组合图标4在HUD虚拟屏幕上的显示位置。车机还可根据人群图标3在HUD虚拟屏幕上的显示位置控制HUD、AR-HUD或其他具有显示功能的设备根据所述运动特效将人群图标3在显示区域进行显示,并根据组合图标4在HUD虚拟屏幕上的显示位置控制HUD、AR-HUD或其他具有显示功能的设备将组合图标4在显示区域进行显示。从而,如图38所示,可在HUD虚拟屏幕上显示放大的、第二颜色的、不带有指向性的、带有方框的、且与真实世界的人群的位置重叠的人群图标3,例如放大的、红色的、带有方框的、且与真实世界的人群3的位置重叠的人群图标3;并可在HUD虚拟屏幕上左侧边缘靠近行人7和行人8处显示缩小的、第一颜色的指向性的组合图标4,例如缩小的、黄色的指向性的组合图标4,其中,指向性的组合图标4的箭头指向行人7和行人8。从而,可提示驾驶员人群3距离车辆很近,同时可指示驾驶员HUD虚拟屏幕外左侧对应位置有行人7和行人8,且行人7和行人8正朝向左方向远离车辆。从而,可直接提供给驾驶员直观的HUD虚拟屏幕内外的人群的空间位置信息。
在图38中,人群图标3为添加有角标值2的行人图标,可理解,人群图标3还可为群图标,本申请对此不作限制。
可理解,图38中的人群图标3的运动特效可省略放大的、红色、带有方框中的至少一种,组合图标4中的行人图标7和行人图标8的运动特效可省略缩小的、黄色、和指向性中的至少一种,本申请对此不作限制。可理解,图38中的人群图标3和组合图标4中的行人图标7和行人图标8还可分别包括距离信息,本申请对此不作限制。
可理解,从群体的场景切换至包括多个单独的个体的场景还可为,例如随着图31中的多个行人沿着人行道的行走,人群中的行人的行走方向可能变化为相反方向,本申请对此不作限制。
多个目标障碍物还可从同类型的多个单独的个体的场景切换至包括群体的场景。下面将以图33所示的同类型的多个单独的个体的场景为例进行说明。
随着图33中的多个行人沿着人行道的行走，行人的行走速度和行走方向可能会改变，例如行人9加快行走速度，行人11改变行走方向，行人14加快行走速度，多个行人可能行走至图39所示的位置。在图39中，行人9、行人11和行人12可能在预设时间(例如2秒等)内以两两之间的距离小于预设阈值(例如0.5米等)且为同方向运动走到人行道的最左侧，行人14可能从人行道的最左侧沿着人行道走到人行道的最右侧。
此时，车机可获取行人9、行人11、行人12、行人14在人眼坐标系下的空间坐标，和HUD虚拟屏幕在人眼坐标系下的空间坐标，并根据行人9、行人11、行人12、行人14在人眼坐标系下的空间坐标和HUD虚拟屏幕在人眼坐标系下的空间坐标绘制人眼所看到的行人9、行人11、行人12、行人14和HUD虚拟屏幕的平面图，如图40所示。图40中虽然示出了人行道线，但是可理解，所述人行道线仅仅是为了增加真实感，所述平面图中可省略所述人行道线。车机可确定行人9、行人11和行人12可作为一个整体，形成人群4，并确定行人14为单独的个体。车机还可根据平面图确定人群4位于HUD虚拟屏幕外，并确定行人14位于HUD虚拟屏幕外。此时，车机可确定人群4对应的人群图标4的运动特效和行人14对应的行人图标14的运动特效，并可根据人群4的平面坐标、行人14的平面坐标和HUD虚拟屏幕的平面坐标分别确定人群4对应的人群图标4和行人14对应的行人图标14在HUD虚拟屏幕上的显示位置。车机还可根据人群图标4和行人图标14在HUD虚拟屏幕上的显示位置控制HUD、AR-HUD或其他具有显示功能的设备根据所述运动特效将人群图标4和行人图标14在显示区域进行显示。从而，如图40所示，可在HUD虚拟屏幕上左侧边缘靠近人群4处显示缩小的、第一颜色的、指向性的人群图标4，例如缩小的、黄色的、指向性的人群图标4；并可在HUD虚拟屏幕上右侧边缘靠近行人14处显示缩小的、第一颜色的指向性的行人图标14，例如缩小的、黄色的指向性的行人图标14，其中，指向性的人群图标4的箭头指向人群4，指向性的行人图标14的箭头指向行人14。从而，可指示驾驶员HUD虚拟屏幕外左侧对应位置有人群4正朝向左方向远离车辆，且HUD虚拟屏幕外右侧对应位置有行人14正朝向右方向远离车辆。从而，可直接提供给驾驶员直观的HUD虚拟屏幕外的人群的空间位置信息。
在图40中,人群图标4为添加有角标值3的行人图标,可理解,人群图标4还可为群图标,本申请对此不作限制。
可理解,图40中的人群图标4的运动特效可省略缩小的、黄色、运动方向、指向性中的至少一种,行人图标14的运动特效可省略缩小的、黄色、运动方向、和指向性中的至少一种,本申请对此不作限制。可理解,图40中的人群图标4和行人图标14还可分别包括距离信息,本申请对此不作限制。
可理解,在多目标障碍物场景下,若目标障碍物运动,则预设预警图标可为动态的预设预警图标,本申请对此不作限制。
本申请的显示方法不仅可对本车辆前方的目标障碍物进行预警,还可对本车辆非前方的目标障碍物进行预警,例如对本车辆后方或者本车辆侧方的目标障碍物进行预警。具体地:
本车辆可能正在道路上行驶,本车辆的左后方可能存在有正在行进的车辆,如图41所示。在图41中,本车辆的左后方为车辆,但是可理解,本车辆的左后方还可为其他物体,例如行人、骑行人员等;车辆还可位于本车辆的左侧、正后方、右后方、右侧等,本申请对此不作限制。车机可根据传感器所感测的数据及第二预警范围确定本车辆非前方的目标障碍物为车辆,并确定目标障碍物车辆相对于本车辆的方位为左后方。车机可根据目标障碍物车辆相对于本车辆的方位绘制HUD虚拟屏幕和目标障碍物车辆的标示的方位分布的平面图,如图42所示。在图42中,目标障碍物车辆的标示4201和HUD虚拟屏幕4202之间的方位与目标障碍物车辆相对于本车辆的方位相同。车机还可根据目标障碍物车辆相对于本车辆的方位确定平面图中所述车辆位于本车辆外的左下角区域。车机还可根据平面图中车辆位于本车辆外的左下角区域确定预设预警图标在HUD虚拟屏幕上的显示位置。车机根据平面图中车辆位于本车辆外的左下角区域确定预设预警图标在HUD虚拟屏幕上的显示位置的过程,与上述的图13中的确定预设预警图标在HUD虚拟屏幕上的显示位置的过程相似,本申请对此不作限制。车机还可根据显示位置控制HUD、AR-HUD或其他具有显示功能的设备将预设预警图标在显示区域进行显示。从而,如图43所示,可在HUD虚拟屏幕上左下角显示预设预警图标,例如叹号图标。则,可指示驾驶员本车辆左后方有车辆。
可理解,随着左后方的车辆的行进,HUD虚拟屏幕上的预设预警图标的位置也随着变化,例如,随着左后方车辆逐渐超越本车辆,预设预警图标的位置可以由图43所示左下角逐渐移动至左上角,直至车辆不再识别为本车辆周围的目标障碍物,此时图43中的叹号图标消失,本申请对此不作限制。
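可理解，上述"根据目标障碍物相对本车辆的方位确定预警图标显示区域"的对应关系可用如下示意性的Python代码表示。其中方位的取值与屏幕区域的划分均为假设的简化，仅作示意：

```python
def warning_icon_region(bearing):
    """根据第二目标障碍物相对本车辆的方位,
    确定预设预警图标(如叹号图标)在HUD虚拟屏幕上的显示区域。
    方位字符串与区域名称均为本示例假设的取值。"""
    mapping = {
        "left_rear": "左下角",    # 如图43所示的左后方车辆
        "right_rear": "右下角",
        "left": "左侧中部",
        "right": "右侧中部",
        "rear": "下侧中部",
    }
    # 未识别的方位退化为下侧中部显示
    return mapping.get(bearing, "下侧中部")

print(warning_icon_region("left_rear"))  # 左下角
```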
请参考图44,为本申请实施例提供的显示方法的流程图。方法包括:
S4401:获取第一目标障碍物的位置和所述电子设备的屏幕的显示范围;其中,所述第一目标障碍物的位置位于所述屏幕的显示范围外。
S4402:根据所述第一目标障碍物的位置和所述屏幕的显示范围确定第一预设预警图标在所述屏幕上的第一显示位置,所述第一预设预警图标用于提示所述第一目标障碍物。
S4403:根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标。
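可理解，上述S4401至S4403的流程可用如下示意性的Python代码概括：障碍物位于屏幕显示范围外时，将其平面坐标夹取到最近的屏幕边缘作为第一显示位置。其中的函数名、坐标与屏幕范围均为假设，仅为简化模型的草图：

```python
def clamp(value, low, high):
    """将数值限制在 [low, high] 区间内。"""
    return max(low, min(high, value))

def first_display_position(obstacle_xy, screen_rect):
    """S4402 的示意:根据障碍物平面坐标与屏幕显示范围
    (left, top, right, bottom),返回图标在屏幕边缘的第一显示位置。"""
    left, top, right, bottom = screen_rect
    x, y = obstacle_xy
    return (clamp(x, left, right), clamp(y, top, bottom))

# S4401: 获取第一目标障碍物的位置和屏幕的显示范围
obstacle = (-3.0, 1.5)            # 位于屏幕左侧之外(假设数值)
screen = (0.0, 0.0, 8.0, 4.0)
# S4402: 确定第一显示位置(落在左侧边缘)
position = first_display_position(obstacle, screen)
# S4403: 根据第一显示位置显示第一预设预警图标(此处仅打印示意)
print(position)                   # (0.0, 1.5)
```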
在一些实施例中,所述第一预设预警图标包括第一图标和第二图标,所述第一图标为指示图标,所述指示图标用于指示所述第一目标障碍物的方向,所述第二图标与所述第一目标障碍物对应。
在一些实施例中,所述屏幕的显示范围为通过所述屏幕能够看到的可视范围大小。
在一些实施例中,所述根据所述第一目标障碍物的位置和所述屏幕的显示范围确定第一预设预警图标在所述屏幕上的第一显示位置之前,所述方法还包括:获取多个所述第一目标障碍物的信息;多个所述第一目标障碍物的信息包括多个所述第一目标障碍物之间的距离,每个所述第一目标障碍物的运动方向,和每个所述第一目标障碍物的类型;若多个所述第一目标障碍物的信息满足预设条件,则确定多个所述第一目标障碍物的类型为群体类型;根据所述群体类型确定所述第一预设预警图标;其中,所述第一预设预警图标为预设的群体图标。
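可理解，上述"多个第一目标障碍物的信息满足预设条件则判定为群体类型"的判据可用如下示意性的Python代码表示。其中0.5米的阈值、"方向相同"的比较方式与数据结构均为假设，仅作示意：

```python
import math

def is_group(obstacles, distance_threshold=0.5):
    """obstacles: [(x, y, direction), ...],direction 为运动方向。
    当所有个体两两之间的距离小于阈值且运动方向相同时,
    判定多个目标障碍物为群体类型。"""
    if len(obstacles) < 2:
        return False
    for i in range(len(obstacles)):
        for j in range(i + 1, len(obstacles)):
            xi, yi, di = obstacles[i]
            xj, yj, dj = obstacles[j]
            # 两两距离不小于阈值,或运动方向不同,则不构成群体
            if math.hypot(xi - xj, yi - yj) >= distance_threshold:
                return False
            if di != dj:
                return False
    return True

print(is_group([(0.0, 0.0, "left"), (0.3, 0.0, "left")]))  # True
```

类型不同(例如行人与骑行人员)的判据可在此基础上增加类型比较，此处从略。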
在一些实施例中,所述预设的群体图标为添加有角标值的第二图标,所述角标值为所述第一目标障碍物的数量;或者,所述预设的群体图标为群图标。
在一些实施例中,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:若多个所述第一目标障碍物为多个单独的个体,且多个所述第一预设预警图标的所述第一图标重叠,则根据所述第一显示位置在所述屏幕上显示将多个所述第一预设预警图标组合成的组合图标;其中,所述组合图标包括所述第一图标,且所述组合图标中的多个所述第二图标按照所述第一目标障碍物的实际空间位置排布。
在一些实施例中,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:若多个所述第一目标障碍物为多个单独的个体,且多个所述第一预设预警图标的所述第一图标不重叠但所述第二图标重叠,则根据所述第一显示位置在所述屏幕上显示按照所述第一目标障碍物的实际空间位置相互远离分开后的多个所述第一预设预警图标。
在一些实施例中,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:根据所述第一显示位置在所述屏幕上显示指向性的所述第一预设预警图标;所述方法还包括:所述第一目标障碍物的位置位于所述屏幕的显示范围内时,根据所述第一目标障碍物的位置确定所述第一预设预警图标在所述屏幕上的第二显示位置,在所述屏幕的所述第二显示位置显示不带有指向性的所述第一预设预警图标。
在一些实施例中，所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标，包括：所述第一目标障碍物与所述车辆之间的距离为第一距离时，根据所述第一显示位置在所述屏幕上显示第一尺寸的所述第一预设预警图标；所述第一目标障碍物与所述车辆之间的距离为第二距离时，根据所述第一显示位置在所述屏幕上显示第二尺寸的所述第一预设预警图标；其中，所述第一距离大于所述第二距离，所述第一尺寸大于所述第二尺寸。
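可理解，上述"第一距离大于第二距离时第一尺寸大于第二尺寸"的单调关系可用如下示意性的Python代码表示。其中的距离端点、尺寸端点与线性插值方式均为假设，仅作示意：

```python
def icon_size_for_distance(distance, d_small=5.0, d_large=50.0,
                           size_small=16.0, size_large=48.0):
    """按文中所述的单调关系取图标尺寸:距离越大,尺寸越大,
    即第一距离大于第二距离时,第一尺寸大于第二尺寸。
    端点数值均为假设,超出端点时取端点尺寸。"""
    if distance <= d_small:
        return size_small
    if distance >= d_large:
        return size_large
    t = (distance - d_small) / (d_large - d_small)
    return size_small + t * (size_large - size_small)
```

颜色随距离的变化(第一颜色/第二颜色)也可按同样的方式由距离区间映射得到。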
在一些实施例中，所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标，包括：所述第一目标障碍物与所述车辆之间的距离为第一距离时，根据所述第一显示位置在所述屏幕上显示第一颜色的所述第一预设预警图标；所述第一目标障碍物与所述车辆之间的距离为第二距离时，根据所述第一显示位置在所述屏幕上显示第二颜色的所述第一预设预警图标。
在一些实施例中,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:所述第一目标障碍物正在移动时,根据所述第一显示位置在所述屏幕上显示动态的所述第一预设预警图标。
在一些实施例中，所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标，包括：根据所述第一显示位置在所述屏幕上显示带有第一运动方向的所述第一预设预警图标，所述第一运动方向根据所述第一目标障碍物的第二运动方向确定。
在一些实施例中,所述获取第一目标障碍物的位置和所述电子设备的屏幕的显示范围,包括:获取所述第一目标障碍物和所述屏幕的显示范围的第一平面图;所述根据所述第一目标障碍物的位置和所述屏幕的显示范围确定第一预设预警图标在所述屏幕上的第一显示位置,包括:根据所述第一目标障碍物和所述屏幕的显示范围在所述第一平面图中的位置,确定第一预设预警图标在所述屏幕上的第一显示位置。
在一些实施例中,在所述获取所述第一目标障碍物和所述屏幕的显示范围的第一平面图之前,所述方法还包括:获取所述第一目标障碍物的第一空间坐标和所述屏幕的第二空间坐标;所述第一空间坐标为在人眼坐标系下的第一空间坐标,所述第二空间坐标为在人眼坐标系下的第二空间坐标;所述获取所述第一目标障碍物和所述屏幕的显示范围的第一平面图包括:根据所述第一空间坐标和所述第二空间坐标获取人眼视角下的所述第一目标障碍物和所述屏幕的显示范围的第一平面图。
在一些实施例中,所述根据所述第一空间坐标和所述第二空间坐标获取人眼视角下的所述第一目标障碍物和所述屏幕的显示范围的第一平面图包括:根据所述第一空间坐标和所述第二空间坐标获取人眼视角下的所述第一目标障碍物的标示和所述屏幕的显示范围的第一平面图;其中,在所述第一平面图中,所述第一目标障碍物的标示的中心点位于所述屏幕的显示范围外;所述根据所述第一目标障碍物和所述屏幕的显示范围在所述第一平面图中的位置确定第一预设预警图标在所述屏幕上的第一显示位置包括:根据所述第一目标障碍物的标示的中心点和所述屏幕的显示范围在所述第一平面图中的位置确定所述第一预设预警图标在所述屏幕上的第一显示位置;所述中心点包括重心或几何中心。
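可理解，"根据人眼坐标系下的空间坐标获取人眼视角下的平面图"以及"取标示的中心点"两步可用如下示意性的Python代码表示。其中采用简化的针孔投影模型，中心点以顶点算术平均近似，均为本申请之外的假设：

```python
def project_to_screen_plane(point_eye, screen_distance):
    """将人眼坐标系下的空间点 (x, y, z) 按简化的针孔模型
    透视投影到距人眼 screen_distance 处的HUD虚拟屏幕平面,
    得到平面坐标。假设 z 轴沿视线方向且 z > 0。"""
    x, y, z = point_eye
    if z <= 0:
        raise ValueError("点必须位于人眼前方 (z > 0)")
    scale = screen_distance / z
    return (x * scale, y * scale)

def mark_center(vertices):
    """以顶点算术平均近似目标障碍物标示的中心点
    (对应文中的"重心或几何中心",此近似方式为假设)。"""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# 人眼前方10米处的点,投影到5米处的虚拟屏幕平面
print(project_to_screen_plane((2.0, 1.0, 10.0), 5.0))  # (1.0, 0.5)
```

中心点位于屏幕显示范围外时，即可按前述方式将其夹取到屏幕边缘确定第一显示位置。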
在一些实施例中,所述方法还包括:获取第二目标障碍物相对于所述车辆的方位;根据所述第二目标障碍物相对于所述车辆的方位确定第二预设预警图标在所述屏幕上的第三显示位置,所述第二预设预警图标用于提示所述第二目标障碍物;根据所述第三显示位置在所述屏幕上显示所述第二预设预警图标。
在一些实施例中,所述根据所述第二目标障碍物相对于所述车辆的方位确定第二预设预警图标在所述屏幕上的第三显示位置包括:根据所述第二目标障碍物相对于所述车辆的方位获取所述第二目标障碍物和所述屏幕的显示范围的方位分布的第二平面图;根据所述第二目标障碍物和所述屏幕的显示范围在所述第二平面图中的位置确定所述第二预设预警图标在所述屏幕上的第三显示位置。
在一些实施例中,所述第一目标障碍物位于所述车辆的前方,所述第二目标障碍物位于所述车辆的非前方。
本申请获取第一目标障碍物的位置和屏幕的显示范围,其中第一目标障碍物的位置位于屏幕的显示范围外。本申请还根据屏幕的显示范围和第一目标障碍物的位置确定第一预设预警图标在屏幕上的第一显示位置,并根据第一显示位置在屏幕上显示第一预设预警图标,从而可直接提供给驾驶员直观的屏幕的显示范围外的目标障碍物的空间位置信息。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、ROM、RAM、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如，DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (21)

  1. 一种显示方法,应用于电子设备,所述电子设备为车辆或者被设置在所述车辆中,其特征在于,所述方法包括:
    获取第一目标障碍物的位置和所述电子设备的屏幕的显示范围;其中,所述第一目标障碍物的位置位于所述屏幕的显示范围外;
    根据所述第一目标障碍物的位置和所述屏幕的显示范围确定第一预设预警图标在所述屏幕上的第一显示位置,所述第一预设预警图标用于提示所述第一目标障碍物;
    根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标。
  2. 如权利要求1所述的方法,其特征在于:
    所述第一预设预警图标包括第一图标和第二图标,所述第一图标为指示图标,所述指示图标用于指示所述第一目标障碍物的方向,所述第二图标与所述第一目标障碍物对应。
  3. 如权利要求1至2任一项所述的方法,其特征在于:所述屏幕的显示范围为通过所述屏幕能够看到的可视范围大小。
  4. 如权利要求1至3任一项所述的方法,其特征在于:所述根据所述第一目标障碍物的位置和所述屏幕的显示范围确定第一预设预警图标在所述屏幕上的第一显示位置之前,所述方法还包括:
    获取多个所述第一目标障碍物的信息;多个所述第一目标障碍物的信息包括多个所述第一目标障碍物之间的距离,每个所述第一目标障碍物的运动方向,和每个所述第一目标障碍物的类型;
    若多个所述第一目标障碍物的信息满足预设条件,则确定多个所述第一目标障碍物的类型为群体类型;
    根据所述群体类型确定所述第一预设预警图标;其中,所述第一预设预警图标为预设的群体图标。
  5. 如权利要求4所述的方法,其特征在于:
    所述预设的群体图标为添加有角标值的第二图标,所述角标值为所述第一目标障碍物的数量;
    或者,所述预设的群体图标为群图标。
  6. 如权利要求2至5任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    若多个所述第一目标障碍物为多个单独的个体,且多个所述第一预设预警图标的所述第一图标重叠,则根据所述第一显示位置在所述屏幕上显示将多个所述第一预设预警图标组合成的组合图标;其中,所述组合图标包括所述第一图标,且所述组合图标中的多个所述第二图标按照所述第一目标障碍物的实际空间位置排布。
  7. 如权利要求2至6任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    若多个所述第一目标障碍物为多个单独的个体,且多个所述第一预设预警图标的所述第一图标不重叠但所述第二图标重叠,则根据所述第一显示位置在所述屏幕上显示按照所述第一目标障碍物的实际空间位置相互远离分开后的多个所述第一预设预警图标。
  8. 如权利要求1至7任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    根据所述第一显示位置在所述屏幕上显示指向性的所述第一预设预警图标;
    所述方法还包括:
    所述第一目标障碍物的位置位于所述屏幕的显示范围内时,根据所述第一目标障碍物的位置确定所述第一预设预警图标在所述屏幕上的第二显示位置,在所述屏幕的所述第二显示位置显示不带有指向性的所述第一预设预警图标。
  9. 如权利要求1至8任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    所述第一目标障碍物与所述车辆之间的距离为第一距离时，根据所述第一显示位置在所述屏幕上显示第一尺寸的所述第一预设预警图标；
    所述第一目标障碍物与所述车辆之间的距离为第二距离时,根据所述第一显示位置在所述屏幕上显示第二尺寸的所述第一预设预警图标;其中,所述第一距离大于所述第二距离,所述第一尺寸大于所述第二尺寸。
  10. 如权利要求1至9任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    所述第一目标障碍物与所述车辆之间的距离为第一距离时，根据所述第一显示位置在所述屏幕上显示第一颜色的所述第一预设预警图标；
    所述第一目标障碍物与所述车辆之间的距离为第二距离时,根据所述第一显示位置在所述屏幕上显示第二颜色的所述第一预设预警图标。
  11. 如权利要求1至10任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    所述第一目标障碍物正在移动时,根据所述第一显示位置在所述屏幕上显示动态的所述第一预设预警图标。
  12. 如权利要求1至11任一项所述的方法,其特征在于,所述根据所述第一显示位置在所述屏幕上显示所述第一预设预警图标,包括:
    根据所述第一显示位置在所述屏幕上显示带有第一运动方向的所述第一预设预警图标，所述第一运动方向根据所述第一目标障碍物的第二运动方向确定。
  13. 根据权利要求1至12任一项所述的方法,其特征在于:
    所述获取第一目标障碍物的位置和所述电子设备的屏幕的显示范围,包括:
    获取所述第一目标障碍物和所述屏幕的显示范围的第一平面图;
    所述根据所述第一目标障碍物的位置和所述屏幕的显示范围确定第一预设预警图标在所述屏幕上的第一显示位置,包括:
    根据所述第一目标障碍物和所述屏幕的显示范围在所述第一平面图中的位置,确定第一预设预警图标在所述屏幕上的第一显示位置。
  14. 如权利要求13所述的方法,其特征在于:
    在所述获取所述第一目标障碍物和所述屏幕的显示范围的第一平面图之前,所述方法还包括:
    获取所述第一目标障碍物的第一空间坐标和所述屏幕的第二空间坐标;所述第一空间坐标为在人眼坐标系下的第一空间坐标,所述第二空间坐标为在人眼坐标系下的第二空间坐标;
    所述获取所述第一目标障碍物和所述屏幕的显示范围的第一平面图包括:
    根据所述第一空间坐标和所述第二空间坐标获取人眼视角下的所述第一目标障碍物和所述屏幕的显示范围的第一平面图。
  15. 如权利要求14所述的方法,其特征在于:
    所述根据所述第一空间坐标和所述第二空间坐标获取人眼视角下的所述第一目标障碍物和所述屏幕的显示范围的第一平面图包括:
    根据所述第一空间坐标和所述第二空间坐标获取人眼视角下的所述第一目标障碍物的标示和所述屏幕的显示范围的第一平面图;其中,在所述第一平面图中,所述第一目标障碍物的标示的中心点位于所述屏幕的显示范围外;
    所述根据所述第一目标障碍物和所述屏幕的显示范围在所述第一平面图中的位置确定第一预设预警图标在所述屏幕上的第一显示位置包括:
    根据所述第一目标障碍物的标示的中心点和所述屏幕的显示范围在所述第一平面图中的位置确定所述第一预设预警图标在所述屏幕上的第一显示位置;
    所述中心点包括重心或几何中心。
  16. 如权利要求1至15任一项所述的方法,其特征在于,所述方法还包括:
    获取第二目标障碍物相对于所述车辆的方位;
    根据所述第二目标障碍物相对于所述车辆的方位确定第二预设预警图标在所述屏幕上的第三显示位置，所述第二预设预警图标用于提示所述第二目标障碍物；
    根据所述第三显示位置在所述屏幕上显示所述第二预设预警图标。
  17. 如权利要求16所述的方法,其特征在于,所述根据所述第二目标障碍物相对于所述车辆的方位确定第二预设预警图标在所述屏幕上的第三显示位置包括:
    根据所述第二目标障碍物相对于所述车辆的方位获取所述第二目标障碍物和所述屏幕的显示范围的方位分布的第二平面图;
    根据所述第二目标障碍物和所述屏幕的显示范围在所述第二平面图中的位置确定所述第二预设预警图标在所述屏幕上的第三显示位置。
  18. 如权利要求16至17任一项所述的方法,其特征在于:
    所述第一目标障碍物位于所述车辆的前方,所述第二目标障碍物位于所述车辆的非前方。
  19. 一种电子设备，所述电子设备为车辆或者被设置在所述车辆中，其特征在于，所述电子设备包括处理器和存储器，所述存储器用于存储程序指令，所述处理器调用所述程序指令时，实现如权利要求1至18任一项所述的显示方法。
  20. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有程序,所述程序使得电子设备实现如权利要求1至18任一项所述的显示方法。
  21. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机执行指令,所述计算机执行指令存储在计算机可读存储介质中;电子设备的至少一个处理器可以从所述计算机可读存储介质中读取所述计算机执行指令,所述至少一个处理器执行所述计算机执行指令使得所述电子设备执行如权利要求1至18任一项所述的显示方法。
PCT/CN2023/111325 2022-08-17 2023-08-04 显示方法、电子设备、存储介质及程序产品 WO2024037363A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210987609.7A CN117631911A (zh) 2022-08-17 2022-08-17 显示方法、电子设备、存储介质及程序产品
CN202210987609.7 2022-08-17

Publications (1)

Publication Number Publication Date
WO2024037363A1 true WO2024037363A1 (zh) 2024-02-22

Family

ID=89940672

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/111325 WO2024037363A1 (zh) 2022-08-17 2023-08-04 显示方法、电子设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN117631911A (zh)
WO (1) WO2024037363A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170065738A (ko) * 2015-12-03 2017-06-14 현대오트론 주식회사 헤드업 디스플레이 제어 장치 및 방법
CN110070623A (zh) * 2019-04-16 2019-07-30 百度在线网络技术(北京)有限公司 引导线绘制提示方法、装置、计算机设备和存储介质
US20200298704A1 (en) * 2017-10-10 2020-09-24 Maxell, Ltd. Information display apparatus
CN112109550A (zh) * 2020-09-08 2020-12-22 中国第一汽车股份有限公司 基于ar-hud的预警信息的显示方法、装置、设备及车辆
CN112356850A (zh) * 2020-10-27 2021-02-12 恒大新能源汽车投资控股集团有限公司 一种基于盲点检测的预警方法、装置及电子设备
US20220024316A1 (en) * 2019-03-20 2022-01-27 Yuuki Suzuki Display control apparatus, display apparatus, display system, moving body, program, and image generation method
CN114290990A (zh) * 2021-12-24 2022-04-08 浙江吉利控股集团有限公司 车辆a柱盲区的障碍物预警***、方法和信号处理装置
CN114298908A (zh) * 2021-12-30 2022-04-08 阿波罗智联(北京)科技有限公司 一种障碍物展示方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN117631911A (zh) 2024-03-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23854266; Country of ref document: EP; Kind code of ref document: A1)