CN111731101A - AR-HUD display method and system fusing V2X information - Google Patents

AR-HUD display method and system fusing V2X information

Info

Publication number
CN111731101A
Authority
CN
China
Prior art keywords
information
vehicle
blind area
determining
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010850072.0A
Other languages
Chinese (zh)
Other versions
CN111731101B (en)
Inventor
胡爽 (Hu Shuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Joynext Technology Corp
Original Assignee
Ningbo Joynext Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Joynext Technology Corp filed Critical Ningbo Joynext Technology Corp
Priority to CN202010850072.0A priority Critical patent/CN111731101B/en
Publication of CN111731101A publication Critical patent/CN111731101A/en
Application granted granted Critical
Publication of CN111731101B publication Critical patent/CN111731101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B60K35/00 Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20 Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/21 Output arrangements using visual output, e.g. blinking lights or matrix displays
    • B60K35/22 Display screens
    • B60Q1/00 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q2400/50 Projected symbol or information, e.g. onto the road or car body
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays comprising information/image processing systems
    • G02B2027/0141 Head-up displays characterised by the informative content of the display
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to an AR-HUD display method and system fusing V2X information, comprising the following steps: acquiring V2X information and environment information; processing the V2X information and the environment information with a perception algorithm to determine traffic participant information in the vehicle blind area and obstacle information around the vehicle; fusing the traffic participant information of the vehicle blind area into a projection image to generate the projection content information of the AR-HUD; and sending the fused projection content information to an imaging device for projection imaging. By acquiring V2X information, the scheme can perceive traffic in blind areas that the vehicle-mounted sensors cannot detect; the V2X information is fused with the information detected by the vehicle-mounted sensors and projected by the AR-HUD, warning the driver of pedestrians, vehicles, and other traffic in the blind area and thereby improving driving safety.

Description

AR-HUD display method and system fusing V2X information
Technical Field
The application relates to the technical field of auxiliary driving, in particular to an AR-HUD display method and system fusing V2X information.
Background
At present, known vehicle-mounted AR-HUD systems rarely consider fusing V2X information; only patents CN110619746A and CN209257986U touch on such a system, and neither considers early perception and early warning of traffic in the detection blind areas of on-board sensors such as the vehicle-mounted camera and radar.
In the related art, known AR-HUD systems therefore cannot display traffic information from the detection blind areas of sensors such as the camera and radar, and cannot perceive and warn in advance in a pedestrian "ghost probe" scenario, in which a pedestrian suddenly steps into the road from behind an occlusion.
Disclosure of Invention
To overcome, at least to some extent, the problems in the related art, the present application provides an AR-HUD display method and system that fuses V2X information.
According to a first aspect of embodiments of the present application, there is provided an AR-HUD display method fusing V2X information, including:
acquiring V2X information and environment information;
carrying out perception algorithm processing on the V2X information and the environmental information to determine traffic participant information of the vehicle blind area and obstacle information around the vehicle;
fusing the information of the traffic participants in the vehicle blind area into a projection image to generate projection content information of the AR-HUD;
and sending the fused projection content information to an imaging device so as to enable the imaging device to perform projection imaging.
Further, the determining of the traffic participant information of the vehicle blind area and the obstacle information around the vehicle includes:
determining the information of the traffic participants in the vehicle blind area according to the V2X information;
obstacle information around the vehicle is determined from the environmental information.
Further, the V2X information includes first V2X information and second V2X information; the first V2X information is information acquired by a camera and a radar on the road side, and the second V2X information is information transmitted by other vehicles.
Further, the traffic participant information of the vehicle blind area is determined according to the V2X information, and comprises the following steps:
determining the position information and the speed information of pedestrians and/or non-motor vehicles in the blind area of the vehicle according to the first V2X information;
and determining the position information and the speed information of the motor vehicle in the blind area of the vehicle according to the second V2X information.
Further, the environment information comprises image data and vehicle-mounted radar data around a vehicle body, which are acquired by a vehicle-mounted camera;
accordingly, the determining obstacle information around the vehicle according to the environment information includes:
determining the type of obstacles around the vehicle through object detection and classification algorithm processing according to image data around the vehicle body;
and performing fusion positioning according to the vehicle-mounted radar data, and determining the motion state information of obstacles around the vehicle.
Further, the fusion of the traffic participant information of the vehicle blind area into the projection image comprises:
acquiring map information and vehicle positioning information;
determining a vehicle blind area according to the obstacle information and the vehicle positioning information;
determining the information of the traffic participants needing to be displayed according to the vehicle blind areas and the vehicle positioning information;
and fusing the traffic participant information needing to be displayed into the projection image.
Further, the method further comprises:
fusing basic state information of the vehicle to a fixed position of a close-range layer of the projected image;
calibrating the projection position of the auxiliary driving information according to the actual road environment, and fusing the auxiliary driving information to the calibration position of the distant view layer of the projection image;
wherein the basic state information comprises at least one of: vehicle speed, road speed limit, remaining fuel/charge; and the driving assistance information comprises at least one of: ADAS information, AR navigation information.
According to a second aspect of embodiments of the present application, there is provided an AR-HUD display system fusing V2X information, comprising:
the vehicle-end V2X equipment is used for receiving V2X information sent by the road-end V2X equipment and/or other vehicles;
the electronic control unit is used for acquiring the V2X information and the environmental information, performing perception algorithm processing on the V2X information and the environmental information, and determining the information of traffic participants in the vehicle blind area and the information of obstacles around the vehicle;
the information fusion unit is used for fusing the information of the traffic participants in the vehicle blind area into the projection image, generating the projection content information of the AR-HUD, and sending the fused projection content information to the imaging device;
and the imaging device is used for performing projection imaging according to the fused projection content information.
According to a third aspect of embodiments of the present application, there is provided a computer apparatus comprising:
a memory for storing a computer program;
a processor for executing the computer program in the memory to implement the operational steps of the method according to any of the above embodiments.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the scheme, the traffic information of the blind area detected by the vehicle-mounted sensor can be sensed by acquiring the V2X information; the information of V2X information and the information that vehicle-mounted sensor detected fuse, throws formation of image through AR-HUD to pedestrian, vehicle of blind area etc. carry out the early warning suggestion to the driver to improve the security of driving a vehicle.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow diagram illustrating an AR-HUD display method that fuses V2X information, according to an exemplary embodiment.
FIG. 2 is a block diagram of the on-board AR-HUD system of the present application incorporating V2X information.
FIG. 3 is a schematic diagram of sensing and AR-HUD imaging early warning in a pedestrian ghost probe scene according to the present application.
FIG. 4 is a schematic diagram of sensing and AR-HUD imaging early warning of a non-motor vehicle cutting into a lane scene in a blind area according to the present application.
FIG. 5 is a schematic diagram of sensing and AR-HUD imaging early warning of a blind area vehicle cut-in lane scene.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of methods and apparatus consistent with certain aspects of the present application, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating an AR-HUD display method that fuses V2X information, according to an exemplary embodiment. The method may comprise the steps of:
step S1: acquiring V2X information and environment information;
step S2: carrying out perception algorithm processing on the V2X information and the environmental information to determine traffic participant information of the vehicle blind area and obstacle information around the vehicle;
step S3: fusing the information of the traffic participants in the vehicle blind area into a projection image to generate projection content information of the AR-HUD;
step S4: and sending the fused projection content information to an imaging device so as to enable the imaging device to perform projection imaging.
By acquiring V2X information, the scheme can perceive traffic in blind areas that the vehicle-mounted sensors cannot detect; the V2X information is fused with the information detected by the vehicle-mounted sensors and projected by the AR-HUD, warning the driver of pedestrians, vehicles, and other traffic in the blind area and thereby improving driving safety.
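For illustration only, the following minimal Python sketch shows one possible arrangement of steps S1 to S4; every name in it (ar_hud_cycle, the receiver/sensor/perception/fusion/imaging objects and their methods) is a hypothetical placeholder, not an interface defined by this application.

```python
# Illustrative sketch of steps S1-S4; all object interfaces are hypothetical
# placeholders, not interfaces defined by this application.
def ar_hud_cycle(v2x_receiver, sensors, perception, fusion, imaging):
    # Step S1: acquire V2X information and environment information.
    v2x_info = v2x_receiver.poll()      # messages relayed by RSU / other OBUs
    env_info = sensors.read()           # camera images + vehicle radar data

    # Step S2: perception processing -> blind-area traffic participants
    # (from V2X) and obstacles around the vehicle (from the environment).
    participants = perception.participants_from_v2x(v2x_info)
    obstacles = perception.obstacles_from_env(env_info)

    # Step S3: fuse the blind-area participants into the projection image
    # to generate the AR-HUD projection content information.
    content = fusion.build_projection(participants, obstacles)

    # Step S4: send the fused projection content to the imaging device.
    imaging.project(content)
```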
Referring to fig. 2, the method of the present application may be applied to an onboard system as shown. Compared with the traditional AR-HUD, the system fuses V2X information.
As shown in fig. 2, the V2X information module mainly comprises two parts: the road-side V2X devices and the vehicle-end V2X device. Road-side sensors such as cameras and radars detect pedestrians, non-motor vehicles, and the like and send this information to an RSU (Road Side Unit). The RSU then sends the pedestrian and non-motor-vehicle information to the vehicle-end OBU (On Board Unit) over the PC5 communication interface. Other vehicles communicate with the target vehicle's OBU directly, OBU to OBU, likewise over PC5.
The vehicle-end V2X device can thus acquire information on surrounding pedestrians, vehicles, non-motor vehicles, and the like, and in particular traffic information in the blind areas of on-board sensors such as the vehicle-mounted camera and radar. This traffic information is transmitted over Ethernet and CAN to the ECU (Electronic Control Unit) responsible for ADAS (Advanced Driver-Assistance Systems).
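As a rough model of the two V2X paths just described (road side to RSU to OBU over PC5, and vehicle OBU to vehicle OBU over PC5), the sketch below distinguishes the two message sources; the field names are assumptions, and real PC5 payloads (e.g., C-V2X BSM/RSM messages) carry far more fields.

```python
# Hypothetical model of the two V2X message sources; real PC5 payloads
# (e.g., C-V2X BSM/RSM messages) are far richer than this.
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    ROADSIDE = 1   # first V2X information: roadside camera/radar via the RSU
    VEHICLE = 2    # second V2X information: another vehicle's OBU

@dataclass
class V2XMessage:
    source: Source
    participant_kind: str   # "pedestrian", "non-motor", or "motor"
    position: tuple         # position in the ego-vehicle frame, metres
    speed: float            # m/s

def split_v2x(messages):
    """Separate first (roadside) from second (other-vehicle) V2X information."""
    first = [m for m in messages if m.source is Source.ROADSIDE]
    second = [m for m in messages if m.source is Source.VEHICLE]
    return first, second
```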
In some embodiments, the determining the traffic participant information of the vehicle blind area and the obstacle information around the vehicle includes:
determining the information of the traffic participants in the vehicle blind area according to the V2X information;
obstacle information around the vehicle is determined from the environmental information.
As shown in fig. 2, the ADAS ECU receives inputs from the vehicle-mounted camera, the vehicle-mounted radar, and the vehicle-end OBU and runs the perception algorithm. From the image data around the vehicle body collected by the vehicle-mounted camera, object detection and classification are performed and fused with the vehicle-mounted radar data for positioning, identifying the lane lines around the vehicle (type, e.g. dashed/solid, single/double; color, yellow/white; position; curvature change) and the obstacles around the vehicle (type, e.g. pedestrian, motor vehicle, non-motor vehicle; position; size; moving speed). From the V2X information received by the vehicle-end OBU, the position, speed, and other information of V2X-capable traffic participants such as pedestrians, motor vehicles, and non-motor vehicles in the blind areas of the vehicle-mounted camera and radar can be obtained.
In some embodiments, the V2X information includes first V2X information and second V2X information; the first V2X information is information acquired by a camera and a radar on the road side, and the second V2X information is information transmitted by other vehicles.
In some embodiments, the determining the traffic participant information of the vehicle blind area according to the V2X information includes:
determining the position information and the speed information of pedestrians and/or non-motor vehicles in the blind area of the vehicle according to the first V2X information;
and determining the position information and the speed information of the motor vehicle in the blind area of the vehicle according to the second V2X information.
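Continuing the assumed message model from the sketch above, the blind-area participant information could then be derived as follows; this is a simplification for illustration, not the application's actual algorithm.

```python
# Sketch only: derive blind-area participant info from the two V2X sources,
# reusing the hypothetical V2XMessage model from the previous sketch.
def blind_area_participants(first_v2x, second_v2x):
    participants = []
    # First V2X information (roadside camera/radar via the RSU) yields the
    # position and speed of pedestrians and/or non-motor vehicles.
    for m in first_v2x:
        if m.participant_kind in ("pedestrian", "non-motor"):
            participants.append((m.participant_kind, m.position, m.speed))
    # Second V2X information (sent by other vehicles) yields the position
    # and speed of motor vehicles in the blind area.
    for m in second_v2x:
        participants.append(("motor", m.position, m.speed))
    return participants
```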
In some embodiments, the environmental information includes image data around a vehicle body collected by a vehicle-mounted camera, vehicle-mounted radar data;
accordingly, the determining obstacle information around the vehicle according to the environment information includes:
determining the type of obstacles around the vehicle through object detection and classification algorithm processing according to image data around the vehicle body;
and performing fusion positioning according to the vehicle-mounted radar data, and determining the motion state information of obstacles around the vehicle.
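A minimal sketch of this two-stage obstacle pipeline is given below, with obstacle type taken from camera-image detection/classification and motion state from fusion with radar; the detector interface and the nearest-bearing association are illustrative simplifications of real sensor fusion.

```python
# Sketch of the two-stage obstacle pipeline: type from camera object
# detection/classification, motion state from fusion with radar data.
# `detector` and the nearest-bearing association are illustrative
# simplifications, not the application's actual algorithm.
import math

def detect_obstacles(image, detector):
    """Run an object detection/classification model on the surround image;
    returns [(kind, bearing_rad)]. Hypothetical detector interface."""
    return [(d.label, d.bearing) for d in detector.run(image)]

def fuse_with_radar(detections, radar_tracks, max_angle=math.radians(5)):
    """Attach radar range and velocity to each camera detection, yielding
    the motion state of each obstacle around the vehicle."""
    obstacles = []
    for kind, bearing in detections:
        # Associate the radar track whose bearing best matches the detection.
        best = min(radar_tracks,
                   key=lambda t: abs(t.bearing - bearing),
                   default=None)
        if best is not None and abs(best.bearing - bearing) < max_angle:
            obstacles.append({"kind": kind,
                              "range_m": best.range_m,
                              "speed_mps": best.speed_mps})
    return obstacles
```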
As shown in fig. 2, the processing results of the ADAS ECU are sent over Ethernet to the vehicle-end AR Core, which combines navigation/map data, IMU (Inertial Measurement Unit), GPS, vehicle state, and other information to generate the projection content of the AR-HUD and calibrate its position. Basic state information such as vehicle speed, road speed limit, and remaining fuel/charge is projected onto the close-range layer of the AR-HUD, generally 3-5 meters ahead of the driver's line of sight; the position of this content within the close-range layer is fixed and needs no calibration. ADAS information (e.g., LDW lane departure warnings, FCW forward collision warnings) and AR navigation information (e.g., lane-change indications, exit prompts) are projected onto the distant-view layer of the AR-HUD, generally more than 10 meters ahead of the driver's line of sight; this content must be calibrated against the actual environment of the road ahead, for example projecting a lane departure warning mark above the lane line and a forward collision warning mark on the road surface below the tail of the vehicle ahead.
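The two-layer placement could be sketched as follows: close-range-layer items occupy fixed screen slots, while each distant-view-layer item is placed by projecting a 3D road point (e.g., a point on the lane line, or on the road under the lead vehicle's tail) into HUD image coordinates. The pinhole model, the calibration matrices K and T, and the slot positions are all assumptions for illustration.

```python
# Sketch of close-range/distant-view layer placement; the pinhole projection,
# the calibration matrices and the pixel slots are illustrative assumptions.
import numpy as np

NEAR_LAYER_SLOTS = {"speed": (40, 440), "limit": (120, 440), "fuel": (200, 440)}

def place_near_layer(state):
    """Close-range layer (~3-5 m virtual distance): fixed slots, no calibration."""
    return {NEAR_LAYER_SLOTS[k]: v for k, v in state.items()
            if k in NEAR_LAYER_SLOTS}

def project_far_layer(point_vehicle, K, T_hud_from_vehicle):
    """Distant-view layer (>10 m): project a 3D point given in the vehicle
    frame (e.g., on the lane line, or on the road below the lead vehicle's
    tail) into HUD pixel coordinates using the HUD calibration K, T."""
    p = T_hud_from_vehicle @ np.append(point_vehicle, 1.0)  # vehicle -> HUD frame
    u, v, w = K @ p[:3]                                     # pinhole projection
    return (u / w, v / w)
```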
In some embodiments, the fusing the traffic participant information of the vehicle blind area into the projection image includes:
acquiring map information and vehicle positioning information;
determining a vehicle blind area according to the obstacle information and the vehicle positioning information;
determining the information of the traffic participants needing to be displayed according to the vehicle blind areas and the vehicle positioning information;
and fusing the traffic participant information needing to be displayed into the projection image.
Pedestrians, non-motor vehicles, obstacles, and the like outside the blind areas can be seen directly within the driver's field of view and need not be displayed by the AR-HUD. For the various kinds of non-blind-area traffic information, the content requiring AR-HUD projection display mainly comprises what known AR-HUD systems already provide: augmented-reality display of LDW (Lane Departure Warning), augmented-reality display of FCW (Forward Collision Warning), and AR navigation content (e.g., prompt icons for lane changes and exits).
Nor must all traffic participant information in the vehicle blind area be displayed: some V2X messages come from participants that are still far from the vehicle, and some from participants that are already behind it; both kinds of messages are still received, but neither needs to be shown. The scheme therefore selects the V2X information that needs to be displayed and discards the rest, as sketched below.
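The selection rule described above might look like the following sketch; the relevance horizon and the blind-area test are illustrative assumptions, not values from this application.

```python
# Sketch of the display filter: keep only blind-area V2X participants that
# are neither already behind the vehicle nor still far ahead. The 100 m
# horizon and the `contains` test are illustrative assumptions.
MAX_AHEAD_M = 100.0

def select_for_display(participants, blind_areas):
    shown = []
    for kind, (x, y), speed in participants:
        if x < 0:              # already behind the ego vehicle: discard
            continue
        if x > MAX_AHEAD_M:    # still too far away to matter: discard
            continue
        if any(area.contains(x, y) for area in blind_areas):
            shown.append((kind, (x, y), speed))
    return shown
```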
In some embodiments, the method further comprises:
fusing basic state information of the vehicle to a fixed position of a close-range layer of the projected image;
calibrating the projection position of the auxiliary driving information according to the actual road environment, and fusing the auxiliary driving information to the calibration position of the distant view layer of the projection image;
wherein the basic state information comprises at least one of: vehicle speed, road speed limit, remaining fuel/charge; and the driving assistance information comprises at least one of: ADAS information, AR navigation information.
As shown in FIG. 2, the processing results of the AR Core are transmitted to the AR-HUD optical assembly via LVDS signals for projection imaging.
Fig. 3 is a schematic diagram of the scheme in a pedestrian ghost-probe scene: through the information sent by the roadside camera and roadside radar (i.e., the first V2X information), the scheme identifies the traffic participant as a pedestrian and fuses a graphic mark representing a pedestrian into the projection image. Fig. 4 is a schematic diagram of a scene in which a non-motor vehicle cuts into the lane from a blind area: through the first V2X information, the scheme identifies the traffic participant as a non-motor vehicle and fuses a graphic mark representing a non-motor vehicle into the projection image. Fig. 5 is a schematic diagram of a scene in which a vehicle cuts into the lane from a blind area: through the information sent by other vehicles (i.e., the second V2X information), the scheme perceives the vehicle cutting in from the blind area and fuses a graphic mark representing a vehicle into the projection image. A vehicle applying this scheme can thus effectively acquire information on pedestrians, non-motor vehicles, and motor vehicles in the detection blind areas of on-board sensors such as the camera and radar, and give imaging warnings through the AR-HUD.
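Across the three scenes of Figs. 3-5, the fused graphic simply tracks the perceived participant type; a toy mapping might look like this (all icon names are hypothetical).

```python
# Toy mapping from perceived blind-area participant type to the warning
# icon fused into the projection image (cf. Figs. 3-5); icon names are
# hypothetical.
ICON_FOR_KIND = {
    "pedestrian": "icon_pedestrian",   # Fig. 3: pedestrian ghost probe
    "non-motor": "icon_bicycle",       # Fig. 4: non-motor vehicle cut-in
    "motor": "icon_vehicle",           # Fig. 5: blind-area vehicle cut-in
}

def icon_for(kind):
    return ICON_FOR_KIND.get(kind, "icon_generic_warning")
```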
As shown in fig. 2, the present application also provides an AR-HUD display system fusing V2X information, comprising: the system comprises a vehicle-end V2X device, an electronic control unit, an information fusion unit and an imaging device.
And the vehicle-end V2X equipment is used for receiving V2X information sent by the road-end V2X equipment and/or other vehicles.
And the electronic control unit is used for acquiring the V2X information and the environment information, performing perception algorithm processing on the V2X information and the environment information, and determining the information of traffic participants in the vehicle blind area and the information of obstacles around the vehicle.
The information fusion unit is used for fusing the information of the traffic participants in the vehicle blind area into the projection image, generating the projection content information of the AR-HUD, and sending the fused projection content information to the imaging device;
and the imaging device is used for performing projection imaging according to the fused projection content information.
Wherein, the electronic control unit can be ADAS ECU, the information fusion unit can be AR Core, and the imaging device can be AR-HUD optical assembly.
According to a third aspect of embodiments of the present application, there is provided a computer apparatus comprising:
a memory for storing a computer program;
a processor for executing the computer program in the memory to implement an AR-HUD display method that fuses V2X information: acquiring V2X information and environment information; carrying out perception algorithm processing on the V2X information and the environmental information to determine traffic participant information of the vehicle blind area and obstacle information around the vehicle; fusing the information of the traffic participants in the vehicle blind area into a projection image to generate projection content information of the AR-HUD; and sending the fused projection content information to an imaging device so as to enable the imaging device to perform projection imaging.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (9)

1. An AR-HUD display method fusing V2X information, comprising:
acquiring V2X information and environment information;
carrying out perception algorithm processing on the V2X information and the environmental information to determine traffic participant information of the vehicle blind area and obstacle information around the vehicle;
fusing the information of the traffic participants in the vehicle blind area into a projection image to generate projection content information of the AR-HUD;
and sending the fused projection content information to an imaging device so as to enable the imaging device to perform projection imaging.
2. The method of claim 1, wherein the determining traffic participant information for the vehicle blind zone and obstacle information around the vehicle comprises:
determining the information of the traffic participants in the vehicle blind area according to the V2X information;
obstacle information around the vehicle is determined from the environmental information.
3. The method of claim 2, wherein the V2X information includes first V2X information and second V2X information; the first V2X information is information acquired by a camera and a radar on the road side, and the second V2X information is information transmitted by other vehicles.
4. The method of claim 3, wherein determining the traffic participant information for the vehicle blind spot from the V2X information comprises:
determining the position information and the speed information of pedestrians and/or non-motor vehicles in the blind area of the vehicle according to the first V2X information;
and determining the position information and the speed information of the motor vehicle in the blind area of the vehicle according to the second V2X information.
5. The method of claim 2, wherein: the environment information comprises image data and vehicle-mounted radar data around a vehicle body, which are acquired by a vehicle-mounted camera;
accordingly, the determining obstacle information around the vehicle according to the environment information includes:
determining the type of obstacles around the vehicle through object detection and classification algorithm processing according to image data around the vehicle body;
and performing fusion positioning according to the vehicle-mounted radar data, and determining the motion state information of obstacles around the vehicle.
6. The method according to any one of claims 1-5, wherein the fusing of the traffic participant information of the vehicle blind spot into the projected image comprises:
acquiring map information and vehicle positioning information;
determining a vehicle blind area according to the obstacle information and the vehicle positioning information;
determining the information of the traffic participants needing to be displayed according to the vehicle blind areas and the vehicle positioning information;
and fusing the traffic participant information needing to be displayed into the projection image.
7. The method of claim 6, further comprising:
fusing basic state information of the vehicle to a fixed position of a close-range layer of the projected image;
calibrating the projection position of the auxiliary driving information according to the actual road environment, and fusing the auxiliary driving information to the calibration position of the distant view layer of the projection image;
wherein the basic state information comprises at least one of: vehicle speed, road speed limit, remaining fuel/charge; and the driving assistance information comprises at least one of: ADAS information, AR navigation information.
8. An AR-HUD display system fusing V2X information, comprising:
the vehicle-end V2X equipment is used for receiving V2X information sent by the road-end V2X equipment and/or other vehicles;
the electronic control unit is used for acquiring the V2X information and the environmental information, performing perception algorithm processing on the V2X information and the environmental information, and determining the information of traffic participants in the vehicle blind area and the information of obstacles around the vehicle;
the information fusion unit is used for fusing the information of the traffic participants in the vehicle blind area into the projection image, generating the projection content information of the AR-HUD, and sending the fused projection content information to the imaging device;
and the imaging device is used for performing projection imaging according to the fused projection content information.
9. A computer device, comprising:
a memory for storing a computer program;
a processor for executing the computer program in the memory to carry out the operational steps of the method of any one of claims 1 to 7.
CN202010850072.0A 2020-08-21 2020-08-21 AR-HUD display method and system fusing V2X information Active CN111731101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010850072.0A CN111731101B (en) 2020-08-21 2020-08-21 AR-HUD display method and system fusing V2X information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010850072.0A CN111731101B (en) 2020-08-21 2020-08-21 AR-HUD display method and system fusing V2X information

Publications (2)

Publication Number Publication Date
CN111731101A true CN111731101A (en) 2020-10-02
CN111731101B CN111731101B (en) 2020-12-25

Family

ID=72658747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010850072.0A Active CN111731101B (en) 2020-08-21 2020-08-21 AR-HUD display method and system fusing V2X information

Country Status (1)

Country Link
CN (1) CN111731101B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107428299A (en) * 2015-04-03 2017-12-01 株式会社电装 Information presentation device
JP2018149965A (en) * 2017-03-14 2018-09-27 アイシン・エィ・ダブリュ株式会社 Display device
CN108010383A (en) * 2017-09-29 2018-05-08 北京车和家信息技术有限公司 Blind zone detection method, device, terminal and vehicle based on driving vehicle
CN108875658A (en) * 2018-06-26 2018-11-23 大陆汽车投资(上海)有限公司 A kind of object identifying method based on V2X communication apparatus
CN108986510A (en) * 2018-07-31 2018-12-11 同济大学 A kind of local dynamic map of intelligence towards crossing realizes system and implementation method
CN109353279A (en) * 2018-12-06 2019-02-19 延锋伟世通电子科技(上海)有限公司 A kind of vehicle-mounted head-up-display system of augmented reality
CN110422176A (en) * 2019-07-04 2019-11-08 苏州车萝卜汽车电子科技有限公司 Intelligent transportation system, automobile based on V2X
CN110428611A (en) * 2019-07-04 2019-11-08 苏州车萝卜汽车电子科技有限公司 Information processing method and device for intelligent transportation

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241204A (en) * 2020-12-17 2021-01-19 宁波均联智行科技有限公司 Gesture interaction method and system of vehicle-mounted AR-HUD
CN113002422A (en) * 2021-01-27 2021-06-22 浙江大华技术股份有限公司 Safe driving early warning method, system, terminal and computer readable storage medium
CN115086623A (en) * 2021-03-15 2022-09-20 北汽福田汽车股份有限公司 Projection method and device of vehicle-mounted projection system, storage medium and electronic equipment
CN113240939A (en) * 2021-03-31 2021-08-10 浙江吉利控股集团有限公司 Vehicle early warning method, device, equipment and storage medium
CN113183878A (en) * 2021-04-15 2021-07-30 杭州鸿泉物联网技术股份有限公司 360-degree look-around method and device, vehicle and electronic equipment
CN113547984A (en) * 2021-06-23 2021-10-26 云度新能源汽车有限公司 Vehicle A column blind area identification method and system based on HUD
CN113949998A (en) * 2021-07-31 2022-01-18 重庆长安汽车股份有限公司 Emergency information processing control method
CN113724519A (en) * 2021-08-31 2021-11-30 上海商汤临港智能科技有限公司 Vehicle control system, road side equipment and vehicle and road cooperative system
CN113724520A (en) * 2021-08-31 2021-11-30 上海商汤临港智能科技有限公司 Vehicle-road cooperation information processing method and device, electronic equipment and storage medium
CN113885483A (en) * 2021-09-06 2022-01-04 中汽创智科技有限公司 Vehicle remote control method and device
CN113885483B (en) * 2021-09-06 2024-05-28 中汽创智科技有限公司 Vehicle remote control method and device
CN114093186A (en) * 2021-11-17 2022-02-25 中国第一汽车股份有限公司 Vehicle early warning information prompting system, method and storage medium
CN114093186B (en) * 2021-11-17 2022-11-25 中国第一汽车股份有限公司 Vehicle early warning information prompting system, method and storage medium
CN116923288A (en) * 2023-06-26 2023-10-24 桂林电子科技大学 Intelligent detection system and method for driving ghost probe
CN116923288B (en) * 2023-06-26 2024-04-12 桂林电子科技大学 Intelligent detection system and method for driving ghost probe

Also Published As

Publication number Publication date
CN111731101B (en) 2020-12-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 4/F, Building 5, 555 Dongqing Road, High-Tech Zone, Ningbo City, Zhejiang Province

Patentee after: Ningbo Junlian Zhixing Technology Co.,Ltd.

Address before: 4/F, Building 5, 555 Dongqing Road, High-Tech Zone, Ningbo City, Zhejiang Province

Patentee before: Ningbo Junlian Zhixing Technology Co.,Ltd.