CN115633085A - Driving scene image display method, device, equipment and storage medium - Google Patents

Driving scene image display method, device, equipment and storage medium

Info

Publication number
CN115633085A
Authority
CN
China
Prior art keywords
target object
vehicle
road
display
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211053332.7A
Other languages
Chinese (zh)
Inventor
杨剑
赵奕铭
李润丽
郭剑锐
马泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Motor Group Co Ltd
Original Assignee
Dongfeng Motor Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Motor Group Co Ltd filed Critical Dongfeng Motor Group Co Ltd
Priority to CN202211053332.7A
Publication of CN115633085A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]


Abstract

The invention discloses a driving scene image display method, device, equipment and storage medium. The method comprises the following steps: respectively acquiring a sensor perception object and a vehicle-road cooperative perception object; determining a first target object, a second target object and a third target object according to the sensor perception object and the vehicle-road cooperative perception object, where the first target object is contained in both the sensor perception object and the vehicle-road cooperative perception object, the second target object is contained only in the vehicle-road cooperative perception object, and the third target object is contained only in the sensor perception object; and determining a display object according to the first, second and third target objects. Beneficial effects: the algorithm is low in complexity and small in computation, so the cost of displaying the driving scene image is low.

Description

Driving scene image display method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of automobile control, and particularly relates to a driving scene image display method, device, equipment and storage medium.
Background
At present, driving scene image display is based on vehicle-mounted sensors. The vehicle-mounted sensors (such as cameras and radars) perceive and detect the surrounding environment and generate map data, the vehicle position and obstacle information. By integrating this information, the vehicle achieves autonomous control and plans an optimal driving path; at the same time it recognizes various traffic signs by means of artificial intelligence and selects an optimal driving decision, completing the driving task in a highly intelligent way, reducing the accident rate and improving driving efficiency.
Vehicle-mounted sensors have certain perception limitations, which in turn limit the perception of single-vehicle intelligence in driving assistance/automatic driving. For example, at a severely occluded crossroad the vehicle cannot identify whether vehicles are approaching from other directions, at a traffic-light intersection it cannot identify the phase information of the traffic lights, and collision accidents occasionally occur in tests. It is therefore increasingly important to handle complicated scenes, break through the vehicle's perception limitations, and improve the safety of assisted/automatic driving.
Vehicle-road cooperation is a safe, efficient and environment-friendly road traffic system that adopts advanced wireless communication, new-generation internet and other technologies to implement dynamic, real-time vehicle-vehicle and vehicle-road information interaction in all directions. On the basis of full-time-space dynamic traffic information acquisition and fusion, it develops active vehicle safety control and cooperative road management, fully realizes effective cooperation between people, vehicles and roads, ensures traffic safety and improves traffic efficiency.
At present, algorithms that fuse sensor data with vehicle-road cooperative data are high in complexity and large in computation, which makes the cost of displaying driving scene images high.
Disclosure of Invention
In view of the above-mentioned drawbacks or needs for improvement in the related art, the present invention provides a driving scene image display method, apparatus, device, and storage medium.
In a first aspect, a driving scene image display method includes the steps of:
respectively acquiring a sensor perception object and a vehicle road cooperative perception object;
determining a first target object, a second target object and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, wherein the first target object is simultaneously contained in the sensor sensing object and the vehicle-road cooperative sensing object, the second target object is only contained in the vehicle-road cooperative sensing object, and the third target object is only contained in the sensor sensing object;
and determining a display object according to the first target object, the second target object and the third target object.
In an optional embodiment, the step of determining the first target object, the second target object and the third target object according to the sensor sensing object and the vehicle-road cooperative sensing object includes:
calibrating the sensor sensing object by a first characteristic value and a second characteristic value, and calibrating the vehicle-road cooperative sensing object by the first characteristic value and the second characteristic value;
determining the first target object according to the range of the first characteristic value;
determining the second target object according to the range of the second characteristic value;
and determining the third target object according to the first target object, the second target object and the vehicle-road cooperative sensing object.
In an alternative embodiment, the step of determining a display object based on the first target object comprises:
and determining the first target object as the display object by using the vehicle-road cooperative perception object.
In an alternative embodiment, the step of determining the display object based on the second target object comprises:
and if the current vehicle-road cooperative sensing function is normal, determining the second target object as the display object by using the vehicle-road cooperative sensing object.
In an alternative embodiment, the step of determining a display object based on the third target object comprises:
determining the third target object as the display object by using the sensor perception object.
In an optional embodiment, before the step of separately acquiring the sensor perception object and the vehicle-road cooperative perception object, the method comprises:
and determining the road image according to the map information and the current position of the vehicle.
In an alternative embodiment, the step of determining the road image based on the map information and the current location of the vehicle comprises:
and displaying road information of the vehicle on the map, wherein the road information comprises a road view, traffic facilities and marks representing the current vehicle within a preset range of the position of the vehicle.
In an alternative embodiment, the step after determining the display object comprises:
the display object determined based on the first target object is displayed on the road image with a first display characteristic;
the display object determined based on the second target object is displayed on the road image with a second display characteristic;
the display object determined based on the third target object is displayed on the road image with a third display characteristic.
In an alternative embodiment, the first display characteristic, the second display characteristic, and the third display characteristic are different from each other.
In an optional embodiment, the first display characteristic, the second display characteristic and the third display characteristic respectively include one or more of animation effect, color, line, thickness and text.
In a second aspect, a driving scene image display apparatus includes:
the sensing object module is used for respectively acquiring a sensor sensing object and a vehicle road cooperative sensing object;
the target object module is used for determining a first target object, a second target object and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, wherein the first target object is simultaneously contained in the sensor sensing object and the vehicle-road cooperative sensing object, the second target object is only contained in the vehicle-road cooperative sensing object, and the third target object is only contained in the sensor sensing object;
and the display object module is used for determining a display object according to the first target object, the second target object and the third target object.
In a third aspect, an electronic device includes a memory and a processor, the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the driving scene image display method as described in any one of the above embodiments.
In a fourth aspect, a computer storage medium has computer readable instructions stored therein, which, when executed by one or more processors, cause the one or more processors to perform the steps of the driving scene image display method as described in any one of the above embodiments.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
according to the driving scene image display method, the driving scene image display device, the electronic equipment and the computer storage medium, the sensor sensing object and the vehicle road cooperative sensing object are respectively obtained; determining a first target object, a second target object and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, wherein the first target object is simultaneously contained in the sensor sensing object and the vehicle-road cooperative sensing object, the second target object is only contained in the vehicle-road cooperative sensing object, and the third target object is only contained in the sensor sensing object; and the display object is determined according to the first target object, the second target object and the third target object, the algorithm complexity is low, the calculated amount is small, and the cost for displaying the driving scene image is low.
Drawings
FIG. 1 is a schematic diagram of an overall framework of a single-vehicle intelligent autonomous vehicle according to the present embodiment;
FIG. 2 is a flowchart of a driving scene image display method according to an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of an actual driving scene suitable for use in the driving scene image display method provided in FIG. 2;
FIG. 4 is a schematic structural diagram of a driving scene image display device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Automatic driving technology is an important means of addressing traffic safety, road congestion and similar problems, and the automatic driving automobile has become a strategic direction of the global automobile industry and an important engine of continued world economic growth. Environment perception is the premise and basis of the safe and reliable operation of an automatic driving automobile and determines its level of intelligence. The terminal of an automatic driving single vehicle is limited by its hardware structure, storage resources and computing power; its perception system is neither comprehensive nor stable, and sudden, variable scenes are difficult to perceive accurately in real time. The development of vehicle-road cooperation technology in the intelligent connected environment provides a feasible way to solve these problems: through cooperative perception, decision and computation based on V2X information interaction, it improves the safety and reliability of driving in complex traffic environments. However, current fusion algorithms that integrate vehicle-road cooperation with on-vehicle perception are complex to run and computationally heavy, which makes the cost of displaying driving scene images high.
Fig. 1 is a schematic diagram of the overall framework of a single-vehicle intelligent automatic driving vehicle in this embodiment. The vehicle includes sensor modules (mainly a radar and a camera), a positioning module, a perception fusion module, a decision planning module and a drive-by-wire chassis. The radar and camera perceive the traffic environment information of the road around the running vehicle and send it to the perception fusion module. The positioning module determines the position of the automobile in the environment and sends it to the perception fusion module. The perception fusion module processes the road traffic environment information perceived by the radar and camera and sends the result to the decision planning module. The decision planning module plans an optimal driving path from the fused road traffic environment information and selects an optimal driving decision to control the drive-by-wire chassis, completing the driving task of the vehicle in a highly intelligent way. The drive-by-wire chassis means that the driving, gears, braking, steering, parking and necessary indicator lights of the vehicle can be controlled through the CAN bus, with the vehicle giving correct and timely state feedback.
Traditional sensors such as radars and cameras have certain perception limitations: they cannot perceive occluded obstacles, their perception distance is limited, and they cannot accurately acquire traffic-light phase information when passing through a traffic-light intersection, which easily triggers emergency braking and affects passenger comfort. Moreover, because different sensors perceive by different principles, the information acquired when identifying the same object differs, posing large mis-identification and mis-analysis challenges to the perception fusion module. Vehicle-road cooperative V2X (vehicle-to-everything information exchange) technology is a communication technology that exchanges information by exploiting the propagation of radio waves in free space. It can break through some limits of time and space: it can identify occluded obstacles, its perception distance is not limited, and it can accurately acquire traffic-light phase information when passing through a traffic-light intersection, knowing in advance whether a green light is about to turn red and braking early, thereby improving passenger comfort and safety.
Fig. 2 is a flowchart of a driving scene image display method provided in an embodiment of the present application, and referring to fig. 2, the method includes the following steps.
S101, determining a road image according to the map information and the current position of the vehicle.
Specifically, the map information may be, but is not limited to, a map provided by a built-in vehicle APP, such as the Gaode (Amap), Baidu, Tencent or Google map, and the current position of the vehicle is obtained from a positioning device such as GPS, BeiDou or Galileo.
Step S101 specifically includes: displaying road information of the vehicle on the map, where the road information includes the road view within a preset range of the vehicle's position, traffic facilities, a mark representing the current vehicle, and road names.
Step S101 is followed by step S102: respectively acquiring a sensor perception object and a vehicle-road cooperative perception object.
Specifically, in the present embodiment, the vehicle sensor includes a radar and a camera, and the sensor perception object is acquired from the vehicle sensor (radar and camera).
Radars include, but are not limited to, ultrasonic radar, electromagnetic-wave radar, laser radar, etc. In this embodiment the radar is a millimeter-wave radar with a working frequency range of 30 GHz-300 GHz, capable of detecting targets and measuring speed, distance and azimuth. In this embodiment, radars may be mounted at the head, sides and tail of the vehicle. A radar installed at the head covers a certain range in front of the vehicle and can thus perceive obstacle information ahead; this range can be set as a conical or fan-shaped area, for example a sector with a radius of 10 m and an angle of 90°. A radar mounted on the side covers a certain range at the side of the vehicle, for example a sector with a radius of 5 m and an angle of 150°, and can thus acquire obstacle information at the side. A radar mounted at the tail covers a certain range behind the vehicle, such as a sector with a radius of 10 m and an angle of 90°.
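The fan-shaped coverage areas described above can be modelled with a simple point-in-sector test. The following Python sketch is illustrative only; it assumes the detection is given as an offset (in metres) from the radar in the radar's own coordinate frame, with the heading measured in degrees:

```python
import math

def in_sector(dx, dy, radius_m, angle_deg, heading_deg=0.0):
    """Check whether a detection at offset (dx, dy) metres from the radar
    falls inside a fan-shaped coverage area of the given radius and angular
    width, centred on the radar's heading. Illustrative geometry only."""
    dist = math.hypot(dx, dy)
    if dist > radius_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the detection bearing and the heading.
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= angle_deg / 2.0
```

For the head-mounted radar of this embodiment (radius 10 m, angle 90°), a detection 5 m straight ahead is inside the sector, while one 12 m ahead or 5 m directly to the side is not.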
By function, cameras include but are not limited to dome cameras, bullet cameras, integrated cameras, checkpoint cameras, etc.; by mounting position, vision-perception cameras can be divided into forward-looking cameras, surround-view cameras, rear-view cameras, side-view cameras, etc.
Objects perceivable by the camera include, but are not limited to: whether there are vehicles, pedestrians, traffic lights or road greenery, as well as the width, height, speed, relative speed, brand (logo) and color of a vehicle. Road greenery includes, but is not limited to, lawns, bushes and trees.
In this embodiment, the sensor perception object can be obtained by fusing the radar perception object acquired by the radar with the camera perception object acquired by the camera through existing fusion technology, yielding a complete sensor perception object.
For example, take a front radar and a front camera: the front radar perceives obstacle 1 and obstacle 2, the front camera perceives vehicle 1 and vehicle 2, and through fusion the sensors perceive target vehicle 1 and vehicle 2 together with their specific characteristic values, including the width, speed, relative speed, brand and color of vehicle 1, the distance between the ego vehicle and vehicle 1, and the direction of vehicle 1 relative to the ego vehicle.
The vehicle-road cooperative perception object is obtained through vehicle-road cooperation technology and can include objects within the vehicle-road cooperative network in the same scene that carry the technology, such as road signs, traffic-light phase information, surrounding vehicles, street-lamp information and vehicle-specific characteristic information, where the vehicle-specific characteristic information covers at least all or part of the information perceivable by the sensor. Characteristics such as the speed, type, relative speed, distance, body color and brand of a preceding vehicle can be perceived through vehicle-road cooperation.
Fig. 3 is a schematic view of an actual driving scene to which the driving scene image display method of fig. 2 applies. As can be seen from fig. 3, the scene includes an intersection with traffic light 1 and traffic light 2 and two zebra crossings; the ego vehicle is on the road extending from south to north and tends to turn left. Vehicle 1 and vehicle 2 are on the road extending from west to east, and a pedestrian is crossing the road on a zebra crossing.
In this embodiment, the sensor perception objects include: vehicle 1, vehicle 2, the pedestrian, traffic light 1 and the zebra crossing. The vehicle-road cooperative perception objects include: vehicle 2, traffic light 1, traffic light 2, the road sign (allowing a left turn) and the two zebra crossings.
Step S102 is followed by step S103: determining a first target object, a second target object and a third target object according to the sensor perception object and the vehicle-road cooperative perception object. The first target object is contained in both the sensor perception object and the vehicle-road cooperative perception object, the second target object is contained only in the vehicle-road cooperative perception object, and the third target object is contained only in the sensor perception object.
This step classifies the perception objects acquired from the two perception sources, which reduces the amount of later computation.
Specifically, the sensor perception object is calibrated with a first characteristic value and a second characteristic value, and the vehicle-road cooperative perception object is calibrated with the same first and second characteristic values; the first target object is determined according to the range of the first characteristic value, the second target object according to the range of the second characteristic value, and the third target object according to the first target object, the second target object and the vehicle-road cooperative perception object.
That is, the first target object can be determined from the first characteristic value and its range, the second target object from the second characteristic value and its range, and the sensor perception objects remaining after excluding the first and second target objects are the third target objects.
In this embodiment, the first characteristic value may include, but is not limited to: category, spacing, width, length, speed of movement, relative speed, automobile brand.
It stands to reason that when the category is non-vehicle, the automobile brand is not displayed or is displayed as "unknown". Likewise, if neither the sensor nor vehicle-road cooperation perceives a certain attribute, that attribute is not displayed or is displayed as "unknown".
Taking vehicle 1 in fig. 3 as an example, the first characteristic values of vehicle 1 in the sensor perception object are: category (vehicle), spacing (6 m), width (1750 mm), length (4500 mm), movement speed (30 km/h), automobile brand (unknown). The first characteristic values of vehicle 1 in the vehicle-road cooperative perception object are: category (vehicle), spacing (5.8 m), width (1745 mm), length (4550 mm), movement speed (35 km/h), automobile brand (A). According to the difference algorithm, vehicle 1 in the two perception object sets is determined to be the same object, i.e. vehicle 1 is determined as a first target object.
It should be noted that the difference algorithm may take the difference of each first characteristic value to obtain individual difference values; each first characteristic value may also be assigned its own weight, with each individual difference value multiplied by the corresponding weight. If the final weighted result is within a threshold, the object in the sensor perception set and the object in the vehicle-road cooperative perception set are the same object, and that object is determined as a first target object.
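A minimal Python sketch of the weighted difference test just described. The feature names, weights and threshold below are illustrative assumptions (the patent leaves them unspecified); the numeric characteristic values are the ones given for vehicle 1 in the example:

```python
def same_object(a, b, weights, threshold):
    """Weighted difference test: per-feature absolute differences are
    scaled by their weights and summed; two perceived objects are treated
    as the same physical object when the score stays within the threshold.
    Feature names, weights and threshold are illustrative assumptions."""
    score = 0.0
    for name, w in weights.items():
        score += w * abs(a[name] - b[name])
    return score <= threshold

# Numeric first characteristic values of vehicle 1 as seen by the on-board
# sensors and by vehicle-road cooperation (values taken from the text).
sensor_v1 = {"spacing_m": 6.0, "width_mm": 1750, "length_mm": 4500, "speed_kmh": 30}
v2x_v1    = {"spacing_m": 5.8, "width_mm": 1745, "length_mm": 4550, "speed_kmh": 35}
weights   = {"spacing_m": 10.0, "width_mm": 0.1, "length_mm": 0.1, "speed_kmh": 1.0}

print(same_object(sensor_v1, v2x_v1, weights, threshold=20.0))  # prints True
```

With these assumed weights the weighted score is about 12.5, under the assumed threshold of 20, so the two records are associated as the same vehicle.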
In this embodiment, the second characteristic value may include, but is not limited to, part or all of the first characteristic values, plus features present only in the vehicle-road cooperative perception object; in this embodiment these include the strength of the vehicle-road cooperative signal and the fluctuation value of the vehicle-road cooperative signal.
In this embodiment, the intensity of the vehicle-road cooperative signal and the fluctuation value of the vehicle-road cooperative signal may be obtained according to an ECU (Electronic Control Unit).
Optionally, the second characteristic value in this embodiment includes: the type, the interval, the width, the length, the movement speed, the relative speed, the automobile brand, the intensity of the vehicle-road cooperative signal and the fluctuation value of the vehicle-road cooperative signal.
Taking vehicle 2 in fig. 3 as an example, the sensor perception object does not contain vehicle 2. The second characteristic values of vehicle 2 in the vehicle-road cooperative perception object are: category (vehicle), spacing (9.3 m), width (1845 mm), length (4950 mm), movement speed (20 km/h), automobile brand (A), vehicle-road cooperative signal strength (90 dBm), vehicle-road cooperative signal fluctuation (±5 dBm/s).
At this time, since the vehicle-road cooperative perception object includes the vehicle 2 and the sensor perception object does not include the vehicle 2, the vehicle 2 is determined as the second target object.
Then, the sensor perception objects other than the first target objects and the second target objects are determined as third target objects.
According to the above method, this embodiment finally determines the first target objects as: vehicle 1, traffic light 1 and the two zebra crossings; the second target objects as: traffic light 2 and the road sign (left turn allowed); and the third target objects as: vehicle 2 and the pedestrian.
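Once objects from the two sources have been associated, the three-way classification amounts to elementary set operations. In the Python sketch below the object identities are illustrative labels, chosen so that the partition reproduces the classification stated above:

```python
def classify(sensor_objects, v2x_objects):
    """Partition object identities into the three target classes:
    first = perceived by both sources, second = vehicle-road cooperation
    only, third = sensor only."""
    first = sensor_objects & v2x_objects
    second = v2x_objects - sensor_objects
    third = sensor_objects - v2x_objects
    return first, second, third

# Illustrative identities after cross-source association (assumed labels).
sensor = {"vehicle 1", "vehicle 2", "pedestrian", "traffic light 1", "zebra crossings"}
v2x = {"vehicle 1", "traffic light 1", "traffic light 2", "road sign", "zebra crossings"}
first, second, third = classify(sensor, v2x)
```

Here `first` contains vehicle 1, traffic light 1 and the zebra crossings; `second` contains traffic light 2 and the road sign; `third` contains vehicle 2 and the pedestrian.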
Step S103 is followed by step S104 of determining a display object according to the first target object, the second target object, and the third target object.
Specifically, the first target object is determined as a display object by the vehicle-road cooperative perception object.
Vehicle 1, traffic light 1 and the two zebra crossings are determined as display objects with the vehicle-road cooperative perception object as the output, because vehicle-road cooperative perception reacts faster and is more accurate. For example, although the sensor can perceive traffic light 1, its accuracy in determining the current signal and signal changes of the traffic light is lower than that of vehicle-road cooperation.
And if the current vehicle-road cooperative sensing function is normal, determining a second target object by using the vehicle-road cooperative sensing object as the display object.
And judging whether the current vehicle road cooperative sensing function is normal or not. For example, the determination may be performed by setting a threshold of intensity of the vehicle-road cooperative signal and a threshold of fluctuation value of the vehicle-road cooperative signal, in this embodiment, if the threshold of intensity of the vehicle-road cooperative signal is greater than 85dBm and the fluctuation value of the vehicle-road cooperative signal is within ± 20dBm/s, the current vehicle-road cooperative sensing function is normal. At this time, the traffic light 2 and the road sign (allowing left turn) are determined as display objects by the vehicle and road cooperative sensing objects, the display objects are used as output objects, the reaction characteristics are obtained according to the vehicle and road cooperative sensing, and the accuracy is higher first.
It can be understood that if the vehicle-road cooperative sensing function is abnormal at this time, the second target object may not actually exist, and it is therefore excluded. For example, when the vehicle travels through a tunnel, the communication signal (e.g., the 5G signal) and the positioning signal (e.g., the GPS signal) are poor, so the vehicle-road cooperative signal is weak and fluctuates strongly; a second target acquired under these conditions is treated as non-existent and is temporarily not used as a display object.
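The signal-quality judgment described above can be sketched as follows. The 85 dBm strength threshold and ±20 dBm/s fluctuation limit are this embodiment's example values; the function name and signature are assumptions made for illustration only.

```python
def v2x_link_healthy(strength_dbm, fluctuation_dbm_per_s,
                     min_strength_dbm=85.0, max_fluctuation=20.0):
    """Judge whether the vehicle-road cooperative sensing function is normal.

    The link is considered normal when the signal strength exceeds the
    threshold and the fluctuation stays within the allowed band.
    """
    return (strength_dbm > min_strength_dbm and
            abs(fluctuation_dbm_per_s) <= max_fluctuation)
```

Under this sketch, a tunnel scenario with a weak or strongly fluctuating signal would return False, and the second target object would then be excluded from display.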
The third target object is determined as a display object by using the sensor perception object; here, vehicle 2 and the pedestrian are the display objects.
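Putting the selection rules of step S104 together, a minimal sketch could look like the following; the function name and list-based representation are assumptions for illustration, not the patent's implementation.

```python
def select_display_objects(first, second, third, v2x_healthy):
    """Combine the three target groups into the final display list."""
    display = list(first)          # confirmed by both sources: always displayed
    if v2x_healthy:
        display.extend(second)     # V2X-only objects require a healthy link
    display.extend(third)          # sensor-only objects are always displayed
    return display
```

For the embodiment above, a healthy link yields all three groups, while an abnormal link drops the V2X-only objects (traffic light 2 and the road sign).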
Step S104 is followed by step S105 of displaying a display object determined based on the first target object on the road image with the first display characteristic, displaying a display object determined based on the second target object on the road image with the second display characteristic, and displaying a display object determined based on the third target object on the road image with the third display characteristic.
It can be understood that displaying the first target object with the first display characteristic in the final road image tells the user that this object is perceived both by vehicle-road cooperation and by the sensor, so its accuracy and reliability are high. The second target object is perceived only by vehicle-road cooperation, not by the sensor, and its accuracy and reliability are still relatively high. The third target object is perceived only by the sensor, which reminds the user that this object cannot be detected through vehicle-road cooperation, for example a vehicle or a living being (e.g., a person) not equipped with vehicle-road coordination equipment.
In this embodiment, the first display characteristic, the second display characteristic and the third display characteristic are different from each other, so that the driver is clearly reminded of the perception status of each object in the current driving scene and can make a clear judgment.
Further, the first display characteristic, the second display characteristic and the third display characteristic respectively comprise one or more of animation effect, color, lines, thickness and characters.
In this embodiment, the first display characteristic is fast flashing with a thick orange frame, the second display characteristic is a steady thin green frame, and the third display characteristic is slow flashing with a thin black frame. For example, fast flashing may be set to one flash per second and slow flashing to one flash every 5 seconds.
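The embodiment's example styles can be written down as a simple lookup table; the dictionary keys and the flash-period encoding (seconds per flash, None for no flashing) are assumptions made for this sketch.

```python
# Mapping from target group to display characteristics, following the
# embodiment's example values (flash period, frame thickness, color).
DISPLAY_STYLES = {
    "first":  {"flash_period_s": 1.0,  "frame": "thick", "color": "orange"},
    "second": {"flash_period_s": None, "frame": "thin",  "color": "green"},
    "third":  {"flash_period_s": 5.0,  "frame": "thin",  "color": "black"},
}
```

A renderer would look up the style by target group before drawing each object onto the road image.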
In the driving scene image display method provided by this embodiment, a sensor perception object and a vehicle-road cooperative perception object are respectively acquired; a first target object, a second target object and a third target object are determined from them, where the first target object is contained in both the sensor perception object and the vehicle-road cooperative perception object, the second target object is contained only in the vehicle-road cooperative perception object, and the third target object is contained only in the sensor perception object; and a display object is determined according to the first, second and third target objects. The algorithm has low complexity and a small amount of calculation, so the cost of displaying the driving scene image is low.
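The partition summarized above amounts to set intersection and difference. The sketch below assumes objects are matched across the two sources by a hashable identifier; the patent itself matches objects via characteristic-value ranges rather than exact IDs.

```python
def partition_targets(sensor_objects, v2x_objects):
    """Split perceived objects into the three target groups."""
    sensor_ids = set(sensor_objects)
    v2x_ids = set(v2x_objects)
    first = sensor_ids & v2x_ids     # perceived by both sensor and V2X
    second = v2x_ids - sensor_ids    # perceived by V2X only
    third = sensor_ids - v2x_ids     # perceived by sensor only
    return first, second, third
```

This is the source of the claimed low complexity: each partition is a linear-time set operation over the perceived objects.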
Referring to fig. 4, fig. 4 is a schematic structural diagram of a driving scene image display device according to an embodiment of the present application. The driving scene image display apparatus provided by the present embodiment includes a perception object module 41, a target object module 42, and a display object module 43.
The sensing object module 41 is configured to obtain a sensor sensing object and a vehicle road cooperative sensing object respectively.
The target object module 42 is configured to determine a first target object, a second target object, and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, where the first target object is included in the sensor sensing object and the vehicle-road cooperative sensing object at the same time, the second target object is included in the vehicle-road cooperative sensing object only, and the third target object is included in the sensor sensing object only.
The display object module 43 is configured to determine a display object according to the first target object, the second target object, and the third target object.
In an alternative embodiment, the target object module 42 is further configured to:
determining the first target object according to the range of the first characteristic value;
determining the second target object according to the range of the second characteristic value;
and determining the third target object according to the first target object, the second target object and the vehicle-road cooperative sensing object.
In an alternative embodiment, the target object module 42 is further configured to: and determining the second target object as the display object by using the vehicle-road cooperative perception object.
In an alternative embodiment, the target object module 42 is further configured to: and if the current vehicle-road cooperative sensing function is normal, determining the second target object as the display object by using the vehicle-road cooperative sensing object.
In an alternative embodiment, the target object module 42 is further configured to: determining the third target object as the display object by using the sensor perception object.
In an optional embodiment, the driving scene image display apparatus provided in this embodiment further includes: and the first road image module 44, the first road image module 44 is configured to determine a road image according to the map information and the current position of the vehicle.
In an alternative embodiment, the first road image module 44 is further configured to: and displaying road information of the vehicle on the map, wherein the road information comprises a road view, traffic facilities and marks representing the current vehicle within a preset range of the position of the vehicle.
In an optional embodiment, the driving scene image display apparatus provided in this embodiment further includes: a second road image module 45, the second road image module 45 being configured to display the display object determined based on the first target object on the road image with the first display characteristic, display the display object determined based on the second target object on the road image with the second display characteristic, and display the display object determined based on the third target object on the road image with the third display characteristic.
The first display characteristic, the second display characteristic, and the third display characteristic are different from each other.
Further, the first display characteristic, the second display characteristic and the third display characteristic each comprise one or more of an animation effect, a color, a line, a thickness and characters. For example, in this embodiment, the first display characteristic is fast flashing with a thick orange frame, the second display characteristic is a steady thin green frame, and the third display characteristic is slow flashing with a thin black frame; fast flashing may be set to one flash per second and slow flashing to one flash every 5 seconds.
The driving scene image display apparatus may be a computer program (including program code) running on a computer device, for example application software, and may be used to execute the corresponding steps of the driving scene image display method provided by the embodiments of the present application.
In some possible embodiments, the driving scene image display apparatus may be implemented by a combination of hardware and software. For example, it may be a processor in the form of a hardware decoding processor programmed to execute the driving scene image display method provided by the embodiments of the present application; the processor in the form of a hardware decoding processor may be one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
In some possible embodiments, the driving scene image display apparatus may be implemented in software, for example as a program, a plug-in, or the like, comprising a series of modules: a perception object module, a target object module and a display object module, which are used to realize the driving scene image display method provided by the embodiments of the present application.
The driving scene image display apparatus provided by this embodiment respectively acquires a sensor perception object and a vehicle-road cooperative perception object; determines a first target object, a second target object and a third target object from them, where the first target object is contained in both the sensor perception object and the vehicle-road cooperative perception object, the second target object is contained only in the vehicle-road cooperative perception object, and the third target object is contained only in the sensor perception object; and determines a display object according to the first, second and third target objects. The algorithm has low complexity and a small amount of calculation, so the cost of displaying the driving scene image is low.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 5, the electronic device 1000 in this embodiment may include a processor 1001, a network interface 1004 and a memory 1005; in addition, the electronic device 1000 may further include a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; alternatively, it may be at least one storage device located remotely from the processor 1001. As shown in fig. 5, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the electronic device 1000 shown in fig. 5, the network interface 1004 may provide network communication functions, the user interface 1003 provides an interface for receiving user input, and the processor 1001 may be configured to invoke the device control application stored in the memory 1005 to implement:
respectively acquiring a sensor perception object and a vehicle road cooperative perception object;
determining a first target object, a second target object and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, wherein the first target object is simultaneously contained in the sensor sensing object and the vehicle-road cooperative sensing object, the second target object is only contained in the vehicle-road cooperative sensing object, and the third target object is only contained in the sensor sensing object;
and determining a display object according to the first target object, the second target object and the third target object.
In some possible embodiments, the processor 1001 is configured to:
determining a first target object, a second target object and a third target object according to the sensor perception object and the vehicle-road cooperative perception object comprises:
calibrating the sensor sensing object by a first characteristic value and a second characteristic value, and calibrating the vehicle-road cooperative sensing object by a first characteristic value and a second characteristic value;
determining the first target object according to the range of the first characteristic value;
determining the second target object according to the range of the second characteristic value;
and determining the third target object according to the first target object, the second target object and the vehicle-road cooperative perception object.
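One possible reading of the characteristic-value scheme above can be sketched as follows. The patent does not give concrete values or a data layout, so the tuple format, the value ranges and the remainder rule for the third target are all illustrative assumptions.

```python
def classify_by_ranges(tagged, v1_range=(0.5, 1.0), v2_range=(0.5, 1.0)):
    """Classify objects by characteristic-value ranges.

    `tagged` maps an object identifier to its calibrated pair
    (first characteristic value, second characteristic value).
    """
    first = {k for k, (v1, _) in tagged.items()
             if v1_range[0] <= v1 <= v1_range[1]}
    second = {k for k, (_, v2) in tagged.items()
              if k not in first and v2_range[0] <= v2 <= v2_range[1]}
    third = set(tagged) - first - second   # remaining sensor-only objects
    return first, second, third
```

The point of the sketch is only the structure of the decision: the first and second targets are selected by which calibrated value falls within its configured range, and the third target is derived from the remainder.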
In some possible embodiments, the processor 1001 is configured to:
determining a display object according to the first target object comprises:
and determining the first target object as the display object by using the vehicle-road cooperative perception object.
In some possible embodiments, the processor 1001 is configured to:
determining a display object according to the second target object comprises:
and if the current vehicle-road cooperative sensing function is normal, determining the second target object as the display object by using the vehicle-road cooperative sensing object.
In some possible embodiments, the processor 1001 is configured to:
determining a display object according to the third target object comprises:
determining the third target object as the display object with the sensor perception object.
In some possible embodiments, the processor 1001 is configured to:
before respectively acquiring the sensor perception object and the vehicle-road cooperative perception object, the method comprises the following steps:
and determining a road image according to the map information and the current position of the vehicle.
In some possible embodiments, the processor 1001 is configured to:
determining the road image according to the map information and the current position of the vehicle comprises:
and displaying road information of the vehicle on the map, wherein the road information comprises a road view, traffic facilities and marks representing the current vehicle within a preset range of the position of the vehicle.
In some possible embodiments, the processor 1001 is configured to:
after determining the display object, the method comprises:
the display object determined based on the first target object is displayed on the road image with a first display characteristic, the display object determined based on a second target object is displayed on the road image with a second display characteristic, and the display object determined based on a third target object is displayed on the road image with a third display characteristic.
In some possible embodiments, the first display characteristic, the second display characteristic, and the third display characteristic are different from each other.
In some possible embodiments, the first display characteristic, the second display characteristic, and the third display characteristic respectively include one or more of animation effect, color, line, thickness, and text.
It should be understood that in some possible embodiments, the processor 1001 may be a central processing unit (CPU), another general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The memory may include both read-only memory and random access memory and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information.
In a specific implementation, the electronic device 1000 may execute, through each built-in functional module thereof, the implementation manner provided in each step in fig. 2, which may be specifically referred to as the implementation manner provided in each step, and is not described herein again.
The electronic device respectively acquires a sensor perception object and a vehicle-road cooperative perception object; determines a first target object, a second target object and a third target object from them, where the first target object is contained in both the sensor perception object and the vehicle-road cooperative perception object, the second target object is contained only in the vehicle-road cooperative perception object, and the third target object is contained only in the sensor perception object; and determines a display object according to the first, second and third target objects. The algorithm has low complexity and a small amount of calculation, so the cost of displaying the driving scene image is low.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program is executed by a processor to implement the method provided in each step in fig. 2, which may specifically refer to the implementation manner provided in each step, and is not described herein again.
The computer-readable storage medium may be an internal storage unit of the electronic device provided in any one of the foregoing embodiments, for example a hard disk or memory of the electronic device. It may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card or a flash card provided on the electronic device, and may further include a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), and the like. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided by the steps of fig. 2.
The computer program product respectively acquires a sensor perception object and a vehicle-road cooperative perception object; determines a first target object, a second target object and a third target object from them, where the first target object is contained in both the sensor perception object and the vehicle-road cooperative perception object, the second target object is contained only in the vehicle-road cooperative perception object, and the third target object is contained only in the sensor perception object; and determines a display object according to the first, second and third target objects. The algorithm has low complexity and a small amount of calculation, so the cost of displaying the driving scene image is low.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.

Claims (13)

1. A driving scene image display method, characterized by comprising the steps of:
respectively acquiring a sensor perception object and a vehicle road cooperative perception object;
determining a first target object, a second target object and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, wherein the first target object is simultaneously contained in the sensor sensing object and the vehicle-road cooperative sensing object, the second target object is only contained in the vehicle-road cooperative sensing object, and the third target object is only contained in the sensor sensing object;
and determining a display object according to the first target object, the second target object and the third target object.
2. The driving scene image display method according to claim 1, wherein the step of determining the first target object, the second target object, and the third target object based on the sensor perception object and the vehicle route cooperative perception object includes:
calibrating the sensor sensing object by a first characteristic value and a second characteristic value, and calibrating the vehicle-road cooperative sensing object by a first characteristic value and a second characteristic value;
determining the first target object according to the range of the first characteristic value;
determining the second target object according to the range of the second characteristic value;
and determining the third target object according to the first target object, the second target object and the vehicle-road cooperative sensing object.
3. The driving scene image display method according to claim 1, wherein the step of determining a display object based on the first target object includes:
and determining the first target object as the display object by using the vehicle-road cooperative perception object.
4. The driving scene image display method according to claim 3, wherein the step of determining a display object based on the second target object includes:
and if the current vehicle-road cooperative sensing function is normal, determining the second target object as the display object by using the vehicle-road cooperative sensing object.
5. The driving scene image display method according to claim 4, wherein the step of determining a display object according to the third target object includes:
determining the third target object as the display object by using the sensor perception object.
6. The driving scene image display method according to claim 1, wherein the step before the sensor perception object and the vehicle-road cooperative perception object are respectively acquired includes:
and determining a road image according to the map information and the current position of the vehicle.
7. The driving scene image display method according to claim 6, wherein the step of determining the road image based on the map information and the current position of the vehicle includes:
and displaying road information of the vehicle on the map, wherein the road information comprises a road view, traffic facilities and marks representing the current vehicle within a preset range of the position of the vehicle.
8. The driving scene image display method according to claim 6, characterized in that the step after determining the display object includes:
the display object determined based on the first target object is displayed on the road image with a first display characteristic, the display object determined based on a second target object is displayed on the road image with a second display characteristic, and the display object determined based on a third target object is displayed on the road image with a third display characteristic.
9. The driving scene image display method according to claim 8, wherein the first display characteristic, the second display characteristic, and the third display characteristic are different from each other.
10. The driving scene image display method according to claim 8, wherein the first display characteristic, the second display characteristic, and the third display characteristic each include one or more of an animation effect, a color, a line, a thickness, and a character.
11. A driving scene image display apparatus, characterized by comprising:
the sensing object module is used for respectively acquiring a sensor sensing object and a vehicle road cooperative sensing object;
the target object module is used for determining a first target object, a second target object and a third target object according to the sensor sensing object and the vehicle-road cooperative sensing object, wherein the first target object is simultaneously contained in the sensor sensing object and the vehicle-road cooperative sensing object, the second target object is only contained in the vehicle-road cooperative sensing object, and the third target object is only contained in the sensor sensing object;
and the display object module is used for determining a display object according to the first target object, the second target object and the third target object.
12. An electronic device characterized by comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by the processor, cause the processor to carry out the steps of the driving scene image display method according to any one of claims 1 to 10.
13. A computer storage medium having computer-readable instructions stored therein, which, when executed by one or more processors, cause the one or more processors to perform the steps of the driving scene image display method according to any one of claims 1 to 10.
CN202211053332.7A 2022-08-31 2022-08-31 Driving scene image display method, device, equipment and storage medium Pending CN115633085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211053332.7A CN115633085A (en) 2022-08-31 2022-08-31 Driving scene image display method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115633085A true CN115633085A (en) 2023-01-20




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination