CN114820971B - Graphical expression method for describing complex driving environment information - Google Patents


Info

Publication number
CN114820971B
CN114820971B (application CN202210479395.2A)
Authority
CN
China
Prior art keywords
information
vehicle
road
color
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210479395.2A
Other languages
Chinese (zh)
Other versions
CN114820971A (en)
Inventor
詹军
叶昊
王战古
仲昭辉
陈浩源
杨凯
曹子坤
江勐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202210479395.2A priority Critical patent/CN114820971B/en
Publication of CN114820971A publication Critical patent/CN114820971A/en
Application granted granted Critical
Publication of CN114820971B publication Critical patent/CN114820971B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts


Abstract

The invention discloses a graphical expression method for describing complex driving environment information. The environment information is divided into layers and each layer is expressed graphically; once all layers have been expressed, the graphics of each layer are stored separately and then stacked in a top-down overlay order as required. The stacked graphics undergo a coordinate transformation determined by the ego vehicle's driving speed and driving direction, yielding a time-varying, ego-centered graphical view of the integrated environment information. By describing the environment perception information from different sensors with one graphical expression after comprehensive cognition, the invention solves the problem of unified expression of the driving environment and describes the environment around the vehicle during driving more comprehensively and effectively.

Description

Graphical expression method for describing complex driving environment information
Technical Field
The invention relates to the expression of complex driving environment information for autonomous vehicles, and in particular to a graphical expression method for describing the driving environment of a vehicle.
Background
Autonomous driving is an important development direction for the automotive industry, and decision planning is a core part of any autonomous driving task. In recent years, decision-planning algorithms based on artificial intelligence have gradually become the research mainstream. Convolutional neural networks show strong advantages in extracting image features, so many AI-based decision-planning methods take images as model inputs and use convolutional neural networks to extract environment-information features.
To acquire complete information, current environment perception mostly relies on multiple sensors, whose outputs differ greatly in data structure and information type; this hinders cross-modal expression of the environment information. Meanwhile, the raw perception information contains a large amount of decision-irrelevant content, which degrades both the efficiency of feature extraction and the robustness of the decision model. In practice, a non-unified expression of driving environment information therefore impedes migrating a decision model across working scenarios and across perception inputs.
Current methods for expressing environment information mainly include the traditional grid map, topological map and high-precision map, plus the intermediate-level ("mid-to-mid") expressions that have appeared in recent years. Wang Yongsheng et al., in the paper "Autonomous parking path coordination and optimization strategy based on topological maps", use a topological map to describe the topology of path nodes in a parking area and thereby develop an autonomous parking strategy. Radu Danescu et al., in "Modeling and Tracking the Driving Environment With a Particle-Based Occupancy Grid", use a grid map to express the position and motion state of surrounding obstacles and to predict their future positions. Nemanja Djuric et al., in "Uncertainty-aware Short-term Motion Prediction of Traffic Actors for Autonomous Driving", use a high-precision grid map to express the surrounding driving environment, distinguish the ego vehicle from other vehicles with RGB colors, and present the ego vehicle's positions at different moments through color saturation, thereby expressing dynamic information in a single picture. Mayank Bansal et al., in "ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst", describe road maps, traffic lights, speed limits, the global path, and the positions of surrounding vehicles and the ego vehicle at different moments with different geometric elements, and replace raw sensor data with this expression to imitation-learn driving behavior.
Analysis of these existing methods reveals several shortcomings. First, traditional environment-model expressions do not express traffic-rule information intuitively and graphically: a grid map, for example, can only express the states of surrounding obstacles and cannot describe how traffic rules constrain the vehicle's driving path. Second, the more recent intermediate expressions provide no clear layered structure for the driving environment information, and the information they express is incomplete: the bird's-eye-view form ignores road undulation and gradient, and inferred information that affects driving behavior, such as the driving style of surrounding vehicles, is not considered.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a graphical expression method for describing complex driving environment information: environment perception information from different sensors is comprehensively recognized and then described with one graphical expression, which solves the problem of unified driving environment expression and describes the driving environment around the vehicle more comprehensively and effectively. The uniformly expressed driving environment information includes not only objects and logical information actually present in the environment, such as surrounding vehicle positions and traffic rules, but also information obtained by inference. Expressing the environment information graphically is more intuitive and better suited to realizing autonomous-driving decision planning with artificial-intelligence methods based on image feature extraction.
The aim of the invention is realized by the following technical scheme:
a graphical representation method for describing complex driving environment information comprises the following steps:
step 1, layering the environment information, including:
1.1) Road layer: the shape of the road centerline, the road width, and the tangential and radial elevation information of the road surface in the vehicle's driving environment;
1.2) Traffic rules layer: the right-of-way and the road-traffic prohibition and restriction information defined by traffic signs and traffic regulations in the vehicle's driving environment;
1.3) Object layer: the position, size and motion-state information of physical static and moving objects in the vehicle's driving environment;
1.4) Weather layer: the weather information occurring in the vehicle's driving environment;
1.5) Inference information layer: the future vehicle positions and poses and the vehicle driving-style information obtained by inference in the vehicle's driving environment;
step 2, graphically expressing each environment-information layer classified in step 1:
2.1) graphically expressing the environment information of the road layer;
2.2) graphically expressing the traffic rules layer;
2.3) graphically expressing the object layer;
2.4) graphically expressing the environment information of the weather layer;
2.5) graphically expressing the inference information layer;
and step 3, after all environment-information layers have been expressed graphically, storing the graphics of each layer separately, stacking the layer graphics in top-down overlay order as required, applying the coordinate transformation determined by the ego vehicle's driving speed and driving direction to the stacked graphics, and displaying the time-varying, ego-centered graphical expression of the integrated environment information.
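For readers who prefer a concrete data structure, the five layers and the overlay order of step 3 can be sketched as follows in Python; all names and container types here are illustrative assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Any, List


class LayerId(IntEnum):
    """One id per layer of step 1; the numeric order doubles as the
    top-down overlay order of step 3 (an illustrative choice)."""
    ROAD = 0            # 1.1) centerline shape, width, elevation
    TRAFFIC_RULES = 1   # 1.2) right of way, prohibitions, speed limits
    OBJECTS = 2         # 1.3) physical static and moving objects
    WEATHER = 3         # 1.4) rain, snow, fog
    INFERENCE = 4       # 1.5) predicted poses and driving style


@dataclass
class EnvironmentLayer:
    layer_id: LayerId
    primitives: List[Any] = field(default_factory=list)  # shapes plus fill colors


def overlay_order(layers: List[EnvironmentLayer]) -> List[EnvironmentLayer]:
    """Sort layers so that higher-id layers are drawn later, i.e. on top."""
    return sorted(layers, key=lambda layer: layer.layer_id)
```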
Further, the step 2.1) includes: the boundary of the road area is expressed with a solid line, and the elevation information of the road is expressed with a YUV color model, in which the Y value is reserved, U represents the radial angle of the road surface relative to the horizontal plane, and V represents the tangential angle of the road surface relative to the horizontal plane, specifically:
U=((angle_t+20)/40-0.5)*0.3 (1)
V=((angle_v+20)/40-0.5)*0.3 (2)
where: U - the blue chrominance U in the YUV color model;
angle_t - the radial angle of the road surface relative to the horizontal plane;
V - the red chrominance V in the YUV color model;
angle_v - the tangential angle of the road surface relative to the horizontal plane.
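A minimal sketch of formulas (1) and (2) in Python; the clamping of both angles to the implied working range of -20 to +20 degrees is our assumption:

```python
def road_angles_to_uv(angle_t: float, angle_v: float) -> tuple:
    """Encode road-surface angles (in degrees, relative to the horizontal
    plane) into the U and V chrominances of the road fill color,
    per formulas (1) and (2)."""
    angle_t = max(-20.0, min(20.0, angle_t))  # assumed working range
    angle_v = max(-20.0, min(20.0, angle_v))
    u = ((angle_t + 20.0) / 40.0 - 0.5) * 0.3  # formula (1)
    v = ((angle_v + 20.0) / 40.0 - 0.5) * 0.3  # formula (2)
    return u, v


# A flat road maps to zero chrominance, so the fill color is a pure gray.
assert road_angles_to_uv(0.0, 0.0) == (0.0, 0.0)
```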
Further, in the step 2.1), if in actual use the elevation information of the road expressed in the YUV color model must be converted to the RGB color model, the transformation relationship is as follows:
R=(Y+1.4075V)/1.5 (3)
G=(Y-0.3455U-0.7169V)/1.5 (4)
B=(Y+1.779U)/1.5 (5)
where: Y - the gray value Y in the YUV color model;
U - the blue chrominance U in the YUV color model;
V - the red chrominance V in the YUV color model;
R - R in the RGB color model, i.e. the brightness of red;
G - G in the RGB color model, i.e. the brightness of green;
B - B in the RGB color model, i.e. the brightness of blue.
Further, the step 2.2) includes: lane information on the actual road is expressed with solid and broken lines of the same type as the real road markings; meanwhile, the right-of-way and traffic prohibition/restriction information is converted into limits on the vehicle's driving speed and modifications of the driving style, and the speed-limit information is then mapped to the Y value of the YUV road-surface color.
Further, when the speed-limit information is mapped to the Y value of the YUV road-surface color, the change of the speed limit is mapped to the gray level of the road expression, with the following mapping relationship:
[Formula (6), reproduced in the original only as an image: a mapping from the road speed limit speed_limit (with maximum speed_limit_max) to the gray value Y of the road-surface color.]
where: Y - the gray value Y of the road surface's YUV color model;
speed_limit - the speed limit of the road;
speed_limit_max - the maximum value of the road speed limit.
Further, the step 2.3) includes: the size characteristics of an object are expressed with graphic frames of different shapes, and the object's speed relative to the ego vehicle is expressed with a fill color, in which red, green and blue express the relative speed in the lateral, longitudinal and vertical directions of the ego vehicle's coordinate system, respectively.
Further, the relative speed of the object with respect to the ego vehicle is expressed with an RGB fill color, with the following mapping relationship between fill color and relative speed:
[Formula (7), reproduced in the original only as an image: a mapping from the relative speeds speed_x, speed_y, speed_z (normalized by speed_max) to the fill-color matrix color.]
where: color - the matrix of the internal fill color;
speed_x - the relative speed of the object with respect to the ego vehicle in the x direction;
speed_y - the relative speed of the object with respect to the ego vehicle in the y direction;
speed_z - the relative speed of the object with respect to the ego vehicle in the z direction;
speed_max - the maximum relative speed of the object with respect to the ego vehicle over all directions.
Further, the step 2.4) includes: two separate rectangular boxes express the rain, snow and fog information. One box expresses fog with a fill color given in RGBA, where RGB represents the color of the fog and the transparency represents the visibility at that moment. The other box uses lines to represent precipitation, where a solid line represents rain and a dashed line represents snow; the number of lines represents the amount of precipitation in the area at that moment, and the width and direction of the lines represent the mean particle size of the precipitation and the falling direction of the precipitation particles, i.e. of raindrops and snowflakes.
Further, in the step 2.4), a rectangle is filled with an RGBA color, where the three RGB parameters express the color of the fog and the transparency A expresses the visibility, with the following specific mapping relationship:
rec_color=[fog_r/255*stab,fog_g/255*stab,fog_b/255*stab] (8)
line_num=round(density*10/1000000) (9)
line_width=particle_size (10)
line_orientation=wind_direction (11)
where: fog_r, fog_g, fog_b - the RGB color components of the fog;
stab - the normalized value obtained by dividing the visibility of the current environment by 1000;
rec_color - the RGB color matrix;
line_num - the number of lines;
round - the rounding operation;
density - the number of particles in the range;
line_width - the width of the lines;
particle_size - the particle size;
line_orientation - the direction of the lines;
wind_direction - the particle falling direction.
Further, the step 2.5) includes: converting the information obtained by inference into the boundary and fill geometric features of the graphics in steps 2.1) to 2.4).
By describing the environment information in this way, the invention has the following beneficial effects:
(1) it provides a unified expression for diverse environment perception information from multiple sensors, so that perception information of different data types and data structures can be described simultaneously, realizing cross-scenario, cross-modal information fusion;
(2) because the processed environment information is described selectively, the expression is more concise and effective, omitting environment information that has little influence on decision behavior;
(3) the complete description of complex environment information with one unified graphical expression facilitates developing subsequent decision algorithms with image-feature-extraction and deep-learning methods, and eases migrating a decision algorithm across different input data, different methods, and different scenarios.
Drawings
FIG. 1 is a flow chart of the graphical expression of complex driving environment information according to the invention
FIG. 2 shows the expression effect of the road layer size information
FIG. 3 shows the expression effect of the traffic rules layer indication information
FIG. 4 shows the expression effect of the object layer information
FIG. 5 shows the expression effect of the weather layer
FIG. 6 shows the expression effect of the inference information layer
Detailed Description
The embodiments of the expression method of the invention are described in detail below with reference to the drawings and examples.
A graphical expression method for describing complex driving environment information mainly comprises the following steps:
Step 1, layering the environment information. Note that what is layered is the environment information, not the physical objects in the environment: for example, the speed limit expressed by a sign belongs to the traffic rules layer, while the sign itself, as an obstacle entity in the driving environment, belongs to the object layer. The final layering result is:
1.1) Road layer: the shape of the road centerline, the road width, and the tangential and radial elevation information of the road surface in the vehicle's driving environment;
1.2) Traffic rules layer: the right-of-way and the road-traffic prohibition and restriction information defined by traffic signs and traffic regulations in the vehicle's driving environment;
1.3) Object layer: the position, size and motion-state information of physical static and moving objects in the vehicle's driving environment;
1.4) Weather layer: the weather information, such as rain, fog and snow, occurring in the vehicle's driving environment;
1.5) Inference information layer: the information obtained by inference in the vehicle's driving environment, including future vehicle positions and poses and vehicle driving-style information.
Step 2, graphically expressing each environment-information layer classified in step 1:
2.1) Graphical expression of the road layer environment information. The road layer driving environment information is expressed as follows: a black solid line represents the boundary of the road area, indicating that it cannot be crossed during normal driving without emergency avoidance, as shown in FIG. 2. The elevation information of the road is expressed with the YUV color model, which requires less storage space; the Y value is reserved, U represents the radial angle of the road surface relative to the horizontal plane, and V represents the tangential angle. Considering that road gradients and inclination angles are generally small in real driving environments, the specific implementation is:
U=((angle_t+20)/40-0.5)*0.3 (1)
V=((angle_v+20)/40-0.5)*0.3 (2)
where: U - the blue chrominance U in the YUV color model;
angle_t - the radial angle of the road surface relative to the horizontal plane;
V - the red chrominance V in the YUV color model;
angle_v - the tangential angle of the road surface relative to the horizontal plane.
In actual use, a conversion from the YUV to the RGB color model may be needed. Since the YUV-to-RGB conversion is not a one-to-one correspondence, the color parameter range must be adjusted; the transformation relationship is given as:
R=(Y+1.4075V)/1.5 (3)
G=(Y-0.3455U-0.7169V)/1.5 (4)
B=(Y+1.779U)/1.5 (5)
where: Y - the gray value Y in the YUV color model;
U - the blue chrominance U in the YUV color model;
V - the red chrominance V in the YUV color model;
R - R in the RGB color model, i.e. the brightness of red;
G - G in the RGB color model, i.e. the brightness of green;
B - B in the RGB color model, i.e. the brightness of blue.
Of course, the elevation information in these two directions is not enough to determine all three brightness parameters of the RGB color actually used; the missing Y value is specified by the speed-limit mapping in step 2.2). As a concrete example, for a road section whose tangential and radial inclination angles are both 0 and whose speed limit is 60 km/h, combining this with the speed-limit mapping of step 2.2) gives a road-surface fill color of YUV [0.20,0.00,0.00], i.e. RGB [0.13,0.13,0.13]; when the tilt angle of the road becomes 10 degrees, the corresponding chrominance becomes ((10+20)/40-0.5)*0.3 = 0.075, the fill-color YUV array becomes [0.20,0.075,0.00], and the RGB array becomes [0.13,0.12,0.22].
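The conversion and the worked example above can be checked with the following sketch, which reads the second term of formula (5) as 1.779·U (the standard YUV-to-RGB coefficient) rather than the misprinted 1.779·B:

```python
def yuv_to_rgb(y: float, u: float, v: float) -> list:
    """Convert a road fill color from YUV to RGB per formulas (3)-(5);
    the division by 1.5 rescales the converted channels."""
    r = (y + 1.4075 * v) / 1.5               # formula (3)
    g = (y - 0.3455 * u - 0.7169 * v) / 1.5  # formula (4)
    b = (y + 1.779 * u) / 1.5                # formula (5), typo corrected to U
    return [r, g, b]


# Worked example: Y = 0.20 (60 km/h limit), flat road -> gray [0.13, 0.13, 0.13];
# a 10-degree tilt (chrominance 0.075) -> [0.13, 0.12, 0.22].
print([round(c, 2) for c in yuv_to_rgb(0.20, 0.0, 0.0)])    # [0.13, 0.13, 0.13]
print([round(c, 2) for c in yuv_to_rgb(0.20, 0.075, 0.0)])  # [0.13, 0.12, 0.22]
```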
2.2) Graphical expression of the traffic rules layer. Traffic rules are displayed mainly in three ways: road markings, traffic signals (lights or police gestures), and traffic signs. The traffic markings on the road include lane lines, stop lines and boundary lines, drawn as solid and broken lines to indicate the areas where vehicles may drive; they are expressed with solid and broken lines of the same type as the real road markings. A traffic light signal can be converted into a change of the speed limit at the intersection of the vehicle's target road, i.e. an adjustment of the road speed limit and of the stop line at the intersection. Specifically, when the light is green the speed limit is normal and the stop line turns white, i.e. it disappears and permits passage; when the light is red the road speed limit is set to 0 and the stop line turns pure black, indicating that it cannot be crossed. Traffic signs are generally divided into main signs and auxiliary signs. Main signs comprise warning, prohibition, indication, direction, tourist-area, work-zone and notification signs. By their influence on decision behavior while driving they can be divided into three classes: the first class has no mandatory effect on driving; the second class directly and immediately influences the motion state of the ego vehicle; and the third class influences possible subsequent decisions by influencing the vehicle's driving-decision style. A detailed classification is given in the following table.
[Table, reproduced in the original only as images: classification of the traffic signs into the three influence classes described above.]
The change of the speed limit caused by a second-class traffic sign is mapped to the gray level of the road expression; the specific mapping used in practice is given by the following formula:
[Formula (6), reproduced in the original only as an image: a mapping from the road speed limit speed_limit (with maximum speed_limit_max) to the gray value Y of the road-surface color.]
where: Y - the gray value Y of the road surface's YUV color model;
speed_limit - the speed limit of the road;
speed_limit_max - the maximum value of the road speed limit, taken here as 120 km/h.
The practical effect is shown in FIG. 3: the lower section of the road has tangential and radial inclination angles of 0 and a speed limit of 30 km/h, which together with the road-gradient expression of step 2.1) gives a road-surface fill color of RGB [0.33,0.33,0.33]; the upper section has inclination angles of 0 and a speed limit of 60 km/h, giving RGB [0.13,0.13,0.13].
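Formula (6) survives in this copy only as an image, but the two worked examples (30 km/h giving Y = 0.5, i.e. RGB 0.33, and 60 km/h giving Y = 0.2, i.e. RGB 0.13) pin down one simple linear map; the sketch below uses that fitted form and should be read as an assumption about the published formula:

```python
SPEED_LIMIT_MAX = 120.0  # km/h, the maximum road speed limit per the description


def speed_limit_to_gray(speed_limit: float,
                        speed_limit_max: float = SPEED_LIMIT_MAX) -> float:
    """Map a road speed limit to the Y (gray) value of the road fill color.

    Assumed linear form fitted to the two worked examples; the published
    formula (6) is only an image in this copy. Clamped to [0, 1] because
    the fitted line leaves that range near speed_limit_max.
    """
    y = 0.8 - 1.2 * speed_limit / speed_limit_max
    return max(0.0, min(1.0, y))


assert abs(speed_limit_to_gray(30.0) - 0.5) < 1e-9  # lower road section, RGB 0.33
assert abs(speed_limit_to_gray(60.0) - 0.2) < 1e-9  # upper road section, RGB 0.13
```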
The change of the driving-style score caused by a third-class traffic sign is mapped to the gray level of the ego vehicle's border frame line, with the same mapping relationship as the formula above; the effect is shown in FIG. 6.
2.3) Graphical expression of the object layer, which comprises static and movable objects. Static objects include road signs, buildings and the like; movable objects include surrounding pedestrians, vehicles and the like. An object is represented by a simple geometric figure conforming to its shape, and its speed relative to the ego vehicle is expressed with an RGB fill color, with the following mapping relationship between fill color and relative speed:
[Formula (7), reproduced in the original only as an image: a mapping from the relative speeds speed_x, speed_y, speed_z (normalized by speed_max) to the fill-color matrix color.]
where: color - the matrix of the internal fill color;
speed_x - the relative speed of the object with respect to the ego vehicle in the x direction;
speed_y - the relative speed of the object with respect to the ego vehicle in the y direction;
speed_z - the relative speed of the object with respect to the ego vehicle in the z direction;
speed_max - the maximum relative speed of the object with respect to the ego vehicle over all directions.
The specific effect is shown in FIG. 4: the rectangle outside the road boundary in the left-hand diagram represents a stationary building, while the rectangle in the right-hand diagram is another vehicle moving on the road; the fill colors take effect as described, and examples of the color arrays are given in the following table.
[Table, reproduced in the original only as an image: example fill-color arrays for different relative speeds.]
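Formula (7) and the example table likewise survive only as images. The sketch below is an assumed form consistent with the surrounding text: the magnitude of each axis of the relative speed, normalized by speed_max, drives the corresponding RGB channel:

```python
def relative_speed_to_color(speed_x: float, speed_y: float, speed_z: float,
                            speed_max: float) -> list:
    """Fill color for an object from its speed relative to the ego vehicle.

    Assumed form: each axis speed magnitude, normalized by speed_max, feeds
    the matching RGB channel (red = x, green = y, blue = z); the published
    formula (7) is only an image in this copy.
    """
    if speed_max <= 0.0:
        return [0.0, 0.0, 0.0]  # nothing is moving relative to the ego vehicle
    return [min(1.0, abs(s) / speed_max) for s in (speed_x, speed_y, speed_z)]


# A stationary building is pure black; a vehicle closing along y at half
# the normalization speed is half-intensity green.
print(relative_speed_to_color(0.0, 0.0, 0.0, 30.0))    # [0.0, 0.0, 0.0]
print(relative_speed_to_color(0.0, -15.0, 0.0, 30.0))  # [0.0, 0.5, 0.0]
```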
2.4) Graphical expression of the weather layer environment information. Weather affects vehicles in many ways, including the driver's vision and the sensors' perception. Rain and snow are classified as precipitation weather, whose parameters include the size of the falling particles, the number of particles in the range, and the falling direction of the particles; these are expressed respectively by the line width, the number of lines, and the direction of the lines inside a rectangle. The invention expresses fog with the fill color of a rectangle given as an RGBA color, where the three RGB parameters express the color of the fog and the transparency A expresses the visibility. The specific mapping relationship is:
rec_color=[fog_r/255*stab,fog_g/255*stab,fog_b/255*stab] (8)
line_num=round(density*10/1000000) (9)
line_width=particle_size (10)
line_orientation=wind_direction (11)
where: fog_r, fog_g, fog_b - the RGB color components of the fog;
stab - the normalized value obtained by dividing the visibility of the current environment by 1000;
rec_color - the RGB color matrix;
line_num - the number of lines;
round - the rounding operation;
density - the number of particles in the range;
line_width - the width of the lines;
particle_size - the particle size;
line_orientation - the direction of the lines;
wind_direction - the particle falling direction.
The expression effect is shown in FIG. 5; the correspondence between the information in the figure and the geometric elements expressing it is given in the following table.
[Table, reproduced in the original only as an image: correspondence between the weather information in FIG. 5 and the geometric elements expressing it.]
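Formulas (8) to (11) can be implemented directly. The sketch below computes only the RGB part of the fog rectangle's RGBA fill, since the text does not spell out how the transparency channel A is computed from visibility; the example values are illustrative:

```python
def fog_rectangle_rgb(fog_r: int, fog_g: int, fog_b: int,
                      visibility_m: float) -> list:
    """RGB part of the fog rectangle's fill color per formula (8);
    stab is the current visibility divided by 1000."""
    stab = visibility_m / 1000.0
    return [c / 255.0 * stab for c in (fog_r, fog_g, fog_b)]


def precipitation_lines(density: float, particle_size: float,
                        wind_direction: float) -> tuple:
    """Line count, width and orientation of the precipitation rectangle
    per formulas (9), (10) and (11)."""
    line_num = round(density * 10 / 1000000)  # formula (9)
    line_width = particle_size                # formula (10)
    line_orientation = wind_direction         # formula (11)
    return line_num, line_width, line_orientation


# Example: a light gray fog at 500 m visibility, and a snowfall of
# 2,000,000 particles in range falling at 15 degrees from vertical.
print(fog_rectangle_rgb(200, 200, 200, 500.0))    # ~[0.39, 0.39, 0.39]
print(precipitation_lines(2000000.0, 2.0, 15.0))  # (20, 2.0, 15.0)
```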
2.5) Graphical expression of the inference information layer. The information obtained by inference is converted into the boundary and fill geometric features of the graphics of steps 2.1) to 2.4).
In this embodiment, the inferred aggressiveness score of a vehicle's driving style is expressed by the gray level of the vehicle's rectangular frame line: as shown in the left-hand diagram of FIG. 6, the vehicle on the left has a more conservative driving style, hence a lower gray level, while the vehicle on the right drives more aggressively, hence a higher gray level. The inferred position of a vehicle 0.5 s in the future is represented as a solid rectangle without border lines, whose fill color expresses, in the same way as in step 2.3), the relative velocity at the future time point compared with the present one; the effect is shown in the right-hand diagram of FIG. 6.
Step 3, after all environment-information layers have been expressed graphically, each layer's graphics are stored separately for subsequent work; the five layers of geometric figures expressing the environment information are stacked in top-down overlay order; finally, all geometric figures undergo the coordinate transformation determined by the ego vehicle's driving speed and driving direction, so that the final time-varying, ego-centered graphical expression of the integrated environment information can be displayed intuitively.
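A minimal sketch of step 3: alpha-composite the stored layer images in overlay order, then re-express coordinates in an ego-centered frame. The compositing rule and the rotation convention are our assumptions; the patent fixes neither:

```python
import numpy as np


def composite_layers(layer_images: list) -> np.ndarray:
    """Stack float RGBA images (H x W x 4, values in [0, 1]) given in
    bottom-to-top overlay order, so opaque pixels of later layers
    cover earlier ones."""
    out = layer_images[0].copy()
    for img in layer_images[1:]:
        alpha = img[..., 3:4]
        out[..., :3] = alpha * img[..., :3] + (1.0 - alpha) * out[..., :3]
        out[..., 3:4] = np.maximum(out[..., 3:4], alpha)
    return out


def to_ego_frame(points_xy: np.ndarray, ego_xy: np.ndarray,
                 heading_rad: float) -> np.ndarray:
    """Re-express world-frame points (N x 2) in a frame centered on the
    ego vehicle with the x-axis along its driving direction (one common
    convention; the patent does not fix the matrix form)."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rot = np.array([[c, s], [-s, c]])  # world-to-ego rotation
    return (points_xy - ego_xy) @ rot.T
```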

Claims (9)

1. A graphical expression method for describing complex driving environment information, comprising the following steps:
step 1, layering the environment information, including:
1.1) Road layer: the shape of the road centerline, the road width, and the tangential and radial elevation information of the road surface in the vehicle's driving environment;
1.2) Traffic rules layer: the right-of-way and the road-traffic prohibition and restriction information defined by traffic signs and traffic regulations in the vehicle's driving environment;
1.3) Object layer: the position, size and motion-state information of physical static and moving objects in the vehicle's driving environment;
1.4) Weather layer: the weather information occurring in the vehicle's driving environment;
1.5) Inference information layer: the future vehicle positions and poses and the vehicle driving-style information obtained by inference in the vehicle's driving environment;
step 2, graphically expressing each environment-information layer classified in step 1:
2.1) graphically expressing the environment information of the road layer, wherein the step 2.1) includes: expressing the boundary of the road area with a solid line and expressing the elevation information of the road with a YUV color model, in which the Y value is reserved, U represents the radial angle of the road surface relative to the horizontal plane, and V represents the tangential angle of the road surface relative to the horizontal plane, specifically:
U=((angle_t+20)/40-0.5)*0.3 (1)
V=((angle_v+20)/40-0.5)*0.3 (2)
where: U - the blue chrominance U in the YUV color model;
angle_t - the radial angle of the road surface relative to the horizontal plane;
V - the red chrominance V in the YUV color model;
angle_v - the tangential angle of the road surface relative to the horizontal plane;
2.2) graphically expressing the traffic rules layer;
2.3) graphically expressing the object layer;
2.4) graphically expressing the environment information of the weather layer;
2.5) graphically expressing the inference information layer;
and step 3, after all environment-information layers have been expressed graphically, storing the graphics of each layer separately, stacking the layer graphics in top-down overlay order as required, applying the coordinate transformation determined by the ego vehicle's driving speed and driving direction to the stacked graphics, and displaying the time-varying, ego-centered graphical expression of the integrated environment information.
2. The graphical expression method for describing complex driving environment information according to claim 1, wherein in the step 2.1), if the elevation information of the road uses the YUV color model, a conversion to the RGB color model is required, with the following transformation relationship:
R=(Y+1.4075V)/1.5 (3)
G=(Y-0.3455U-0.7169V)/1.5 (4)
B=(Y+1.779U)/1.5 (5)
where: Y - the gray value Y in the YUV color model;
U - the blue chrominance U in the YUV color model;
V - the red chrominance V in the YUV color model;
R - R in the RGB color model, i.e. the brightness of red;
G - G in the RGB color model, i.e. the brightness of green;
B - B in the RGB color model, i.e. the brightness of blue.
3. The graphical expression method for describing complex driving environment information according to claim 1, wherein the step 2.2) includes: expressing the lane information on the actual road with solid and broken lines of the same type as the real road markings; meanwhile, converting the right-of-way and traffic prohibition/restriction information into limits on the vehicle's driving speed and modifications of the driving style, and then mapping the speed-limit information to the Y value of the YUV road-surface color.
4. The graphical expression method for describing complex driving environment information according to claim 3, wherein when the speed-limit information is mapped to the Y value of the YUV road-surface color, the change of the speed limit is mapped to the gray level of the road expression, with the following mapping relationship:
[Formula (6), reproduced in the original only as an image: a mapping from the road speed limit speed_limit (with maximum speed_limit_max) to the gray value Y of the road-surface color.]
where: Y - the gray value Y of the road surface's YUV color model;
speed_limit - the speed limit of the road;
speed_limit_max - the maximum value of the road speed limit.
5. The graphical expression method for describing complex driving environment information according to claim 1, wherein the step 2.3) includes: expressing the size characteristics of an object with graphic frames of different shapes, and expressing the object's speed relative to the ego vehicle with a fill color, in which red, green and blue express the relative speed in the lateral, longitudinal and vertical directions of the ego vehicle's coordinate system, respectively.
6. The graphical expression method for describing complex driving environment information according to claim 5, wherein the relative speed of the object with respect to the ego vehicle is expressed with an RGB fill color, with the following mapping relationship between fill color and relative speed:
[Formula (7), reproduced in the original only as an image: a mapping from the relative speeds speed_x, speed_y, speed_z (normalized by speed_max) to the fill-color matrix color.]
where: color - the matrix of the internal fill color;
speed_x - the relative speed of the object with respect to the ego vehicle in the x direction;
speed_y - the relative speed of the object with respect to the ego vehicle in the y direction;
speed_z - the relative speed of the object with respect to the ego vehicle in the z direction;
speed_max - the maximum relative speed of the object with respect to the ego vehicle over all directions.
7. The graphical expression method for describing complex driving environment information according to claim 1, wherein the step 2.4) includes: expressing rain, snow and fog information with two separate rectangular boxes, one of which expresses fog with an RGBA fill color, where RGB represents the color of the fog and the transparency represents the visibility at that moment; the other box uses lines to represent precipitation, where a solid line represents rain and a dashed line represents snow, the number of lines represents the amount of precipitation in the area at that moment, and the width and direction of the lines represent the mean particle size of the precipitation and the falling direction of the precipitation particles, i.e. of raindrops and snowflakes.
8. The graphical expression method for describing complex driving environment information according to claim 7, wherein in the step 2.4) a rectangle is filled with an RGBA color, the three RGB parameters expressing the color of the fog and the transparency A expressing the visibility, with the following specific mapping relationship:
rec_color=[fog_r/255*stab,fog_g/255*stab,fog_b/255*stab] (8)
line_num=round(density*10/1000000) (9)
line_width=particle_size (10)
line_orientation=wind_direction (11)
where: fog_r, fog_g, fog_b - the RGB color components of the fog;
stab - the normalized value obtained by dividing the visibility of the current environment by 1000;
rec_color - the RGB color matrix;
line_num - the number of lines;
round - the rounding operation;
density - the number of particles in the range;
line_width - the width of the lines;
particle_size - the particle size;
line_orientation - the direction of the lines;
wind_direction - the particle falling direction.
9. The graphical expression method for describing complex driving environment information according to claim 1, wherein the step 2.5) includes: converting the information obtained by inference into the boundary and fill geometric features of the graphics in steps 2.1) to 2.4).
CN202210479395.2A 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information Active CN114820971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210479395.2A CN114820971B (en) 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210479395.2A CN114820971B (en) 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information

Publications (2)

Publication Number Publication Date
CN114820971A CN114820971A (en) 2022-07-29
CN114820971B true CN114820971B (en) 2023-06-09

Family

ID=82512024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210479395.2A Active CN114820971B (en) 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information

Country Status (1)

Country Link
CN (1) CN114820971B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021000800A1 (en) * 2019-06-29 2021-01-07 华为技术有限公司 Reasoning method for road drivable region and device
WO2021148113A1 (en) * 2020-01-22 2021-07-29 Automotive Artificial Intelligence (Aai) Gmbh Computing system and method for training a traffic agent in a simulation environment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014152339A1 (en) * 2013-03-14 2014-09-25 Robert Bosch Gmbh Time and environment aware graphical displays for driver information and driver assistance systems
CN108062864A (en) * 2016-11-09 2018-05-22 奥迪股份公司 A kind of traffic scene visualization system and method and vehicle for vehicle
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN108225364B (en) * 2018-01-04 2021-07-06 吉林大学 Unmanned automobile driving task decision making system and method
CN110007675B (en) * 2019-04-12 2021-01-15 北京航空航天大学 Vehicle automatic driving decision-making system based on driving situation map and training set preparation method based on unmanned aerial vehicle
CN111539112B (en) * 2020-04-27 2022-08-05 吉林大学 Scene modeling method for automatically driving vehicle to quickly search traffic object
CN112101120B (en) * 2020-08-18 2024-01-05 沃行科技(南京)有限公司 Map model based on automatic driving application scene and application method thereof
CN113895464B (en) * 2021-12-07 2022-04-08 武汉理工大学 Intelligent vehicle driving map generation method and system fusing personalized driving style

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021000800A1 (en) * 2019-06-29 2021-01-07 华为技术有限公司 Reasoning method for road drivable region and device
WO2021148113A1 (en) * 2020-01-22 2021-07-29 Automotive Artificial Intelligence (Aai) Gmbh Computing system and method for training a traffic agent in a simulation environment

Also Published As

Publication number Publication date
CN114820971A (en) 2022-07-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant