WO2021227520A1 - Display method and apparatus for a visualization interface, electronic device, and storage medium - Google Patents

Display method and apparatus for a visualization interface, electronic device, and storage medium

Info

Publication number
WO2021227520A1
Authority
WO
WIPO (PCT)
Prior art keywords
target vehicle
model
display
object model
point cloud
Prior art date
Application number
PCT/CN2020/140611
Other languages
English (en)
French (fr)
Inventor
车春回
潘超
陈广庆
区彦开
钟华
韩旭
Original Assignee
广州文远知行科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州文远知行科技有限公司 filed Critical 广州文远知行科技有限公司
Priority to US17/925,121 priority Critical patent/US20230184560A1/en
Publication of WO2021227520A1 publication Critical patent/WO2021227520A1/zh

Classifications

    • G01C 21/3682 — Output of POI information (e.g. hotels, restaurants, shops, filling stations, parking facilities) on a road map
    • G01C 21/3626 — Details of the output of route guidance instructions
    • G01C 21/3638 — Guidance using 3D or perspective road maps, including 3D objects and buildings
    • G01C 21/3492 — Special cost functions employing speed data or traffic data, e.g. real-time or historical
    • G01C 21/367 — Road map details, e.g. scale, orientation, zooming, illumination, level of detail, scrolling, or positioning of the current position marker
    • G01C 21/3694 — Output of information related to real-time traffic, weather, or environmental conditions on a road map
    • G01C 21/3697 — Output of additional, non-guidance related information, e.g. low fuel level
    • G06F 16/29 — Geographical information databases
    • G06F 9/451 — Execution arrangements for user interfaces

Definitions

  • This application relates to the field of autonomous driving technology, for example, to a display method and apparatus for a visualization interface, an electronic device, and a storage medium.
  • In the related art, an electronic map is displayed on the visualization interface; point cloud data of the vehicle's surroundings is acquired through on-board sensors and input into a pre-trained classification model to obtain classification results for the objects around the vehicle.
  • For example, the classification result can be a vehicle, a pedestrian, a bicycle, a traffic cone, and so on; each classification result is then matched with a corresponding model and displayed on the map, realizing a visual display of the driving environment.
  • However, the classification model is accurate when classifying certain types of objects (such as vehicles) but inaccurate when classifying other types (such as objects other than vehicles), or may even fail to classify them, which causes the wrong model, or no model at all, to be displayed for such objects on the visualization interface and undoubtedly degrades the user experience.
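The related-art pipeline described above (point cloud → classifier → model lookup → display) can be sketched as follows. This is an illustrative toy, not the patent's implementation: `classify` is a stub standing in for the pre-trained model, and all names are assumptions.

```python
# Minimal sketch of the related-art pipeline: classify each detection, then
# look up a display model for the label. All identifiers are illustrative.

MODEL_LIBRARY = {          # classification result -> display model asset
    "vehicle": "vehicle_3d_model",
    "pedestrian": "pedestrian_3d_model",
    "bicycle": "bicycle_3d_model",
    "traffic_cone": "cone_3d_model",
}

def classify(point_cloud):
    """Stub for the pre-trained classification model.

    A real model would infer the label from the points; here we read a tag
    attached to the synthetic data and return None for unknown objects
    (e.g. a cyclist) to mimic a failed classification.
    """
    return point_cloud.get("label")  # None when the model cannot classify

def related_art_display(detections):
    """Match each classification result to a model. Misclassified or
    unclassified objects end up with no model at all -- exactly the
    problem this application addresses."""
    displayed = []
    for det in detections:
        label = classify(det)
        model = MODEL_LIBRARY.get(label)  # None -> nothing is displayed
        displayed.append((label, model))
    return displayed
```

Running this with one classifiable and one unclassifiable object shows the failure mode: the second object gets `(None, None)`, i.e. no model on the interface.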
  • This application provides a display method and apparatus for a visualization interface, an electronic device, and a storage medium, to solve the problem in the related art that inaccurate object classification causes incorrect models, or no models at all, to be displayed on the visualization interface, degrading the user experience.
  • an embodiment of the present application provides a method for displaying a visual interface, including:
  • An object model is displayed on the map, wherein a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle.
  • it also includes:
  • The displaying of a first object model for the first object detected by the target vehicle includes: displaying the first object model according to the position of the first object.
  • The displaying of a second object model including at least point cloud data for the non-first object detected by the target vehicle includes: displaying the second object model including at least the point cloud according to the position of the second object.
  • it also includes:
  • Display task progress information of the target vehicle performing the driving task, where the task progress information includes at least one of a progress bar, a distance traveled, and a travel time.
  • it also includes:
  • Traffic light information is displayed, where the traffic light information is used to indicate the status of a traffic light detected by the target vehicle; and/or,
  • it also includes:
  • When the first object model is on the driving route, the first object model is highlighted.
  • The displaying of navigation information generated for the target vehicle includes: displaying the distance from the target vehicle to the destination.
  • An embodiment of the present application also provides a display apparatus for a visualization interface, including:
  • a driving task determination module, configured to determine the driving task performed by the target vehicle;
  • a map display module, configured to display a map within a preset range according to the real-time position of the target vehicle; and
  • an object model display module, configured to display object models on the map, wherein a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle.
  • An embodiment of the present application also provides an electronic device, including:
  • one or more processors; and
  • a memory, configured to store one or more programs,
  • wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the visualization interface display method provided in the embodiments of the present application.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the visualization interface display method provided by the embodiments of the present application.
  • In the embodiments of the present application, a map within a preset range is displayed according to the real-time position of the target vehicle, and object models are displayed on the map, where a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle. This solves the problem that the classification model classifies non-first objects inaccurately, or cannot classify them at all, causing the wrong model, or even no model, to be displayed for the non-first objects on the visualization interface.
  • The first object model can be displayed for a first object that is detected by the target vehicle and accurately classified, while a non-first object that is classified inaccurately, or cannot be classified, displays a second object model containing at least point cloud data.
  • In this way, models can be displayed for both the first and non-first objects detected by the target vehicle without classifying the non-first objects; displaying a second object model containing at least the point cloud reduces the amount of data rendered for the non-first object's model, or even avoids rendering a model for it entirely, which increases model rendering speed while still displaying a model for every detected object, improving the user experience.
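The display rule at the core of the method (a dedicated model for accurately classified first objects, at least the raw point cloud for everything else) can be sketched as a simple dispatch. The detection dictionary fields are assumptions for illustration, not the patent's data structures.

```python
# Sketch of the first-object / non-first-object display dispatch. A detection
# is a dict with 'is_first_object', 'position', and 'points' (its point
# cloud); these field names are illustrative assumptions.

def build_display_list(detections):
    """Return what the visualization interface should draw for each
    detected object."""
    display = []
    for det in detections:
        if det["is_first_object"]:
            # accurately classified object: show its dedicated model
            display.append({"kind": "first_object_model",
                            "position": det["position"]})
        else:
            # inaccurately classified / unclassifiable object: show a model
            # containing at least its point cloud -- no classification needed
            display.append({"kind": "point_cloud_model",
                            "position": det["position"],
                            "points": det["points"]})
    return display
```

The point of the branch is that the `else` arm never consults a classifier, so every detected object gets *some* visual representation regardless of classification quality.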
  • FIG. 1 is a flowchart of a method for displaying a visualization interface provided in Embodiment 1 of this application;
  • FIG. 2 is a flowchart of a method for displaying a visualization interface provided in Embodiment 2 of this application;
  • FIG. 3 is a schematic diagram of a visualization interface of an embodiment of this application;
  • FIG. 4 is a flowchart of a method for displaying a visualization interface provided in Embodiment 3 of this application;
  • FIG. 5 is a schematic structural diagram of a display apparatus for a visualization interface provided in Embodiment 4 of this application; and
  • FIG. 6 is a schematic structural diagram of an electronic device according to Embodiment 5 of this application.
  • FIG. 1 is a flowchart of a method for displaying a visualization interface provided in Embodiment 1 of this application. This embodiment is applicable to the situation where the driving environment is displayed on a visualization interface.
  • The method can be executed by a display apparatus for a visualization interface, which can be implemented by software and/or hardware and can be configured in the electronic device provided in the embodiments of the present application. The method includes the following steps:
  • S101: Determine the driving task performed by the target vehicle.
  • The target vehicle can be a self-driving vehicle; a self-driving vehicle (also called a driverless car, self-driving car, or robotic car) can perceive the environment and navigate without human input.
  • Autonomous vehicles can be equipped with high-precision GPS navigation systems and lidar for obstacle detection.
  • Autonomous vehicles can also be configured to use technologies such as cameras, radar, light detection and ranging (lidar), GPS, and other sensors to sense their surrounding environment, and to display the surrounding environment on the visualization interface.
  • An autonomous driving program module can control the steering wheel, accelerator, brakes, and other equipment of the autonomous vehicle, so that the vehicle can drive automatically without manual intervention.
  • the embodiments of the present application can determine whether the target vehicle is performing a driving task, and when it is determined that the target vehicle is performing a driving task, task information of the driving task can be further obtained.
  • The task information may include the start point and end point information of the target vehicle, and may also include the target vehicle's path planning strategy from the start point to the end point, the display content of the visualization interface, and other information. The start point and end point information can be the coordinates of the start point and the end point.
  • the path planning strategy can be the shortest time, the shortest path, or the least cost.
  • The display content of the visualization interface may be user-customized content that needs to be displayed.
  • the display content of the visualization interface may include a task progress bar, elapsed travel time, map display mode, travel speed, and so on.
  • S102 Display a map within a preset range according to the real-time position of the target vehicle.
  • the map may be a three-dimensional electronic map containing models of fixed objects such as buildings, roads, trees, etc., generated in advance based on semantic maps.
  • When it is determined that the target vehicle is performing the driving task, the location of the target vehicle may be obtained through a positioning system installed on the target vehicle.
  • For example, the location of the target vehicle can be obtained through a GPS positioning system; or, after a lidar on the target vehicle scans the surrounding environment to obtain point cloud data, the location of the target vehicle is determined by matching the point cloud data against a pre-generated point cloud map; or the real-time location of the target vehicle is acquired through other positioning sensors.
  • The embodiments of the present application do not limit the way of acquiring the real-time location of the target vehicle.
  • a map containing the real-time location within a preset range can be retrieved from the electronic map database according to the location and displayed in the visualization interface.
  • The preset range can be a circular range with a preset radius centered on the position of the target vehicle, or a predetermined fan-shaped range in front of the target vehicle centered on the position of the target vehicle.
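The two preset-range shapes just described can be expressed as simple membership tests: a circle of a preset radius around the vehicle, or a forward-facing sector. The vehicle-heading and half-angle parameters below are illustrative assumptions, not values from the patent.

```python
import math

# Sketch of the two preset-range shapes: a circle around the target vehicle,
# or a sector in front of it. Parameter names are illustrative assumptions.

def in_circular_range(vehicle_xy, point_xy, radius):
    """True when the point lies within `radius` of the vehicle."""
    dx, dy = point_xy[0] - vehicle_xy[0], point_xy[1] - vehicle_xy[1]
    return math.hypot(dx, dy) <= radius

def in_fan_range(vehicle_xy, heading_rad, point_xy, radius, half_angle_rad):
    """True when the point is within `radius` of the vehicle AND within
    `half_angle_rad` of the vehicle's heading (a forward-facing sector)."""
    dx, dy = point_xy[0] - vehicle_xy[0], point_xy[1] - vehicle_xy[1]
    if math.hypot(dx, dy) > radius:
        return False
    # smallest signed angle between the heading and the direction to the point
    diff = math.atan2(dy, dx) - heading_rad
    diff = (diff + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle_rad
```

A map tile or object would then be shown only when its position passes the chosen test for the current preset range.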
  • the point cloud is input into the pre-trained classification model to obtain the classification result of each object in the point cloud.
  • The first object may be an object whose classification result is a vehicle, and the non-first object may be an object whose classification result is something other than a vehicle, and/or an object with no classification result.
  • The first object may also be any object that the classification model can classify accurately, not limited to vehicles. For example, given other vehicles, pedestrians, bicycles, cyclists, traffic cones, and so on, the classification model can give clear classification results for vehicles, pedestrians, bicycles, and traffic cones, but cannot determine whether a cyclist is a bicycle or a pedestrian.
  • On the visualization interface, other vehicles around the target vehicle carry more reference meaning for autonomous driving, so the vehicle can be taken as the first object and displayed with a first object model that reflects the vehicle's size; for a non-first object, a second object model can be displayed on the visualization interface to express that the target vehicle has detected the non-first object.
  • the first object model may be a three-dimensional model of another vehicle detected by the target vehicle, and the three-dimensional model may be a solid model of another vehicle.
  • the first object model may be a frame model, and the frame model may be a three-dimensional rectangular frame.
  • The second object model may be a point cloud model: for the first object, a three-dimensional rectangular box matching the first object is displayed on the map, while for a non-first object its point cloud is displayed on the map. The second object model can also be a model containing at least a point cloud, i.e., a hybrid of a point cloud and a solid model, so that the non-first object is presented on the map directly in the form of a point cloud, or at least partly as a point cloud.
  • On the map, the first object model is displayed for the first object detected by the target vehicle, and the second object model is displayed for the non-first object detected by the target vehicle. This solves the problem that the classification model is inaccurate for, or unable to classify, non-first objects, causing the wrong model, or even no model, to be displayed for them on the visualization interface.
  • Since the accurately classified first object displays the first object model while the inaccurately classified or unclassifiable non-first object displays a second object model containing at least point cloud data, models can be displayed for both the first and non-first objects detected by the target vehicle without classifying the non-first objects. Displaying a second object model containing at least the point cloud reduces the amount of data rendered for the non-first object's model, or even avoids rendering a model for it, which improves the rendering speed of the model and the user experience while still displaying a model for every detected object.
  • FIG. 2 is a flowchart of a method for displaying a visualization interface provided in Embodiment 2 of this application. This embodiment is optimized on the basis of the aforementioned Embodiment 1. The method includes the following steps:
  • S201: Upon receiving a display request for the visualization interface, determine the driving task executed by the target vehicle.
  • The target vehicle can be an autonomous vehicle, and a driving task list can be established for the vehicle, which stores the time at which the vehicle performs each driving task.
  • From the driving task list, it is determined whether there is a driving task being executed at the current time; if there is, the task information of that driving task is further obtained.
  • The task information is preset and stored. It may include the start point and end point information of the target vehicle, and may also include information such as the target vehicle's path planning strategy from the start point to the end point and the display content of the visualization interface.
  • S202 Display a map within a preset range according to the real-time position of the target vehicle.
  • A map display mode option or a map display mode switching button can be provided on the visualization interface.
  • The map display mode can include a global mode and a local mode, where the global mode displays a map including the start and end points of the target vehicle, and the local mode displays a map within a preset range of the target vehicle's current location.
  • The map display mode can also be a 3D or 2D display mode, i.e., displaying a three-dimensional or a two-dimensional map; it can further be a third-party perspective or a driver's perspective, where the driver's perspective is the view from the driving position and the third-party perspective can be any perspective other than the target vehicle's, as shown in FIG. 3 for a map viewed from a third-party perspective.
  • the map may be a three-dimensional electronic map containing models of fixed objects such as buildings, roads, trees, etc., pre-generated based on semantic maps.
  • The map within the preset range may be determined according to the real-time location of the target vehicle and displayed on the visualization interface from the perspective selected by the user, where the preset range can be a circular range with a preset radius centered on the target vehicle's location, or a preset fan-shaped range in front of the target vehicle centered on the target vehicle's location.
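The display-mode options just described (global vs. local scope, 2D vs. 3D, driver vs. third-party perspective) naturally form a small configuration object. The enum names below are illustrative assumptions, not identifiers from the patent.

```python
from enum import Enum

# Illustrative configuration for the map display modes described above.
# All names are assumptions for the sketch.

class Scope(Enum):
    GLOBAL = "global"   # map covering the route's start and end points
    LOCAL = "local"     # map within a preset range of the current location

class Dimension(Enum):
    TWO_D = "2d"
    THREE_D = "3d"

class Perspective(Enum):
    DRIVER = "driver"        # view from the driving position
    THIRD_PARTY = "third"    # any viewpoint other than the target vehicle's

def describe(scope, dim, persp):
    """Compact string for the currently selected display mode."""
    return f"{scope.value}/{dim.value}/{persp.value}"
```

A mode-switching button would then simply swap one of the three enum values and re-render the map.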
  • S203 Display a vehicle model for the target vehicle on the map according to the real-time position of the target vehicle.
  • a vehicle model may be set for the target vehicle in advance, and the vehicle model may be a three-dimensional model of the target vehicle or a frame model or the like.
  • The vehicle model of the target vehicle can be displayed at the real-time location of the target vehicle on the map; for example, the vehicle model 10 of the target vehicle is displayed at the real-time location of the target vehicle on the map.
  • S204 Acquire environmental information detected when the target vehicle executes the driving task.
  • The target vehicle can be equipped with sensors such as a lidar, a millimeter-wave radar, a camera, and an infrared sensor, and can detect its surrounding environment through at least one of these sensors to obtain various sensor data as environmental information during driving.
  • For example, at least one lidar is installed on the target vehicle. While the target vehicle executes the driving task, the lidar emits laser signals, which are diffusely reflected by the objects in the scene around the target vehicle and returned to the lidar; the lidar then performs processing such as noise reduction and sampling on the received laser signals to obtain a point cloud as the environmental information.
  • The target vehicle can also capture images with the camera at a preset cycle and, based on the captured images combined with an image ranging algorithm, calculate the distance from each object in the image to the target vehicle as environmental information; alternatively, semantic segmentation can be performed on the image and the resulting semantic information used as environmental information.
  • For example, semantic segmentation of the image yields segmented regions such as traffic lights, vehicles, and pedestrians as environmental information.
  • The camera may be one of a monocular camera, a binocular camera, and a multi-lens camera.
  • S205 Identify the location and type of the first object in the environmental information.
  • The environmental information may include the point cloud obtained by the sensors, and a classification model may be pre-trained to classify the objects forming the point cloud.
  • For example, point clouds of various objects may be collected and annotated with the classification each object belongs to as training data; the trained classification model can then identify the classification of each object from an input point cloud.
  • The environmental information can also be images taken by the camera: images of various objects can be collected and annotated with their classifications as training data to train the classification model, which can then identify the classification of each object from an input image.
  • Alternatively, the environmental information can be radar data from a millimeter-wave radar, which can likewise be used to train the classification model.
  • The environmental information may include point clouds, images, radar data, and other sensor data, and a variety of sensor data may be used to train the classification model; the embodiments of the present application impose no restriction on which kind of data is used to train the classification model.
  • the object may be an object around the target vehicle.
  • For example, the objects may be other vehicles, pedestrians, bicycles, traffic lights, traffic cones, and so on around the target vehicle.
  • the first object may be a vehicle.
  • The point cloud can be input into the pre-trained classification model to identify objects of the vehicle type as first objects, and the position of each first object in the point cloud can be obtained through point cloud registration; this position can be relative to the target vehicle or in the world coordinate system. There may be one or more first objects, i.e., all vehicles around the target vehicle are identified from the point cloud and the location of each vehicle is determined.
  • S206 Acquire an adapted first object model according to the type of the first object.
  • a first object model may be set for the first object in advance, and the first object model may be a three-dimensional model of the first object, or a frame model representing the outline size of the first object.
  • For example, the first object model is a frame model 20.
  • The outline size of the first object can be determined from the point cloud, for example its length, width, and height; a frame model of fitting size is then found in a frame model library according to those dimensions and used as the first object model, so that first objects of different sizes display first object models of adapted sizes.
  • The first object can be divided into large vehicles and small vehicles according to its outline dimensions, where large vehicles can include trucks, buses, or other large construction vehicles, and small vehicles can include small passenger cars, vans, and the like.
  • The outline size of the first object determines the vehicle type of the first object model, so that the user can understand the types of vehicles around the target vehicle and decide whether to intervene manually. For example, when there are many trucks in a port or industrial area, the user can learn from the visualization interface that the target vehicle is driving on a road with many trucks, and thereby decide whether to switch from the automatic driving mode to a remote-control driving mode.
  • In another embodiment, the environmental information detected by the sensors on the target vehicle can be input into a pre-trained detection model, which outputs each object's classification result, position, size, orientation, speed, acceleration, and so on. When an object's classification result is a vehicle, the object is a first object, and the outline size data of the first object is input into a renderer to render a frame model as the first object model. A frame model rendered from outline size involves a small amount of data and a simple model, which increases the speed of obtaining the first object model.
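The size-adapted frame-model lookup described above can be sketched as follows: the object's length/width/height come from its point cloud bounding box, and the dimensions select both a frame model from a small library and the vehicle size class. The library contents and thresholds are illustrative assumptions, not values from the patent.

```python
# Sketch of selecting a fitting frame model by outline size. The library
# entries and the 5.5 m / 12 m length thresholds are illustrative.

FRAME_MODEL_LIBRARY = [
    # (max_length_m, model_name, vehicle_class)
    (5.5, "frame_small", "small_vehicle"),   # passenger cars, vans
    (12.0, "frame_large", "large_vehicle"),  # trucks, buses
]

def bounding_dimensions(points):
    """Axis-aligned length/width/height of a point cloud of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs)

def pick_frame_model(points):
    """Pick the smallest frame model whose length bound fits the object."""
    length, width, height = bounding_dimensions(points)
    for max_len, model, cls in FRAME_MODEL_LIBRARY:
        if length <= max_len:
            return model, cls, (length, width, height)
    return "frame_large", "large_vehicle", (length, width, height)
```

Because only the outline dimensions drive the choice, the box can be rendered without any detailed mesh, which is the speed advantage the text describes.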
  • S207 Display the first object model according to the position of the first object.
  • The first object model is displayed at the location of the first object on the map, so that the visualization interface shows the vehicle model of the target vehicle and the first object models of the first objects around it. For example, after determining the orientation of the first object model (e.g., the direction of the front of the car), the first object model can be displayed at the position of the first object according to that orientation, so that the heading of the vehicle is reflected on the first object model, and the visualization interface clearly shows whether each vehicle is driving in the same direction or in the opposite direction.
  • For example, a shape feature or mark of the front can be added at the front end of the frame model, and a shape feature or mark of the tail can be added at the rear end of the frame model.
  • The vehicle model 10 of the target vehicle and the first object models 20 of the first objects around the vehicle model 10 are displayed on the map.
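Orientation-aware placement of a frame model amounts to rotating the box's footprint by the object's heading (yaw) and translating it to the object's position, so the front edge of the box faces the direction of travel. This is a generic 2D rigid-transform sketch under assumed conventions, not the patent's renderer.

```python
import math

# Sketch of orientation-aware placement: rotate a length x width footprint
# by `yaw` and translate it to `position`. Corner ordering is an assumption:
# the first two returned corners form the front edge of the frame model.

def place_frame(position, yaw, length, width):
    """Return the 4 footprint corners of the frame model on the map."""
    half_l, half_w = length / 2, width / 2
    local = [( half_l,  half_w), ( half_l, -half_w),   # front edge
             (-half_l, -half_w), (-half_l,  half_w)]   # rear edge
    c, s = math.cos(yaw), math.sin(yaw)
    px, py = position
    # standard 2D rotation followed by translation
    return [(px + c * x - s * y, py + s * x + c * y) for x, y in local]
```

A front marker (as described above) would simply be drawn along the first two corners, so same-direction and opposite-direction vehicles are visually distinct.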
  • the second object may be an object other than the first object.
  • For example, if the first object is a vehicle, the second object may be a pedestrian, a bicycle, a telephone pole, a traffic cone, or another object other than a vehicle.
  • The environmental information may include point clouds, which may be input into a pre-trained classification model to identify objects whose type is not the type the first object belongs to, or that cannot be classified; such objects are taken as second objects, i.e., non-first objects, and the position of each second object in the point cloud is obtained through point cloud registration.
The environmental information can also include images taken by the camera, scan data from the millimeter-wave radar, etc. The environmental information can be input into a pre-trained detection model to obtain the classification result and position of each object, and objects classified differently from the first object can be used as second objects.
For example, when the environmental information includes the point cloud obtained by the lidar and the image taken by the camera, a target detection algorithm can be used to identify the second object in the image. After the camera and the lidar are jointly calibrated, the second object identified in the image is projected into the point cloud, so that the point cloud of the second object can be separated from the point cloud obtained by the lidar.
In one implementation, the sensors on the target vehicle obtain multiple frames of environmental information at a preset period, and the environmental information obtained in each period is stored in chronological order. The classification model identifies at least one second object from each frame of environmental information, the point clouds of all second objects are extracted from that frame, and the combined point clouds are input into a pre-trained point cloud separation model to separate the point cloud of each second object. The point cloud of each second object obtained from the multiple frames of environmental information is then smoothed, and the smoothed point cloud serves as the final point cloud of the second object.
The point cloud separation model can be trained on the point clouds of multiple second objects, so that it learns to separate the point cloud of each object from a combined point cloud of multiple objects.
Smoothing the point cloud may include point cloud preprocessing and point cloud smoothing, where the preprocessing may include removing outliers, noise points, distortion points, etc. The smoothing may be mean-filter smoothing: specifically, for each point in the point cloud of each second object, the average value of that point over the point clouds obtained from multiple frames of environmental information can be calculated, for example, the average of the three-dimensional coordinates of the point over two or more adjacent frames, and the average is taken as the smoothed result. The smoothing may also be median-filter smoothing, Gaussian-filter smoothing, etc.; the embodiment of the present application does not limit the smoothing method for the point cloud.
The embodiment of the present application first preprocesses the point cloud of the second object, which removes invalid points and noise points and improves the accuracy of the point cloud. Further, by smoothing the point cloud of the second object, a smooth point cloud is obtained, which achieves a good display effect when the point cloud of the second object is shown on the visualization interface.
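For illustration only (such code does not appear in the original disclosure), the multi-frame mean-filter smoothing described above might be sketched as follows, assuming each frame yields the same number of corresponding points for a given second object:

```python
def smooth_point_cloud(frames, window=3):
    """Mean-filter smoothing: average each point's 3-D coordinates over a
    sliding window of adjacent frames.

    frames: list of frames; each frame is a list of (x, y, z) tuples with
    point-to-point correspondence across frames.  Returns one smoothed
    frame (list of tuples) per input frame.
    """
    n_frames = len(frames)
    half = window // 2
    smoothed = []
    for i in range(n_frames):
        # Clamp the window at the sequence boundaries.
        lo, hi = max(0, i - half), min(n_frames, i + half + 1)
        frame = []
        for p in range(len(frames[i])):
            pts = [frames[f][p] for f in range(lo, hi)]
            # Coordinate-wise mean over the window.
            frame.append(tuple(sum(c) / len(pts) for c in zip(*pts)))
        smoothed.append(frame)
    return smoothed
```

A median or Gaussian filter would replace the coordinate-wise mean with the corresponding statistic, as the embodiment allows.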
S210. Display a second object model including at least the point cloud according to the position of the second object.
The second object model may be a point cloud model; that is, the point cloud model of the second object is displayed directly at the position of the second object on the map, as shown by the point cloud model 70 in FIG. 3. The embodiment of the present application does not need to explicitly classify the second object, nor match a model for it, which improves the display efficiency of the model of the second object.
Optionally, a display template preset for the second object may be obtained. The display template may include a solid modification model, and the point cloud of the second object is displayed on the modification model. Displaying the point cloud of the second object on the modification model includes scaling the point cloud of the second object so that its projection contour on the ground is enclosed by the projection contour of the modification model.
For example, when the modification model is a disc and the second object is a traffic cone, the point cloud corresponding to the traffic cone can be scaled and the scaled point cloud displayed on the disc.
Optionally, displaying the point cloud of the second object on the modification model includes: calculating the contour size of the point cloud, adjusting the size of the modification model according to the contour size, and displaying the point cloud in the adjusted modification model.
For example, the modification model may be a cylindrical space whose bottom is solid and whose upper part is transparent. The diameter of the cylindrical space can be adjusted according to the projection contour of the point cloud on the ground, and its height according to the height of the point cloud, so that the point cloud is contained in the cylindrical space. When the second object is a pedestrian, the pedestrian's point cloud is displayed on the solid bottom of the cylindrical space, so that the contour size of the pedestrian can be read from the outline of the cylinder on the visualization interface.
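As a rough sketch of sizing such a cylindrical modification model (illustrative only; the function name and parameterization are assumptions, not from the disclosure), the diameter can be taken from the point cloud's ground projection and the height from its vertical extent:

```python
import math

def fit_cylinder(points):
    """Size a cylindrical modification model so it encloses a point cloud.

    points: list of (x, y, z) tuples.
    Returns (center_x, center_y, diameter, height), where the diameter
    covers the ground projection of the cloud and the height spans its
    vertical extent.
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # Radius of the smallest center-anchored circle containing the
    # ground projection of all points.
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    height = max(p[2] for p in points) - min(p[2] for p in points)
    return cx, cy, 2 * radius, height
```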
In the embodiment of the present application, a map within a preset range is displayed according to the real-time position of the target vehicle, and the vehicle model of the target vehicle is displayed on the map. The position and type of the first object are identified from the environmental information, the first object model is matched according to the type of the first object and displayed on the map; the position and type of the second object are identified from the environmental information, the point cloud of the second object is extracted, and a second object model including the point cloud is displayed on the map. This solves the problem that, when the classification model classifies non-first objects inaccurately or cannot classify them, a wrong model, or even no model, is displayed for them in the visualization interface. A first object model is displayed for first objects that the target vehicle detects and classifies accurately, and a second object model containing the point cloud is displayed for second objects that are classified inaccurately or cannot be classified, so models are displayed for both the first and second objects detected by the target vehicle. There is no need to classify the second object; instead, a second object model containing at least the point cloud is displayed for the non-first object, which reduces the amount of data rendered for the non-first-object model, or even removes the need to render a model for it, improves model rendering speed, and thereby improves the user experience.
FIG. 4 is a flowchart of a display method for a visualization interface provided in Embodiment 3 of this application. This embodiment is optimized on the basis of the aforementioned Embodiment 1. The method specifically includes the following steps:

S301. Determine that the target vehicle executes a driving task.

S302. Display a map within a preset range according to the real-time position of the target vehicle.
The task progress information may be the progress information of the target vehicle executing the driving task, and may be at least one of a progress bar, a traveled distance, and an elapsed time. The progress bar can be generated from the traveled distance and the total distance, and the traveled distance can be counted by the odometer on the target vehicle.
The task progress information 30 is displayed on the visualization interface. It may include a progress bar expressing the execution progress of the driving task; it may also include the traveled distance, i.e., the distance traveled since the target vehicle began executing the driving task, and the elapsed time, i.e., the total time traveled since the target vehicle began executing the driving task. The task progress information can also be expressed in other forms such as a percentage; the display method of the progress information is not limited here.
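Purely as an illustration (these helper names are assumptions, not part of the disclosure), the progress bar and its textual companions could be derived from the odometer reading and the planned route length like this:

```python
def task_progress(traveled_m, total_m):
    # Fraction of the driving task completed, from the odometer's traveled
    # distance and the planned route's total distance, clamped to [0, 1]
    # so it can be rendered directly as a progress bar.
    return max(0.0, min(1.0, traveled_m / total_m))

def progress_label(traveled_m, total_m, elapsed_s):
    # Textual form of the task progress information: percentage,
    # traveled distance, and elapsed time.
    pct = task_progress(traveled_m, total_m) * 100
    return "%.0f%% | %.1f km | %d min" % (pct, traveled_m / 1000, elapsed_s // 60)
```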
The driving task can be a task in which the target vehicle travels from a designated starting point to a designated end point. The driving route is planned in real time in combination with the environmental information detected by the sensors on the target vehicle, and is displayed on the map; that is, a driving route from the starting point to the end point is planned according to the environmental information detected by the sensors. The driving route 50 can be displayed as a light strip along the driving direction of the target vehicle, so that it is clearly distinguished from road markings such as zebra crossings and lane lines on the map, which helps users identify the driving route on the map.
The first object may be a vehicle detected by the target vehicle, and whether to highlight the first object model of the first object can be determined according to the degree of interference of the first object with the driving of the target vehicle. The vehicles detected by the target vehicle may be vehicles around it, and the interference degree may be determined for vehicles detected within a preset range around the target vehicle. For example, the target vehicle detects vehicles in a circular area of preset radius centered on itself and obtains the distance between each vehicle in the circular area and the target vehicle; when the distance is less than a preset threshold, the vehicle is determined to be an interfering vehicle, and the model of the interfering vehicle in the circular area can be highlighted, that is, the first object model is highlighted.
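A minimal sketch of this interfering-vehicle check, under the assumption that vehicle positions are 2-D map coordinates (the function and parameter names are illustrative, not from the disclosure):

```python
import math

def interfering_vehicles(ego_pos, vehicles, radius, threshold):
    """Flag vehicles for highlighting.

    ego_pos: (x, y) of the target vehicle.
    vehicles: mapping of vehicle id -> (x, y) position.
    A vehicle inside the circular detection area whose distance to the
    target vehicle falls below the preset threshold is flagged.
    """
    flagged = []
    for vid, pos in vehicles.items():
        d = math.dist(ego_pos, pos)
        if d <= radius and d < threshold:
            flagged.append(vid)
    return flagged
```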
For example, when the vehicle in front of the target vehicle brakes sharply or its speed decreases, the distance between the vehicle in front and the target vehicle decreases. When that distance is less than the preset threshold, it indicates that the vehicle in front is on the driving route of the target vehicle at a distance below the threshold, and its first object model is highlighted to warn the user that the vehicle interferes with the driving of the target vehicle. As another example, when the vehicle beside the target vehicle changes lanes and approaches the target vehicle, and the distance between them is less than the preset threshold, a collision may occur if the target vehicle keeps driving in its current direction; the first object model of the nearby vehicle can then be highlighted to warn that it interferes with the normal driving of the target vehicle.
The embodiment of the present application can highlight the interfering vehicles in a circular area of preset radius centered on the target vehicle, so that the user can perform manual supervision or manual intervention in time, improving the driving safety of the target vehicle.
When the target vehicle needs to change lanes, the distance from each surrounding vehicle to the target vehicle can be calculated. If the distance is less than the preset threshold, the first object model of that surrounding vehicle is highlighted to warn that a surrounding vehicle interferes with the lane change, so that the user can perform manual supervision or intervention in time, improving the driving safety of the target vehicle. Optionally, the brightness of the highlighted first object model can be determined according to the degree of interference; for example, the highlight color is graded by distance. When the highlight color is red, the smaller the distance, the deeper the red, and the larger the distance, the lighter the red, so that the user can judge the degree of interference of surrounding vehicles with the target vehicle from the depth of the highlight color.
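The distance-graded red highlight described above could be sketched as follows (illustrative only; the linear mapping and RGB encoding are assumptions):

```python
def highlight_color(distance, threshold):
    """Map an interfering vehicle's distance to a red highlight color.

    Returns an (r, g, b) tuple whose red channel deepens as the vehicle
    gets closer, or None when the vehicle is outside the threshold and
    needs no highlight.
    """
    if distance >= threshold:
        return None
    intensity = 1.0 - distance / threshold   # 1.0 at zero distance
    return (int(255 * intensity), 0, 0)
```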
S307. Display traffic light information, where the traffic light information is used to indicate the status of the traffic light detected by the target vehicle.
A camera is installed on the target vehicle. An image of the traffic light at the intersection that the target vehicle needs to pass can be captured by the camera, the image can be recognized to obtain the status of the traffic light, and that status can be displayed on the virtual traffic light of the visualization interface.
The traffic light information 60 can be displayed in the upper right corner of the visualization interface.
When there are multiple traffic lights, the target traffic light can be determined from them according to the position and driving route of the target vehicle, and its status displayed. For example, when the next segment of the target vehicle's route continues straight from the current position, the traffic light ahead of the target vehicle is used as the target traffic light, and its status is recognized and displayed on the visualization interface; when the next segment is a turn, the traffic light in the turning direction of the target vehicle is used as the target traffic light, and its status is recognized and displayed on the visualization interface. Determining a target traffic light from multiple traffic lights avoids recognizing the status of every light, reduces the amount of data for image recognition, improves the display speed of traffic light information, and reduces the number of traffic lights displayed on the visualization interface, making the interface more concise.
Optionally, the status of the pedestrian traffic light can be determined first, and the status of the traffic light in front of the target vehicle inferred from it. Specifically, an image of the pedestrian traffic lights at either end of the zebra crossing is acquired and recognized to obtain the pedestrian light status, from which the status of the traffic light directing the target vehicle is determined. When the pedestrian light is green, the traffic light in front of the target vehicle is determined to be red; when the pedestrian light is red, the traffic light in front of the target vehicle is determined to be green. In this way the traffic light information can be displayed in advance, and when the front traffic light is blocked by the vehicle ahead so that the camera cannot capture its image, the traffic light information can still be determined from the adjacent pedestrian traffic light.
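The complementary-light inference above amounts to a small lookup (a sketch under the simplifying assumption that the pedestrian and vehicle lights are strictly complementary; real intersections with turn arrows or flashing phases would need more states):

```python
def infer_vehicle_light(pedestrian_light):
    """Infer the vehicle-facing light status from the pedestrian light
    on the zebra crossing, assuming the two phases are complementary."""
    if pedestrian_light == "green":
        return "red"     # pedestrians crossing -> vehicles must stop
    if pedestrian_light == "red":
        return "green"   # pedestrians stopped -> vehicles may proceed
    return "unknown"     # e.g. flashing/amber phases are not inferred
```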
The navigation information may include the driving speed of the target vehicle, the distance from the target vehicle to the destination, turn reminder information for the driving route, lane-change reminder information during driving, etc., and the navigation information may be displayed on the visualization interface.
For example, the turn reminder information can be displayed on the visualization interface as a turn mark together with the distance from the target vehicle to the turning position; the driving speed can be displayed as text or as a virtual speedometer; and the lane-change reminder information can be broadcast by voice through the speaker. The navigation information 40 includes the turn reminder information of the driving route and the driving speed of the target vehicle.
Optionally, the sensors on the target vehicle can also sense the light intensity of the surrounding environment and adjust the display mode of the visualization interface accordingly. The display mode can include a night mode and a day mode. Whether it is day or night can also be determined from the current time, so that the interface switches between night mode and day mode and is displayed according to the light intensity of the environment, improving the viewing comfort of the human eye.
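A minimal sketch of this day/night selection (the 50-lux threshold and the 7:00-19:00 day window are illustrative assumptions, not values from the disclosure):

```python
from datetime import time

def display_mode(lux=None, now=None, lux_threshold=50.0):
    """Pick the visualization display mode from ambient light when a
    light-sensor reading is available, falling back to the clock."""
    if lux is not None:
        return "day" if lux >= lux_threshold else "night"
    if now is not None:
        return "day" if time(7, 0) <= now < time(19, 0) else "night"
    return "day"   # default when neither signal is available
```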
In the embodiment of the present application, the first object model is displayed on the map for the first object detected by the target vehicle, and a second object model containing at least point cloud data is displayed for the detected non-first object. This solves the problem that inaccurate or impossible classification of non-first objects by the classification model causes wrong models, or even no models, to be displayed for them in the visualization interface: the first object model is displayed for first objects that the target vehicle detects and classifies accurately, and the second object model for inaccurately classified or unclassifiable non-first objects. Models can thus be displayed for both the first and non-first objects detected by the target vehicle, without classifying the non-first objects; instead, a second object model containing at least the point cloud is displayed for them, which reduces the amount of data rendered for non-first-object models, or even removes the need to render them, improving rendering speed while still displaying a model for every detected object, and improving the user experience. Further, the driving route, traffic light information, and navigation information are displayed for the target vehicle on the visualization interface to realize the visualization of driving data, and the first object model of an interfering first object is highlighted to warn the user that the first object obstructs the driving of the target vehicle, which helps the user perform manual supervision or manual intervention in time and improves the driving safety of the target vehicle.
FIG. 5 is a schematic structural diagram of a display apparatus for a visualization interface provided in Embodiment 4 of this application. The apparatus may specifically include the following modules: a driving task determination module 401, used to determine that the target vehicle executes a driving task; a map display module 402, used to display a map within a preset range according to the real-time position of the target vehicle; and an object model display module 403, used to display object models on the map, wherein a first object model is displayed for the first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for the non-first object detected by the target vehicle.
Optionally, the apparatus further includes: a vehicle model display module, used to display a vehicle model for the target vehicle on the map according to the real-time position of the target vehicle.
Optionally, the object model display module 403 includes: a point cloud acquisition sub-module, used to acquire the environmental information detected by the target vehicle while executing the driving task; a first object recognition sub-module, used to identify the position and type of the first object in the environmental information; a first object model matching sub-module, used to obtain the adapted first object model according to the type of the first object; and a first object model display sub-module, used to display the first object model according to the position of the first object.
Optionally, the object model display module 403 includes: a point cloud acquisition sub-module, used to acquire the environmental information detected by the target vehicle while executing the driving task; a second object recognition sub-module, used to identify the position and type of the second object in the environmental information; a point cloud extraction sub-module, used to extract the point cloud of the second object from the environmental information; and a second object model display sub-module, used to display a second object model including at least the point cloud according to the position of the second object.
Optionally, the apparatus further includes: a task progress information display module, configured to display task progress information of the target vehicle executing the driving task, wherein the task progress information includes at least one of a progress bar, a traveled distance, and an elapsed time.
Optionally, the apparatus further includes: an information display module, used to display the driving route generated for the target vehicle on the map; and/or a traffic light information display module, used to display traffic light information indicating the status of the traffic light detected by the target vehicle; and/or a navigation information display module, used to display the navigation information generated for the target vehicle.
Optionally, the apparatus further includes: a highlight display module, configured to highlight the first object model when the first object model is on the driving route.
Optionally, the navigation information display module includes: a speed display sub-module, used to display the speed of the target vehicle while executing the driving task; and a distance display sub-module, used to display the distance from the target vehicle to the destination.
The display apparatus for a visualization interface provided by the embodiment of the present application can execute the display method for a visualization interface provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method.
The device may specifically include: a processor 500, a memory 501, a display screen 502 with a touch function, an input device 503, an output device 504, and a communication device 505. The number of processors 500 in the device may be one or more, and one processor 500 is taken as an example in FIG. 6; likewise, the number of memories 501 may be one or more, and one memory 501 is taken as an example. The processor 500, the memory 501, the display screen 502, the input device 503, the output device 504, and the communication device 505 of the device may be connected through a bus or in other ways; in FIG. 6, connection through a bus is taken as an example.
The memory 501 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the display method for a visualization interface described in any embodiment of the present application (for example, the modules of the above-mentioned display apparatus for a visualization interface). The memory 501 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system and an application required by at least one function, and the storage data area may store data created according to the use of the device, etc.
In addition, the memory 501 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. The memory 501 may further include memories arranged remotely relative to the processor 500, and these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
The display screen 502 is a display screen with a touch function, which may be a capacitive screen, an electromagnetic screen, or an infrared screen. The display screen 502 is used to display data according to instructions of the processor 500, and is also used to receive touch operations on the display screen 502 and send the corresponding signals to the processor 500 or other devices. If the display screen 502 is an infrared screen, it also includes an infrared touch frame arranged around the display screen 502, which can also be used to receive infrared signals and send them to the processor 500 or other devices.
The communication device 505 is used to establish a communication connection with other devices, and may be a wired and/or wireless communication device. The input device 503 can be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the device, and the output device 504 may include audio equipment such as a speaker. It should be noted that the specific composition of the input device 503 and the output device 504 can be set according to actual conditions.
The processor 500 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 501, thereby realizing the above display method for a visualization interface. Specifically, when the processor 500 executes one or more programs stored in the memory 501, the steps of the display method for a visualization interface provided in the embodiments of the present application are implemented.
Embodiment 6 of the present application also provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the display method for a visualization interface in any embodiment of the present application can be realized. The method may specifically include: determining that a target vehicle executes a driving task; displaying a map within a preset range according to the real-time position of the target vehicle; and displaying object models on the map, wherein a first object model is displayed for the first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for the non-first object detected by the target vehicle.
An embodiment of the application provides a storage medium containing computer-executable instructions. The computer-executable instructions are not limited to the method operations described above, and can also execute related operations in the display method for a visualization interface provided by any embodiment of the application and applied to the device.
This application can be implemented by software and necessary general-purpose hardware, or by hardware. The technical solution of this application can essentially be embodied in the form of a software product: the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes several instructions to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute the display method for a visualization interface of the embodiments of this application.
The units and modules included are only divided according to functional logic but are not limited to the above division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not used to limit the protection scope of this application.


Abstract

A display method and apparatus for a visualization interface, an electronic device, and a storage medium. The display method includes: determining that a target vehicle executes a driving task (S101); displaying a map within a preset range according to the real-time position of the target vehicle (S102); and displaying, on the map, a first object model for a first object detected by the target vehicle, and a second object model including at least point cloud data for a detected non-first object (S103).

Description

Display method and apparatus for a visualization interface, electronic device, and storage medium
This application claims priority to Chinese patent application No. 202010408219.0, filed with the Chinese Patent Office on May 14, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of autonomous driving, for example, to a display method and apparatus for a visualization interface, an electronic device, and a storage medium.
Background
With the development of technology, autonomous driving is in a key period of vigorous development. To improve the user experience, the driving environment can be visualized so that the vehicle's surroundings are displayed in real time.
In the related art, when the driving environment is displayed through a visualization interface, an electronic map is shown on the interface, point cloud data of the vehicle's surroundings is obtained by sensors on the vehicle, and the point cloud data is input into a pre-trained classification model to obtain classification results for the objects around the vehicle; for example, the classification results may be vehicles, pedestrians, bicycles, traffic cones, etc. Corresponding models are then matched according to the classification results and displayed on the map to realize a visual display of the driving environment.
In practical applications, due to limited point cloud acquisition precision or classification model accuracy, the classification model classifies certain objects (e.g., vehicles) accurately but classifies other objects (e.g., objects other than vehicles) inaccurately, or cannot classify them at all. As a result, wrong models, or even no models, are displayed in the visualization interface for inaccurately classified objects, which undoubtedly degrades the user experience.
Summary
This application provides a display method and apparatus for a visualization interface, an electronic device, and a storage medium, to solve the problem in the related art that inaccurate object classification causes wrong models, or no models, to be displayed for such objects in the visualization interface, degrading the user experience.
In a first aspect, an embodiment of this application provides a display method for a visualization interface, including:
determining that a target vehicle executes a driving task;
displaying a map within a preset range according to the real-time position of the target vehicle;
displaying object models on the map, wherein a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle.
Optionally, the method further includes:
displaying a vehicle model for the target vehicle on the map according to the real-time position of the target vehicle.
Optionally, displaying the first object model for the first object detected by the target vehicle includes:
acquiring environmental information detected by the target vehicle while executing the driving task;
identifying the position and type of the first object in the environmental information;
obtaining an adapted first object model according to the type of the first object;
displaying the first object model according to the position of the first object.
Optionally, displaying the second object model including at least point cloud data for the non-first object detected by the target vehicle includes:
acquiring environmental information detected by the target vehicle while executing the driving task;
identifying the position and type of the second object in the environmental information;
extracting the point cloud of the second object from the environmental information;
displaying a second object model including at least the point cloud according to the position of the second object.
Optionally, the method further includes:
displaying task progress information of the target vehicle executing the driving task, wherein the task progress information includes at least one of a progress bar, a traveled distance, and an elapsed time.
Optionally, the method further includes:
displaying, on the map, a driving route generated for the target vehicle; and/or,
displaying traffic light information, the traffic light information indicating the status of a traffic light detected by the target vehicle; and/or,
displaying navigation information generated for the target vehicle.
Optionally, the method further includes:
highlighting the first object model when the first object model is on the driving route.
Optionally, displaying the navigation information generated for the target vehicle includes:
displaying the speed of the target vehicle while executing the driving task;
and/or,
displaying the distance from the target vehicle to the destination.
In a second aspect, an embodiment of this application further provides a display apparatus for a visualization interface, including:
a driving task determination module, configured to determine that a target vehicle executes a driving task;
a map display module, configured to display a map within a preset range according to the real-time position of the target vehicle;
an object model display module, configured to display object models on the map, wherein a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle.
In a third aspect, an embodiment of this application further provides an electronic device, including:
one or more processors;
a memory, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the display method for a visualization interface provided by the embodiments of this application.
In a fourth aspect, an embodiment of this application further provides a computer-readable storage medium on which a computer program is stored, wherein, when executed by a processor, the computer program implements the display method for a visualization interface provided by the embodiments of this application.
In the embodiments of this application, when it is determined that the target vehicle executes a driving task, a map within a preset range is displayed according to the real-time position of the target vehicle, and object models are displayed on the map: a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a detected non-first object. This solves the problem that the classification model classifies non-first objects inaccurately or cannot classify them, causing wrong models, or even no models, to be displayed for non-first objects in the visualization interface. A first object model can be displayed for first objects that the target vehicle detects and classifies accurately, while a second object model including at least point cloud data is displayed for inaccurately classified or unclassifiable non-first objects. Models are thus displayed for both the first and non-first objects detected by the target vehicle, without classifying the non-first objects; displaying a second object model containing at least the point cloud reduces the amount of data rendered for non-first-object models, or even removes the need to render a model for them, so that models are displayed for detected objects while model rendering speed and the user experience are improved.
Brief Description of the Drawings
FIG. 1 is a flowchart of a display method for a visualization interface provided in Embodiment 1 of this application;
FIG. 2 is a flowchart of a display method for a visualization interface provided in Embodiment 2 of this application;
FIG. 3 is a schematic diagram of the visualization interface of an embodiment of this application;
FIG. 4 is a flowchart of a display method for a visualization interface provided in Embodiment 3 of this application;
FIG. 5 is a schematic structural diagram of a display apparatus for a visualization interface provided in Embodiment 4 of this application;
FIG. 6 is a schematic structural diagram of an electronic device provided in Embodiment 5 of this application.
Detailed Description
This application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to this application rather than the entire structure.
Embodiment 1
FIG. 1 is a flowchart of a display method for a visualization interface provided in Embodiment 1 of this application. This embodiment is applicable to displaying the driving environment on a visualization interface. The method can be executed by a display apparatus for a visualization interface, which can be implemented by software and/or hardware and configured in the electronic device provided by the embodiments of this application. The method specifically includes the following steps:
S101. Determine that a target vehicle executes a driving task.
The target vehicle may be an autonomous vehicle. An autonomous vehicle (also called a driverless car, self-driving car, or robotic car) can perceive its environment and navigate without human input. An autonomous vehicle may be equipped with a high-precision GPS navigation system and a lidar for detecting obstacles, and may also be configured to perceive its surroundings using technologies such as cameras, radar, light detection and ranging (LIDAR), GPS, and other sensors, and display the surroundings in the visualization interface.
In autonomous driving mode, the autonomous driving program module can control the steering wheel, accelerator, brakes, and other devices of the autonomous vehicle, so that the vehicle drives automatically without human intervention.
The embodiment of this application can determine whether the target vehicle is executing a driving task. When it is determined that the target vehicle executes a driving task, task information of the driving task can further be acquired. The task information may include the starting point and end point of the target vehicle's trip, and may also include the route planning strategy from the starting point to the end point, the display content of the visualization interface, and other information. The starting point and end point information may be their coordinates; the route planning strategy may be, for example, shortest time, shortest route, or lowest cost; and the display content of the visualization interface may be user-customized content to display, for example, a task progress bar, elapsed time, map display mode, driving speed, etc.
After the task information is acquired, a global map containing the starting point and end point can be obtained from their information and displayed according to the map display mode; a driving route from the starting point to the end point is planned according to the route planning strategy and displayed on the map, and other required content is displayed on the visualization interface at the same time.
S102、根据所述目标车辆实时位置显示预设范围内地图。
本申请实施例中,地图可以是根据语义地图预先生成的包含建筑、路面、 树木等固定对象的模型的三维电子地图,当确定目标车辆执行行驶任务时,可以通过目标车辆上安装的定位***获取目标车辆自身的位置,可选地,可通过GPS定位***获取目标车辆的位置,或者通过目标车辆上的激光雷达扫描周围环境获得点云数据后,将该点云数据与预先生成的点云地图匹配确定目标车辆的位置,或者通过其他定位传感器获取目标车辆的实时位置,本申请实施例对获取目标车辆的实时位置的方式不加以限制。
在获取到目标车辆的实时位置后,可以根据该位置从电子地图数据库中调取包含该实时位置的、预设范围内的地图显示在可视化界面中,可选地,预设范围可以是以目标车辆的位置为中心的预设半径的圆形范围,还可以是目标车辆的位置为中心的、目标车辆前方预设的扇形范围。
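The circular and sector-shaped preset ranges described above reduce to a little planar geometry. Below is a minimal sketch under assumed conventions (2-D map coordinates, heading in radians measured as `math.atan2` does); the function names are illustrative, not taken from the embodiments:

```python
import math

def in_circular_range(obj_xy, vehicle_xy, radius):
    """True if the object lies within a circle of `radius` centred on the vehicle."""
    dx, dy = obj_xy[0] - vehicle_xy[0], obj_xy[1] - vehicle_xy[1]
    return math.hypot(dx, dy) <= radius

def in_sector_range(obj_xy, vehicle_xy, heading, radius, half_angle):
    """True if the object lies within a forward sector of the vehicle.

    `heading` is the vehicle's travel direction in radians; `half_angle`
    is half the sector's opening angle.
    """
    dx, dy = obj_xy[0] - vehicle_xy[0], obj_xy[1] - vehicle_xy[1]
    if math.hypot(dx, dy) > radius:
        return False
    # Angle between the vehicle's heading and the direction to the object.
    diff = math.atan2(dy, dx) - heading
    diff = (diff + math.pi) % (2 * math.pi) - math.pi  # normalise to [-pi, pi]
    return abs(diff) <= half_angle
```

Either predicate can be used to decide which portion of the electronic map to retrieve around the vehicle's real-time position.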
S103: Display object models on the map, where a first object model is displayed for each first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for each non-first object detected by the target vehicle.
In this embodiment, after the target vehicle obtains a point cloud of its surroundings through the lidar, the point cloud is fed into a pre-trained classification model to obtain the classification result of each object in it. A first object may be an object classified as a vehicle; a non-first object may be an object classified as something other than a vehicle, and/or an object with no classification result. In practice, a first object may be any object the classification model can classify accurately, not just a vehicle. For example, the target vehicle may be surrounded by other vehicles, pedestrians, bicycles, cyclists, traffic cones, and so on. The classification model can give clear results for vehicles, pedestrians, bicycles, and traffic cones, but cannot decide whether a person riding a bicycle should be classified as a bicycle or as a pedestrian. Moreover, on the visual interface the other vehicles around the target vehicle are the most relevant references for autonomous driving, so vehicles can be treated as first objects and shown with a first object model that reflects their size, while non-first objects are shown with a second object model to indicate that the target vehicle has detected them.
The first object model may be a three-dimensional model of another detected vehicle, such as a solid model of that vehicle. Optionally, the first object model is a frame model, which may be a three-dimensional rectangular box, and the second object model may be a point cloud model. That is, for a first object, a three-dimensional rectangular box matched to it is displayed on the map; for a non-first object, its point cloud is displayed. The second object model may also be a model containing at least the point cloud, i.e., a hybrid of point cloud and solid model. Because a non-first object is presented on the map directly in point cloud form, or at least partly in point cloud form, no model of it needs to be simulated from data, which reduces the amount of data rendered for non-first object models, or even removes the need to render a model for them, and so speeds up model rendering.
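The dispatch between the two model types can be summarised as follows. This is a sketch of the selection logic only; the class names and the "vehicle" label are assumptions, since the embodiments leave the classifier's label set unspecified:

```python
from dataclasses import dataclass

@dataclass
class BoundingBoxModel:
    """First object model: a 3-D rectangular frame reflecting the vehicle's size."""
    length: float
    width: float
    height: float

@dataclass
class PointCloudModel:
    """Second object model: the raw (or partly raw) detected points."""
    points: list  # [(x, y, z), ...]

def choose_display_model(label, size_lwh, points):
    """Pick a display model for one detected object.

    `label` is the classifier output, or None when classification failed.
    Vehicles get a bounding-box model; everything else keeps its point
    cloud, so no mesh has to be fitted or rendered for it.
    """
    if label == "vehicle":
        return BoundingBoxModel(*size_lwh)
    return PointCloudModel(points)
```

An unclassifiable object (label `None`) falls through to the point cloud branch, which is exactly the behaviour the embodiment relies on.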
When the target vehicle is executing a driving task, this embodiment displays on the map a first object model for each first object detected by the target vehicle and a second object model for each non-first object. This solves the problem that a classification model may classify non-first objects inaccurately, or fail to classify them, causing the visual interface to display a wrong model for a non-first object or no model at all. A first object model can be displayed for the accurately classified first objects, and a second object model containing at least point cloud data for the inaccurately classified or unclassifiable non-first objects. Models are displayed for both kinds of detected objects, yet the non-first objects need not be classified; they are shown with a model containing at least the point cloud, which reduces the amount of data rendered for their models, or even removes the need to render a model for them, so that models are displayed for all detected objects while rendering speed and the user experience improve.
Embodiment 2
FIG. 2 is a flowchart of a display method for a visual interface provided by Embodiment 2 of the present application. This embodiment is an optimization based on Embodiment 1 and specifically includes the following steps:
S201: Determine that a target vehicle is executing a driving task.
In this embodiment, when a display request for the visual interface is received, the driving task being executed by the target vehicle is determined. Specifically, the target vehicle may be an autonomous vehicle for which a driving task list is maintained; the list stores the times at which the vehicle executes each driving task. When a display request for the visual interface is received, it is determined whether the list contains a driving task being executed at the current time. If so, the task information of that driving task is obtained. The task information is preset, stored information; for example, it may include the start and end points of the trip, the route planning strategy from start to end, and the content to be shown on the visual interface.
S202: Display a map within a preset range according to the real-time position of the target vehicle.
The visual interface may offer a map display mode option or a map display mode toggle button. The map display modes may include a global mode and a local mode: global mode displays a map containing the target vehicle's start and end points, while local mode displays the map within a preset range around the vehicle's current position. Of course, the display mode may also be 3D or 2D, i.e., a three-dimensional or a two-dimensional map, and it may be a third-person view or a driver view, the driver view being the view from the driver's seat and the third-person view being a view from outside the target vehicle; FIG. 3 shows the map viewed from the third-person view.
In this embodiment, the map may be a three-dimensional electronic map pre-generated from a semantic map and containing models of fixed objects such as buildings, road surfaces, and trees. When the user chooses to display a 3D local map, the map within the preset range can be determined from the target vehicle's real-time position and displayed on the visual interface from the user-selected view, where the preset range may be a circular area of preset radius centered on the target vehicle's position, or a preset sector in front of the vehicle centered on its position.
S203: Display a vehicle model for the target vehicle on the map according to its real-time position.
In this embodiment, a vehicle model can be preset for the target vehicle; it may be a three-dimensional model of the target vehicle, a frame model, or similar. After the map within the preset range is displayed according to the target vehicle's real-time position, the vehicle model can be displayed at that position on the map. For example, as shown in FIG. 3, the vehicle model 10 of the target vehicle is displayed at the vehicle's real-time position on the map.
S204: Obtain the environment information detected by the target vehicle while it executes the driving task.
Specifically, sensors such as lidar, millimeter-wave radar, cameras, and infrared sensors may be installed on the target vehicle, and while driving, the vehicle can use at least one of these sensors to detect its surroundings and obtain various sensing data as environment information. For example, at least one lidar is installed on the target vehicle. While the vehicle executes the driving task, the lidar emits laser signals, which are diffusely reflected by the objects in the surrounding scene and returned to the lidar; the lidar denoises and samples the received signals to obtain a point cloud as environment information.
The target vehicle may also capture images with a camera at a preset period and then, as environment information, compute the distance from each object in the image to the vehicle using an image ranging algorithm, or semantically segment the image to obtain semantic information, for example segmentation regions for traffic lights, vehicles, and pedestrians. The camera may be a monocular, binocular, or multi-view camera.
S205: Identify the position and type of the first object in the environment information.
In this embodiment, the environment information may include a point cloud obtained through the sensors, and a classification model can be pre-trained to classify the objects that form the point cloud. For example, point clouds of various objects can be collected and labeled with the objects' classes as training data; once trained, the classification model can identify the class of each object from an input point cloud.
Of course, the environment information may also be images captured by the camera: images of various objects can be collected and labeled with their classes to train the classification model, which then identifies object classes from input images. The environment information may likewise be radar data from a millimeter-wave radar, which can be used to train the classification model. Optionally, the environment information may include multiple kinds of sensing data, such as point clouds, images, and radar data, all of which may be used for training; this embodiment does not limit which data are used to train the classification model.
In this embodiment, an object may be a physical thing around the target vehicle, for example another vehicle, a pedestrian, a bicycle, a traffic light, or a traffic cone. Optionally, the first object is a vehicle. When the environment information is a point cloud, the point cloud can be input into the pre-trained classification model to identify the objects of type vehicle as first objects, and the position of each first object in the point cloud can be obtained through point cloud registration. The position may be relative to the target vehicle or in the world coordinate system. There may be one or more first objects, i.e., all vehicles around the target vehicle are identified from the point cloud and their positions determined.
S206: Obtain a matching first object model according to the type of the first object.
In an optional embodiment of the present application, a first object model can be preset for the first object; it may be a three-dimensional model of the first object or a frame model representing its outline dimensions. As shown in FIG. 3, the model of the first object is the frame model 20. In practice, the outline dimensions of the first object, such as its length, width, and height, can be determined from the point cloud, and a frame model of matching size can then be looked up in a frame model library as the first object model, so that first objects of different sizes are shown with models of matching size. Further, first objects can be divided by outline dimensions into large vehicles, which may include trucks, buses, or other large engineering vehicles, and small vehicles, which may include passenger cars, vans, and the like, so that the vehicle type of the first object model is determined from the first object's outline dimensions. The user can thus see what kinds of vehicles surround the target vehicle and decide whether to intervene manually: for example, when the vehicle drives through a port or industrial area with many trucks, the user can tell this from the visual interface and decide whether to switch from autonomous driving mode to remote-controlled driving mode.
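The dimension-based lookup and the large/small split can be sketched as follows; the threshold values and the library entries are illustrative assumptions, not values given by the embodiments:

```python
def classify_vehicle_size(length, width, height,
                          large_threshold=(6.0, 2.2, 2.5)):
    """Coarse large/small split from outline dimensions in metres.

    A vehicle exceeding any of the (length, width, height) thresholds
    counts as large; the thresholds here are illustrative.
    """
    lt = large_threshold
    return "large" if (length >= lt[0] or width >= lt[1] or height >= lt[2]) else "small"

def match_frame_model(length, width, height, model_library):
    """Pick the library frame model whose dimensions best fit the detected
    outline (smallest summed absolute difference)."""
    return min(model_library,
               key=lambda m: abs(m["length"] - length)
                           + abs(m["width"] - width)
                           + abs(m["height"] - height))
```

A frame model library would be a list of dictionaries with `length`/`width`/`height` entries, one per prepared model size.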
In another optional embodiment of the present application, the environment information detected by the sensors on the target vehicle can be input into a pre-trained detection model, which outputs each object's classification result, position, outline dimensions, orientation, speed, acceleration, and so on. When an object is classified as a vehicle, it is a first object, and its outline dimension data are input into a renderer to render a frame model as the first object model. Rendering a frame model from outline dimensions involves little data and a simple model, which speeds up obtaining the first object model.
S207: Display the first object model according to the position of the first object.
Specifically, the first object model is displayed on the map at the position of the first object, so that the visual interface shows the target vehicle's vehicle model together with the first object models of the first objects around it. For example, once the orientation of the first object model is determined, such as the direction of the vehicle's front, the first object model can be displayed at the first object's position with that orientation; the model then reflects which way the vehicle's front is pointing, so the visual interface makes clear whether the vehicle is traveling in the same or the opposite direction. Concretely, a front-end shape feature or marker can be added to the end of the frame model where the vehicle's front is, and a rear-end shape feature or marker to the end where its rear is. As shown in FIG. 3, the map displays the vehicle model 10 of the target vehicle and the first object models 20 of the first objects around it.
S208: Identify the position and type of the second object in the environment information.
In this embodiment, a second object may be any object other than a first object; optionally, the first object is a vehicle and the second objects are pedestrians, bicycles, utility poles, traffic cones, and so on outside the vehicle class. The environment information may include a point cloud, which can be input into the pre-trained classification model to identify objects whose type is not that of the first object, or that cannot be classified; these are taken as second objects, i.e., non-first objects, and their positions in the point cloud are obtained through point cloud registration. Of course, the environment information may also include camera images, millimeter-wave radar scan data, and so on, which can be input into a pre-trained detection model to obtain the classification result and position of each object; objects classified differently from the first object can be taken as second objects.
S209: Extract the point cloud of the second object from the environment information.
In one embodiment of the present application, the environment information includes the point cloud obtained by the lidar and the images captured by the camera. An object detection algorithm can identify the second objects in an image, and after the camera and lidar are jointly calibrated, the second objects identified in the image are projected into the point cloud, so that the point clouds of the second objects can be separated from the lidar point cloud.
In another embodiment of the present application, the sensors on the target vehicle (camera, lidar, millimeter-wave radar, etc.) acquire multiple frames of environment information at a preset period, and the environment information acquired in each period is stored in a queue in chronological order. Each frame of environment information is read from the queue and input into the classification model to identify at least one second object in it, and the point clouds of all second objects are extracted from the frame. The point clouds of all second objects are then input into a pre-trained point cloud separation model to separate out the point cloud of each second object, the point cloud of each second object obtained across the multiple frames is smoothed, and the smoothed point cloud is taken as the second object's final point cloud. The point cloud separation model can be trained on the point clouds of multiple second objects, so that it can separate each object's point cloud from the point clouds of multiple objects.
In this embodiment, smoothing the point cloud may include point cloud preprocessing and point cloud smoothing. Preprocessing may remove outliers, noise points, and distorted points; smoothing may include mean filtering. Specifically, for each point in the point cloud of a second object, the average of that point over the point clouds obtained from multiple frames of environment information can be computed; for example, the average of the point's three-dimensional coordinates over two or more adjacent frames is taken as the smoothed result. Of course, smoothing may also be median filtering, Gaussian filtering, and so on; this embodiment does not limit the way the point cloud is smoothed.
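The cross-frame mean filtering described above can be sketched like this. It assumes each frame's cloud for a given second object lists its points in the same order, a simplification the embodiments do not mandate (they leave the point correspondence scheme open):

```python
def smooth_point_cloud(frames):
    """Mean-filter a point cloud across consecutive frames.

    `frames` is a list of point clouds for the SAME second object, one per
    sensing cycle, each a list of (x, y, z) tuples in matching point order.
    Returns one averaged cloud.
    """
    n = len(frames)
    smoothed = []
    for pts in zip(*frames):  # pts = the i-th point of every frame
        # Average each coordinate of the corresponding points.
        smoothed.append(tuple(sum(c) / n for c in zip(*pts)))
    return smoothed
```

Median or Gaussian filtering, also mentioned above, would replace only the averaging expression inside the loop.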
In this embodiment, preprocessing the second object's point cloud first removes invalid and noisy points and improves its accuracy; further smoothing then yields a smooth point cloud for the second object, so that displaying the point cloud achieves a good visual effect on the interface.
S210: Display, according to the position of the second object, a second object model containing at least the point cloud.
In this embodiment, the second object model may be a point cloud model, i.e., the second object's point cloud model is displayed directly at its position on the map; FIG. 3 shows the point cloud model 70. This embodiment requires neither an exact classification for the second object nor a matched model, which improves the display efficiency of the second object's model.
In an optional embodiment of the present application, a display template preset for the second object can be obtained. The template may include a solid decoration model on which the second object's point cloud is displayed. Displaying the point cloud on the decoration model includes scaling the point cloud so that its projected outline on the ground is enclosed by the projected outline of the decoration model. For example, the decoration model is a disc and the second object is a traffic cone: the cone's point cloud is scaled and the scaled cloud is displayed on the disc.
In another example, displaying the second object's point cloud on the decoration model includes computing the outline dimensions of the point cloud, adjusting the size of the decoration model according to those dimensions, and displaying the point cloud in the adjusted decoration model. For example, the decoration model may be a cylindrical space whose bottom is solid and whose upper space is transparent. The diameter of the cylinder can be adjusted according to the point cloud's projected outline on the ground, and its height according to the point cloud's height, so that the cloud fits inside the cylinder; for instance, a pedestrian's point cloud is displayed on the solid bottom of such a cylinder, and the pedestrian's overall size can then be read from the cylinder's outline on the visual interface.
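Sizing the cylindrical decoration model from the cloud's ground footprint and height might look like the following sketch; the 10 % visual margin is an assumption of this sketch, not a value from the embodiments:

```python
import math

def fit_cylinder(points, margin=1.1):
    """Size a transparent display cylinder so the point cloud fits inside it.

    Points are (x, y, z) tuples with z up and the ground at z = 0. The
    diameter comes from the cloud's ground-plane footprint around its
    centroid, the height from its topmost point; `margin` leaves a small
    visual gap around the cloud.
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    height = max(p[2] for p in points)
    return {"center": (cx, cy),
            "diameter": 2 * radius * margin,
            "height": height * margin}
```

The returned dimensions would feed the renderer that draws the solid bottom and the transparent cylindrical volume around the cloud.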
When the target vehicle is executing a driving task, this embodiment displays the map within the preset range according to the vehicle's real-time position and displays the vehicle model on it. After obtaining the environment information, it identifies the position and type of the first object, matches a first object model by type, and displays it on the map; it identifies the position and type of the second object, extracts the second object's point cloud, and displays on the map a second object model containing that point cloud. This solves the problem that a classification model may classify non-first objects inaccurately, or fail to classify them, causing the visual interface to display a wrong model for them or no model at all. A first object model is displayed for the accurately classified first objects, and a second object model containing the point cloud for the inaccurately classified or unclassifiable second objects. Models are displayed for both the first and the second objects detected by the target vehicle, yet the second objects need not be classified; they are shown with a model containing at least the point cloud, which reduces the amount of data rendered for their models, or even removes the need to render models for them, improving rendering speed and thus the user experience.
Embodiment 3
FIG. 4 is a flowchart of a display method for a visual interface provided by Embodiment 3 of the present application. This embodiment is an optimization based on Embodiment 1 and specifically includes the following steps:
S301: Determine that a target vehicle is executing a driving task.
S302: Display a map within a preset range according to the real-time position of the target vehicle.
S303: Display object models on the map, where a first object model is displayed for each first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for each non-first object detected by the target vehicle.
S304: Display task progress information of the target vehicle executing the driving task, where the task progress information includes at least one of a progress bar, a distance traveled, and a time traveled.
In this embodiment, the task progress information may describe the progress of the target vehicle in executing the driving task, and may be at least one of a progress bar, the distance traveled, and the time traveled. The progress bar can be generated from the distance traveled and the total distance, and the distance traveled can be tallied by an odometer on the target vehicle.
As shown in FIG. 3, the visual interface displays task progress information 30, which may include a progress bar expressing how far the driving task has progressed, the distance traveled since the target vehicle started executing the task, and the total time traveled since it started. Of course, the task progress information may also be expressed in other forms such as a percentage; this embodiment does not limit how the task progress information is displayed.
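Assembling the three progress items from the odometer reading, the planned total distance, and the elapsed time can be sketched as follows (the field names are illustrative, not from the embodiments):

```python
def task_progress(distance_driven_m, total_distance_m, elapsed_s):
    """Build the progress items shown on the visual interface.

    The progress-bar percentage is driven distance over planned total
    distance, clamped at 100 %.
    """
    fraction = min(distance_driven_m / total_distance_m, 1.0)
    return {
        "progress_percent": round(fraction * 100, 1),
        "distance_driven_km": round(distance_driven_m / 1000, 2),
        "elapsed_min": round(elapsed_s / 60, 1),
    }
```

The percentage form mentioned above is simply the `progress_percent` field displayed on its own.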
S305: Display, on the map, the driving route generated for the target vehicle.
Specifically, the driving task may be a trip of the target vehicle from a specified start point to a specified end point. Once the start and end points are determined, the driving route is planned in real time in combination with the environment information detected by the sensors on the target vehicle and displayed on the map. For example, after the start and end points are determined, a route from start to end is planned; during real-time driving, the lane the target vehicle travels in is planned in real time according to the environment information detected by the sensors. As shown in FIG. 3, the driving route 50 can be displayed as a band of light along the target vehicle's direction of travel, clearly distinguishing it from road markings such as zebra crossings and lane lines, which helps the user pick out the driving route on the map.
S306: Highlight the first object model when the first object model is on the driving route.
In this embodiment, the first objects may be the vehicles detected by the target vehicle, and whether a first object's model is highlighted can be decided by how much that first object interferes with the target vehicle's driving.
In one example of the present application, while the target vehicle drives straight, the detected vehicles may be those around it, and the degree of interference may be judged from the vehicles detected within a preset range around the target vehicle. Optionally, the target vehicle detects the vehicles within a circular region of preset radius centered on itself and obtains the distance between each vehicle in that region and itself; when a distance falls below a preset threshold, that vehicle is determined to be an interfering vehicle, and the models of the interfering vehicles in the region, i.e., their first object models, can be highlighted. For example, when the target vehicle drives straight and the vehicle ahead brakes hard or slows down so that the gap to the target vehicle shrinks below the preset threshold, the vehicle ahead is on the target vehicle's intended driving route at less than the threshold distance, and its first object model can be highlighted on the route to warn the user that it is interfering with the target vehicle's driving.
As another example, when the target vehicle drives straight and a neighboring vehicle changes lanes toward it, once the distance between them falls below the preset threshold a collision could occur if the target vehicle keeps its current direction, so the neighboring vehicle's first object model can be highlighted to warn that it is interfering with the target vehicle's normal driving. This embodiment can thus highlight the interfering vehicles inside the preset-radius circle centered on the target vehicle, helping the user supervise or intervene manually in time and improving the safety of the target vehicle's driving.
In another example of the present application, when the target vehicle changes lanes, the distances from the surrounding vehicles to the target vehicle can be computed, and the first object models of the surrounding vehicles closer than the preset threshold are highlighted to warn that they interfere with the lane change, again helping the user supervise or intervene manually in time and improving the safety of the target vehicle's driving.
Further, the brightness of the highlighted first object model can be set according to the degree of interference, for example by grading the highlight color with distance. Illustratively, the highlight color is red: the smaller the distance, the deeper the red, and the lighter it is otherwise, so that the user can read the degree of interference of the surrounding vehicles with the target vehicle from the intensity of the highlight color.
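The distance-graded highlight can be sketched as a linear fade of the red channel; the linear ramp is an assumption of this sketch, since the text only requires that a smaller distance give a deeper red:

```python
def highlight_color(distance, threshold):
    """Map an interfering vehicle's distance to a red highlight colour.

    At distance 0 the red channel is fully saturated; at the warning
    threshold it fades out. Returns an (r, g, b) tuple, or None when the
    vehicle is beyond the threshold and should not be highlighted.
    """
    if distance >= threshold:
        return None
    intensity = 1.0 - distance / threshold  # closer => deeper red
    return (int(255 * intensity), 0, 0)
```

A renderer would apply the returned colour to the interfering vehicle's frame model.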
S307: Display traffic light information, the traffic light information indicating the state of a traffic light detected by the target vehicle.
Specifically, a camera installed on the target vehicle can photograph the traffic light at an intersection the target vehicle needs to pass, image recognition is performed on the photograph to obtain the light's state, and the state is shown in a virtual traffic light on the visual interface. As shown in FIG. 3, traffic light information 60 can be displayed in the upper-right corner of the visual interface.
In an optional embodiment of the present application, if the camera captures multiple traffic lights, for example at a crossroads, a target traffic light can be selected from them according to the target vehicle's position and driving route, and the state of the target light displayed. Illustratively, when the vehicle's next path segment continues straight from its current position, the light ahead of the target vehicle is taken as the target light and its recognized state is shown on the visual interface; when the next segment is a turn, the light in the turning direction is taken as the target light and its recognized state is shown. By selecting one target light from among several, this embodiment avoids recognizing the states of all the lights, reduces the amount of image-recognition data, speeds up the display of traffic light information, and shows fewer lights on the visual interface, keeping it cleaner.
In another optional embodiment of the present application, the state of the pedestrian light can be determined first and used to determine the state of the light ahead of the target vehicle. Specifically, when a zebra crossing is detected on the map in the target vehicle's direction of travel, the pedestrian lights at its two ends are identified, their image is captured and recognized to obtain the pedestrian light state, and from that state the state of the light ahead that directs the target vehicle's driving is determined. Illustratively, when the pedestrian light is green, the vehicle-facing light ahead is determined to be red; when the pedestrian light is red, the vehicle-facing light ahead is determined to be green. Traffic light information can thus be displayed in advance, or, when a vehicle ahead blocks the light so the camera cannot capture an image of it, the state of the light ahead can be determined from the pedestrian light beside it.
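The pedestrian-light inference reduces to a small state mapping; how states other than red and green are handled is an assumption of this sketch, since the embodiments only describe the red and green cases:

```python
def infer_vehicle_light(pedestrian_light):
    """Infer the vehicle-facing light's state from the pedestrian light
    at the zebra crossing: pedestrian green implies crossing traffic is
    stopped, so the vehicle light is red, and vice versa. Other states
    are passed through as unknown.
    """
    mapping = {"green": "red", "red": "green"}
    return mapping.get(pedestrian_light, "unknown")
```

The inferred state can be shown in the virtual traffic light when the camera's direct view of the light ahead is blocked.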
S308: Display the navigation information generated for the target vehicle.
In this embodiment, the navigation information may include the target vehicle's driving speed, its distance to the destination, turn reminders for the driving route, lane-change reminders during driving, and so on, and it can be shown on the visual interface. A turn reminder may display a turn marker on the interface together with the distance from the target vehicle to the turning point; the driving speed may be shown on the interface as text or as a virtual speedometer; and lane-change reminders may be announced by voice through a loudspeaker. As shown in FIG. 3, the navigation information 40 consists of a turn reminder for the driving route and the target vehicle's driving speed.
In this embodiment, the sensors on the target vehicle can also sense the light intensity of the surroundings and adjust the display mode of the visual interface accordingly; the display modes may include a night mode and a day mode. Of course, whether it is currently day or night can also be determined from the current time, so that the display mode switches between night mode and day mode and the visual interface adapts to the ambient light intensity, improving viewing comfort for the human eye.
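Choosing between day and night mode from the ambient-light reading, with the clock as a fallback, can be sketched as follows; the lux threshold and the daytime hour range are illustrative assumptions:

```python
def display_mode(light_intensity_lux=None, hour=None,
                 lux_threshold=50.0, day_range=(7, 19)):
    """Choose the interface's day/night display mode.

    Prefers the ambient-light sensor reading when available and falls
    back to the clock hour (0-23) otherwise.
    """
    if light_intensity_lux is not None:
        return "day" if light_intensity_lux >= lux_threshold else "night"
    if hour is not None:
        return "day" if day_range[0] <= hour < day_range[1] else "night"
    return "day"  # default when neither signal is available
```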
When the target vehicle is executing a driving task, this embodiment displays on the map a first object model for each first object detected by the target vehicle and a second object model containing at least point cloud data for each non-first object. This solves the problem that a classification model may classify non-first objects inaccurately, or fail to classify them, causing the visual interface to display a wrong model for them or no model at all. A first object model is displayed for the accurately classified first objects, and a second object model for the inaccurately classified or unclassifiable non-first objects. Models are displayed for both the first and the non-first objects, yet the non-first objects need not be classified; showing them with a model containing at least the point cloud reduces the amount of data rendered for their models, or even removes the need to render models for them, so that models are displayed for all detected objects while rendering speed and the user experience improve.
Further, the driving route, traffic light information, and navigation information are displayed for the target vehicle on the visual interface, visualizing the driving data.
Still further, when a first object detected within the preset range interferes with the target vehicle's driving, its first object model is highlighted to warn the user that the first object is blocking the target vehicle, helping the user supervise or intervene manually in time and improving the safety of the target vehicle's driving.
Embodiment 4
FIG. 5 is a schematic structural diagram of a display apparatus for a visual interface provided by Embodiment 4 of the present application. The apparatus may specifically include the following modules:
a driving task determination module 401 configured to determine that a target vehicle is executing a driving task; a map display module 402 configured to display a map within a preset range according to the real-time position of the target vehicle; and an object model display module 403 configured to display object models on the map, where a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle.
Optionally, the apparatus further includes:
a vehicle model display module configured to display a vehicle model for the target vehicle on the map according to the real-time position of the target vehicle.
Optionally, the object model display module 403 includes:
a point cloud acquisition submodule configured to obtain the environment information detected by the target vehicle while it executes the driving task; a first object identification submodule configured to identify the position and type of the first object in the environment information; a first object model matching submodule configured to obtain a matching first object model according to the type of the first object; and a first object model display submodule configured to display the first object model according to the position of the first object.
Optionally, the object model display module 403 includes:
a point cloud acquisition submodule configured to obtain the environment information detected by the target vehicle while it executes the driving task; a second object identification submodule configured to identify the position and type of the second object in the environment information; a point cloud extraction submodule configured to extract the point cloud of the second object from the environment information; and a second object model display submodule configured to display, according to the position of the second object, a second object model containing at least the point cloud.
Optionally, the apparatus further includes:
a task progress information display module configured to display task progress information of the target vehicle executing the driving task, where the task progress information includes at least one of a progress bar, a distance traveled, and a time traveled.
Optionally, the apparatus further includes:
an information display module configured to display, on the map, the driving route generated for the target vehicle; and/or a traffic light information display module configured to display traffic light information indicating the state of a traffic light detected by the target vehicle; and/or a navigation information display module configured to display the navigation information generated for the target vehicle.
Optionally, the apparatus further includes:
a highlight display module configured to highlight the first object model when the first object model is on the driving route.
Optionally, the navigation information display module includes:
a speed display submodule configured to display the speed of the target vehicle while it executes the driving task; and a distance display submodule configured to display the distance from the target vehicle to the destination.
The display apparatus for a visual interface provided by this embodiment of the present application can execute the display method for a visual interface provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to executing that method.
Embodiment 5
Referring to FIG. 6, a schematic structural diagram of an electronic device in one example of the present application is shown. As shown in FIG. 6, the device may specifically include a processor 500, a memory 501, a display screen 502 with a touch function, an input apparatus 503, an output apparatus 504, and a communication apparatus 505. The device may have one or more processors 500 (one processor 500 is taken as an example in FIG. 6) and one or more memories 501 (one memory 501 is taken as an example in FIG. 6). The processor 500, memory 501, display screen 502, input apparatus 503, output apparatus 504, and communication apparatus 505 of the device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
As a computer-readable storage medium, the memory 501 can store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the display method for a visual interface described in any embodiment of the present application (for example, the driving task determination module 401, map display module 402, and object model display module 403 of the display apparatus described above). The memory 501 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function, while the data storage area may store data created through use of the device, and so on. In addition, the memory 501 may include high-speed random access memory and may also include nonvolatile memory, for example at least one magnetic disk storage device, flash memory device, or other nonvolatile solid-state storage device. In some examples, the memory 501 may further include memory set up remotely from the processor 500 and connected to the device through a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display screen 502 is a touch-capable display screen, which may be a capacitive, electromagnetic, or infrared screen. In general, the display screen 502 displays data as directed by the processor 500 and also receives touch operations acting on it, sending the corresponding signals to the processor 500 or other apparatus. Optionally, when the display screen 502 is an infrared screen, it also includes an infrared touch frame arranged around the screen, which can also receive infrared signals and send them to the processor 500 or other devices.
The communication apparatus 505 is used to establish communication connections with other devices and may be a wired and/or wireless communication apparatus.
The input apparatus 503 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the device. The output apparatus 504 may include audio devices such as a loudspeaker. It should be noted that the specific composition of the input apparatus 503 and output apparatus 504 can be set according to the actual situation.
The processor 500 runs the software programs, instructions, and modules stored in the memory 501 to execute the device's functional applications and data processing, i.e., to implement the display method for a visual interface described above.
Specifically, in the embodiment, when the processor 500 executes one or more programs stored in the memory 501, the steps of the display method for a visual interface provided by the embodiments of the present application are implemented.
Embodiment 6
Embodiment 6 of the present application further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the program can implement the display method for a visual interface in any embodiment of the present application, and the method may specifically include:
determining that a target vehicle is executing a driving task; displaying a map within a preset range according to the real-time position of the target vehicle; and displaying object models on the map, where a first object model is displayed for a first object detected by the target vehicle, and a second object model including at least point cloud data is displayed for a non-first object detected by the target vehicle.
For the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the method operations described above, and can also execute related operations in the display method for a visual interface provided by any embodiment in which the present application is applied on a device.
It should be noted that the descriptions of the apparatus, electronic device, and storage medium embodiments are relatively brief because they are substantially similar to the method embodiments; for the relevant parts, refer to the description of the method embodiments.
From the above description of the implementations, the present application can be realized by software plus the necessary general-purpose hardware, or by hardware. The technical solution of the present application can essentially be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk, and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the display method for a visual interface described in each embodiment of the present application.
In the embodiments of the display apparatus for a visual interface above, the included units and modules are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the present application.

Claims (11)

  1. A display method for a visual interface, comprising:
    determining that a target vehicle is executing a driving task;
    displaying a map within a preset range according to a real-time position of the target vehicle; and
    displaying object models on the map, wherein a first object model is displayed for a first object detected by the target vehicle, and a second object model comprising at least point cloud data is displayed for a non-first object detected by the target vehicle.
  2. The method according to claim 1, further comprising:
    displaying a vehicle model for the target vehicle on the map according to the real-time position of the target vehicle.
  3. The method according to claim 1, wherein displaying the first object model for the first object detected by the target vehicle comprises:
    obtaining environment information detected by the target vehicle while executing the driving task;
    identifying a position and a type of the first object in the environment information;
    obtaining a matching first object model according to the type of the first object; and
    displaying the first object model according to the position of the first object.
  4. The method according to claim 1, wherein displaying the second object model comprising at least point cloud data for the non-first object detected by the target vehicle comprises:
    obtaining environment information detected by the target vehicle while executing the driving task;
    identifying a position and a type of the second object in the environment information;
    extracting a point cloud of the second object from the environment information; and
    displaying, according to the position of the second object, a second object model containing at least the point cloud.
  5. The method according to any one of claims 1-4, further comprising:
    displaying task progress information of the target vehicle executing the driving task, wherein the task progress information comprises at least one of a progress bar, a distance traveled, and a time traveled.
  6. The method according to any one of claims 1-4, further comprising:
    displaying, on the map, a driving route generated for the target vehicle; and/or
    displaying traffic light information, the traffic light information indicating a state of a traffic light detected by the target vehicle; and/or
    displaying navigation information generated for the target vehicle.
  7. The method according to claim 6, further comprising:
    highlighting the first object model when the first object model is on the driving route.
  8. The method according to claim 6, wherein displaying the navigation information generated for the target vehicle comprises:
    displaying a speed of the target vehicle while executing the driving task;
    and/or
    displaying a distance from the target vehicle to a destination.
  9. A display apparatus for a visual interface, comprising:
    a driving task determination module configured to determine that a target vehicle is executing a driving task;
    a map display module configured to display a map within a preset range according to a real-time position of the target vehicle; and
    an object model display module configured to display object models on the map, wherein a first object model is displayed for a first object detected by the target vehicle, and a second object model comprising at least point cloud data is displayed for a non-first object detected by the target vehicle.
  10. An electronic device, comprising:
    one or more processors; and
    a memory configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the display method for a visual interface according to any one of claims 1-8.
  11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the display method for a visual interface according to any one of claims 1-8.
PCT/CN2020/140611 2020-05-14 2020-12-29 Display method and apparatus for visual interface, electronic device, and storage medium WO2021227520A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/925,121 US20230184560A1 (en) 2020-05-14 2020-12-29 Visual interface display method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010408219.0A CN111595357B (zh) 2020-05-14 Display method and apparatus for visual interface, electronic device, and storage medium
CN202010408219.0 2020-05-14

Publications (1)

Publication Number Publication Date
WO2021227520A1 true WO2021227520A1 (zh) 2021-11-18

Family

ID=72185587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140611 WO2021227520A1 (zh) 2020-05-14 2020-12-29 可视化界面的显示方法、装置、电子设备和存储介质

Country Status (3)

Country Link
US (1) US20230184560A1 (zh)
CN (1) CN111595357B (zh)
WO (1) WO2021227520A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111595357B (zh) * 2020-05-14 2022-05-20 Guangzhou WeRide Technology Co., Ltd. Display method and apparatus for visual interface, electronic device, and storage medium
CN113392796A (zh) * 2021-06-29 2021-09-14 Guangzhou Xiaopeng Motors Technology Co., Ltd. Display method, display apparatus, vehicle, and computer-readable storage medium
CN114371900A (zh) * 2022-01-06 2022-04-19 Avatr Technology (Chongqing) Co., Ltd. Vehicle wallpaper generation method and apparatus, and computer-readable storage medium
CN114546575A (zh) * 2022-02-25 2022-05-27 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Object display method, apparatus, device, storage medium, and program product
CN114973726A (zh) * 2022-05-09 2022-08-30 Guangzhou WeRide Technology Co., Ltd. Visualization method, apparatus, device, and storage medium for autonomous-driving traffic lights
CN115206122B (zh) * 2022-07-26 2024-01-12 Guangzhou WeRide Technology Co., Ltd. Trajectory display method and apparatus, storage medium, and computer device
CN115761464B (zh) * 2022-11-03 2023-09-19 Sun Yat-sen University Method for evaluating the operating environment and state of an underwater robot

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106802954A (zh) * 2017-01-18 2017-06-06 Hefei Institutes of Physical Science, Chinese Academy of Sciences Semantic map model construction method for unmanned vehicles and its application method on unmanned vehicles
US20180348346A1 (en) * 2017-05-31 2018-12-06 Uber Technologies, Inc. Hybrid-View Lidar-Based Object Detection
US10297152B1 (en) * 2017-10-27 2019-05-21 Waymo Llc Displaying sensor data and supplemental data as a mask for autonomous vehicles
CN110057373A (zh) * 2019-04-22 2019-07-26 Shanghai NIO Automobile Co., Ltd. Method, apparatus, and computer storage medium for generating a high-definition semantic map
CN110542908A (zh) * 2019-09-09 2019-12-06 Alphaba Artificial Intelligence (Shenzhen) Co., Ltd. Lidar dynamic object perception method applied to intelligent driving vehicles
CN110789533A (zh) * 2019-09-25 2020-02-14 Huawei Technologies Co., Ltd. Data presentation method and terminal device
CN111144211A (zh) * 2019-08-28 2020-05-12 Huawei Technologies Co., Ltd. Point cloud display method and apparatus
CN111595357A (zh) * 2020-05-14 2020-08-28 Guangzhou WeRide Technology Co., Ltd. Display method and apparatus for visual interface, electronic device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016201670A1 (en) * 2015-06-18 2016-12-22 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for representing map element and method and apparatus for locating vehicle/robot
AU2015404215B2 (en) * 2015-08-06 2019-01-03 Accenture Global Services Limited Vegetation management for power line corridor monitoring using computer vision
CN105675008A (zh) * 2016-01-08 2016-06-15 Beijing Lejia Technology Co., Ltd. Navigation display method and ***
CN108806472B (zh) * 2017-05-03 2021-05-28 Tencent Technology (Shenzhen) Co., Ltd. Road rendering method and apparatus, and processing method and apparatus, in an electronic map
US10580299B2 (en) * 2017-10-13 2020-03-03 Waymo Llc Lane change notification
KR102434580B1 (ko) * 2017-11-09 2022-08-22 Samsung Electronics Co., Ltd. Method and apparatus for displaying a virtual route
CN110274611B (zh) * 2019-06-24 2022-09-23 Tencent Technology (Shenzhen) Co., Ltd. Information display method and apparatus, terminal, and storage medium

Also Published As

Publication number Publication date
US20230184560A1 (en) 2023-06-15
CN111595357A (zh) 2020-08-28
CN111595357B (zh) 2022-05-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 20936021; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 20936021; Country of ref document: EP; Kind code of ref document: A1