WO2023093056A1 - Vehicle control - Google Patents

Vehicle control Download PDF

Info

Publication number
WO2023093056A1
WO2023093056A1 (PCT/CN2022/103221)
Authority
WO
WIPO (PCT)
Prior art keywords
blind area
sub
information
blind
point cloud
Prior art date
Application number
PCT/CN2022/103221
Other languages
French (fr)
Chinese (zh)
Inventor
李经纬
王哲
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023093056A1 publication Critical patent/WO2023093056A1/en

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09Taking automatic action to avoid collision, e.g. braking and steering
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions

Definitions

  • This disclosure relates to the field of computer technology and, in particular, to vehicle control.
  • Embodiments of the present disclosure at least provide a vehicle control method, device, electronic equipment, and storage medium.
  • In a first aspect, an embodiment of the present disclosure provides a vehicle control method, including: obtaining the first blind area information corresponding to the first point cloud data in the historical frame collected, before the current frame, by the radar installed on the target vehicle, the first object recognition result corresponding to the first point cloud data, and the second point cloud data in the current frame; determining, based on the second point cloud data, the first blind area information, and the first object recognition result, the second blind area information corresponding to the second point cloud data; and controlling the driving state of the target vehicle based on the second blind area information.
  • In a second aspect, an embodiment of the present disclosure further provides a vehicle control device, including: an acquisition module, configured to acquire the first blind area information corresponding to the first point cloud data in the historical frame collected by the radar installed on the target vehicle before the current frame is collected, the first object recognition result corresponding to the first point cloud data, and the second point cloud data in the current frame; a determination module, configured to determine, based on the second point cloud data, the first blind area information, and the first object recognition result, the second blind area information corresponding to the second point cloud data; and a control module, configured to control the driving state of the target vehicle based on the second blind area information.
  • In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
  • In a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
  • Fig. 1 shows a flow chart of a vehicle control method provided by an embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of a blind area provided by an embodiment of the present disclosure
  • Fig. 3 shows a schematic diagram of the state of the target object provided by the embodiment of the present disclosure
  • Fig. 4 shows a schematic diagram of a vehicle control device provided by an embodiment of the present disclosure
  • Fig. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • the present disclosure provides a vehicle control method, device, electronic equipment, and storage medium.
  • the present disclosure first obtains the first blind area information corresponding to the first point cloud data in the historical frame collected, before the current frame, by the radar installed on the target vehicle, the first object recognition result corresponding to the first point cloud data, and the second point cloud data in the current frame;
  • it then determines the second blind area information corresponding to the second point cloud data and controls the driving state of the target vehicle based on the second blind area information.
  • in this way, the present disclosure can determine the second blind area information from the second point cloud data, the first blind area information, and the first object recognition result, realizing monitoring of the blind area of the second point cloud data, so that the driving state of the target vehicle can be controlled based on the second blind area information, reducing the probability of dangerous accidents.
  • the embodiment of the present disclosure discloses a vehicle control method, which can be applied to electronic devices with computing capabilities, such as servers, on-board computers, and the like.
  • the vehicle control method may include the following steps:
  • the above-mentioned target vehicle can be an autonomous driving vehicle equipped with a radar, and the radar can be a lidar.
  • the lidar has the advantages of high resolution, high ranging accuracy, and good detection performance, and is one of the most important sensors on an autonomous vehicle.
  • the above-mentioned radar can continuously collect point cloud data frame by frame.
  • the above-mentioned current frame and historical frame can be two consecutive frames or two non-consecutive frames. In some cases, the time interval between the two frames needs to be within a preset range, to prevent the difference between the two frames of point cloud data from being too large and to improve the timeliness of the target object information subsequently inherited from the historical frame.
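The timeliness check above can be sketched as a tiny guard; the function name and threshold are illustrative assumptions, not values from this disclosure:

```python
# Illustrative guard (names and threshold assumed): only inherit blind-area
# information from a historical frame whose timestamp is close enough to the
# current frame, so the inherited target-object information stays timely.

MAX_GAP_S = 0.3  # assumed preset range between usable frames, in seconds

def usable_history(curr_ts, hist_ts, max_gap=MAX_GAP_S):
    """True if the historical frame is recent enough to inherit from."""
    gap = curr_ts - hist_ts
    return 0.0 < gap <= max_gap
```

A frame exactly at the current timestamp, or older than the preset range, would not be used for inheritance.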
  • FIG. 2 is a schematic diagram of a blind area provided by an embodiment of the present disclosure.
  • the target vehicle detects two obstacles through lidar, forming two sub-blind areas.
  • the position information of the blind area may include the coordinates of the blind area in the point cloud data and the height information of the blind area.
  • the above blind area information may include blind area location information and object information in the blind area.
  • Objects in the blind area may include observable objects and unobservable objects.
  • object information in the blind area includes observable object information and unobservable object information.
  • the unobservable object is an object completely blocked by an obstacle, which cannot be detected by the radar;
  • the observable object is an object detected by the radar within the blind area, such as the obstacle itself that causes the blind area, an object partially blocked by the obstacle, or an object behind the obstacle that is detected because errors leave gaps between the blind area and the obstacle contour.
  • the target object information in the above-mentioned blind area can be determined through the point cloud data.
  • the trained object recognition model can be used to identify the point cloud data to obtain the target object information in the point cloud data.
  • the target object information may include information such as the position, shape, orientation angle, speed, and category of the target object.
  • the first blind area information may include position information of the first blind area in the first point cloud data
  • the second blind area information may include position information of the second blind area in the second point cloud data.
  • the position information of the second blind area and the second object recognition result can be determined by an object recognition algorithm.
  • the position information of the second blind area corresponding to the second point cloud data may be determined through the following steps:
  • Position information of the second blind area is determined based on the wire harness information emitted by the radar and the determined information of the obstacle.
  • the above-mentioned radar can obtain the position information of each point constituting the outline of the obstacle in the set coordinate system. In this way, the outline information of the obstacle within the set range from the target vehicle can be obtained based on the point cloud data.
  • the above wire harness information (i.e., the lidar beam information) may include the number of wire bundles emitted by the radar at each rotation angle and their heights above the ground; specifically, it may be represented by a pre-established line height map.
  • first, a grid map is constructed for the ground area within a set distance of the vehicle under a bird's-eye view.
  • then, based on the wire harness information emitted by the radar, a line height map corresponding to the grid map is generated for each grid cell.
  • the line height map has three dimensions: the first two represent the row and column position of each grid cell in the line height map, and the third represents the number of wire harnesses contained in each grid cell; each grid cell also records the height of every harness it contains.
  • the number of wire harnesses corresponding to a grid cell refers to the number of harnesses emitted by the radar that enter the cell, determined only from the installation position, installation angle, and arrangement angle of the radar transmitter, without considering obstacles within the cell.
  • for example, for each wire bundle entering a grid cell, the bundle can be translated to where it intersects a straight line that passes through the cell's center point and is perpendicular to the grid plane; the distance between this intersection point and the cell's center point is taken as the harness height of that bundle at the cell.
  • the position information of the second blind area can be determined through the information of obstacles within the set range of the target vehicle in the second point cloud data and the wire harness information emitted by the radar, so as to realize the accurate determination of the radar blind area.
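To make the geometry concrete, here is a minimal single-bearing sketch of how an obstacle's height and the radar's mounting height bound a ground blind area behind it. This is an illustrative straight-line line-of-sight model with assumed values, not the patent's line-height-map procedure:

```python
# Minimal geometric sketch (assumptions: point obstacle, flat ground, one
# bearing, MOUNT_H is an assumed mounting height, not a value from the patent).

MOUNT_H = 2.0  # assumed radar mounting height above ground, in metres

def ground_visible(r, r_obs, h_obs):
    """Can the radar at (0, MOUNT_H) see the ground point at range r, given
    an obstacle of height h_obs at range r_obs on the same bearing?"""
    if r <= r_obs:
        return True  # the point is in front of the obstacle
    # height of the line of sight to (r, 0) where it passes over the obstacle
    los_h = MOUNT_H * (1.0 - r_obs / r)
    return los_h > h_obs

def blind_interval(r_obs, h_obs):
    """(start, end) of the ground blind area behind the obstacle."""
    if h_obs >= MOUNT_H:
        return (r_obs, float("inf"))  # obstacle taller than the radar mount
    return (r_obs, r_obs * MOUNT_H / (MOUNT_H - h_obs))
```

With `MOUNT_H = 2.0`, an obstacle 1 m tall at 10 m shadows the ground from 10 m out to 20 m, while an obstacle taller than the radar mount shadows the ground indefinitely along that bearing.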
  • the step of determining the position information of the first blind area of the first point cloud data may be the same as determining the position information of the second blind area.
  • the first blind area information may further include object information within the first blind area; the second blind area information may further include target object information within the second blind area.
  • determining the second blind area information corresponding to the second point cloud data may include:
  • the initial object information is updated to obtain the target object information in the second blind area.
  • the above initial object information may be object information inherited from the history frame.
  • the first blind area may include at least one first sub-blind area
  • the second blind area may include at least one second sub-blind area
  • the above initial object information may be determined through the following steps:
  • initial object information in each of the second sub-blind areas is determined.
  • the above-mentioned association relationships may include: appearance (the sub-blind area does not exist in the historical frame but exists in the current frame); disappearance (the sub-blind area exists in the historical frame but not in the current frame); one-to-one (a sub-blind area in the historical frame corresponds to exactly one sub-blind area in the current frame, with no other associated sub-blind areas); split (a sub-blind area in the historical frame is associated with multiple sub-blind areas in the current frame); fusion (multiple sub-blind areas in the historical frame are associated with only one sub-blind area in the current frame); and split plus fusion (splitting and fusion occur at the same time). According to the determined association relationship, it can be decided which target object information in the first sub-blind areas should be inherited by each second sub-blind area.
  • the association relationship between each second sub-blind area and each first sub-blind area is determined from the position information of the first blind area and the second blind area, and the initial object information in each second sub-blind area is then determined according to that association relationship,
  • Each blind area in the historical frame can be associated with each blind area in the current frame in time sequence, so as to realize the conduction of object information in the blind area in the time dimension.
  • the association relationship between each second sub-blind area and each first sub-blind area may be determined through the following steps:
  • the association relationship between the second sub-blind zone and each first sub-blind zone is determined.
  • for example, the position information of the first blind area in the historical frame can be transformed into the coordinate system of the second blind area in the current frame, and an m*n correlation matrix can then be built: if a first sub-blind area and a second sub-blind area overlap, that is, the area of their overlapping region is greater than 0, the corresponding entry of the matrix is set to 1; otherwise it is set to 0. By solving the correlation matrix, the association relationship between each second sub-blind area and each first sub-blind area can be obtained, including the successors of each first sub-blind area (the second sub-blind areas overlapping with it) and the predecessors of each second sub-blind area (the first sub-blind areas overlapping with it).
  • identification information may be assigned to the sub-blind areas to reflect the association relationships between them. For example, if a second sub-blind area has no predecessor, new identification information can be assigned to it; if it has exactly one predecessor, it can inherit that predecessor's identification information; if it has multiple predecessors (fusion relationship), it can inherit the most recent identification information among the predecessors together with the target object information of all of them; if a first sub-blind area has multiple successors (split relationship), new identification information can be assigned to all of its successors; and if fusion and splitting occur at the same time, new identification information can be assigned to the successor second sub-blind area.
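The overlap-matrix step can be sketched as follows. Sub-blind areas are modelled as axis-aligned `(x0, y0, x1, y1)` boxes purely for illustration, and the case labels mirror the association types described above:

```python
# Illustrative sketch of the m*n correlation matrix: entry [i][j] is 1 when
# historical sub-blind area i overlaps current sub-blind area j. The box
# representation and label names are assumptions for illustration only.

def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def correlation_matrix(prev_areas, curr_areas):
    return [[1 if overlap_area(p, c) > 0 else 0 for c in curr_areas]
            for p in prev_areas]

def classify(M):
    """Label each current sub-blind area by its predecessor pattern."""
    labels = []
    for j in range(len(M[0]) if M else 0):
        preds = [i for i in range(len(M)) if M[i][j]]
        if not preds:
            labels.append("appearance")  # no predecessor: new blind area
        elif len(preds) == 1:
            # split if that single predecessor also feeds other current areas
            labels.append("split" if sum(M[preds[0]]) > 1 else "one-to-one")
        else:
            labels.append("fusion")      # several predecessors merged
    return labels
```

A historical row that is all zeros corresponds to a disappeared sub-blind area, which could be detected symmetrically over the rows.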
  • after the association relationships are determined, for each second sub-blind area in the at least one second sub-blind area, the initial object information in that second sub-blind area is determined based on the object information in each first sub-blind area and on the association relationship between the second sub-blind area and each first sub-blind area.
  • the predecessors of a second sub-blind area can be determined from the association type (such as fusion or splitting), and the initial object information in the second sub-blind area can then be derived from them.
  • if a second sub-blind area has a single predecessor, the target object information of that predecessor can be used as the initial object information of the second sub-blind area; if a second sub-blind area has multiple predecessors, the target object information of all of its predecessors can be fused as the initial object information of the second sub-blind area.
  • in this way, using the areas of the overlapping regions, the association relationship between each second sub-blind area and each first sub-blind area is determined, making the resulting correlations between sub-blind areas more accurate.
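A minimal sketch of inheriting the initial object information from predecessors; the dict-of-objects representation is an assumption for illustration:

```python
# Illustrative inheritance of initial object information: one predecessor
# passes its objects straight through; several predecessors (fusion) have
# their object records merged, keyed by object identifier.

def initial_objects(predecessor_infos):
    """predecessor_infos: list of {object_id: info} dicts, one per predecessor."""
    merged = {}
    for info in predecessor_infos:
        merged.update(info)  # on duplicate IDs, later predecessors win
    return merged
```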
  • the initial object information is updated.
  • the initial object information can be updated through the following steps:
  • the updated initial object information is target object information within the second blind area.
  • the tracking method can include explicit tracking and implicit tracking.
  • in explicit tracking, the object recognition result from the radar is used as the tracking result and recorded in the observable object information of the corresponding blind area.
  • implicit tracking is used for unobservable objects: the object recognition result from the last time the radar observed the object is used as the tracking result and recorded in the unobservable object information of the corresponding blind area.
  • the first object recognition result and the second object recognition result can be used to track objects in the point cloud data and identify the same object across the first and second point cloud data, with identified matches represented by the same object identifier. When the position at which a target object appears is determined to lie within a second sub-blind area, the target object can be added to the observable object information of that second sub-blind area, thereby updating the observable object information within the initial object information.
  • the unobservable object information in the initial object information can be updated.
  • when an object is observed in the second point cloud data (that is, the object appears in the observable object information of a second sub-blind area, or lies outside every second sub-blind area), the following cases arise.
  • if the object exists both in the observable object information of a second sub-blind area and in the unobservable object information of a first sub-blind area, the object entered the first blind area and has now been detected again in the second blind area; explicit tracking can be adopted, using the second object recognition result for the object as the tracking information.
  • if the object exists in the observable object information of a second sub-blind area but not in the unobservable object information of any first sub-blind area, the object has only just entered the second blind area (for example, part of the object has entered it); a new tracking target can be established for the second sub-blind area the object entered, and the unobservable object information is not updated for now.
  • if the object exists in the unobservable object information of a first sub-blind area but not in the observable object information of any second sub-blind area, the object can be considered to have left the first blind area, and it is removed from the corresponding unobservable object information.
  • if the object exists neither in the observable object information of any second sub-blind area nor in the unobservable object information of any first sub-blind area, the object is outside the blind areas, and no processing is performed for it.
  • if an object is not observed in the second point cloud data but exists in the unobservable object information of a first sub-blind area, the object is still inside the blind area, and implicit tracking is used to keep recording the blind area where it is located.
  • in this way, the above objects can be tracked to determine which target objects should exist in the unobservable object information of each second sub-blind area, thereby updating the unobservable object information within the initial object information and finally obtaining the target object information in the second blind area.
  • from this, the type, quantity, size, orientation, and other information of the target objects in each second sub-blind area of the second blind area can be known, and the driving state of the target vehicle can then be controlled according to this information, reducing the risk of autonomous driving.
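The case analysis above can be sketched as a small update over sets of object identifiers; the set representation, names, and decision labels are illustrative assumptions:

```python
# Illustrative update of the unobservable-object set. observed_ids are the
# objects seen in the current frame; in_blind_area says whether each sighting
# lies inside some second sub-blind area; prev_unobservable are the IDs
# carried over from the historical frame.

def update_unobservable(observed_ids, in_blind_area, prev_unobservable):
    """Return the new unobservable set and a log of per-object decisions."""
    unobservable = set(prev_unobservable)
    log = {}
    for oid in observed_ids:
        if in_blind_area.get(oid):           # seen inside a blind area
            if oid in unobservable:
                unobservable.discard(oid)    # re-detected: explicit tracking
                log[oid] = "explicit"
            else:
                log[oid] = "new-track"       # just entered the blind area
        else:                                # seen outside all blind areas
            unobservable.discard(oid)
            log[oid] = "left" if oid in prev_unobservable else "outside"
    # objects not observed at all but previously unobservable stay implicit
    for oid in prev_unobservable - set(observed_ids):
        log[oid] = "implicit"
    return unobservable, log
```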
  • the driving state of the target vehicle can be controlled through the following steps:
  • the driving state of the target vehicle is controlled.
  • the above-mentioned danger levels can be related to the type and quantity of unobservable objects.
  • for example, if the unobservable objects in a second sub-blind area are pedestrians or bicycles (vulnerable road users), its danger level is relatively high.
  • the greater the number of unobservable objects in a second sub-blind area, the higher its danger level.
  • the danger level can also be related to the position information of the second sub-blind area: for example, the closer the second sub-blind area is to the future trajectory of the target vehicle, the higher the danger level.
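One way the factors above could combine into a danger score is sketched below; the weights, categories, and action thresholds are illustrative assumptions, not values from this disclosure:

```python
# Illustrative danger score combining object type, object count, and the
# sub-blind area's distance to the planned trajectory. All numbers assumed.

TYPE_WEIGHT = {"pedestrian": 3.0, "bicycle": 3.0, "car": 1.0, "unknown": 1.5}

def danger_level(object_types, dist_to_trajectory_m):
    """Higher score = more dangerous sub-blind area."""
    type_score = sum(TYPE_WEIGHT.get(t, 1.5) for t in object_types)
    # closer to the planned path -> larger multiplier (distance floored)
    proximity = 1.0 / max(dist_to_trajectory_m, 0.5)
    return type_score * (1.0 + proximity)

def pick_action(score, slow_at=4.0, stop_at=10.0):
    """Map a danger score to a coarse driving action (thresholds assumed)."""
    if score >= stop_at:
        return "brake"
    if score >= slow_at:
        return "slow"
    return "keep"
```

Two pedestrians-class objects 2 m from the planned path score 6.0 × 1.5 = 9.0, which under these assumed thresholds would slow the vehicle rather than brake it.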
  • the above embodiment uses the position information of the second blind area, the position information of the first blind area, and the object information in the first blind area, so that the second blind area can inherit the object information in the first blind area of the historical frame, giving the initial object information in the second blind area.
  • then, based on the target object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area, the initial object information is updated to obtain the target object information in the second blind area.
  • the target object information is used to determine the target objects in the blind area and to control the driving state of the vehicle accordingly.
  • the present disclosure also discloses a vehicle control device; each module in the device can implement each step of the vehicle control method in each of the above-mentioned embodiments and can achieve the same beneficial effects; therefore, the same parts will not be repeated here.
  • the vehicle control device includes:
  • the obtaining module 410 is configured to obtain the first blind area information corresponding to the first point cloud data in the historical frame collected by the radar installed on the target vehicle before the current frame is collected, the first object recognition result corresponding to the first point cloud data, and the second point cloud data in the current frame;
  • a determining module 420 configured to determine second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result;
  • a control module 430 configured to control the driving state of the target vehicle based on the second blind spot information.
  • the first blind area information includes the position information of the first blind area in the first point cloud data
  • the second blind area information includes the position information of the second blind area in the second point cloud data;
  • the determination module 420 is specifically used for:
  • a second object recognition result corresponding to the second point cloud data and position information of the second blind area are determined.
  • the first blind area information further includes object information in the first blind area;
  • the second blind area information further includes target object information in the second blind area;
  • the determining module 420 When determining the second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result, it is used to:
  • update the initial object information to obtain the target object information in the second blind area.
  • the determining module 420 is specifically configured to:
  • the position information of the second blind area is determined based on the wire harness information emitted by the radar and the information of the obstacle.
  • the first blind area includes at least one first sub-blind area
  • the second blind area includes at least one second sub-blind area
  • the determining module 420 is specifically configured to:
  • when the determining module 420 determines, based on the position information of the first blind area and the position information of the second blind area, the association relationship for each second sub-blind area in the at least one second sub-blind area, it is configured to:
  • for any second sub-blind area in the at least one second sub-blind area: determine, based on the area of the overlapping region between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area, the association relationship between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area.
  • the determining module 420 determines the at least one second sub-blind area based on the association relationship and object information in each first sub-blind area of the at least one first sub-blind area When the initial object information in each of the second sub-blind areas is used for:
  • for any second sub-blind area in the at least one second sub-blind area: determine the initial object information in the second sub-blind area based on the object information in each first sub-blind area in the at least one first sub-blind area and on the association relationship between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area.
  • the object information in the blind area includes observable object information and unobservable object information
  • when the determining module 420 updates the initial object information based on the target object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area to obtain the target object information in the second blind area, it is configured to:
  • the updated initial object information is target object information within the second blind area.
  • control module 430 is specifically configured to:
  • the driving state of the target vehicle is controlled.
  • an embodiment of the present disclosure further provides an electronic device 500, as shown in FIG. 5 , which is a schematic structural diagram of the electronic device 500 provided by the embodiment of the present disclosure, including:
  • based on the second point cloud data, the first blind area information, and the first object recognition result, determining second blind area information corresponding to the second point cloud data;
  • the driving state of the target vehicle is controlled.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the vehicle control method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure also provide a computer program product, including a computer-readable storage medium storing program codes.
  • the instructions contained in the program code can be used to execute the steps of the vehicle control method described in the method embodiments above; for details, refer to the foregoing method embodiments, which are not repeated here.
  • the computer program product may be specifically realized by hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the functions are realized in the form of software function units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disc and other media that can store program codes. .

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle control method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring first blind area information corresponding to first point cloud data in a historical frame collected, before the current frame, by a radar mounted on a target vehicle, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame (S110); determining, on the basis of the second point cloud data, the first blind area information, and the first object recognition result, second blind area information corresponding to the second point cloud data (S120); and controlling a traveling state of the target vehicle on the basis of the second blind area information (S130).

Description

Vehicle Control
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. CN 202111437547.4, filed with the Chinese Patent Office on November 29, 2021, the entire contents of which are incorporated into this disclosure by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to vehicle control.
Background
With the improvement of living standards, automobiles have become an indispensable part of human society. High-precision vehicle positioning and navigation are an important part of vehicle intelligence and automation, and form the basis of modules such as perception, control, and path planning in intelligent vehicles.
Summary
Embodiments of the present disclosure provide at least a vehicle control method, an apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a vehicle control method, including: acquiring first blind area information corresponding to first point cloud data in a historical frame collected by a radar mounted on a target vehicle before the current frame is collected, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame; determining, based on the second point cloud data, the first blind area information, and the first object recognition result, second blind area information corresponding to the second point cloud data; and controlling a traveling state of the target vehicle based on the second blind area information.
In a second aspect, an embodiment of the present disclosure further provides a vehicle control apparatus, including: an acquisition module configured to acquire first blind area information corresponding to first point cloud data in a historical frame collected by a radar mounted on a target vehicle before the current frame is collected, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame; a determination module configured to determine, based on the second point cloud data, the first blind area information, and the first object recognition result, second blind area information corresponding to the second point cloud data; and a control module configured to control a traveling state of the target vehicle based on the second blind area information.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When the computer program is run by a processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
To make the above objects, features, and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings used in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and should therefore not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
Fig. 1 shows a flowchart of a vehicle control method provided by an embodiment of the present disclosure;
Fig. 2 shows a schematic diagram of a blind area provided by an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of a target object state provided by an embodiment of the present disclosure;
Fig. 4 shows a schematic diagram of a vehicle control apparatus provided by an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated herein in connection with the drawings, may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not require further definition or explanation in subsequent figures.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, "including at least one of A, B, and C" may indicate including any one or more elements selected from the set consisting of A, B, and C.
In general, obstacles can be located based on a radar mounted on a vehicle. However, because of occlusion by obstacles and the vertical angular resolution of the radar itself, radar blind areas exist in the point cloud data collected by the radar, and obstacles within the blind areas cannot be detected, posing a risk to the safe driving of the vehicle.
In view of this, the present disclosure provides a vehicle control method, an apparatus, an electronic device, and a storage medium. The present disclosure first acquires first blind area information corresponding to first point cloud data in a historical frame collected by a radar mounted on a target vehicle before the current frame is collected, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame; then determines, based on the second point cloud data, the first blind area information, and the first object recognition result, second blind area information corresponding to the second point cloud data; and finally controls the traveling state of the target vehicle based on the second blind area information. Through the second point cloud data, the first blind area information, and the first object recognition result, the present disclosure can determine the second blind area information corresponding to the second point cloud data and monitor the blind area of the second point cloud data, so that the traveling state of the target vehicle can be controlled based on the second blind area information, reducing the probability of dangerous accidents.
The vehicle control method, apparatus, electronic device, and storage medium disclosed herein are described below through specific embodiments.
As shown in Fig. 1, an embodiment of the present disclosure discloses a vehicle control method, which can be applied to an electronic device with computing capability, such as a server or an on-board computer. Specifically, the vehicle control method may include the following steps:
S110: Acquire first blind area information corresponding to first point cloud data in a historical frame collected by a radar mounted on a target vehicle before the current frame is collected, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame.
The target vehicle may be an autonomous vehicle equipped with a radar, and the radar may be a lidar. Lidar has the advantages of high resolution, high ranging accuracy, and good detection performance, and is among the most important sensors on an autonomous vehicle. The radar can continuously collect point cloud data frame by frame. The current frame and the historical frame may be two consecutive frames or two non-consecutive frames. When the current frame and the historical frame are non-consecutive, the time interval between the two frames needs to be within a preset range, to prevent the difference between the two frames of point cloud data from being too large and to improve the timeliness of the target object information subsequently inherited from the historical frame.
Due to occlusion by obstacles and the limited vertical angular resolution of the lidar itself, blind areas exist in the point cloud data collected by the lidar. A blind area may contain one or more sub-blind areas, and the top view of each sub-blind area may be an irregular polygon. Fig. 2 is a schematic diagram of a blind area provided by an embodiment of the present disclosure. In Fig. 2, the target vehicle detects two obstacles through the lidar, forming two sub-blind areas. The position information of a blind area may include the coordinates of the blind area in the point cloud data and the height information of the blind area.
The blind area information may include blind area position information and object information within the blind area. Objects within a blind area may include observable objects and unobservable objects; correspondingly, the object information within the blind area includes observable object information and unobservable object information. An unobservable object is an object completely occluded by an obstacle, which the radar cannot detect; an observable object is an object detected by the radar within the blind area, such as the obstacle causing the blind area itself, an object partially occluded by the obstacle, or certain objects behind the obstacle detected through a gap arising where, due to error, the blind area does not exactly coincide with the obstacle contour.
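As an illustration only, the blind area information described above can be organized as a small set of containers; the disclosure does not prescribe any data layout, so all class and field names below are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical containers for the blind area information described above.
# Names and fields are illustrative, not part of the disclosure.

@dataclass
class TrackedObject:
    object_id: int
    position: tuple      # (x, y) coordinates in the point cloud frame
    heading: float       # orientation angle
    speed: float
    category: str        # e.g. "pedestrian", "vehicle"

@dataclass
class SubBlindArea:
    area_id: int
    polygon: list        # vertices of the (possibly irregular) top-view polygon
    height: float        # height information of the blind area
    observable: dict = field(default_factory=dict)    # object_id -> TrackedObject
    unobservable: dict = field(default_factory=dict)  # object_id -> TrackedObject

@dataclass
class BlindAreaInfo:
    # A frame's blind area information: position info plus per-area object info.
    sub_areas: list = field(default_factory=list)     # list[SubBlindArea]
```

A frame's blind area is then simply the collection of its sub-blind areas, each carrying its own observable and unobservable object information.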
The target object information within the blind area can be determined from the point cloud data. For example, a trained object recognition model may be used to recognize the point cloud data and obtain the target object information in the point cloud data.
The target object information may include information such as the position, shape, orientation angle, speed, and category of the target object.
S120: Determine, based on the second point cloud data, the first blind area information, and the first object recognition result, second blind area information corresponding to the second point cloud data.
The first blind area information may include position information of a first blind area in the first point cloud data, and the second blind area information may include position information of a second blind area in the second point cloud data. The position information of the second blind area and the second object recognition result can be determined by an object recognition algorithm.
In a possible embodiment, the position information of the second blind area corresponding to the second point cloud data may be determined through the following steps:
based on the second point cloud data, determining information on obstacles within a set range from the target vehicle in the second point cloud data;
determining the position information of the second blind area based on beam information emitted by the radar and the determined obstacle information.
The radar can obtain the position information, in a set coordinate system, of each point constituting an obstacle contour; in this way, the contour information of obstacles within the set range from the target vehicle can be obtained based on the point cloud data.
The beam information may include the number of radio-wave beams emitted by the radar at each rotation angle and their heights above the ground, and may specifically be represented by a pre-established beam height map. For example, a bird's-eye-view grid map may be pre-constructed for the ground surface area that contains the target vehicle and lies within the set range of it. For each grid cell, a beam height map corresponding to the grid map is generated based on the beam information emitted by the radar. The beam height map has three dimensions: the first two dimensions represent the row and column position of each grid cell in the beam height map, and the third dimension represents the number of beams covering each grid cell. In addition, the height, within each grid cell, of every beam covering that cell is also recorded in the cell.
Here, the number of beams corresponding to a grid cell refers to the number of beams, among those emitted by the radar device, that enter the cell as determined solely from the radar's mounting position, mounting angle, and the arrangement angles of the radar emitters, without considering any obstacles in the cell. For example, for each beam entering the cell, the beam may be translated until it intersects the straight line passing through the cell's center point and perpendicular to the grid plane; the distance between this intersection point and the cell's center is taken as the beam height of that beam at the cell.
In this embodiment, the position information of the second blind area can be determined from the information on obstacles within the set range from the target vehicle in the second point cloud data and the beam information emitted by the radar, enabling accurate determination of the radar blind area.
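The beam height map construction above can be sketched as follows. This is a minimal illustration assuming a flat ground plane, a lidar at the grid origin, and downward-tilted beams given by their vertical angles; all parameter names are hypothetical:

```python
import math

def build_beam_height_map(rows, cols, cell_size, lidar_height, vertical_angles_deg):
    """Illustrative beam height map: for each bird's-eye-view grid cell, record
    the heights at which the radar's beams (ignoring obstacles) pass over the
    cell center. vertical_angles_deg uses negative values for downward beams.
    Returns a dict (row, col) -> sorted list of beam heights at that cell;
    the list length is the beam count for the cell (the map's third dimension).
    """
    beam_map = {}
    for r in range(rows):
        for c in range(cols):
            # Horizontal distance from the radar (at the map origin) to the cell center.
            x = (c + 0.5) * cell_size
            y = (r + 0.5) * cell_size
            dist = math.hypot(x, y)
            heights = []
            for ang in vertical_angles_deg:
                # Height of this beam above the ground where it crosses the cell center.
                h = lidar_height + dist * math.tan(math.radians(ang))
                if h > 0:  # a beam that already hit the ground does not reach the cell
                    heights.append(h)
            beam_map[(r, c)] = sorted(heights)
    return beam_map
```

Steeply tilted beams drop out of distant cells (they intersect the ground earlier), which is exactly why distant cells are covered by fewer beams and are more prone to blind areas.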
The step of determining the position information of the first blind area of the first point cloud data may be the same as that of determining the position information of the second blind area.
The first blind area information may further include object information within the first blind area, and the second blind area information may further include target object information within the second blind area.
Determining the second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result may include:
determining initial object information within the second blind area based on the position information of the second blind area, the position information of the first blind area, and the object information within the first blind area;
updating the initial object information based on the object information within the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area, to obtain the target object information within the second blind area.
The initial object information may be object information inherited from the historical frame. In a possible embodiment, the first blind area may include at least one first sub-blind area, and the second blind area may include at least one second sub-blind area. The initial object information may be determined through the following steps:
determining, based on the position information of the first blind area and the position information of the second blind area, the association relationship between each second sub-blind area and each first sub-blind area;
determining, based on the association relationships and the target object information within each first sub-blind area, the initial object information within each second sub-blind area.
The association relationship may include: appearance (the sub-blind area does not exist in the historical frame but exists in the current frame), disappearance (the sub-blind area exists in the historical frame but not in the current frame), one-to-one (a sub-blind area in the historical frame corresponds to a sub-blind area in the current frame, with no other associated sub-blind areas), splitting (a sub-blind area in the historical frame is associated with multiple sub-blind areas in the current frame), merging (multiple sub-blind areas in the historical frame are associated with only one sub-blind area in the current frame), and splitting plus merging (splitting and merging occur simultaneously). According to the determined association relationship, it can be determined which first sub-blind area's target object information a second sub-blind area should inherit.
In this embodiment, the association relationship between each second sub-blind area and each first sub-blind area is determined through the position information of the first blind area and the second blind area, and the initial object information within each second sub-blind area is then determined according to the association relationships. The sub-blind areas in the historical frame and those in the current frame can thus be associated in time sequence, so that object information within the blind areas is propagated along the time dimension.
In some possible embodiments, the association relationship between each second sub-blind area and each first sub-blind area may be determined through the following steps:
determining, based on the position information of the first blind area and the position information of the second blind area, whether an overlapping region exists between each second sub-blind area and each first sub-blind area;
for any second sub-blind area, determining the association relationship between that second sub-blind area and each first sub-blind area based on the area of the overlapping region between them. For example, suppose there are m sub-blind areas in the historical frame and n sub-blind areas in the current frame. The position information of the first blind area in the historical frame may first be transformed into the coordinate system of the position information of the second blind area in the current frame, and an m×n association matrix is then determined: if an overlapping region exists between a first sub-blind area and a second sub-blind area, i.e., the area of the overlapping region is greater than 0, the corresponding entry of the association matrix may be set to 1, and otherwise to 0. By solving this association matrix, the association relationship between each second sub-blind area and each first sub-blind area can be obtained, yielding the successors of each first sub-blind area (the second sub-blind areas overlapping it) and the predecessors of each second sub-blind area (the first sub-blind areas overlapping it).
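The m×n association matrix above can be sketched as follows. For brevity this sketch represents sub-blind areas as axis-aligned rectangles, whereas the disclosure allows irregular polygons (a polygon-intersection routine would replace `rect_overlap_area`); all function names are illustrative:

```python
def rect_overlap_area(a, b):
    """Overlap area of two axis-aligned rectangles (x_min, y_min, x_max, y_max).
    Simplification: the disclosure's sub-blind areas may be irregular polygons."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def association_matrix(hist_areas, curr_areas):
    """m x n binary matrix: entry [i][j] is 1 iff historical sub-blind area i
    overlaps current sub-blind area j (overlap area > 0). Both inputs are
    assumed to already share one coordinate system."""
    return [[1 if rect_overlap_area(h, c) > 0 else 0 for c in curr_areas]
            for h in hist_areas]

def predecessors_and_successors(matrix):
    """Derive each current area's predecessors and each historical area's
    successors from the association matrix."""
    m = len(matrix)
    n = len(matrix[0]) if m else 0
    succ = {i: [j for j in range(n) if matrix[i][j]] for i in range(m)}
    pred = {j: [i for i in range(m) if matrix[i][j]] for j in range(n)}
    return pred, succ
```

A current area with two predecessors is a merging case; a historical area with two successors is a splitting case; an empty predecessor list means the sub-blind area has just appeared.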
For example, identification information may be assigned to sub-blind areas to reflect the association relationships between different sub-blind areas. For instance, if a second sub-blind area has no predecessor, new identification information may be allocated to it; if a second sub-blind area has one and only one predecessor, it may inherit that predecessor's identification information; if a second sub-blind area has multiple predecessors (a merging relationship), it may inherit the most recent identification information among the predecessors and inherit the target object information of all predecessors; if a first sub-blind area has multiple successors (a splitting relationship), new identification information may be allocated to all successors of that first sub-blind area; if merging and splitting relationships exist simultaneously, new identification information may be allocated to the successor second sub-blind areas.
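The identifier-assignment rules above can be sketched as one function. This assumes, as an illustration, that identifiers are integers increasing over time, so "the most recent" predecessor ID is the largest; the function and parameter names are hypothetical:

```python
def assign_ids(pred, succ, hist_ids, new_id):
    """Illustrative ID assignment for current (second) sub-blind areas.

    pred: current index -> list of historical predecessor indices
    succ: historical index -> list of current successor indices
    hist_ids: historical index -> identification info (assumed monotone in time)
    new_id: callable returning a fresh identifier
    """
    curr_ids = {}
    for j, preds in pred.items():
        split_involved = any(len(succ[i]) > 1 for i in preds)
        if not preds:
            curr_ids[j] = new_id()            # appearance: fresh ID
        elif split_involved:
            curr_ids[j] = new_id()            # splitting (or split + merge): fresh ID
        elif len(preds) == 1:
            curr_ids[j] = hist_ids[preds[0]]  # one-to-one: inherit the predecessor's ID
        else:
            # merging: inherit the most recent predecessor ID
            curr_ids[j] = max(hist_ids[i] for i in preds)
    return curr_ids
```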
After the association relationships are determined, for each second sub-blind area of the at least one second sub-blind area, the initial object information within that second sub-blind area may be determined based on the object information within each first sub-blind area and the association relationship between that second sub-blind area and each first sub-blind area.
For example, the predecessors of the second sub-blind area may be determined through the association type (such as merging or splitting) between the second sub-blind area and each first sub-blind area, and the initial object information within the second sub-blind area may then be determined based on the target object information of each predecessor.
If a second sub-blind area has one and only one predecessor, the target object information of that predecessor may be taken as the initial object information of the second sub-blind area; if a second sub-blind area has multiple predecessors, the target object information of all its predecessors may be merged as the initial object information of the second sub-blind area.
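The inheritance rule above reduces to merging the object dictionaries of all predecessors (a single predecessor being the degenerate case). A minimal sketch, with hypothetical names:

```python
def inherit_initial_objects(pred, hist_objects):
    """Illustrative inheritance of initial object info for current sub-blind areas.

    pred: current index -> list of historical predecessor indices
    hist_objects: historical index -> dict of object_id -> object info
    Returns current index -> dict of inherited (initial) object info.
    """
    initial = {}
    for j, preds in pred.items():
        merged = {}
        for i in preds:        # one predecessor: plain copy; several: merged union
            merged.update(hist_objects[i])
        initial[j] = merged    # no predecessor: the new sub-blind area starts empty
    return initial
```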
In this embodiment, the association relationship between each second sub-blind area and each first sub-blind area is determined by judging whether an overlapping region exists between them, making the determined association relationships between sub-blind areas more accurate.
After the current frame inherits the target object information of the historical frame, since a certain time difference exists between the historical frame and the current frame, the objects detected by the radar may have moved: some objects may have entered a blind area and some may have left one, so the determined initial object information needs to be updated. In some possible embodiments, the initial object information may be updated through the following steps:
updating the observable object information in the initial object information based on the first object recognition result, the second object recognition result, and the position information of the second blind area;
updating the unobservable object information in the initial object information based on the first object recognition result, the second object recognition result, the observable object information in the initial object information, and the unobservable object information in the target object information within the first blind area;
determining the updated initial object information as the target object information within the second blind area.
该步骤可以通过对各个目标对象在不同帧之间进行跟踪来实现,跟踪方式可以包括显式跟踪及隐式跟踪,显式跟踪可以利用雷达的对象识别结果作为跟踪结果,并将其记录在其对应的盲区的可观测对象信息中,隐式跟踪可以是针对不可观测对象,将最近一次雷达观测到的该对象的对象识别结果作为跟踪结果,并将其记录在对应的盲区的不可观测对象信息中。This step can be realized by tracking each target object between different frames. The tracking method can include explicit tracking and implicit tracking. The explicit tracking can use the object recognition result of the radar as the tracking result and record it in its In the observable object information of the corresponding blind area, implicit tracking can be for the unobservable object, and the object recognition result of the object observed by the radar last time is used as the tracking result, and it is recorded in the unobservable object information of the corresponding blind area middle.
具体的,可以利用第一对象识别结果及第二对象识别结果,对点云数据中的对象进行跟踪,确定第一点云数据与第二点云数据中的同一对象,识别到的同一对象可以使用相同的对象标识进行表示,在确定一目标对象的出现位置处于一第二子盲区内时,可以将该目标对象添加至该第二子盲区的可观测对象信息中,实现对初始对象信息中的可观测对象信息更新。Specifically, the first object recognition result and the second object recognition result can be used to track the object in the point cloud data to determine the same object in the first point cloud data and the second point cloud data, and the identified same object can be Use the same object identifier to represent, when it is determined that the appearance position of a target object is in a second sub-blind area, the target object can be added to the observable object information of the second sub-blind area, so as to implement the original object information Observable information updates for .
Further, the unobservable object information in the initial object information can be updated. For example:
- If an object is observed in the second point cloud data (i.e., it exists in the observable object information of a second sub-blind area, or it does not lie in any second sub-blind area), and it exists both in the observable object information of a second sub-blind area and in the unobservable object information of a first sub-blind area, the object entered the first blind area and was then detected again in the second blind area; explicit tracking can be applied to this object, taking its second object recognition result as the tracking information.
- If an object is observed in the second point cloud data and exists in the observable object information of a second sub-blind area, but not in the unobservable object information of any first sub-blind area, the object has just entered the second blind area (for example, part of the object has entered it); a new tracking target can be established for the second sub-blind area it entered, and the unobservable object information is not updated for now.
- If an object is observed in the second point cloud data and exists in the unobservable object information of a first sub-blind area, but not in the observable object information of any second sub-blind area, the object can be considered to have left the first blind area, and it is removed from the unobservable object information of the corresponding second sub-blind area.
- If an object is observed in the second point cloud data but exists neither in the observable object information of any second sub-blind area nor in the unobservable object information of any first sub-blind area, the object is outside the blind areas and is not processed.
- If an object is not observed in the second point cloud data but exists in the unobservable object information of a first sub-blind area, the object is still inside a blind area; implicit tracking is applied, continuously recording the blind area in which the object is located.
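The per-object decision rules above can be sketched as a single function; the action names and the boolean condition encoding are illustrative assumptions introduced here for clarity.

```python
def update_tracking(observed_now, in_second_observable, in_first_unobservable):
    """Return the tracking action for one object.

    observed_now          -- object observed in the second point cloud data
    in_second_observable  -- object present in the observable object
                             information of some second sub-blind area
    in_first_unobservable -- object present in the unobservable object
                             information of some first sub-blind area
    """
    if observed_now and in_second_observable and in_first_unobservable:
        return "explicit_track"   # re-detected after passing through a blind area
    if observed_now and in_second_observable and not in_first_unobservable:
        return "new_track"        # just entering the second blind area
    if observed_now and not in_second_observable and in_first_unobservable:
        return "remove"           # object has left the blind area
    if observed_now and not in_second_observable and not in_first_unobservable:
        return "ignore"           # object is outside all blind areas
    if not observed_now and in_first_unobservable:
        return "implicit_track"   # still hidden: keep the last observed result
    return "ignore"
```

Each branch corresponds one-to-one to a case in the enumeration above, so the function doubles as a compact form of the decision table referenced later.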
FIG. 3 is a schematic diagram of target object states provided by an embodiment of the present disclosure, in which "-" indicates that a target object leaves a blind area, "=" indicates that a target object keeps its current state, and "+" indicates that a target object enters a blind area.
By way of example, the table below shows how the tracking method for a target object is selected, where "Y" means "yes" and "F" means "no".
[Table: Figure PCTCN2022103221-appb-000001 — decision table for selecting the tracking method]
By using the first object recognition result, the second object recognition result, the observable object information in the initial object information, and the unobservable object information in the target object information in the first blind area, the above object tracking can be performed. This determines which target objects should be present in the unobservable object information of each second sub-blind area, updates the unobservable object information in the initial object information, and finally yields the target object information in the second blind area.
S130、基于所述第二盲区信息,控制所述目标车辆的行驶状态。S130. Based on the second blind spot information, control the driving state of the target vehicle.
From the target object information in the second blind area, the type, quantity, size, orientation, and other attributes of the target objects in each second sub-blind area of the second blind area can be obtained, and the driving state of the target vehicle can then be controlled accordingly, reducing the risk of autonomous driving.
在一些可能的实施例中,可以通过以下步骤控制目标车辆的行驶状态:In some possible embodiments, the driving state of the target vehicle can be controlled through the following steps:
基于所述第二盲区内的目标对象信息的不可观测对象信息,确定各个所述第二子盲区内的不可观测对象的类型及数量;Based on the unobservable object information of the target object information in the second blind area, determine the type and quantity of unobservable objects in each of the second sub-blind areas;
基于各个所述第二子盲区内的不可观测对象的类型及数量,确定各个所述第二子盲区对应的危险等级;Based on the type and quantity of unobservable objects in each of the second sub-blind areas, determine the risk level corresponding to each of the second sub-blind areas;
基于各个所述第二子盲区对应的危险等级,控制所述目标车辆的行驶状态。Based on the danger level corresponding to each of the second sub-blind spots, the driving state of the target vehicle is controlled.
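The steps above can be sketched as a scoring function over one second sub-blind area. The weights, thresholds, level names, and object-type strings below are illustrative assumptions; the disclosure only specifies that vulnerable road users and larger counts raise the level, and that proximity to the vehicle's future trajectory raises it further.

```python
VULNERABLE_TYPES = {"pedestrian", "bicycle"}  # vulnerable road users

def risk_level(unobservable_objects, distance_to_trajectory):
    """Score one second sub-blind area from its hidden objects.

    unobservable_objects   -- list of type strings for the sub-blind area's
                              unobservable objects
    distance_to_trajectory -- metres from the sub-blind area to the target
                              vehicle's planned trajectory
    """
    score = 0.0
    for obj_type in unobservable_objects:
        # vulnerable road users contribute more to the risk
        score += 2.0 if obj_type in VULNERABLE_TYPES else 1.0
    # closer sub-blind areas are more dangerous
    score *= 1.0 / (1.0 + distance_to_trajectory)
    if score >= 1.0:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"
```

The resulting level would then drive the control policy, e.g. decelerating near "high" sub-blind areas or replanning the route around them.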
The above risk level can be related to the type and quantity of the unobservable objects. For example, if the unobservable objects in a second sub-blind area are pedestrians or cyclists (vulnerable road users), its risk level is relatively high, and the more unobservable objects the second sub-blind area contains, the higher its risk level. Further, the risk level can also be related to the position of the second sub-blind area; for example, the closer the second sub-blind area is to the future trajectory of the target vehicle, the higher its risk level.
The higher the risk level of a second sub-blind area, the more likely an accident is to occur near it, such as a vehicle or pedestrian suddenly emerging. The target vehicle can be controlled to decelerate at positions with a higher risk level, or its driving route can be adjusted, so as to avoid accidents and thereby improve the safety of the target vehicle's autonomous driving.
The above embodiments use the position information of the second blind area, the position information of the first blind area, and the object information in the first blind area, so that the second blind area can inherit the object information of the first blind area from the historical frame, yielding the initial object information in the second blind area. The initial object information is then updated based on the target object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area, yielding the target object information in the second blind area. This determines the target objects within the blind area and allows the driving state of the vehicle to be controlled according to the target object information.
Corresponding to the above vehicle control method, the present disclosure further discloses a vehicle control apparatus. Each module of the apparatus can implement each step of the vehicle control method of the above embodiments and achieve the same beneficial effects, so the identical parts are not repeated here. Specifically, as shown in FIG. 4, the vehicle control apparatus includes:
an obtaining module 410, configured to obtain first blind area information corresponding to first point cloud data in a historical frame collected, before the current frame, by a radar installed on a target vehicle, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame;
确定模块420,用于基于所述第二点云数据、所述第一盲区信息、所述第一对象识别结果,确定所述第二点云数据对应的第二盲区信息;A determining module 420, configured to determine second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result;
控制模块430,用于基于所述第二盲区信息,控制所述目标车辆的行驶状态。A control module 430, configured to control the driving state of the target vehicle based on the second blind spot information.
In a possible implementation, the first blind area information includes position information of the first blind area in the first point cloud data, and the second blind area information includes position information of the second blind area in the second point cloud data;
所述确定模块420具体用于:The determination module 420 is specifically used for:
确定所述第二点云数据对应的第二对象识别结果及所述第二盲区的位置信息。A second object recognition result corresponding to the second point cloud data and position information of the second blind area are determined.
在一种可能的实施方式中,所述第一盲区信息还包括所述第一盲区内的对象信息;所述第二盲区信息还包括所述第二盲区内的目标对象信息;所述确定模块420在基于所述第二点云数据、所述第一盲区信息、所述第一对象识别结果,确定所述第二点云数据对应的第二盲区信息时,用于:In a possible implementation manner, the first blind area information further includes object information in the first blind area; the second blind area information further includes target object information in the second blind area; the determining module 420 When determining the second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result, it is used to:
基于所述第二盲区的位置信息、所述第一盲区的位置信息,以及所述第一盲区内的对象信息,确定第二盲区内的初始对象信息;determining initial object information in the second blind area based on the position information of the second blind area, the position information of the first blind area, and the object information in the first blind area;
updating the initial object information based on the object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area, to obtain the target object information in the second blind area.
在一种可能的实施方式中,所述确定模块420具体用于:In a possible implementation manner, the determining module 420 is specifically configured to:
基于所述第二点云数据,确定在所述第二点云数据中距离所述目标车辆设定范围内的障碍物的信息;Based on the second point cloud data, determine information on obstacles within a set range from the target vehicle in the second point cloud data;
基于所述雷达发射的线束信息、以及所述障碍物的信息,确定所述第二盲区的位置信息。The position information of the second blind area is determined based on the wire harness information emitted by the radar and the information of the obstacle.
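A simple 2-D sketch of this step, under the assumption that each nearby obstacle is summarized as an angular position, a range, and an angular half-width relative to the radar: the sector behind the obstacle, out to the radar's maximum range, is reported as a blind area. The obstacle encoding and the function name are assumptions for illustration, not the disclosure's actual representation of beam information.

```python
def blind_sectors(obstacles, max_range):
    """Derive blind-area positions from radar beam occlusion.

    obstacles -- list of (angle_rad, distance_m, half_width_rad) tuples,
                 one per obstacle within the set range of the vehicle
    max_range -- the radar's maximum sensing range in metres
    Returns one blind sector per occluding obstacle as
    (start_angle, end_angle, near_range, far_range).
    """
    sectors = []
    for angle, dist, half_width in obstacles:
        if dist >= max_range:
            continue  # obstacle beyond sensing range casts no usable shadow
        # the beams within this angular interval are blocked past the obstacle
        sectors.append((angle - half_width, angle + half_width, dist, max_range))
    return sectors
```

Adjacent or overlapping sectors could then be merged into the sub-blind areas that the rest of the method reasons about.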
在一种可能的实施方式中,所述第一盲区包括至少一个第一子盲区,所述第二盲区包括至少一个第二子盲区;In a possible implementation manner, the first blind area includes at least one first sub-blind area, and the second blind area includes at least one second sub-blind area;
所述确定模块430具体用于:The determining module 430 is specifically used for:
determining, based on the position information of the first blind area and the position information of the second blind area, an association relationship between each second sub-blind area in the at least one second sub-blind area and each first sub-blind area in the at least one first sub-blind area;
基于确定的所述关联关系,以及所述至少一个第一子盲区中各个第一子盲区内的目标对象信息,确定所述至少一个第二子盲区中各个第二子盲区内的初始对象信息。Based on the determined association relationship and target object information in each of the at least one first sub-blind areas, determine initial object information in each of the at least one second sub-blind areas.
在一种可能的实施方式中,所述确定模块420在基于所述第一盲区的位置信息,以及所述第二盲区的位置信息,分别确定所述至少一个第二子盲区中各个第二子盲区与所述至少一个第一子盲区中各个第一子盲区之间的关联关系时,用于:In a possible implementation manner, the determining module 420 determines each second sub-blind area in the at least one second sub-blind area based on the position information of the first blind area and the position information of the second blind area. When the association relationship between the blind area and each first sub-blind area in the at least one first sub-blind area is used for:
determining, based on the position information of the first blind area and the position information of the second blind area, an overlapping region between each second sub-blind area in the at least one second sub-blind area and each first sub-blind area in the at least one first sub-blind area;
for any second sub-blind area in the at least one second sub-blind area, determining, based on the area of the overlapping region between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area, the association relationship between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area.
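A minimal sketch of this overlap-based association, assuming each sub-blind area is approximated by an axis-aligned rectangle (x1, y1, x2, y2); real blind-area geometry would need a general polygon intersection, and the "largest overlap wins" rule is an assumption about how the association is resolved.

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def associate(second_subs, first_subs):
    """Associate each second sub-blind area with the first sub-blind area
    having the largest overlapping region; None when nothing overlaps."""
    links = []
    for s in second_subs:
        best, best_area = None, 0.0
        for i, f in enumerate(first_subs):
            area = overlap_area(s, f)
            if area > best_area:
                best, best_area = i, area
        links.append(best)
    return links
```

A second sub-blind area associated with index `i` would then inherit the object information recorded for the i-th first sub-blind area as its initial object information.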
在一种可能的实施方式中,所述确定模块420在基于所述关联关系,以及所述至少一个第一子盲区中各个第一子盲区内的对象信息,确定所述至少一个第二子盲区中各个第二子盲区内的初始对象信息时,用于:In a possible implementation manner, the determining module 420 determines the at least one second sub-blind area based on the association relationship and object information in each first sub-blind area of the at least one first sub-blind area When the initial object information in each of the second sub-blind areas is used for:
for any second sub-blind area in the at least one second sub-blind area, determining the initial object information in the second sub-blind area based on the object information in each first sub-blind area of the at least one first sub-blind area and the association relationship between the second sub-blind area and each first sub-blind area of the at least one first sub-blind area.
在一种可能的实施方式中,盲区内的对象信息包括可观测对象信息及不可观测对象信息;In a possible implementation manner, the object information in the blind area includes observable object information and unobservable object information;
所述确定模块420在基于所述第一盲区内的目标对象信息、所述第二对象识别结果、所述第一对象识别结果、所述第二盲区的位置信息,对所述初始对象信息进行更新,得到所述第二盲区内的目标对象信息时,用于:The determining module 420 is based on the target object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area, and performs an operation on the initial object information. Updating, when the target object information in the second blind area is obtained, it is used for:
基于所述第一对象识别结果、所述第二对象识别结果,以及所述第二盲区的位置信息,更新所述初始对象信息中的可观测对象信息;updating the observable object information in the initial object information based on the first object recognition result, the second object recognition result, and the location information of the second blind area;
updating the unobservable object information in the initial object information based on the first object recognition result, the second object recognition result, the observable object information in the initial object information, and the unobservable object information in the target object information in the first blind area;
确定更新后的所述初始对象信息为所述第二盲区内的目标对象信息。It is determined that the updated initial object information is target object information within the second blind area.
在一种可能的实施方式中,所述控制模块430具体用于:In a possible implementation manner, the control module 430 is specifically configured to:
基于所述第二盲区内的目标对象信息的不可观测对象信息,确定所述至少一个第二子盲区中各个第二子盲区内的不可观测对象的类型及数量;Based on the unobservable object information of the target object information in the second blind area, determine the type and quantity of unobservable objects in each second sub-blind area in the at least one second sub-blind area;
基于所述至少一个第二子盲区中各个第二子盲区内的不可观测对象的类型及数量,确定所述至少一个第二子盲区中各个第二子盲区对应的危险等级;Based on the type and quantity of unobservable objects in each second sub-blind area in the at least one second sub-blind area, determine the risk level corresponding to each second sub-blind area in the at least one second sub-blind area;
基于所述至少一个第二子盲区中各个第二子盲区对应的危险等级,控制所述目标车辆的行驶状态。Based on the risk level corresponding to each second sub-blind zone in the at least one second sub-blind zone, the driving state of the target vehicle is controlled.
对应于上述车辆控制方法,本公开实施例还提供了一种电子设备500,如图5所示,为本公开实施例提供的电子设备500结构示意图,包括:Corresponding to the above vehicle control method, an embodiment of the present disclosure further provides an electronic device 500, as shown in FIG. 5 , which is a schematic structural diagram of the electronic device 500 provided by the embodiment of the present disclosure, including:
a processor 51, a memory 52, and a bus 53. The memory 52 is used to store execution instructions and includes an internal memory 521 and an external memory 522. The internal memory 521 temporarily stores computation data for the processor 51 and data exchanged with the external memory 522, such as a hard disk; the processor 51 exchanges data with the external memory 522 through the internal memory 521. When the electronic device 500 runs, the processor 51 and the memory 52 communicate through the bus 53, so that the processor 51 executes the following instructions:
获取安装在目标车辆上的雷达在采集当前帧之前采集的历史帧中的第一点云数据对应的第一盲区信息、所述第一点云数据对应的第一对象识别结果、以及所述当前帧中的第二点云数据;Obtain the first blind area information corresponding to the first point cloud data in the historical frame collected by the radar installed on the target vehicle before collecting the current frame, the first object recognition result corresponding to the first point cloud data, and the current the second point cloud data in the frame;
基于所述第二点云数据、所述第一盲区信息、所述第一对象识别结果,确定所述第二点云数据对应的第二盲区信息;Based on the second point cloud data, the first blind area information, and the first object recognition result, determine second blind area information corresponding to the second point cloud data;
基于所述第二盲区信息,控制所述目标车辆的行驶状态。Based on the second blind spot information, the driving state of the target vehicle is controlled.
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述车辆控制方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the vehicle control method described in the above-mentioned method embodiments are executed. Wherein, the storage medium may be a volatile or non-volatile computer-readable storage medium.
本公开实施例还提供了一种计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述车辆控制方法的步骤,具体可参见上述方法实施例,在此不再赘述。Embodiments of the present disclosure also provide a computer program product, including a computer-readable storage medium storing program codes. The instructions contained in the program codes can be used to execute the steps of the vehicle control method described in the method embodiments above. Specifically, Refer to the foregoing method embodiments, and details are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process of the system and apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are realized in the form of software function units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure is essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in various embodiments of the present disclosure. The aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disc and other media that can store program codes. .
Finally, it should be noted that the above embodiments are only specific implementations of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may, within the technical scope disclosed herein, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

  1. 一种车辆控制方法,其特征在于,包括:A vehicle control method, characterized by comprising:
    获取安装在目标车辆上的雷达在采集当前帧之前采集的历史帧中的第一点云数据对应的第一盲区信息、所述第一点云数据对应的第一对象识别结果、以及所述当前帧中的第二点云数据;Obtain the first blind area information corresponding to the first point cloud data in the historical frame collected by the radar installed on the target vehicle before collecting the current frame, the first object recognition result corresponding to the first point cloud data, and the current the second point cloud data in the frame;
    基于所述第二点云数据、所述第一盲区信息、所述第一对象识别结果,确定所述第二点云数据对应的第二盲区信息;Based on the second point cloud data, the first blind area information, and the first object recognition result, determine second blind area information corresponding to the second point cloud data;
    基于所述第二盲区信息,控制所述目标车辆的行驶状态。Based on the second blind spot information, the driving state of the target vehicle is controlled.
  2. The method according to claim 1, wherein the first blind area information includes position information of the first blind area in the first point cloud data, and the second blind area information includes position information of the second blind area in the second point cloud data;
    所述基于所述第二点云数据、所述第一盲区信息、所述第一对象识别结果,确定所述第二点云数据对应的第二盲区信息,包括:The determining the second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result includes:
    确定所述第二点云数据对应的第二对象识别结果及所述第二盲区的位置信息。A second object recognition result corresponding to the second point cloud data and position information of the second blind area are determined.
  3. 根据权利要求2所述的方法,其特征在于,所述第一盲区信息还包括所述第一盲区内的对象信息;所述第二盲区信息还包括所述第二盲区内的目标对象信息;The method according to claim 2, wherein the first blind area information further includes object information in the first blind area; the second blind area information further includes target object information in the second blind area;
    所述基于所述第二点云数据、所述第一盲区信息、所述第一对象识别结果,确定所述第二点云数据对应的第二盲区信息,包括:The determining the second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result includes:
    基于所述第二盲区的位置信息、所述第一盲区的位置信息,以及所述第一盲区内的对象信息,确定第二盲区内的初始对象信息;determining initial object information in the second blind area based on the position information of the second blind area, the position information of the first blind area, and the object information in the first blind area;
    updating the initial object information based on the object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area, to obtain the target object information in the second blind area.
  4. 根据权利要求2所述的方法,其特征在于,确定所述第二盲区的位置信息包括:The method according to claim 2, wherein determining the position information of the second blind area comprises:
    基于所述第二点云数据,确定在所述第二点云数据中距离所述目标车辆设定范围内的障碍物的信息;Based on the second point cloud data, determine information on obstacles within a set range from the target vehicle in the second point cloud data;
    基于所述雷达发射的线束信息、以及所述障碍物的信息,确定所述第二盲区的位置信息。The position information of the second blind area is determined based on the wire harness information emitted by the radar and the information of the obstacle.
  5. 根据权利要求3或4所述的方法,其特征在于,所述第一盲区包括至少一个第一子盲区,所述第二盲区包括至少一个第二子盲区;The method according to claim 3 or 4, wherein the first blind area includes at least one first sub-blind area, and the second blind area includes at least one second sub-blind area;
    所述基于所述第二盲区的位置信息、所述第一盲区的位置信息,以及所述第一盲区内的对象信息,确定第二盲区内的初始对象信息,包括:The determining the initial object information in the second blind area based on the position information of the second blind area, the position information of the first blind area, and the object information in the first blind area includes:
    determining, based on the position information of the first blind area and the position information of the second blind area, an association relationship between each second sub-blind area in the at least one second sub-blind area and each first sub-blind area in the at least one first sub-blind area;
    基于所述关联关系,以及所述至少一个第一子盲区中各个第一子盲区内的对象信息,确定所述至少一个第二子盲区中各个第二子盲区内的初始对象信息。Based on the association relationship and the object information in each of the at least one first sub-blind areas, initial object information in each of the at least one second sub-blind areas is determined.
  6. 根据权利要求5所述的方法,其特征在于,所述基于所述第一盲区的位置信息,以及所述第二盲区的位置信息,分别确定所述至少一个第二子盲区中各个第二子盲区与所述至少一个第一子盲区中各个第一子盲区之间的关联关系,包括:The method according to claim 5, wherein, based on the position information of the first blind area and the position information of the second blind area, each second sub-block in the at least one second blind area is determined respectively. The association relationship between the dead zone and each first sub-blind zone in the at least one first sub-blind zone includes:
    determining, based on the position information of the first blind area and the position information of the second blind area, an overlapping region between each second sub-blind area in the at least one second sub-blind area and each first sub-blind area in the at least one first sub-blind area;
    for any second sub-blind area in the at least one second sub-blind area, determining, based on the area of the overlapping region between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area, the association relationship between the second sub-blind area and each first sub-blind area in the at least one first sub-blind area.
  7. The method according to claim 5 or 6, wherein the determining, based on the association relationship and the object information in each first sub-blind area of the at least one first sub-blind area, initial object information in each second sub-blind area of the at least one second sub-blind area comprises:
    针对所述至少一个第二子盲区中任一第二子盲区,基于所述至少一个第一子盲区中各个第一子盲区内的对象信息,以及该第二子盲区与所述至少一个第一子盲区中各个第一子盲区之间的关联关系,确定该第二子盲区内的初始对象信息。For any second sub-blind area in the at least one second sub-blind area, based on the object information in each first sub-blind area in the at least one first sub-blind area, and the relationship between the second sub-blind area and the at least one first sub-blind area The association relationship among the first sub-blind areas in the sub-blind areas determines the initial object information in the second sub-blind areas.
  8. The method according to claim 5, wherein the object information in a blind area comprises observable object information and unobservable object information; and
    updating the initial object information based on the object information in the first blind area, the second object recognition result, the first object recognition result, and the position information of the second blind area to obtain the target object information in the second blind area comprises:
    updating the observable object information in the initial object information based on the first object recognition result, the second object recognition result, and the position information of the second blind area;
    updating the unobservable object information in the initial object information based on the first object recognition result, the second object recognition result, the observable object information in the initial object information, and the unobservable object information in the object information in the first blind area;
    determining the updated initial object information as the target object information in the second blind area.
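Claim 8 splits blind-area objects into observable and unobservable sets and updates both from the two frames' recognition results. One plausible reading, sketched below, is that an object detected in the previous frame but missing from the current one, whose last known position lies inside the second blind area, is moved to the unobservable set, while re-detected objects are removed from it. All of this logic, and every name, is an assumption for illustration only:

```python
def update_object_info(first_result, second_result, second_area_contains,
                       init_observable, init_unobservable):
    # first_result / second_result: lists of {"id": ..., "position": ...}
    # second_area_contains: predicate testing membership in the second blind area.
    seen_now = {o["id"] for o in second_result}
    # Keep only observable objects that the current frame still detects.
    observable = [o for o in init_observable if o["id"] in seen_now]
    unobservable = {o["id"]: o for o in init_unobservable}
    # Objects seen last frame, gone this frame, last located inside the
    # second blind area: presumed to have entered it unobserved.
    for o in first_result:
        if o["id"] not in seen_now and second_area_contains(o["position"]):
            unobservable[o["id"]] = o
    # Objects re-detected this frame are no longer unobservable.
    for oid in list(unobservable):
        if oid in seen_now:
            del unobservable[oid]
    return observable, list(unobservable.values())
```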
  9. The method according to claim 8, wherein controlling the driving state of the target vehicle based on the second blind area information comprises:
    determining a type and quantity of unobservable objects in each second sub-blind area of the at least one second sub-blind area based on the unobservable object information in the target object information in the second blind area;
    determining a risk level corresponding to each second sub-blind area of the at least one second sub-blind area based on the type and quantity of unobservable objects in each second sub-blind area;
    controlling the driving state of the target vehicle based on the risk level corresponding to each second sub-blind area of the at least one second sub-blind area.
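Claim 9 grades each second sub-blind area by the type and quantity of its unobservable objects and drives accordingly. A simplified sketch of one possible grading rule follows; the per-type weights, thresholds, and speed limits are illustrative assumptions, not values from the application:

```python
# Illustrative severity weights per object type (assumed, not in the claims).
TYPE_WEIGHTS = {"pedestrian": 3, "cyclist": 2, "vehicle": 1}

def risk_level(unobservable_objects):
    # unobservable_objects: list of (object_type, count) pairs per sub-blind area.
    score = sum(TYPE_WEIGHTS.get(t, 1) * n for t, n in unobservable_objects)
    if score == 0:
        return "low"
    return "high" if score >= 3 else "medium"

def speed_limit_for(sub_area_risks):
    # Drive according to the worst sub-blind area (assumed control policy).
    limits = {"low": 60.0, "medium": 40.0, "high": 20.0}  # km/h
    return min(limits[r] for r in sub_area_risks) if sub_area_risks else limits["low"]
```

A single hidden pedestrian already yields the highest grade here, reflecting that unobservable vulnerable road users dominate the control decision.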
  10. A vehicle control apparatus, comprising:
    an acquisition module, configured to acquire first blind area information corresponding to first point cloud data in a historical frame collected, before a current frame, by a radar installed on a target vehicle, a first object recognition result corresponding to the first point cloud data, and second point cloud data in the current frame;
    a determination module, configured to determine second blind area information corresponding to the second point cloud data based on the second point cloud data, the first blind area information, and the first object recognition result;
    a control module, configured to control a driving state of the target vehicle based on the second blind area information.
  11. An electronic device, comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs the steps of the vehicle control method according to any one of claims 1 to 9.
  12. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is run by a computer device, the computer device performs the steps of the vehicle control method according to any one of claims 1 to 9.
PCT/CN2022/103221 2021-11-29 2022-07-01 Vehicle control WO2023093056A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111437547.4 2021-11-29
CN202111437547.4A CN116184992A (en) 2021-11-29 2021-11-29 Vehicle control method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023093056A1 true WO2023093056A1 (en) 2023-06-01

Family

ID=86433244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103221 WO2023093056A1 (en) 2021-11-29 2022-07-01 Vehicle control

Country Status (2)

Country Link
CN (1) CN116184992A (en)
WO (1) WO2023093056A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357863A1 (en) * 2016-06-10 2017-12-14 Denso Corporation Object detection apparatus and object detection method
CN109633688A (en) * 2018-12-14 2019-04-16 北京百度网讯科技有限公司 A kind of laser radar obstacle recognition method and device
CN110346799A (en) * 2019-07-03 2019-10-18 深兰科技(上海)有限公司 A kind of obstacle detection method and equipment
CN110550105A (en) * 2018-05-30 2019-12-10 奥迪股份公司 Driving assistance system and method
CN111186432A (en) * 2018-11-13 2020-05-22 杭州海康威视数字技术股份有限公司 Vehicle blind area early warning method and device
CN112363492A (en) * 2019-07-25 2021-02-12 百度(美国)有限责任公司 Computer-implemented method for operating an autonomous vehicle and data processing system
CN113103957A (en) * 2021-04-28 2021-07-13 上海商汤临港智能科技有限公司 Blind area monitoring method and device, electronic equipment and storage medium
CN113228135A (en) * 2021-03-29 2021-08-06 华为技术有限公司 Blind area image acquisition method and related terminal device
CN113276769A (en) * 2021-04-29 2021-08-20 深圳技术大学 Vehicle blind area anti-collision early warning system and method
CN113348119A (en) * 2020-04-02 2021-09-03 华为技术有限公司 Vehicle blind area identification method, automatic driving assistance system and intelligent driving vehicle comprising system

Also Published As

Publication number Publication date
CN116184992A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
US20200410690A1 (en) Method and apparatus for segmenting point cloud data, storage medium, and electronic device
US11530924B2 (en) Apparatus and method for updating high definition map for autonomous driving
US20200293058A1 (en) Data processing method, apparatus and terminal
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
CN109144097B (en) Obstacle or ground recognition and flight control method, device, equipment and medium
CN109143207B (en) Laser radar internal reference precision verification method, device, equipment and medium
CN111309013B (en) Collision distance determining method and system, vehicle and storage medium
WO2020007189A1 (en) Obstacle avoidance notification method and apparatus, electronic device, and readable storage medium
US9298992B2 (en) Geographic feature-based localization with feature weighting
US11520340B2 (en) Traffic lane information management method, running control method, and traffic lane information management device
CN111780771B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN110286389B (en) Grid management method for obstacle identification
WO2019023443A4 (en) Traffic management for materials handling vehicles in a warehouse environment
US11110932B2 (en) Methods and systems for predicting object action
JPWO2018235239A1 (en) Vehicle information storage method, vehicle travel control method, and vehicle information storage device
CN111695546A (en) Traffic signal lamp identification method and device for unmanned vehicle
US20220373353A1 (en) Map Updating Method and Apparatus, and Device
CN110320531A (en) Obstacle recognition method, map creating method and device based on laser radar
JP7147651B2 (en) Object recognition device and vehicle control system
US20210215808A1 (en) Real-time and dynamic localization using active doppler sensing systems for vehicles
CN116710976A (en) Autonomous vehicle system for intelligent on-board selection of data for training a remote machine learning model
JP7376682B2 (en) Object localization for autonomous driving using visual tracking and image reprojection
CN113238251A (en) Target-level semantic positioning method based on vehicle-mounted laser radar
WO2022078342A1 (en) Dynamic occupancy grid estimation method and apparatus
CN114537447A (en) Safe passing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897152

Country of ref document: EP

Kind code of ref document: A1