CN116087987A - Method, device, electronic equipment and storage medium for determining height of target object

Info

Publication number: CN116087987A
Application number: CN202211532939.3A
Authority: CN (China)
Prior art keywords: point cloud, cloud data, determining, target object, sub
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 李明龙, 羊野
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211532939.3A
Publication of CN116087987A

Classifications

    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01C 21/1652: Navigation by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01S 13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S 15/86: Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • G01S 15/931: Sonar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a method, a device, electronic equipment and a storage medium for determining the height of a target object, relating to the field of artificial intelligence and, in particular, to autonomous driving. The specific implementation scheme is as follows: determine a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame; determine a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and at least one frame of historical point cloud data; and determine an estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame.

Description

Method, device, electronic equipment and storage medium for determining height of target object
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the field of autonomous driving, and more particularly to a method, an apparatus, an electronic device, a storage medium, and a computer program product for determining the height of a target object.
Background
While an autonomous vehicle is driving, sensors such as lidar and cameras collect data about obstacles in the road; the data are then processed to obtain information such as the shape and height of each obstacle, and driving decisions such as going straight, braking, or detouring are made based on that information.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, storage medium, and computer program product for determining a height of a target object.
According to an aspect of the present disclosure, there is provided a method of determining a height of a target object, including: determining a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame; determining a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and at least one frame of historical point cloud data; and determining an estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame.
According to another aspect of the present disclosure, there is provided an apparatus for determining a height of a target object, including: the device comprises a first determining module, a second determining module and a third determining module. The first determining module is used for determining a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame; the second determining module is used for determining a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and at least one frame of historical point cloud data; the third determining module is used for determining the estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods provided by the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method provided by the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method provided by the present disclosure.
According to another aspect of the present disclosure, there is provided an autonomous vehicle including the above-described electronic device.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic illustration of an application scenario of a method and apparatus for determining target object height according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method of determining target object height according to an embodiment of the disclosure;
FIG. 3 is a schematic flow chart of a method of determining a first three-dimensional detection box according to an embodiment of the disclosure;
FIG. 4 is a schematic flow chart of a method of determining a second three-dimensional detection box according to an embodiment of the disclosure;
FIG. 5 is a schematic flow chart diagram of a method of determining an estimated altitude according to an embodiment of the disclosure;
FIG. 6 is a schematic flow chart diagram of a method of determining target object height according to another embodiment of the present disclosure;
FIG. 7A is a schematic diagram of a method of determining a target object height according to an embodiment of the present disclosure;
FIG. 7B is a schematic diagram of a method of determining a target object height according to an embodiment of the disclosure;
FIG. 8 is a schematic block diagram of an apparatus for determining the height of a target object according to an embodiment of the present disclosure; and
fig. 9 is a block diagram of an electronic device for implementing a method of determining a target object height according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In some embodiments, the point cloud data of the current frame may be used to determine an estimated height of an obstacle (e.g., a tree branch, a plastic bag, etc.), and the estimated height is input to a downstream decision module that determines a driving decision based on it. For example, if the estimated height of the obstacle determined from the current frame is high, e.g., higher than the actual height of the vehicle chassis, the decision module determines that the obstacle would collide with the vehicle, so the vehicle needs to detour around it. Conversely, if the estimated height is low, e.g., lower than the actual chassis height, the decision module determines that the obstacle cannot collide with the vehicle, so the vehicle may go straight without detouring.
However, an estimated height determined from only the point cloud data of the current frame suffers from unstable recognition and low accuracy, which can lead the vehicle to make an erroneous driving decision based on an erroneous estimated height.
In addition, when the vehicle is far from an obstacle, a short obstacle produces few reflection points, so the estimated height determined at long range tends to be lower than the actual height, and the decision module is more likely to decide to go straight. For example, suppose the chassis height of the vehicle is 15 cm and the actual height of an obstacle is 17 cm; at more than 50 meters from the obstacle, the estimated height may be only 3 cm, so the decision module determines that the vehicle will not collide with the obstacle and decides to go straight.
As the distance between the vehicle and the obstacle gradually decreases, the estimated height detected by the vehicle keeps increasing and approaches the actual height of the obstacle, but it does not reach the actual height until the vehicle is quite close to the obstacle. For example, only when the vehicle is 10 meters from the obstacle is the estimated height determined to be 17 cm; at that point the decision module determines that going straight would cause a collision and makes an emergency avoidance decision such as emergency braking or lane changing.
It can be seen that for a distant obstacle, the estimated height determined from point cloud data is lower than the actual height, which prevents the downstream decision module from making timely avoidance decisions such as braking or detouring.
Embodiments of the present disclosure aim to provide a method for determining the height of a target object that improves the accuracy of the determined estimated height.
The technical solutions provided by the present disclosure will be described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is an application scenario schematic diagram of a method and apparatus for determining a target object height according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include sensors 101, 102, 103, a network 120, a server 130, and a Road Side Unit (RSU) 140. Network 120 is the medium used to provide communication links between sensors 101, 102, 103 and server 130. Network 120 may include various connection types, such as wired and/or wireless communication links, and the like.
The sensors 101, 102, 103 may interact with the server 130 over the network 120 to receive or send messages, etc.
The sensors 101, 102, 103 may be functional elements integrated on the vehicle 110, such as infrared sensors, ultrasonic sensors, millimeter wave radars, image acquisition devices, lidars, inertial measurement units, etc. The sensors 101, 102, 103 may be used to collect status data of perceived objects (e.g., pedestrians, vehicles, obstacles, etc.) surrounding the vehicle 110 as well as surrounding roadway data.
Vehicle 110 may communicate with roadside unit 140, receive information from roadside unit 140, or send information to the roadside unit.
The server 130 may be disposed at a remote end capable of establishing communication with the vehicle-mounted terminal, and may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server.
The server 130 may be a server providing various services. For example, a map application or a data processing application may be installed on the server 130. Taking the server 130 running a data processing application as an example: it receives point cloud data transmitted from the sensors 101, 102, 103 over the network 120, treats the point cloud data as data to be processed, and processes it to obtain the estimated height of an obstacle.
It should be noted that the method for determining the height of the target object provided by the embodiments of the present disclosure may be generally performed by the vehicle 110 or the server 130. Accordingly, the apparatus for determining the height of the target object provided in the embodiments of the present disclosure may also be provided in the vehicle 110 or the server 130.
It will be appreciated that the number of sensors, networks and servers in fig. 1 is merely illustrative. There may be any number of sensors, networks, and servers, as desired for implementation.
Fig. 2 is a schematic flow chart of a method of determining target object height according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 of determining the height of the target object may include operations S210 to S230.
In operation S210, a first three-dimensional detection frame corresponding to the target object is determined according to the current point cloud data of the current frame.
For example, a vehicle is equipped with a lidar that collects point cloud data of the surrounding environment at a predetermined period. The current point cloud data of the current frame is the most recently acquired frame of point cloud data from the lidar.
For example, the target object may be an obstacle, such as a tree branch, a plastic bag, a traffic cone, or a similar low obstacle.
For example, the current point cloud data may include multiple sub-point clouds for multiple objects around the vehicle, with the target object corresponding to a first sub-point cloud among them. Target detection may be performed to obtain the first sub-point cloud corresponding to the target object, and a bounding box containing the first sub-point cloud is then determined as the first three-dimensional detection frame. The first three-dimensional detection frame is a 3D detection frame with attributes such as length, width, and height.
In operation S220, a second three-dimensional detection frame corresponding to the target object is determined according to the current point cloud data and the at least one frame of history point cloud data.
For example, the at least one frame of historical point cloud data may include the 2nd to Nth most recent frames acquired by the lidar, N being an integer greater than or equal to 2. The at least one frame of historical point cloud data may be, for example, the latest 4 frames of point cloud data acquired before the current frame.
For example, the current point cloud data and at least one frame of history point cloud data form multi-frame point cloud data, and for each frame of point cloud data in the multi-frame point cloud data, a second sub-point cloud corresponding to the target object may be determined, so as to obtain a plurality of second sub-point clouds corresponding to the multi-frame point cloud data. And then determining a bounding box containing the plurality of second sub-point clouds as a second three-dimensional detection box. The second three-dimensional detection frame is a 3D detection frame and has the attributes of length, width, height and the like.
In operation S230, an estimated height of the target object is determined according to the first three-dimensional detection frame and the second three-dimensional detection frame.
For example, the reliability of the first three-dimensional detection frame and that of the second three-dimensional detection frame may be compared first. When the ratio between a side length of the second three-dimensional detection frame and the corresponding side length of the first three-dimensional detection frame is less than or equal to a ratio threshold, the second three-dimensional detection frame may be determined to be trusted and the first untrusted; when the ratio is greater than the ratio threshold, the second is determined to be untrusted and the first trusted. An estimated height may then be determined based on the trusted three-dimensional detection frame, for example by taking its height as the estimated height. In this embodiment, the side length may be at least one of length, width, and height, and the ratio threshold may be 5.
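As an illustration only, a minimal sketch of this ratio-based trust check (the dict representation, names, and fallback behavior are assumptions, not the patent's reference implementation):

```python
RATIO_THRESHOLD = 5.0  # example threshold quoted in the text

def estimated_height(box_single: dict, box_multi: dict) -> float:
    """Compare side lengths of the two 3D detection frames and return the
    height of the trusted one. Each box is a dict with 'length', 'width',
    'height' (a representation assumed here for illustration)."""
    for side in ("length", "width", "height"):
        ratio = box_multi[side] / max(box_single[side], 1e-6)
        if ratio > RATIO_THRESHOLD:
            # The multi-frame detection frame is disproportionately large:
            # treat it as untrusted and fall back to the single-frame one.
            return box_single["height"]
    return box_multi["height"]
```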
For another example, corresponding weights may be set for the first and second three-dimensional detection frames; a weighted sum of their side lengths is then computed to obtain a target three-dimensional detection frame, and the height of the target three-dimensional detection frame is used as the estimated height.
Embodiments of the present disclosure not only use the current point cloud data of the current frame to determine a first three-dimensional detection frame, but also use multi-frame point cloud data to determine a second three-dimensional detection frame, and determine the estimated height of the target object based on both. Compared with determining the estimated height from the current frame alone, using at least one frame of historical point cloud data before the current frame improves the accuracy of the estimated height, alleviating the inaccuracy caused by unstable detection in a single frame.
On this basis, the decision module of the autonomous vehicle can determine driving decisions based on a more accurate estimated height, reducing the frequency of emergency avoidance maneuvers such as emergency braking and improving autonomous driving safety and the riding experience.
Fig. 3 is a schematic flow chart of a method of determining a first three-dimensional detection box according to an embodiment of the disclosure.
According to another embodiment of the present disclosure, the method 310 of determining a first three-dimensional detection frame corresponding to a target object according to the current point cloud data of the current frame may include operations S311 to S313.
In operation S311, first sub-point cloud data corresponding to the target object among the current point cloud data is determined.
In operation S312, a first main direction of the first sub-point cloud data is determined according to the coordinates of the first sub-point cloud data.
In operation S313, a first three-dimensional detection frame is determined according to the coordinates and the first main direction of the first sub-point cloud data.
For example, the first main direction of the first sub-point cloud data may be calculated using a least squares method: determine a first straight line such that the sum of the distances from the points of the first sub-point cloud data to that line is minimized.
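A minimal sketch of one such computation, assuming the main direction is taken in the ground (x/y) plane; PCA on the centered coordinates yields the line minimizing the summed squared point-to-line distances, which matches the least-squares idea described above (function names are illustrative):

```python
import numpy as np

def main_direction(points_xy: np.ndarray) -> np.ndarray:
    """points_xy: (N, 2) x/y coordinates of one sub-point cloud.
    Returns a unit vector along the least-squares line through the centroid."""
    centered = points_xy - points_xy.mean(axis=0)
    cov = centered.T @ centered
    # The eigenvector with the largest eigenvalue is the dominant direction.
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]
```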
In the related art, an upstream vision module issues a detection frame with an associated direction, and that direction is taken as the direction of the first sub-point cloud data. In practice, however, the direction issued by the upstream vision module may not match the distribution of the first sub-point cloud data: for example, the first sub-point cloud data may be distributed as a long strip while the issued direction forms a large angle with, or is even perpendicular to, the strip's extension direction. The first sub-point cloud data then has to be contained in a larger first three-dimensional detection frame, and a frame of that size easily includes points from objects other than the target object, reducing the accuracy of the detected shape and height.
According to embodiments of the present disclosure, the main direction is determined from the first sub-point cloud data itself, and the first three-dimensional detection frame is determined using that main direction; the resulting frame is smaller and fits the first sub-point cloud data more closely, improving the accuracy of the target object's shape and size. In addition, since the main direction is determined from the first sub-point cloud data without requiring a direction issued by an upstream vision module, the method is decoupled from the upstream vision module.
Fig. 4 is a schematic flow chart of a method of determining a second three-dimensional detection box according to an embodiment of the disclosure.
According to another embodiment of the present disclosure, the method 420 for determining a second three-dimensional detection frame corresponding to a target object according to the current point cloud data and at least one frame of history point cloud data may include operations S421 to S424.
In operation S421, first sub-point cloud data corresponding to the target object among the current point cloud data is determined.
In operation S422, for each of the at least one frame of history point cloud data, second sub-point cloud data corresponding to the target object in each of the history point cloud data is determined, and at least one second sub-point cloud data is obtained.
In operation S423, the first sub-point cloud data and the at least one second sub-point cloud data are superimposed to obtain third sub-point cloud data.
For example, influenced by factors such as actual changes of the target object and the detection accuracy of the lidar, two consecutively collected frames of point cloud data may contain some identical points, while the later frame may also gain some points and lose others relative to the earlier frame.
For example, the superposition may be the union of the first sub-point cloud data and the at least one second sub-point cloud data. If the first sub-point cloud data comprises the points P1, P2, P3, P4, P5 and the second sub-point cloud data comprises the points P1, P2, P3, P6, then the superposition of the two frames comprises the points P1, P2, P3, P4, P5, P6.
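A toy sketch of this union-style superposition (assuming points from different frames are already expressed in a common coordinate system and exact duplicates can be compared row-wise; a real pipeline would compensate ego motion first):

```python
import numpy as np

def superimpose(first: np.ndarray, *others: np.ndarray) -> np.ndarray:
    """Union of point sets, e.g. {P1..P5} ∪ {P1, P2, P3, P6} -> {P1..P6}.
    Each argument is an (N, 3) array; duplicate rows are kept only once."""
    stacked = np.vstack([first, *others])
    return np.unique(stacked, axis=0)
```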
In operation S424, a second three-dimensional detection frame is determined according to the third sub-point cloud data.
For example, a bounding box containing a plurality of third sub-point cloud data may be determined as the second three-dimensional detection box.
For another example, the second main direction of the third sub-point cloud data may be determined from its coordinates, and the second three-dimensional detection frame determined from the coordinates and the second main direction. The second main direction may likewise be determined by a least squares method, i.e., by finding a second straight line that minimizes the sum of the distances from the points of the third sub-point cloud data to it. Compared with using the point cloud direction issued by an upstream vision module, the second three-dimensional detection frame obtained from the main direction is smaller and fits the third sub-point cloud data more closely, improving the accuracy of the target object's shape and size. Furthermore, the method is decoupled from the upstream vision module.
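One plausible way to build a detection frame fitted to the main direction, sketched under the assumption that the frame is axis-aligned in a coordinate frame rotated so that its x-axis matches the main direction:

```python
import numpy as np

def oriented_box(points: np.ndarray, direction: np.ndarray) -> dict:
    """points: (N, 3) sub-point cloud; direction: unit (dx, dy) ground-plane
    vector, e.g. from main_direction(). Returns length/width measured along
    and across the main direction, plus the vertical extent as height."""
    dx, dy = direction
    rot = np.array([[dx, dy], [-dy, dx]])  # rotates (dx, dy) onto the x-axis
    xy = points[:, :2] @ rot.T
    extent = xy.max(axis=0) - xy.min(axis=0)
    return {
        "length": float(extent[0]),
        "width": float(extent[1]),
        "height": float(points[:, 2].max() - points[:, 2].min()),
    }
```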
Embodiments of the present disclosure determine multiple pieces of sub-point cloud data, i.e., the first sub-point cloud data and at least one second sub-point cloud data, from the current point cloud data and at least one frame of historical point cloud data, superimpose them, and determine the second three-dimensional detection frame from the superimposed third sub-point cloud data. Because the second three-dimensional detection frame is determined from multiple frames of point cloud data, it reflects attributes such as the obstacle's shape and height more accurately.
Fig. 5 is a schematic flow chart diagram of a method of determining an estimated altitude according to an embodiment of the disclosure.
According to another embodiment of the present disclosure, the above-described method 530 of determining an estimated height of a target object according to a first three-dimensional detection frame and a second three-dimensional detection frame may include operations S531 to S532.
In operation S531, a reference height of the target object is determined according to the visual detection frame for the target object.
In one example, the fourth sub-point cloud data in the current visual detection frame is determined from the current point cloud data, and then the highest height of the fourth sub-point cloud data is determined as the reference height.
For example, the upstream vision module issues a 2D visual detection frame associated with position information, and each point in the current point cloud data of the current frame also carries position information. The 3D point cloud can be projected into the 2D visual detection frame based on this position information, so that the points of the current frame that fall inside the 2D visual detection frame are found, and the highest height among them is determined as the reference height.
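A sketch of this projection-and-query step; the camera projection callable and the pixel-box format are assumptions for illustration, since the text does not spell out the camera model:

```python
import numpy as np

def reference_height(points: np.ndarray, box_2d, project) -> float:
    """points: (N, 3) lidar points of the current frame.
    box_2d: (u_min, v_min, u_max, v_max) 2D visual detection frame in pixels.
    project: callable mapping (N, 3) points to (N, 2) pixel coordinates,
    e.g. via camera intrinsics/extrinsics (assumed available)."""
    u, v = project(points).T
    u_min, v_min, u_max, v_max = box_2d
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    if not inside.any():
        return 0.0  # no point falls inside the box; caller decides the fallback
    return float(points[inside, 2].max())  # highest height as reference height
```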
This embodiment combines the visual detection frame with the current point cloud data of the current frame and uses the points inside the visual detection frame to determine the reference height, providing a reference for the target object's height so that whether the second three-dimensional detection frame is trustworthy can be checked accurately.
In other examples, the height of the visual inspection frame may be determined as the reference height.
In operation S532, an estimated height is determined according to the reference height, the first three-dimensional detection frame, and the second three-dimensional detection frame.
In one example, a first validity of the second three-dimensional detection frame may be determined based on the reference height and the height of the second three-dimensional detection frame, and a second validity may be determined from the length and width of the first three-dimensional detection frame and the length and width of the second three-dimensional detection frame. The estimated height may then be determined based on the first validity and the second validity.
For example, when the ratio between the height of the second three-dimensional detection frame and the reference height is less than or equal to a first threshold, the first validity of the second three-dimensional detection frame may be determined to be valid. The first threshold may be 5.
For example, when the ratio between the diagonal size of the second three-dimensional detection frame and that of the first three-dimensional detection frame is less than or equal to a second threshold, the second validity may be determined to be valid. The diagonal size may be calculated from the length and width using the Pythagorean theorem. The second threshold may be the same as or different from the first threshold, and may be 5.
For example, when both the first validity and the second validity are valid, the shape difference between the single-frame point cloud data (i.e., the current point cloud data of the current frame) and the multi-frame point cloud data (i.e., the current point cloud data plus the at least one frame of historical point cloud data) is small, so the second three-dimensional detection frame can be considered trustworthy, and its height may be determined as the estimated height.
For example, when at least one of the first validity and the second validity is invalid, the shape difference between the single-frame point cloud data and the multi-frame point cloud data is large and the actual shape of the target object may have changed; in that case the first three-dimensional detection frame is considered trustworthy, and its height may be determined as the estimated height.
In other examples, a weighted sum of the reference height, the height of the first three-dimensional detection frame, and the height of the second three-dimensional detection frame may be determined as the estimated height.
In the technical solution provided by embodiments of the present disclosure, the reference height and the first three-dimensional detection frame are both determined from the current point cloud data of the current frame, while the second three-dimensional detection frame is determined from multiple frames of point cloud data. The reference height and the first three-dimensional detection frame can therefore each be compared against the second three-dimensional detection frame, determining its validity by cross-validation, so that whether the second three-dimensional detection frame is trustworthy can be established accurately.
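Putting the two checks together, a hedged sketch of the cross-validation (both thresholds set to the example value 5 quoted above; names and the fallback are illustrative):

```python
import math

FIRST_THRESHOLD = 5.0   # height-ratio threshold from the text
SECOND_THRESHOLD = 5.0  # diagonal-ratio threshold from the text

def cross_validated_height(ref_height: float, box_single: dict,
                           box_multi: dict) -> float:
    # First validity: multi-frame frame height vs. the reference height.
    first_valid = box_multi["height"] / max(ref_height, 1e-6) <= FIRST_THRESHOLD
    # Second validity: diagonal sizes via the Pythagorean theorem.
    diag_single = math.hypot(box_single["length"], box_single["width"])
    diag_multi = math.hypot(box_multi["length"], box_multi["width"])
    second_valid = diag_multi / max(diag_single, 1e-6) <= SECOND_THRESHOLD
    if first_valid and second_valid:
        return box_multi["height"]  # second (multi-frame) frame is trusted
    return box_single["height"]     # otherwise fall back to the current frame
```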
Fig. 6 is a schematic flow chart of a method of determining a target object height according to another embodiment of the present disclosure.
As shown in fig. 6, the method 600 for determining the height of the target object may include operations S610 to S650, where operations S610 to S630 may refer to operations S210 to S230 above, and are not described herein.
In operation S640, a first weight corresponding to the estimated height and a second weight corresponding to the corrected height are determined according to the tracking stability parameter of the target object.
For example, the tracking stability parameter may include the actual distance between the target vehicle and the target object. Tracking stability may be negatively correlated with the actual distance: the longer the distance, the lower the tracking stability; the shorter the distance, the higher.
For example, the tracking stability parameter may include the number of detected frames for the target object, i.e., the number of frames of point cloud data containing the target object; the number of such frames detected within a past predetermined time period may be used. Tracking stability may be positively correlated with the number of detected frames: for example, stability is low when only 2 frames have been detected and high when more than 10 frames have been detected.
For example, the first weight and the second weight may be variables related to the tracking stability parameter, with the first weight positively correlated with tracking stability and the second weight negatively correlated with it.
For example, the corrected height may be related to the vehicle chassis height, e.g., the sum of the vehicle chassis height and a height threshold, where the height threshold may be 2 centimeters.
In operation S650, the height of the target object is determined according to the estimated height, the corrected height, the first weight, and the second weight.
For example, a weighted sum of the estimated height and the corrected height may be determined as the height of the target object, as expressed by the following Formula (1):

h = α · h₁ + β · h₂    (Formula 1)

where h denotes the height of the target object, α denotes the first weight, h₁ denotes the estimated height, β denotes the second weight, and h₂ denotes the corrected height; the sum of the first weight and the second weight may be 1.
In the related art, for a distant obstacle, the estimated height determined from point cloud data is lower than the actual height, which prevents the downstream decision module from making timely avoidance decisions such as braking or detouring.
Embodiments of the present disclosure use the tracking stability parameter to determine the first weight and the second weight, and then determine the height of the target object from the estimated height, the corrected height, and their corresponding weights. For a distant low obstacle, the few reflection points make the estimated height likely to fall below the obstacle's actual height; a smaller first weight and a larger second weight can therefore be used, so that the determined height for a distant obstacle is higher than the estimated height. This makes it easier for the downstream decision module to make timely avoidance decisions such as braking or detouring, improving driving safety and the riding experience.
According to another embodiment of the present disclosure, determining the first weight corresponding to the estimated height and the second weight corresponding to the corrected height according to the tracking stability parameter may include the following. In response to detecting that the actual distance between the target vehicle and the target object is less than a distance threshold and the number of detected frames for the target object is greater than a number threshold, tracking stability is deemed high and the first weight may be greater than the second weight. In response to detecting that the actual distance is greater than or equal to the distance threshold, or that the number of detected frames is less than or equal to the number threshold, tracking stability is deemed low and the first weight may be less than the second weight.
For example, the distance threshold may be 40 meters and the number threshold may be 10 frames.
For example, the first weight and the second weight may be fixed values. For example, when the first weight is greater than the second weight, the value of the first weight is 0.6, and the value of the second weight is 0.3. For example, when the first weight is smaller than the second weight, the value of the first weight is 0.4, and the value of the second weight is 0.6.
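A sketch of the weight selection and fusion (Formula 1 with the fixed example values quoted above; note that one quoted pair sums to 0.9, which is reproduced here as given rather than renormalized):

```python
DISTANCE_THRESHOLD = 40.0  # meters, from the text
FRAMES_THRESHOLD = 10      # frames, from the text

def fused_height(estimated: float, corrected: float,
                 distance: float, n_frames: int) -> float:
    """h = alpha * h1 + beta * h2, with weights picked from the
    tracking-stability checks described above."""
    if distance < DISTANCE_THRESHOLD and n_frames > FRAMES_THRESHOLD:
        alpha, beta = 0.6, 0.3  # stable tracking: trust the estimated height
    else:
        alpha, beta = 0.4, 0.6  # unstable tracking: lean on the corrected height
    return alpha * estimated + beta * corrected
```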
In other embodiments, for a target object that is stably detected and whose actual distance from the vehicle is greater than a target distance, the height of the target object may be set equal to the corrected height, avoiding having the height converge upward from a small value (e.g., 0) and thus speeding up convergence.
It should be noted that the target distance may be the larger of a predetermined distance and a slow-brake distance. The predetermined distance may be 40 m, and the slow-brake distance may be calculated from the current speed of the vehicle and a predetermined deceleration, which may be 1.5 m/s². It can be seen that when the vehicle speed is low, the slow-brake distance is small and the target distance is the predetermined distance; when the vehicle speed is high, the slow-brake distance is large and the target distance is the slow-brake distance.
For example, the target distance may be calculated using Formulas (2) and (3):

S = max(S_th, S_brake)    (Formula 2)

S_brake = v² / (2a)    (Formula 3)

where S denotes the target distance, S_th denotes the predetermined distance, S_brake denotes the slow-brake distance, v denotes the current speed of the vehicle, and a denotes the predetermined deceleration.
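In code form, Formulas (2) and (3) reduce to a couple of lines (assuming, as the variable definitions above suggest, that the slow-brake distance is the constant-deceleration stopping distance):

```python
def target_distance(v: float, a: float = 1.5, s_th: float = 40.0) -> float:
    """v: current speed (m/s); a: predetermined deceleration (m/s^2);
    s_th: predetermined distance (m); values follow the text's examples."""
    s_brake = v ** 2 / (2.0 * a)  # Formula (3)
    return max(s_th, s_brake)     # Formula (2)
```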
Fig. 7A-7B are schematic diagrams of methods of determining a target object height according to embodiments of the present disclosure.
The method for determining the height of the target object according to the embodiment of the present disclosure is described below with reference to fig. 7A and 7B.
A first three-dimensional detection frame 704 corresponding to the target object may be determined from the single-frame point cloud data 701 (e.g., the current point cloud data of the current frame). For example, first sub-point cloud data 702 corresponding to the target object in the single-frame point cloud data 701 may be determined, and a first main direction 703 of the first sub-point cloud data 702 may then be determined from the coordinates of the first sub-point cloud data 702. The first three-dimensional detection frame 704 is then determined from the coordinates of the first sub-point cloud data 702 and the first main direction 703.
A second three-dimensional detection frame 709 corresponding to the target object may be determined from multi-frame point cloud data 705 (e.g., the current point cloud data of the current frame and at least one frame of historical point cloud data). For example, the first sub-point cloud data 702 corresponding to the target object in the current point cloud data may be determined. For each of the at least one frame of historical point cloud data, second sub-point cloud data 706 corresponding to the target object is determined, yielding at least one second sub-point cloud data 706. The first sub-point cloud data 702 and the at least one second sub-point cloud data 706 are superimposed to obtain third sub-point cloud data 707. A second main direction 708 of the third sub-point cloud data 707 is determined from its coordinates, and the second three-dimensional detection frame 709 is determined from the coordinates of the third sub-point cloud data 707 and the second main direction 708.
Next, a reference height 712 of the target object may be determined based on the current visual detection frame 710 for the target object. For example, in the single-frame point cloud data 701, fourth sub-point cloud data 711 inside the current visual detection frame 710 for the target object is determined, and the highest height of the fourth sub-point cloud data 711 is then determined as the reference height 712.
Next, cross-validation may be performed to determine whether the second three-dimensional detection frame 709 is trustworthy. For example, the first validity 713 of the second three-dimensional detection frame 709 is determined from the reference height 712 and the height of the second three-dimensional detection frame 709. The second validity 714 is determined from the length and width of the first three-dimensional detection frame 704 and the length and width of the second three-dimensional detection frame 709. An estimated height 715 is then determined from the first validity 713 and the second validity 714: when both are valid, the second three-dimensional detection frame 709 is determined to be trustworthy; otherwise the first three-dimensional detection frame 704 is. The height of the trusted three-dimensional detection frame is then determined to be the estimated height 715.
Next, a first weight 717 corresponding to the estimated height 715 and a second weight 718 corresponding to the corrected height 719 may be determined from the tracking stability parameter 716 of the target object. Wherein the tracking stability parameters 716 include at least one of: the actual distance between the target vehicle and the target object, the number of detected frames for the target object.
Next, a height 720 of the target object may be determined based on the estimated height 715, the corrected height 719, the first weight 717, and the second weight 718. For example, a weighted sum of the estimated height 715 and the corrected height 719 is taken as the height 720 of the target object.
Fig. 8 is a schematic block diagram of an apparatus for determining a height of a target object according to an embodiment of the present disclosure.
As shown in fig. 8, the apparatus 800 for determining the height of the target object may include: a first determination module 810, a second determination module 820, and a third determination module 830.
The first determining module 810 is configured to determine a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame.
The second determining module 820 is configured to determine a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and the at least one frame of historical point cloud data.
The third determining module 830 is configured to determine an estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame.
According to another embodiment of the present disclosure, the first determining module includes: the first, second and third determination sub-modules. The first determining submodule is used for determining first sub-point cloud data corresponding to the target object in the current point cloud data. The second determining submodule is used for determining a first main direction of the first sub-point cloud data according to the coordinates of the first sub-point cloud data. The third determining submodule is used for determining a first three-dimensional detection frame according to the coordinates and the first main direction of the first sub-point cloud data.
According to another embodiment of the present disclosure, the second determining module includes: the fourth determination sub-module, the fifth determination sub-module, the superposition sub-module, and the sixth determination sub-module. The fourth determining submodule is used for determining first sub-point cloud data corresponding to the target object in the current point cloud data. The fifth determining submodule is used for determining second sub-point cloud data corresponding to the target object in each history point cloud data according to each history point cloud data in at least one frame of history point cloud data to obtain at least one second sub-point cloud data. And the superposition submodule is used for superposing the first sub-point cloud data and at least one second sub-point cloud data to obtain third sub-point cloud data. And the sixth determining submodule is used for determining a second three-dimensional detection frame according to the third sub-point cloud data.
According to another embodiment of the present disclosure, the sixth determination submodule includes: a first determination unit and a second determination unit. The first determining unit is used for determining a second main direction of the third sub-point cloud data according to the coordinates of the third sub-point cloud data. The second determining unit is used for determining a second three-dimensional detection frame according to the coordinates of the third sub-point cloud data and the second main direction.
According to another embodiment of the present disclosure, the third determining module includes: a seventh determination submodule and an eighth determination submodule. The seventh determination submodule is used for determining the reference height of the target object according to the current visual detection frame aiming at the target object. The eighth determination submodule is used for determining the estimated height according to the reference height, the first three-dimensional detection frame and the second three-dimensional detection frame.
According to another embodiment of the present disclosure, the eighth determination submodule includes: a third determination unit, a fourth determination unit, and a fifth determination unit. The third determining unit is used for determining the first validity of the second three-dimensional detection frame according to the reference height and the height of the second three-dimensional detection frame. The fourth determining unit is configured to determine a second validity of the second three-dimensional detection frame according to a length of the first three-dimensional detection frame, a width of the first three-dimensional detection frame, a length of the second three-dimensional detection frame, and a width of the second three-dimensional detection frame. The fifth determining unit is used for determining the estimated height according to the first validity and the second validity.
According to another embodiment of the present disclosure, the seventh determination submodule includes: a sixth determination unit and a seventh determination unit. The sixth determining unit is used for determining fourth sub-point cloud data in the current visual detection frame in the current point cloud data. The seventh determining unit is configured to determine a highest height of the fourth sub-point cloud data as a reference height.
According to another embodiment of the present disclosure, the above apparatus further includes: a fourth determination module and a fifth determination module. The fourth determining module is used for determining a first weight corresponding to the estimated height and a second weight corresponding to the corrected height according to the tracking stability parameter of the target object after determining the estimated height of the target object. The fifth determining module is used for determining the height of the target object according to the estimated height, the corrected height, the first weight and the second weight. The tracking stability parameter includes at least one of: the actual distance between the target vehicle and the target object, the number of detected frames for the target object. The detected frame number characterizes a frame number of point cloud data containing the target object.
According to another embodiment of the present disclosure, the fourth determination module includes: a ninth determination sub-module and a tenth determination sub-module. The ninth determination submodule is used for determining that the first weight is greater than the second weight in response to detecting that the actual distance between the target vehicle and the target object is smaller than a distance threshold and the detected frame number for the target object is greater than a quantity threshold. The tenth determination submodule is used for determining that the first weight is smaller than the second weight in response to detecting that the actual distance between the target vehicle and the target object is larger than or equal to a distance threshold or the detected frame number for the target object is smaller than or equal to a quantity threshold.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information comply with the relevant laws and regulations and do not violate public order and good customs.
In the technical scheme of the disclosure, the authorization or consent of the user is obtained before the personal information of the user is obtained or acquired.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining the height of a target object described above.
According to an embodiment of the present disclosure, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method of determining a target object height.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the above-described method of determining a target object height.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 9, the device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Various components in device 900 are connected to I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above, for example, a method of determining the height of the target object. For example, in some embodiments, the method of determining the height of a target object may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the above-described method of determining the height of the target object may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the method of determining the target object height by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor, which may be a special-purpose or general-purpose programmable processor, may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (22)

1. A method of determining a target object height, comprising:
determining a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame;
determining a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and at least one frame of historical point cloud data; and
determining the estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame.
2. The method of claim 1, wherein the determining a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame comprises:
determining first sub-point cloud data corresponding to the target object in the current point cloud data;
determining a first main direction of the first sub-point cloud data according to the coordinates of the first sub-point cloud data; and
determining the first three-dimensional detection frame according to the coordinates of the first sub-point cloud data and the first main direction.
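By way of non-limiting illustration of claim 2, a main direction is commonly obtained by principal component analysis of the horizontal coordinates of the sub-point cloud, with the detection frame taken as the extents in the rotated coordinates; the Python sketch below assumes exactly that reading, and all names are chosen for the example.

    import numpy as np

    def detection_frame_from_points(points):
        """points: (N, 3) first sub-point cloud data. Returns the frame
        dimensions and the first main direction (assumed PCA variant)."""
        xy = points[:, :2]
        centered = xy - xy.mean(axis=0)

        # First main direction: eigenvector of the horizontal covariance
        # matrix with the largest eigenvalue.
        cov = centered.T @ centered / len(xy)
        eigenvalues, eigenvectors = np.linalg.eigh(cov)
        main_direction = eigenvectors[:, np.argmax(eigenvalues)]

        # Express the points in the main-direction frame and take extents.
        c, s = main_direction
        local = centered @ np.array([[c, -s], [s, c]])
        length = np.ptp(local[:, 0])    # extent along the main direction
        width = np.ptp(local[:, 1])     # extent across it
        height = np.ptp(points[:, 2])   # vertical extent of the frame
        return length, width, height, main_direction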
3. The method of claim 1, wherein the determining a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and at least one frame of historical point cloud data comprises:
determining first sub-point cloud data corresponding to the target object in the current point cloud data;
for each frame of historical point cloud data in the at least one frame of historical point cloud data, determining second sub-point cloud data corresponding to the target object in that historical point cloud data, so as to obtain at least one second sub-point cloud data;
superposing the first sub-point cloud data and the at least one second sub-point cloud data to obtain third sub-point cloud data; and
determining the second three-dimensional detection frame according to the third sub-point cloud data.
4. The method of claim 3, wherein the determining the second three-dimensional detection frame according to the third sub-point cloud data comprises:
determining a second main direction of the third sub-point cloud data according to the coordinates of the third sub-point cloud data; and
determining the second three-dimensional detection frame according to the coordinates of the third sub-point cloud data and the second main direction.
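As a non-limiting sketch of claims 3 and 4 in Python: superposing per-frame sub-point clouds of one tracked object normally requires transforming the historical points into the current frame first, so the sketch assumes per-frame poses (for example from ego-motion estimation) that the claims themselves do not specify. The second three-dimensional detection frame can then be obtained by running the same main-direction routine sketched after claim 2 on the superposed cloud.

    import numpy as np

    def third_sub_point_cloud(first_sub_points, second_sub_point_clouds, poses):
        """first_sub_points: (N, 3) points of the current frame;
        second_sub_point_clouds: list of (M_i, 3) historical points;
        poses: list of assumed (4, 4) transforms mapping each historical
        frame into the current frame."""
        clouds = [first_sub_points]
        for pts, pose in zip(second_sub_point_clouds, poses):
            homo = np.hstack([pts, np.ones((len(pts), 1))])
            clouds.append((homo @ pose.T)[:, :3])  # motion-compensated points
        return np.vstack(clouds)                   # the third sub-point cloud data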
5. The method of any one of claims 1 to 4, wherein the determining the estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame comprises:
determining a reference height of the target object according to a current visual detection frame aiming at the target object; and
determining the estimated height according to the reference height, the first three-dimensional detection frame and the second three-dimensional detection frame.
6. The method of claim 5, wherein the determining the estimated height according to the reference height, the first three-dimensional detection frame, and the second three-dimensional detection frame comprises:
determining a first validity of the second three-dimensional detection frame according to the reference height and the height of the second three-dimensional detection frame;
determining a second validity of the second three-dimensional detection frame according to the length of the first three-dimensional detection frame, the width of the first three-dimensional detection frame, the length of the second three-dimensional detection frame and the width of the second three-dimensional detection frame; and
determining the estimated height according to the first validity and the second validity.
7. The method of claim 5, wherein the determining the reference height of the target object according to the current visual detection frame for the target object comprises:
determining fourth sub-point cloud data in the current visual detection frame in the current point cloud data; and
determining the highest height of the fourth sub-point cloud data as the reference height.
8. The method of any one of claims 1 to 7, further comprising, after determining the estimated height of the target object:
determining a first weight corresponding to the estimated height and a second weight corresponding to the corrected height according to the tracking stability parameter of the target object; and
determining the height of the target object according to the estimated height, the corrected height, the first weight and the second weight;
wherein the tracking stability parameter comprises at least one of: an actual distance between a target vehicle and the target object, and a detected frame number for the target object; and the detected frame number characterizes the number of frames of point cloud data that include the target object.
9. The method of claim 8, wherein the determining a first weight corresponding to the estimated height and a second weight corresponding to the corrected height according to the tracking stability parameter of the target object comprises:
in response to detecting that an actual distance between a target vehicle and the target object is less than a distance threshold and a detected number of frames for the target object is greater than a quantity threshold, determining that the first weight is greater than the second weight; and
in response to detecting that an actual distance between the target vehicle and the target object is greater than or equal to the distance threshold, or that a detected number of frames for the target object is less than or equal to the quantity threshold, determining that the first weight is less than the second weight.
10. An apparatus for determining a height of a target object, comprising:
the first determining module is used for determining a first three-dimensional detection frame corresponding to the target object according to the current point cloud data of the current frame;
the second determining module is used for determining a second three-dimensional detection frame corresponding to the target object according to the current point cloud data and at least one frame of historical point cloud data; and
and the third determining module is used for determining the estimated height of the target object according to the first three-dimensional detection frame and the second three-dimensional detection frame.
11. The apparatus of claim 10, wherein the first determination module comprises:
the first determining submodule is used for determining first sub-point cloud data corresponding to the target object in the current point cloud data;
the second determining submodule is used for determining a first main direction of the first sub-point cloud data according to the coordinates of the first sub-point cloud data; and
the third determining submodule is used for determining the first three-dimensional detection frame according to the coordinates of the first sub-point cloud data and the first main direction.
12. The apparatus of claim 10, wherein the second determination module comprises:
a fourth determining sub-module, configured to determine first sub-point cloud data corresponding to the target object in the current point cloud data;
a fifth determining sub-module, configured to determine, for each historical point cloud data in the at least one frame of historical point cloud data, second sub-point cloud data corresponding to the target object in each historical point cloud data, to obtain at least one second sub-point cloud data;
the superposition sub-module is used for superposing the first sub-point cloud data and the at least one second sub-point cloud data to obtain third sub-point cloud data; and
the sixth determining submodule is used for determining the second three-dimensional detection frame according to the third sub-point cloud data.
13. The apparatus of claim 12, wherein the sixth determination submodule comprises:
a first determining unit, configured to determine a second main direction of the third sub-point cloud data according to coordinates of the third sub-point cloud data; and
the second determining unit is used for determining the second three-dimensional detection frame according to the coordinates of the third sub-point cloud data and the second main direction.
14. The apparatus of any one of claims 10 to 13, wherein the third determination module comprises:
a seventh determining sub-module for determining a reference height of the target object according to a current visual detection frame for the target object; and
an eighth determination sub-module, configured to determine the estimated height according to the reference height, the first three-dimensional detection frame, and the second three-dimensional detection frame.
15. The apparatus of claim 14, wherein the eighth determination submodule comprises:
a third determining unit configured to determine a first validity of the second three-dimensional detection frame according to the reference height and the height of the second three-dimensional detection frame;
a fourth determining unit, configured to determine a second validity of the second three-dimensional detection frame according to a length of the first three-dimensional detection frame, a width of the first three-dimensional detection frame, a length of the second three-dimensional detection frame, and a width of the second three-dimensional detection frame; and
a fifth determining unit configured to determine the estimated height according to the first validity and the second validity.
16. The apparatus of claim 14, wherein the seventh determination submodule comprises:
a sixth determining unit, configured to determine, in the current point cloud data, fourth sub-point cloud data located within the current visual detection frame; and
a seventh determining unit, configured to determine a highest height of the fourth sub-point cloud data as the reference height.
17. The apparatus of any one of claims 10 to 16, further comprising:
a fourth determining module, configured to determine, after determining the estimated height of the target object, a first weight corresponding to the estimated height and a second weight corresponding to the corrected height according to a tracking stability parameter of the target object; and
a fifth determining module, configured to determine a height of the target object according to the estimated height, the corrected height, the first weight, and the second weight;
wherein the tracking stability parameter comprises at least one of: an actual distance between a target vehicle and the target object, and a detected frame number for the target object; and the detected frame number characterizes the number of frames of point cloud data that include the target object.
18. The apparatus of claim 17, wherein the fourth determination module comprises:
a ninth determination submodule for determining that the first weight is greater than the second weight in response to detecting that an actual distance between a target vehicle and the target object is less than a distance threshold and that a detected number of frames for the target object is greater than a quantity threshold; and
a tenth determination submodule, configured to determine that the first weight is less than the second weight in response to detecting that an actual distance between the target vehicle and the target object is greater than or equal to the distance threshold or that a detected number of frames for the target object is less than or equal to the quantity threshold.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 9.
22. An autonomous vehicle comprising the electronic device of claim 19.
CN202211532939.3A 2022-11-29 2022-11-29 Method, device, electronic equipment and storage medium for determining height of target object Pending CN116087987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211532939.3A CN116087987A (en) 2022-11-29 2022-11-29 Method, device, electronic equipment and storage medium for determining height of target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211532939.3A CN116087987A (en) 2022-11-29 2022-11-29 Method, device, electronic equipment and storage medium for determining height of target object

Publications (1)

Publication Number Publication Date
CN116087987A (en) 2023-05-09

Family

ID=86185814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211532939.3A Pending CN116087987A (en) 2022-11-29 2022-11-29 Method, device, electronic equipment and storage medium for determining height of target object

Country Status (1)

Country Link
CN (1) CN116087987A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665188A (en) * 2023-07-20 2023-08-29 南京博融汽车电子有限公司 Bus image system data analysis method
CN116665188B (en) * 2023-07-20 2023-10-10 南京博融汽车电子有限公司 Bus image system data analysis method

Similar Documents

Publication Publication Date Title
WO2021023102A1 (en) Method and apparatus for updating map, and storage medium
US10229363B2 (en) Probabilistic inference using weighted-integrals-and-sums-by-hashing for object tracking
EP4080468A2 (en) Collision detection method and apparatus, electronic device, medium, and autonomous vehicle
CN113264066B (en) Obstacle track prediction method and device, automatic driving vehicle and road side equipment
EP3957955A2 (en) Vehicle locating method and apparatus, electronic device, storage medium and computer program product
CN113899363B (en) Vehicle positioning method and device and automatic driving vehicle
CN112558072B (en) Vehicle positioning method, device, system, electronic equipment and storage medium
CN114475585B (en) Automatic intersection driving method and device, electronic equipment and automatic driving vehicle
CN112116809A (en) Non-line-of-sight vehicle anti-collision method and device based on V2X technology
CN116087987A (en) Method, device, electronic equipment and storage medium for determining height of target object
CN113091737A (en) Vehicle-road cooperative positioning method and device, automatic driving vehicle and road side equipment
CN113119999A (en) Method, apparatus, device, medium, and program product for determining automatic driving characteristics
CN117612132A (en) Method and device for complementing bird's eye view BEV top view and electronic equipment
CN116890876A (en) Vehicle control method and device, electronic equipment and automatic driving vehicle
CN112902911A (en) Monocular camera-based distance measurement method, device, equipment and storage medium
CN112507964B (en) Detection method and device for lane-level event, road side equipment and cloud control platform
CN114581869A (en) Method and device for determining position of target object, electronic equipment and storage medium
CN114170300A (en) High-precision map point cloud pose optimization method, device, equipment and medium
CN113587937A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN114584949B (en) Method and equipment for determining attribute value of obstacle through vehicle-road cooperation and automatic driving vehicle
CN114581615B (en) Data processing method, device, equipment and storage medium
CN115096328B (en) Positioning method and device of vehicle, electronic equipment and storage medium
CN112683216B (en) Method and device for generating vehicle length information, road side equipment and cloud control platform
CN113985413A (en) Point cloud data processing method, device and equipment and automatic driving vehicle
CN113917446A (en) Road guardrail prediction method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination