WO2022052765A1 - Target tracking method and device (目标跟踪方法及装置) - Google Patents

Target tracking method and device (目标跟踪方法及装置)

Info

Publication number
WO2022052765A1
WO2022052765A1, PCT/CN2021/113337, CN2021113337W
Authority
WO
WIPO (PCT)
Prior art keywords
target
tracking result
target tracking
radar
camera
Prior art date
Application number
PCT/CN2021/113337
Other languages
English (en)
French (fr)
Inventor
刘志洋
任建乐
周融
周伟
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to EP21865821.9A priority Critical patent/EP4206731A4/en
Publication of WO2022052765A1 publication Critical patent/WO2022052765A1/zh
Priority to US18/181,204 priority patent/US20230204755A1/en

Classifications

    • G01S13/723 Radar-tracking systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar, by using numerical data
    • G01S13/726 Multiple target tracking
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865 Combination of radar systems with lidar systems
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/886 Radar or analogous systems specially adapted for alarm systems
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S2013/9316 Anti-collision radar of land vehicles combined with communication equipment with other vehicles or with base stations
    • G01S2013/9328 Anti-collision radar for rail vehicles
    • G01S7/295 Means for transforming co-ordinates or for evaluating data, e.g. using computers
    • G01S7/412 Identification of targets based on a comparison between measured values and known or stored values of radar reflectivity
    • G01S7/415 Identification of targets based on measurements of movement associated with the target

Definitions

  • the present application relates to the technical field of data processing, and in particular, to a target tracking method and device.
  • intelligent terminals such as intelligent transportation equipment, smart home equipment, and robots are gradually entering people's daily life. Sensors play a very important role in smart terminals.
  • Various sensors installed on smart terminals such as millimeter-wave radar, lidar, imaging radar, ultrasonic radar, cameras, etc., enable smart terminals to perceive the surrounding environment, collect data, and identify and track moving objects.
  • smart terminals can also identify static scenes such as lane lines and signs, and perform path planning in combination with the navigator and map data.
  • target tracking can be performed based on sensors, and certain strategies can be implemented based on target tracking. For example, in the field of autonomous driving, driving strategies can be formulated based on target tracking; in the field of security or monitoring, unsafe factors such as illegal intrusions can be alerted based on target tracking.
  • the position and speed of a target can be detected by the camera and the radar respectively, and a correlation algorithm can then be used to confirm that detections from the camera and the radar with similar position and speed correspond to the same target.
  • Embodiments of the present application provide a target tracking method and device, which can improve the accuracy of target tracking using radar and cameras.
  • an embodiment of the present application provides a target tracking method, including: obtaining a camera target tracking result and a radar target tracking result; and obtaining a target tracking result according to the camera target tracking result and a target model corresponding to the radar target tracking result; wherein the target model is used to indicate the relationship between the target and the height information of the target in the radar target tracking result.
  • because the target model includes the height information of the target, when the camera target tracking result and the radar target tracking result are correlated, the radar target tracking result can be combined with the height information of the target, thereby effectively expanding the extent of the target monitored by the radar, so that an accurate target tracking result can be obtained through correlation.
  • the method further includes: obtaining height information of the target according to the type information of the target in the radar target tracking result; and fusing the height information of the target and the target in the radar target tracking result to obtain a target model.
  • a target model that can characterize the position and height of the target can be obtained, and an accurate target tracking result can subsequently be obtained by using the target model for association.
  • the height information of the target can be conveniently obtained based on the type information of the target.
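  • As a minimal illustration (not taken from the patent text), the lookup from target type to height and the fusion into a target model could be sketched as follows; the type labels, nominal heights and structure names are assumptions introduced only for this sketch.

```python
# Minimal sketch: derive height information from the radar target's type and fuse it
# with the radar track to form a target model. All names and values are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

TYPE_HEIGHTS = {            # nominal target heights in metres (assumed values)
    "pedestrian": 1.7,
    "bicycle": 1.6,
    "car": 1.5,
    "truck": 3.5,
}

@dataclass
class RadarTrack:
    track_id: int
    target_type: str                      # type information reported with the radar track
    points: List[Tuple[float, float]]     # ground-plane point cloud (x, y) in radar coordinates
    velocity: Tuple[float, float]         # (vx, vy) in m/s

@dataclass
class TargetModel:
    track: RadarTrack
    height: float                         # height information fused into the model

def build_target_model(track: RadarTrack) -> TargetModel:
    """Fuse the radar track with height information looked up from its type."""
    height = TYPE_HEIGHTS.get(track.target_type, 1.7)   # fall back to a default height
    return TargetModel(track=track, height=height)
```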
  • obtaining the target tracking result according to the camera target tracking result and the target model corresponding to the radar target tracking result includes: projecting the target model to the camera coordinate system to obtain a projected radar target tracking result; and obtaining the target tracking result according to the camera target tracking result and the projected radar target tracking result.
  • accurate target tracking results can be obtained based on the camera target tracking results and the post-projection radar target tracking results.
  • projecting the target model to the camera coordinate system includes: converting the target model to the camera coordinate system according to a preset or defined height conversion relationship; wherein different height information corresponds to different height conversion relationships, and the height conversion relationship is used to convert a target tracking result with height in the radar coordinate system to the camera coordinate system.
  • the target model can be easily converted to the camera coordinate system based on the height conversion relationship.
  • for different area types, the height conversion relationship corresponding to the same height information may be different. Because the horizontal lines corresponding to different areas are different, the visual height of the same target in a low-lying area and a flat area is usually different. Therefore, different height conversion relationships are set for different areas, and when the height conversion relationship is used to convert a target tracking result with height in the radar coordinate system to the camera coordinate system, accurate conversion can be achieved.
  • the area types include one or more of the following: ground undulating areas, areas with slopes, or areas with flat ground. In this way, accurate conversion between coordinate systems can be achieved for common ground types.
  • converting the target model to the camera coordinate system according to the preset or defined height conversion relationship includes: determining the target area type corresponding to the target model; and converting the target model to the camera coordinate system according to a target height conversion relationship that corresponds to the target area type and matches the height information of the target model.
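  • A sketch of this selection step, under the assumption that the preset height conversion relationships are stored as pre-calibrated 3x3 homographies keyed by area type and height (the calibration values below are identity placeholders, not real data):

```python
# Sketch: pick the height conversion relationship that corresponds to the target area type
# and best matches the height information of the target model, then apply it to a radar
# ground point. The homographies are placeholders standing in for real calibration.
import numpy as np

HEIGHT_CONVERSIONS = {
    # (area_type, height_in_metres): 3x3 homography mapping radar (x, y, 1) to image (u, v, w)
    ("flat", 1.5): np.eye(3),
    ("flat", 1.7): np.eye(3),
    ("slope", 1.7): np.eye(3),
    ("undulating", 1.7): np.eye(3),
}

def select_conversion(area_type: str, height: float) -> np.ndarray:
    """Among the conversions for this area type, choose the one whose height is closest."""
    candidates = [(abs(h - height), H)
                  for (a, h), H in HEIGHT_CONVERSIONS.items() if a == area_type]
    if not candidates:
        raise KeyError(f"no height conversion calibrated for area type {area_type!r}")
    return min(candidates, key=lambda c: c[0])[1]

def project_to_camera(x: float, y: float, area_type: str, height: float):
    """Convert one radar-coordinate point of the target model to camera (pixel) coordinates."""
    H = select_conversion(area_type, height)
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```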
  • obtaining the target tracking result according to the camera target tracking result and the projected radar target tracking result includes: determining, according to the overlap ratio of the camera target tracking result and the projected radar target tracking result, that the camera target tracking result and the projected radar target tracking result are the same target; wherein the overlap ratio is greater than a first value.
  • the overlap ratio can be used to conveniently and accurately determine that the camera target tracking result and the projected radar target tracking result are the same target.
  • determining, according to the overlap ratio of the camera target tracking result and the projected radar target tracking result, that they are the same target includes: when the overlap ratio is greater than the first value and the position and/or speed of the overlapping target in the camera target tracking result and the projected radar target tracking result satisfy a preset condition, determining that the camera target tracking result and the projected radar target tracking result are the same target.
  • the preset condition includes: the difference between the position and/or velocity of the overlapping target in the camera target tracking result and the position and/or velocity of the overlapping target in the radar target tracking result is less than a second value.
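  • For illustration only, an association step following this logic could look like the sketch below; the names first_value and second_value mirror the wording above, but the concrete thresholds, units and state layout are assumptions.

```python
# Sketch: associate a camera detection and a projected radar detection as the same target only
# when the box overlap ratio exceeds a first value AND the position/velocity differences of the
# overlapping target stay below a second value. Thresholds are illustrative, not the patent's.
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def is_same_target(cam_box, radar_box, cam_state, radar_state,
                   first_value=0.5, second_value=2.0):
    """States are (x, y, vx, vy); the gating form and numbers are assumptions."""
    if overlap_ratio(cam_box, radar_box) <= first_value:
        return False
    pos_diff = ((cam_state[0] - radar_state[0]) ** 2 +
                (cam_state[1] - radar_state[1]) ** 2) ** 0.5
    vel_diff = ((cam_state[2] - radar_state[2]) ** 2 +
                (cam_state[3] - radar_state[3]) ** 2) ** 0.5
    return pos_diff < second_value and vel_diff < second_value
```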
  • the radar target tracking result comes from an imaging radar; the target model also includes size information of the target.
  • the overlap ratio can be calculated by taking the visual bounding box, the height information and the size information into account at the same time.
  • when the overlap ratio is greater than or equal to a certain value, the detections are associated as the same target, achieving more accurate target association and thus more accurate target tracking.
  • the camera target tracking result includes the target bounding box; the radar target tracking result includes the target point cloud.
  • target tracking can be performed efficiently and accurately.
  • an embodiment of the present application provides a target tracking device.
  • the target tracking device can be a vehicle with target tracking function, or other components with target tracking function.
  • the target tracking device includes but is not limited to: a vehicle-mounted terminal, vehicle-mounted controller, vehicle-mounted module, vehicle-mounted component, vehicle-mounted chip, vehicle-mounted unit, vehicle-mounted radar, vehicle-mounted camera, or other sensor; the vehicle can implement the method provided in this application through the vehicle-mounted terminal, vehicle-mounted controller, vehicle-mounted module, vehicle-mounted component, vehicle-mounted chip, vehicle-mounted unit, vehicle-mounted radar or camera.
  • the target tracking device may be an intelligent terminal, or be arranged in other intelligent terminals with target tracking function other than the vehicle, or be arranged in a component of the intelligent terminal.
  • the intelligent terminal may be other terminal equipment such as intelligent transportation equipment, smart home equipment, and robots.
  • the target tracking device includes, but is not limited to, a smart terminal or a controller, a chip, other sensors such as radar or a camera, and other components in the smart terminal.
  • the target tracking device can be a general-purpose device or a special-purpose device.
  • the apparatus can also be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or another device with processing functions.
  • the embodiment of the present application does not limit the type of the target tracking device.
  • the target tracking device may also be a chip or processor with a processing function, and the target tracking device may include at least one processor.
  • the processor can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the chip or processor with processing function may be arranged in the sensor, or may not be arranged in the sensor, but arranged at the receiving end of the output signal of the sensor.
  • the processor includes but is not limited to at least one of a central processing unit (CPU), a graphics processing unit (GPU), a micro control unit (MCU), a microprocessor unit (MPU), or a coprocessor.
  • the target tracking apparatus may also be a terminal device, or a chip or a chip system in the terminal device.
  • the target tracking device may include a processing unit and a communication unit.
  • the processing unit may be a processor.
  • the target tracking device may further include a storage unit, which may be a memory. The storage unit is used for storing instructions, and the processing unit executes the instructions stored in the storage unit, so that the terminal device implements a target tracking method described in the first aspect or any possible implementation manner of the first aspect.
  • the processing unit may be a processor.
  • the processing unit executes the instructions stored in the storage unit, so that the terminal device implements a target tracking method described in the first aspect or any possible implementation manner of the first aspect.
  • the storage unit may be a storage unit (eg, a register, a cache, etc.) in the chip, or a storage unit (eg, a read-only memory, a random access memory, etc.) located outside the chip in the terminal device.
  • the communication unit is used to obtain the camera target tracking result and the radar target tracking result;
  • the processing unit is used to obtain the target tracking result according to the camera target tracking result and the target model corresponding to the radar target tracking result; wherein the target model is used to indicate the relationship between the target in the radar target tracking result and the height information of the target.
  • the processing unit is further configured to obtain the height information of the target according to the type information of the target in the radar target tracking result; the processing unit is further configured to fuse the height information of the target and the target in the radar target tracking result , get the target model.
  • the processing unit is specifically configured to project the target model to the camera coordinate system to obtain the projected radar target tracking result; and obtain the target tracking result according to the camera target tracking result and the projected radar target tracking result.
  • the processing unit is specifically configured to convert the target model to the camera coordinate system according to a preset or defined height conversion relationship; wherein different height information corresponds to different height conversion relationships, and the height conversion relationship is used to convert a target tracking result with height in the radar coordinate system to the camera coordinate system.
  • for different area types, the height conversion relationship corresponding to the same height information is different.
  • the area types include one or more of the following: ground undulating areas, areas with slopes, or areas with flat ground.
  • the processing unit is specifically configured to determine the target area type corresponding to the target model, and to convert the target model to the camera coordinate system according to a target height conversion relationship that corresponds to the target area type and matches the height information of the target model.
  • the processing unit is specifically configured to determine that the camera target tracking result and the projected radar target tracking result are the same target according to the overlap ratio of the camera target tracking result and the projected radar target tracking result; wherein the overlap ratio is greater than the first value.
  • the processing unit is specifically configured to determine that the camera target tracking result and the projected radar target tracking result are the same target when the overlap ratio is greater than the first value and the position and/or speed of the overlapping target in the camera target tracking result and the projected radar target tracking result satisfy the preset condition.
  • the preset condition includes: the difference between the position and/or velocity of the overlapping target in the camera target tracking result and the position and/or velocity of the overlapping target in the radar target tracking result is less than a second value.
  • the radar target tracking result comes from an imaging radar; the target model also includes size information of the target.
  • the camera target tracking result includes the target bounding box; the radar target tracking result includes the target point cloud.
  • embodiments of the present application further provide a sensor system for providing a vehicle with a target tracking function. It includes at least one target tracking device mentioned in the above embodiments of the present application, and other sensors such as cameras and radars. At least one sensor device in the system can be integrated into a whole machine or equipment, or at least one sensor device in the system can also be provided independently as an element or device.
  • the embodiments of the present application further provide a system for use in unmanned driving or intelligent driving, which includes at least one of the target tracking devices, cameras, radars and other sensors mentioned in the above-mentioned embodiments of the present application; at least one device in the system can be integrated into a whole machine or equipment, or at least one device in the system can also be provided independently as a component or device.
  • any of the above systems may interact with the vehicle's central controller to provide detection and/or fusion information for decision-making or control of the vehicle's driving.
  • an embodiment of the present application further provides a terminal, where the terminal includes at least one target tracking device or any of the above-mentioned systems mentioned in the above-mentioned embodiments of the present application.
  • the terminal may be smart home equipment, smart manufacturing equipment, smart industrial equipment, smart transportation equipment (including drones, vehicles, etc.) and the like.
  • an embodiment of the present application further provides a chip, including at least one processor and an interface; the interface is used to provide program instructions or data for the at least one processor; and the at least one processor is used to execute the program instructions to implement the method in the first aspect or any possible implementation of the first aspect.
  • an embodiment of the present application provides a target tracking apparatus, including at least one processor configured to invoke a program in a memory to implement any method in the first aspect or any possible implementation manner of the first aspect.
  • an embodiment of the present application provides a target tracking device, including: at least one processor and an interface circuit, where the interface circuit is configured to provide information input and/or information output for the at least one processor; at least one processor is configured to run code instructions to implement any method of the first aspect or any possible implementations of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores instructions; when the instructions are executed, the method in the first aspect or any possible implementation of the first aspect is implemented.
  • FIG. 1 is a schematic diagram of determining a target according to a visual bounding box and a radar point cloud;
  • FIG. 2 is a schematic diagram of determining a target according to a visual bounding box and a radar point cloud according to an embodiment of the present application
  • FIG. 3 is a functional block diagram of a vehicle 100 according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of the computer system in FIG. 3;
  • FIG. 5 is a schematic diagram of a chip hardware structure provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a probability height provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of height calibration provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a different area type provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a target association provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of another target tracking method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a target tracking apparatus provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of another target tracking apparatus provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a vehicle according to an embodiment of the application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • the first value and the second value are only used to distinguish different values, and do not limit their order.
  • the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • At least one means one or more, and “plurality” means two or more.
  • the character “/” generally indicates that the associated objects are an “or” relationship.
  • "at least one of the following items" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • for example, at least one of a, b, or c may represent: a, b, c, ab, ac, bc, or abc, where a, b, and c may be single or multiple.
  • Radar-based target tracking and/or camera-based target tracking are possible ways to achieve target tracking.
  • a radar may be a device based on radio detection. Radar can measure the position of air, ground and water-surface targets, and may also be called radiolocation. Exemplarily, the radar can use a directional antenna to send radio waves into the air; after the radio waves meet the target, they are reflected back and received by the radar. The distance of the target can be obtained by measuring the time the radio waves travel through the air, and the pointing direction of the directional antenna can be used to determine the angle of the target, thereby realizing target tracking. Generally, radar can obtain accurate speed and position information and has a long field of view. However, in a cluttered environment, the radar's target tracking performance degrades due to the influence of clutter.
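  • As a toy illustration of the ranging principle just described (the function name and example numbers are assumptions, not from the patent):

```python
# Toy illustration: the target's range follows from the measured round-trip time of the
# reflected radio wave, and the pointing direction of the directional antenna gives the angle.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def radar_range(round_trip_time_s: float) -> float:
    """Range to the target from the echo's round-trip travel time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

print(radar_range(1e-6))  # an echo delayed by 1 microsecond corresponds to roughly 150 m
```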
  • the camera can project the optical image generated by the scene through the lens onto the surface of the image sensor, which converts it into an electrical signal; after analog-to-digital conversion, it becomes a digital image signal.
  • the digital image signal can then be processed by a digital signal processing (DSP) chip.
  • fusion of the results obtained from radar-based target tracking and camera-based target tracking (hereinafter referred to as radar-camera fusion) can give full play to the respective advantages of radar and camera to achieve more accurate target tracking.
  • the implementation of radar-camera fusion may include the target-level radar-camera fusion (object-level data fusion) method and the measurement-level radar-camera fusion (data-level data fusion) method.
  • the target-level radar-camera fusion method includes: using the camera to obtain the visual bounding box of the target, and transforming the visual bounding box through the transformation matrix between the camera coordinates (also known as visual coordinates) and the radar coordinates (also known as overhead-view coordinates) to obtain the position and velocity of the target in the radar coordinates; obtaining the target from the radar detection point cloud, together with the position and velocity of the target in the radar coordinates; using a correlation algorithm based on the position and velocity of the target to correlate the target detected by the radar with the target detected by the camera and confirm the same target; and performing state estimation on the target to obtain the fused target position and speed.
  • the measurement-level radar-camera fusion method includes: obtaining the point cloud of the target monitored by the radar (also called the radar point cloud or point cloud data), and projecting the point cloud detected by the radar into the camera coordinate system; obtaining the visual bounding box of the target with the camera and, using an association algorithm, associating the projection of the radar point cloud with the visual bounding box obtained by the camera to confirm the same target; and performing state estimation on the target to obtain the fused target position and speed.
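  • A sketch of the projection step used in measurement-level fusion, assuming a standard pinhole camera model with known radar-to-camera extrinsics and camera intrinsics; the matrices below are placeholders, not calibration data from the patent:

```python
# Sketch: project an Nx3 radar point cloud (x, y, z) into Nx2 pixel coordinates so that it can
# be associated with visual bounding boxes in the camera coordinate system.
import numpy as np

K = np.array([[800.0,   0.0, 640.0],    # camera intrinsics (assumed)
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # radar-to-camera rotation (assumed)
t = np.array([0.0, 0.0, 1.5])           # radar-to-camera translation in metres (assumed)

def project_point_cloud(points_radar: np.ndarray) -> np.ndarray:
    """Transform radar points into the camera frame and apply the pinhole projection."""
    cam = (R @ points_radar.T).T + t    # radar frame -> camera frame
    uvw = (K @ cam.T).T                 # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide to pixel coordinates
```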
  • position information needs to be used when correlating the target obtained by the camera with the target obtained by the radar.
  • the position information of the target obtained based on the camera usually depends on the accuracy of the bottom edge of the visual bounding box, and the accuracy of the bottom edge of the visual bounding box may not be high due to weather, environment and other reasons.
  • the position information of the target obtained based on radar usually depends on the target point cloud, and in environments with clutter, ground undulation and the like, the accuracy of the target point cloud may not be high, which may easily cause false associations.
  • FIG. 1 shows a schematic diagram of determining a target according to a visual bounding box and a radar point cloud.
  • in FIG. 1, the visual bounding box 10 frames the person, but the bottom border of the visual bounding box 10 sits on the upper body of the person, while the radar point cloud 11 may detect the lower body of the person, such as the feet. When target fusion is performed, because the visual bounding box 10 and the radar point cloud 11 are far apart, the target framed by the visual bounding box 10 and the target detected as the radar point cloud 11 may fail to be associated as the same target, resulting in a false association.
  • in the target tracking method of the embodiment of the present application, when the camera target tracking result and the radar target tracking result are correlated, the height information of the target is introduced into the radar target tracking result; for example, a target model used to indicate the relationship between the target in the radar target tracking result and the height information of the target is obtained. When the camera target tracking result and the radar target tracking result are correlated, the target tracking result can be obtained according to the camera target tracking result and the target model. Because the target model includes the height information of the target, the extent of the target monitored by the radar can be effectively expanded, and an accurate target tracking result can then be obtained by correlation.
  • FIG. 2 shows a schematic diagram of a visual bounding box and a radar point cloud to determine a target according to an embodiment of the present application.
  • in FIG. 2, the visual bounding box 20 frames the person, the bottom border of the visual bounding box 20 sits on the upper body of the person, and the radar point cloud 21 may detect the lower body of the person, for example, the feet. In the embodiment of the present application, the height information of the person is introduced, so that the line segment 23 used to represent the height information can be determined. In this way, the association no longer relies on the accuracy of the bottom edge of the visual bounding box, nor on the accuracy of the radar point cloud: whether in a poorly lit environment (such as at night), or when the bottom edge of the visual bounding box has low accuracy (for example, it frames only the upper body of a person), or in a cluttered environment, the point cloud data can be accurately associated with the target based on the height information and the visual bounding box, improving the accuracy and stability of target association.
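  • A compact sketch of how the height information widens the radar detection in the image before the overlap check (the helper name and the fixed half-width are assumptions): the ground point of the radar target and a point raised by the target's height are both projected into the image, and the resulting vertical segment (line segment 23 in FIG. 2) is turned into a box that can overlap the visual bounding box even when the box frames only the upper body or the radar detects only the feet.

```python
# Sketch: build an image-space box from the projected ground point and the projected point
# raised by the target's height, so the radar detection spans the target's full vertical extent.
def height_extended_box(ground_uv, top_uv, half_width_px=20.0):
    """ground_uv / top_uv are (u, v) pixel coordinates; returns (x1, y1, x2, y2)."""
    x1 = min(ground_uv[0], top_uv[0]) - half_width_px
    x2 = max(ground_uv[0], top_uv[0]) + half_width_px
    y1 = min(ground_uv[1], top_uv[1])
    y2 = max(ground_uv[1], top_uv[1])
    return (x1, y1, x2, y2)

# The resulting box can then be fed to an overlap check against the visual bounding box,
# for example the is_same_target sketch shown earlier.
```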
  • the target tracking method in the embodiment of the present application may be applied to scenarios such as automatic driving, security protection, or monitoring.
  • the target tracking method in the embodiments of the present application can be used to track objects such as obstacles, and then an automatic driving strategy and the like can be formulated based on the target tracking.
  • the target tracking method of the embodiment of the present application can be used to track targets such as people, and then alarms for unsafe factors such as illegal intrusions can be performed based on target tracking.
  • the target tracking method in this embodiment of the present application may be applied to a vehicle, or a chip in a vehicle, or the like.
  • FIG. 3 shows a functional block diagram of the vehicle 100 provided by the embodiment of the present application.
  • the vehicle 100 is configured in a fully or partially autonomous driving mode.
  • the vehicle 100 may also determine, through human operation, the current state of the vehicle and its surrounding environment while in the autonomous driving mode, such as determining the possible behavior of at least one other vehicle in the surrounding environment, determining a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior, and controlling the vehicle 100 based on the determined information.
  • the vehicle 100 may be set to perform driving-related operations automatically without requiring human interaction.
  • Vehicle 100 may include various subsystems, such as travel system 102 , sensor system 104 , control system 106 , one or more peripherals 108 and power supply 110 , computer system 112 , and user interface 116 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components that provide powered motion for the vehicle 100 .
  • travel system 102 may include engine 118 , energy source 119 , transmission 120 , and wheels/tires 121 .
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a gasoline engine and electric motor hybrid engine, an internal combustion engine and an air compression engine hybrid engine.
  • Engine 118 converts energy source 119 into mechanical energy.
  • Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy to other systems of the vehicle 100 .
  • Transmission 120 may transmit mechanical power from engine 118 to wheels 121 .
  • Transmission 120 may include a gearbox, a differential, and a driveshaft.
  • transmission 120 may also include other devices, such as clutches.
  • the drive shaft may include one or more axles that may be coupled to one or more wheels 121 .
  • the sensor system 104 may include several sensors that sense information about the environment surrounding the vehicle 100 .
  • the sensor system 104 may include a positioning system 122 (which may be a GPS system, a Beidou system or other positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and camera 130.
  • the sensor system 104 may also include sensors of the internal systems of the vehicle 100 being monitored (eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). This detection and identification is a critical function for the safe operation of the autonomous vehicle 100 .
  • the positioning system 122 may be used to estimate the geographic location of the vehicle 100 .
  • the IMU 124 is used to sense position and orientation changes of the vehicle 100 based on inertial acceleration.
  • IMU 124 may be a combination of an accelerometer and a gyroscope.
  • Radar 126 may utilize radio signals to sense objects within the surrounding environment of vehicle 100 . In some embodiments, in addition to sensing objects, radar 126 may be used to sense the speed and/or heading of objects.
  • the laser rangefinder 128 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • Camera 130 may be used to capture multiple images of the surrounding environment of vehicle 100 .
  • Camera 130 may be a still camera or a video camera.
  • Control system 106 controls the operation of the vehicle 100 and its components.
  • Control system 106 may include various elements including steering system 132 , throttle 134 , braking unit 136 , sensor fusion algorithms 138 , computer vision system 140 , route control system 142 , and obstacle avoidance system 144 .
  • the steering system 132 is operable to adjust the heading of the vehicle 100 .
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the vehicle 100 .
  • the braking unit 136 is used to control the deceleration of the vehicle 100 .
  • the braking unit 136 may use friction to slow the wheels 121 .
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electrical current.
  • the braking unit 136 may also take other forms to slow the wheels 121 to control the speed of the vehicle 100 .
  • Computer vision system 140 may be operable to process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding vehicle 100 .
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • Computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 142 is used to determine the travel route of the vehicle 100 .
  • route control system 142 may combine data from sensors 138 , global positioning system (GPS) 122 , and one or more predetermined maps to determine a route for vehicle 100 .
  • the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise traverse potential obstacles in the environment of the vehicle 100 .
  • control system 106 may additionally or alternatively include components other than those shown and described. Alternatively, some of the components shown above may be reduced.
  • Peripherals 108 may include a wireless communication system 146 , an onboard computer 148 , a microphone 150 and/or a speaker 152 .
  • peripherals 108 provide a means for a user of vehicle 100 to interact with user interface 116 .
  • the onboard computer 148 may provide information to the user of the vehicle 100 .
  • User interface 116 may also operate on-board computer 148 to receive user input.
  • the onboard computer 148 can be operated via a touch screen.
  • peripheral devices 108 may provide a means for vehicle 100 to communicate with other devices located within the vehicle.
  • microphone 150 may receive audio (eg, voice commands or other audio input) from a user of vehicle 100 .
  • speakers 152 may output audio to a user of vehicle 100 .
  • the display screen of the on-board computer 148 can also display the target tracked by the target tracking algorithm according to the embodiment of the present application, so that the user can perceive the environment around the vehicle on the display screen.
  • Wireless communication system 146 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • wireless communication system 146 may use 3G cellular communications, such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communications, such as long term evolution (LTE); or 5G cellular communications.
  • the wireless communication system 146 may utilize wireless-fidelity (WiFi) to communicate with a wireless local area network (WLAN).
  • the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the vehicle 100 .
  • the power source 110 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such a battery may be configured as a power source to provide power to various components of the vehicle 100 .
  • power source 110 and energy source 119 may be implemented together, such as in some all-electric vehicles.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as data storage device 114 .
  • Computer system 112 may also be multiple computing devices that control individual components or subsystems of vehicle 100 in a distributed fashion.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a special-purpose device such as an application specific integrated circuit (ASIC) or other hardware-based processor for use in a specific application.
  • although FIG. 3 functionally illustrates the processor, memory, and other elements of the computer system 112 in the same block, one of ordinary skill in the art will understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • the memory may be a hard drive or other storage medium located within an enclosure other than a computer.
  • references to a processor or computer will be understood to include references to a collection of processors, computers, or memories that may or may not operate in parallel.
  • some components, such as the steering and deceleration components, may each have their own processor that only performs computations related to component-specific functions.
  • a processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • data storage 114 may include instructions 115 (eg, program logic) executable by processor 113 to perform various functions of vehicle 100 , including those described above.
  • Data storage 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, sensor system 104, control system 106, and peripherals 108.
  • the data storage device 114 may store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 112 during operation of the vehicle 100 in autonomous, semi-autonomous and/or manual modes.
  • a user interface 116 for providing information to or receiving information from a user of the vehicle 100 .
  • the user interface 116 may include one or more input/output devices within the set of peripheral devices 108 , such as a wireless communication system 146 , an onboard computer 148 , a microphone 150 and a speaker 152 .
  • Computer system 112 may control functions of vehicle 100 based on input received from various subsystems (eg, travel system 102 , sensor system 104 , and control system 106 ) and from user interface 116 .
  • computer system 112 may utilize input from control system 106 to control steering unit 132 to avoid obstacles detected by sensor system 104 and obstacle avoidance system 144.
  • computer system 112 is operable to provide control of various aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • data storage device 114 may exist partially or completely separate from vehicle 100 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • the above components are just examples.
  • components in each of the above modules may be added or deleted according to actual needs, and FIG. 3 should not be construed as a limitation on the embodiments of the present application.
  • An autonomous vehicle traveling on a road can track objects in its surrounding environment according to the object tracking method of the embodiment of the present application to determine its own adjustment to the current speed or the driving route.
  • the object may be other vehicles, traffic control equipment, or other types of objects.
  • the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from obstacles in the vicinity of the self-driving car (for example, vehicles in adjacent lanes on the road).
  • the above-mentioned vehicle 100 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.
  • this is not particularly limited in the embodiments of the present application.
  • FIG. 4 is a schematic structural diagram of the computer system 112 in FIG. 3 .
  • computer system 112 includes processor 113 coupled to system bus 105 .
  • the processor 113 may be one or more processors, each of which may include one or more processor cores.
  • a video adapter 107 which can drive a display 109, is coupled to the system bus 105.
  • the system bus 105 is coupled to an input-output (I/O) bus through a bus bridge 111 .
  • I/O interface 115 is coupled to the I/O bus.
  • I/O interface 115 communicates with various I/O devices, such as an input device 117 (e.g., keyboard, mouse, touch screen, etc.), a media tray 121 (e.g., CD-ROM, multimedia interface, etc.), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture still and moving digital video images) and an external USB interface 125.
  • the interface connected to the I/O interface 115 may be a universal serial bus (universal serial bus, USB) interface.
  • the processor 113 may be any conventional processor, including a reduced instruction set computing (“RISC”) processor, a complex instruction set computing (“CISC”) processor, or a combination thereof.
  • the processor may be a special purpose device such as an application specific integrated circuit (“ASIC").
  • the processor 113 may be a neural network processor or a combination of a neural network processor and the above-mentioned conventional processors.
  • the computer system may be located remotely from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle.
  • some of the processes described herein are performed on a processor disposed within the autonomous vehicle, others are performed by a remote processor, including taking actions required to perform a single maneuver.
  • Network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • the hard drive interface 131 is coupled to the system bus 105 .
  • the hard disk drive interface 131 and the hard disk drive 133 are connected.
  • System memory 135 is coupled to system bus 105 .
  • Software running in system memory 135 may include an operating system (OS) 137 and application programs 143 of computer system 112 .
  • the operating system includes a Shell 139 and a kernel 141 .
  • Shell 139 is an interface between the user and the kernel of the operating system.
  • the shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: waiting for user input, interpreting the user's input to the operating system, and processing various operating system outputs.
  • Kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with hardware, the operating system's kernel 141 typically runs processes and provides inter-process communication, providing CPU time slice management, interrupts, memory management, IO management, and the like.
  • Application 143 includes programs related to controlling the autonomous driving of the car, for example, programs that manage the interaction between the autonomous vehicle and obstacles on the road, programs that control the route or speed of the autonomous vehicle, and programs that control the interaction between the autonomous vehicle and other autonomous vehicles on the road.
  • Application 143 also exists on the system of the software deploying server 149.
  • the computer system may download the application 143 from the deploying server 149 when the application 143 needs to be executed.
  • Sensor 153 is associated with a computer system. Sensor 153 is used to detect the environment around computer system 112 .
  • the sensor 153 can detect animals, cars, obstacles, pedestrian crossings, and the like. Further, the sensor can also detect the environment around the above-mentioned animals, cars, obstacles and pedestrian crossings, for example: the environment around an animal, such as other animals that appear around it, the weather conditions, and the ambient light level.
  • the sensors may be cameras, infrared sensors, chemical detectors, microphones, and the like.
  • FIG. 5 is a schematic diagram of a chip hardware structure according to an embodiment of the present application.
  • the chip may include a neural network processor 50 .
  • the chip can be applied to the vehicle shown in FIG. 3 or the computer system shown in FIG. 4 .
  • the neural network processor 50 can be a neural network processing unit (NPU), a tensor processing unit (TPU), a graphics processing unit (GPU), or another processor suitable for large-scale exclusive-OR (XOR) operation processing.
  • the NPU can be mounted on the main CPU (host CPU) as a co-processor, and the host CPU assigns tasks to it.
  • the core part of the NPU is the arithmetic circuit 503, which is controlled by the controller 504 to extract the matrix data in the memory (501 and 502) and perform multiplication and addition operations.
  • the arithmetic circuit 503 includes multiple processing units (process engines, PEs). In some implementations, arithmetic circuit 503 is a two-dimensional systolic array. The arithmetic circuit 503 may also be a one-dimensional systolic array or other electronic circuitry capable of performing mathematical operations such as multiplication and addition. In some implementations, arithmetic circuit 503 is a general-purpose matrix processor.
  • the arithmetic circuit 503 fetches the weight data of the matrix B from the weight memory 502 and buffers it on each PE in the arithmetic circuit 503 .
  • the arithmetic circuit 503 fetches the input data of the matrix A from the input memory 501, performs matrix operations according to the input data of the matrix A and the weight data of the matrix B, and stores the partial result or the final result of the matrix in the accumulator 508 .
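  • As an editorial illustration only (not part of the disclosed hardware), the block-wise multiply-and-accumulate behaviour of the arithmetic circuit 503 and the accumulator 508 can be sketched in Python as follows; the block size, array shapes, and function name are assumptions chosen for readability.

      import numpy as np

      def npu_matmul_sketch(a, b, block=2):
          """Minimal sketch: C = A @ B computed block by block, with partial
          results added into an accumulator, as described for circuit 503/508."""
          m, k = a.shape
          k2, n = b.shape
          assert k == k2, "inner dimensions must match"
          c = np.zeros((m, n))                    # plays the role of accumulator 508
          for start in range(0, k, block):
              a_blk = a[:, start:start + block]   # slice of input matrix A (input memory 501)
              b_blk = b[start:start + block, :]   # slice of weight matrix B (weight memory 502)
              c += a_blk @ b_blk                  # partial result accumulated
          return c

      a = np.arange(12, dtype=float).reshape(3, 4)
      b = np.arange(8, dtype=float).reshape(4, 2)
      assert np.allclose(npu_matmul_sketch(a, b), a @ b)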
  • Unified memory 506 is used to store input data and output data.
  • the weight data is directly transferred to the weight memory 502 through a storage unit access controller (direct memory access controller, DMAC) 505.
  • Input data is also moved to unified memory 506 via the DMAC.
  • the bus interface unit (BIU) 510 is used for the interaction between the DMAC and the instruction fetch buffer 509; the bus interface unit 510 is also used for the instruction fetch memory 509 to obtain instructions from the external memory; and the bus interface unit 510 is further used for the storage unit access controller 505 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
  • the DMAC is mainly used to transfer the input data in the external memory DDR to the unified memory 506 , or the weight data to the weight memory 502 , or the input data to the input memory 501 .
  • the vector calculation unit 507 has multiple operation processing units, and if necessary, further processes the output of the operation circuit 503, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison and so on.
  • the vector calculation unit 507 is mainly used for the calculation of non-convolutional layers or fully connected layers (FC) in the neural network, and can specifically handle: Pooling (pooling), Normalization (normalization) and other calculations.
  • the vector calculation unit 507 may apply a nonlinear function to the output of the arithmetic circuit 503, such as a vector of accumulated values, to generate activation values.
  • vector computation unit 507 generates normalized values, merged values, or both.
  • vector computation unit 507 stores the processed vectors to unified memory 506 .
  • the vectors processed by the vector computation unit 507 can be used as the activation input to the arithmetic circuit 503 .
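  • Purely as an illustrative sketch (the operator choices and shapes below are assumptions, not the disclosed implementation), the post-processing performed by the vector calculation unit 507 on the accumulated output can be pictured as follows.

      import numpy as np

      def vector_unit_sketch(acc_output):
          """Apply a nonlinear function, a normalization, and 2x2 max pooling to
          the accumulated output, mirroring the operations listed for unit 507."""
          activated = np.maximum(acc_output, 0.0)                 # nonlinear activation (ReLU assumed)
          norm = activated / (np.linalg.norm(activated) + 1e-9)   # simple normalization
          h, w = norm.shape
          pooled = norm[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
          return pooled                                           # could be stored to unified memory 506

      print(vector_unit_sketch(np.arange(16, dtype=float).reshape(4, 4)))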
  • the instruction fetch memory (instruction fetch buffer) 509 connected to the controller 504 is used to store the instructions used by the controller 504;
  • the unified memory 506, the input memory 501, the weight memory 502 and the instruction fetch memory 509 are all On-Chip memories. External memory is independent of the NPU hardware architecture.
  • the target tracking method of the embodiment of the present application may be applied to an electronic device.
  • the electronic device may be a terminal device or a server or a chip with computing capabilities.
  • Terminal devices may include mobile phones, computers, or tablets.
  • FIG. 6 shows a schematic diagram of a scenario in which the target tracking method according to the embodiment of the present application is applied to security or monitoring.
  • a radar 601 , a camera 602 and an electronic device 603 may be included.
  • the radar 601 and the camera 602 can be set at positions such as utility poles, so that the radar 601 and the camera 602 have a better field of view.
  • Radar 601 and camera 602 can communicate with electronic device 603, respectively.
  • the point cloud data measured by the radar 601 and the images collected by the camera 602 can be transmitted to the electronic device 603, and the electronic device 603 can then use the target tracking method of the embodiment of the present application, based on the point cloud data of the radar 601 and the images collected by the camera 602, to track, for example, the person 604.
  • a warning may be given by displaying it on a screen, by voice, and/or by a warning device, etc., which is not specifically limited in this embodiment of the present application.
  • the camera target tracking results described in the embodiments of the present application may include: a target bounding box (or called a visual bounding box, etc.) obtained by framing an image captured by a camera, or other data for calibrating the target, and the like.
  • the camera target tracking result may further include one or more of the following: the position or speed of the target, etc. The number of camera target tracking results may be one or more, and the specific content and quantity of the camera target tracking results are not specifically limited in this embodiment of the present application.
  • the radar target tracking results described in the embodiments of the present application may include: target point clouds collected by radar, or other data for calibrating targets, and the like.
  • the radar target tracking result may further include one or more of the following: the position or speed of the target, etc. The number of radar target tracking results may be one or more, and the specific content and quantity of the radar target tracking results are not specifically limited in this embodiment of the present application.
  • the radars described in the embodiments of the present application may include: millimeter wave radars or imaging radars (image radars).
  • an imaging radar can obtain more point cloud data than a millimeter wave radar. Therefore, when an imaging radar is used for target tracking, the size of the target can be obtained based on the larger amount of point cloud data collected by the imaging radar, and radar-camera fusion that incorporates the size of the target can then achieve more accurate target tracking than a millimeter wave radar.
  • the camera target tracking result described in the embodiment of the present application may be the target tracking result calibrated in the camera coordinate system.
  • the radar target tracking result described in the embodiments of the present application may be the target tracking result calibrated in the radar coordinate system.
  • the camera coordinate system described in this embodiment of the present application may be a camera-centered coordinate system.
  • the camera is at the origin, the x-axis points to the right, the z-axis points forward (toward the screen or the camera direction), and the y-axis points upward (relative to the camera itself rather than the world).
  • the camera coordinate system may also be called the visual coordinate system.
  • the radar coordinate system described in the embodiments of the present application may be a radar-centered coordinate system.
  • the radar coordinate system may also be called a top view coordinate system, a bird's eye view (BEV) coordinate system, or the like.
  • FIG. 7 is a schematic flowchart of a target tracking method provided by an embodiment of the present application. As shown in FIG. 7 , the method includes:
  • S701 Acquire the camera target tracking result and the radar target tracking result.
  • the camera may be used to capture images, and the radar may be used to detect and obtain point cloud data.
  • the camera, the radar, and the device for executing the target tracking method may be integrated in one device, may be independent of each other, or any two of them may be integrated in one device, which is not specifically limited in this embodiment of the present application.
  • the camera may have computing capability, and the camera may obtain the camera target tracking result according to the captured image, and send the camera target tracking result to the device for executing the target tracking method.
  • the radar may have computing capability, and the radar may obtain the radar target tracking result according to the point cloud data, and send the radar target tracking result to the device for executing the target tracking method.
  • the device for executing the target tracking method can obtain the captured images from the camera and the point cloud data from the radar; the device can then obtain the camera target tracking result according to the captured images and obtain the radar target tracking result according to the point cloud data.
  • the camera target tracking result may be a target tracking result obtained by using any feasible camera tracking algorithm, and the radar target tracking result may be a target tracking result obtained by using any feasible radar tracking algorithm; the specific methods for obtaining the camera target tracking result and the radar target tracking result are not limited in this embodiment of the present application.
  • S702 Obtain a target tracking result according to the camera target tracking result and the target model corresponding to the radar target tracking result; wherein, the target model is used to indicate the relationship between the target in the radar target tracking result and the height information of the target.
  • the target model of the radar target tracking result described in the embodiment of the present application is used to indicate the relationship between the target in the radar target tracking result and the height information of the target.
  • the target model may be a model obtained by combining the height information of the target and the position information of the target in the radar coordinate system.
  • the embodiment of the present application can extend the sparse and scattered point cloud data in the radar target tracking result into a target model with height information that covers a larger area.
  • the camera target tracking result is usually related to the shape of the target.
  • the aforementioned camera target tracking result may include a target bounding box for framing the target, and the target model corresponding to the radar target tracking result in the embodiment of the present application is correlated with the height of the target, which can effectively expand the range of the target monitored by the radar. Therefore, when target association is performed according to the camera target tracking result and the target model corresponding to the radar target tracking result, the association range between the camera target tracking result and the target model corresponding to the radar target tracking result can be effectively expanded, and an accurate target tracking result can then be obtained through association.
  • the target tracking results described in the embodiments of the present application may include one or more of the following: the type, position, or speed of the target. The number of targets may be one or more, and the specific content and quantity are not specifically limited in this embodiment of the present application.
  • in the target tracking method of the embodiment of the present application, when the camera target tracking result and the radar target tracking result are associated, the height information of the target is introduced into the radar target tracking result through the target model indicating the relationship between the target in the radar target tracking result and the height information of the target, and the target tracking result can then be obtained according to the camera target tracking result and the target model. Because the target model includes the height information of the target, the range of the target monitored by the radar can be effectively expanded, and an accurate target tracking result can then be obtained through association.
  • the method may further include: obtaining the height information of the target according to the type information of the target in the radar target tracking result; and fusing the height information of the target with the target in the radar target tracking result to obtain the target model.
  • the targets in the radar target tracking result obtained by radar detection can be classified. For example, a target may be classified according to a radar classification algorithm, and the type information of the classified target may include: vehicle (car), pedestrian, animal, bicycle (cycle), and the like.
  • the height information of the target can be determined according to the type information of the target.
  • the height information of the target can be estimated from the type information of the target.
  • the correspondence between the type information of the target and the height information of the target may be pre-defined or preset, so that after the type information of the target is determined, the corresponding height information may be matched in the correspondence.
  • the height information may be a specific height value or a height interval.
  • the correspondence may include, for example: a vehicle (car) height of 0.8-1.2 meters (m), a pedestrian height of 1.0-1.8 m, and an animal height of 0.4-1.0 m.
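  • A minimal sketch of such a pre-defined correspondence, restating the example values above as a lookup table (the dictionary and function names are hypothetical):

      TYPE_TO_HEIGHT_M = {
          "car":        (0.8, 1.2),   # vehicle height interval from the example above
          "pedestrian": (1.0, 1.8),
          "animal":     (0.4, 1.0),
      }

      def height_interval_for(target_type):
          """Match the classified target type to its preset height interval (metres)."""
          return TYPE_TO_HEIGHT_M.get(target_type)

      print(height_interval_for("pedestrian"))   # -> (1.0, 1.8)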
  • FIG. 8 shows a schematic diagram of a target type-probability height correspondence relationship based on a Gaussian distribution.
  • height distribution 1, height distribution 2 and height distribution 3 respectively represent the probability height distribution corresponding to different target types.
  • the height information of the target and the target in the radar target tracking result can be fused to obtain the target model.
  • a height value with the greatest probability, or a relatively large height value, can be selected, and the height line segment corresponding to that height value can be fused with the position of the target in the radar target tracking result to obtain the target model.
  • the target model can be a model containing height line segments, the target model can also be called a probability height model, or a probability height line segment model, or the like.
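  • A minimal sketch of building such a probability height model, assuming a Gaussian height distribution per target type as in FIG. 8 (the class and parameter names are hypothetical):

      from dataclasses import dataclass

      @dataclass
      class HeightSegmentModel:
          """Target model: a ground-plane position from the radar track plus a
          vertical line segment representing the selected height."""
          x: float        # position in the radar (bird's eye view) coordinate system
          y: float
          height: float   # height value selected from the type's probability distribution

      def build_target_model(radar_xy, mean_height, height_std, quantile_sigma=0.0):
          # quantile_sigma = 0 picks the most probable height; a positive value
          # picks a taller, more conservative height from the same Gaussian.
          height = mean_height + quantile_sigma * height_std
          return HeightSegmentModel(radar_xy[0], radar_xy[1], height)

      print(build_target_model((12.0, 3.5), mean_height=1.7, height_std=0.1))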
  • S702 includes: projecting the target model to the camera coordinate system to obtain the projected radar target tracking result; according to the camera target tracking result and the projected radar target tracking result, Get target tracking results.
  • because the target model contains the height information of the target, projecting the target model to the camera coordinate system can be understood as introducing height information into the two-dimensional projection plane of the camera coordinate system.
  • the target jointly indicated by the camera target tracking result (such as the target bounding box) and the projected radar target tracking result (such as the line segment representing height and position) can be determined, so as to obtain the target tracking result.
  • projecting the target model to the camera coordinate system includes: transforming the target model to the camera coordinate system according to a preset or defined height conversion relationship.
  • the height conversion relationship can be set or defined in advance based on experiments, etc. After the target model is obtained, the height conversion relationship corresponding to the target model can be matched, and then the target model can be converted to the camera coordinate system.
  • the height conversion relationship described in the embodiments of the present application is used to convert the target tracking result with height in the radar coordinate system to the camera coordinate system. Different height information corresponds to different height conversion relationships.
  • the height conversion relationship may include a height conversion matrix or a set of height conversion matrices, and the embodiment of the present application does not specifically limit the height conversion relationship.
  • FIG. 9 shows a schematic diagram of the calibration of the height conversion relationship of a target model.
  • the target model can be a line segment with a height.
  • different positions of the line segment (such as the two ends, or any position in the middle) can correspond to different height transformation matrices. For example, a height transformation matrix can be related to the distance d and the included angle θ from the target to the origin of the camera coordinate system; height transformation matrices can be constructed separately for multiple positions of the line segment, and the set of height transformation matrices (or sequence of height matrices) composed of these matrices can then be used to convert the line segment to the camera coordinate system.
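  • The following sketch illustrates the idea of projecting the two ends of a height line segment into the camera coordinate system; an ideal pinhole camera and a single assumed radar-to-camera extrinsic matrix stand in for the calibrated set of height transformation matrices, so all numeric values are assumptions.

      import numpy as np

      def pinhole_projection(point_cam, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
          """Project a 3D point expressed in the camera frame onto the image plane."""
          x, y, z = point_cam
          return np.array([fx * x / z + cx, fy * y / z + cy])

      def project_height_segment(radar_xy, height, radar_to_cam):
          """Transform the bottom and top of the height segment to the camera frame
          and project both, yielding the projected radar target tracking result."""
          bottom = np.array([radar_xy[0], radar_xy[1], 0.0, 1.0])
          top    = np.array([radar_xy[0], radar_xy[1], height, 1.0])
          pixels = []
          for p in (bottom, top):          # in the text each height has its own matrix;
              p_cam = radar_to_cam @ p     # one extrinsic matrix is used here for brevity
              pixels.append(pinhole_projection(p_cam[:3]))
          return np.array(pixels)          # 2x2 array of segment endpoints in pixels

      # Assumed extrinsics: radar x -> camera x, radar z (up) -> camera -y,
      # radar y (forward) -> camera z, camera mounted about 1.5 m above the ground.
      radar_to_cam = np.array([[1, 0,  0, 0.0],
                               [0, 0, -1, 1.5],
                               [0, 1,  0, 0.0],
                               [0, 0,  0, 1.0]])
      print(project_height_segment((2.0, 15.0), 1.7, radar_to_cam))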
  • for different area types, the height conversion relationship corresponding to the same height information is different.
  • the area type described in this embodiment of the present application can be used to describe the ground type of the area where the target is located.
  • area types may include one or more of the following: areas of ground relief (eg, grass or undulating pavement, etc.), areas with slopes (eg, slopes, etc.), or areas of flat ground (eg, flat roads, etc.).
  • the ground plane on which the target stands may be different in different areas, and the height of the target relative to the origin of the camera coordinate system may therefore differ from area to area. Consequently, if the same height conversion relationship is used for the same target located in different areas, the height obtained from the conversion may be inconsistent with the height of the target relative to the origin of the camera coordinate system, which may lead to inaccuracy in the subsequent radar-camera fusion.
  • in the embodiments of the present application, different area types correspond to different height conversion relationships for the same height information, so that the target model can be accurately converted according to the height conversion relationship of each area type.
  • FIG. 10 shows a schematic diagram of including multiple area types in a scene.
  • area 1 represents grass
  • area 2 represents slope
  • area 3 represents flat road
  • the same height information corresponds to different height conversion relationships in area 1, area 2, and area 3, respectively.
  • the target area type corresponding to the target model can be determined (such as area 1, area 2, or area 3); among the height conversion relationships corresponding to the target area type, the target height conversion relationship matching the height information of the target model is then used to convert the target model to the camera coordinate system. In this way, the target model can be accurately converted into the camera coordinate system by using the height conversion relationship of each area.
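  • A minimal sketch of selecting the conversion by area type, where per-area ground-level offsets (all values assumed) stand in for the separately calibrated height conversion relationships:

      # Hypothetical ground-level offsets (metres) for each area type.
      AREA_GROUND_OFFSET_M = {"flat road": 0.0, "slope": 0.8, "grass": 0.2}

      def segment_for_area(target_model, area_type):
          """Shift the bottom and top of the height segment by the ground level of
          the area the target stands in, before projecting it to the camera frame."""
          offset = AREA_GROUND_OFFSET_M[area_type]
          return offset, offset + target_model["height"]

      print(segment_for_area({"height": 1.7}, "slope"))   # -> (0.8, 2.5)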
  • obtaining the target tracking result according to the camera target tracking result and the projected radar target tracking result may include: using any association algorithm to calculate the degree of association between the camera target tracking result and the projected radar target tracking result. Association algorithms include, for example, one or more of the following: global nearest neighbor (GNN), probabilistic data association (PDA), joint probabilistic data association (JPDA), or intersection over union (IoU), and so on.
  • it is determined, according to the overlap ratio (or referred to as the overlapping proportion) of the camera target tracking result and the projected radar target tracking result, that the camera target tracking result and the projected radar target tracking result are the same target, where the overlap ratio is greater than the first value.
  • the first value may be any value between 0.5 and 1, and the first value is not specifically limited in this embodiment of the present application. It can be understood that, in general IoU calculation, the first value has a stable confidence distribution; therefore, when the IoU calculation is used for association, the first value does not need to be adjusted manually, which improves the versatility of the association calculation in this embodiment of the present application.
  • the overlap ratio of the camera target tracking results and the projected radar target tracking results can be calculated.
  • if the overlap ratio is greater than or equal to the first value, it is determined that the camera target tracking result and the projected radar target tracking result are the same target.
  • each camera target tracking result can be paired with each projected radar target tracking result, the overlap ratio of each pair of camera target tracking result and projected radar target tracking result is calculated separately, and each pair whose overlap ratio is greater than or equal to the first value is identified as the same target.
  • if the overlap ratio between the camera target tracking result and the projected radar target tracking result is less than or equal to the first value, it is considered that the camera target tracking result and the projected radar target tracking result do not correspond to the same target.
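  • A minimal sketch of the overlap-ratio association, assuming the projected radar result has been widened into a thin axis-aligned box around the height line segment and taking 0.5 as the first value (both are assumptions for illustration):

      def iou(box_a, box_b):
          """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
          ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
          ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
          inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
          union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
                   + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
          return inter / union if union > 0 else 0.0

      def associate(camera_boxes, projected_boxes, first_value=0.5):
          """Pair every camera box with every projected radar box and keep the
          pairs whose overlap ratio reaches the first value."""
          return [(c, r) for c, cam in enumerate(camera_boxes)
                         for r, rad in enumerate(projected_boxes)
                         if iou(cam, rad) >= first_value]

      cams = [(100, 80, 160, 260)]
      rads = [(110, 90, 150, 255), (400, 80, 460, 260)]
      print(associate(cams, rads))   # -> [(0, 0)]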
  • alternatively, whether the camera target tracking result and the projected radar target tracking result are the same target, or are not the same target, may be determined according to the settings of the actual application scenario, which is not specifically limited in this embodiment of the present application.
  • in some cases, multiple camera target tracking results may overlap with one projected radar target tracking result (hereinafter referred to as multi-C-R association), or multiple projected radar target tracking results may overlap with one camera target tracking result (hereinafter referred to as multi-R-C association). If two or more of the calculated overlap ratios in a multi-C-R association or multi-R-C association are greater than or equal to the first value, multiple camera target tracking results may be mistakenly associated with the same target, or multiple projected radar target tracking results may be mistakenly associated with the same target.
  • in this case, whether the camera target tracking result and the projected radar target tracking result are the same target can be further determined according to the position and/or speed of the overlapping target in the camera target tracking result and the position and/or speed of the overlapping target in the radar target tracking result.
  • it may be determined that the camera target tracking result and the projected radar target tracking result are the same target when the overlap ratio is greater than the first value and the position and/or speed of the overlapping target in the camera target tracking result and the projected radar target tracking result satisfy a preset condition.
  • the preset condition includes: the difference between the position and/or speed of the overlapping target in the camera target tracking result and the position and/or speed of the overlapping target in the radar target tracking result is less than the second value.
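  • A minimal sketch of this further check, splitting the second value into separate distance and speed thresholds (the thresholds and field names are assumptions):

      def same_target(overlap_ratio, cam_state, radar_state,
                      first_value=0.5, dist_thresh_m=1.0, speed_thresh_mps=0.5):
          """The overlap ratio must exceed the first value AND the position/speed
          gap between the overlapping targets must stay within the thresholds."""
          if overlap_ratio <= first_value:
              return False
          dx = cam_state["x"] - radar_state["x"]
          dy = cam_state["y"] - radar_state["y"]
          close_enough = (dx * dx + dy * dy) ** 0.5 <= dist_thresh_m
          similar_speed = abs(cam_state["v"] - radar_state["v"]) <= speed_thresh_mps
          return close_enough and similar_speed

      print(same_target(0.7, {"x": 10.0, "y": 3.0, "v": 1.2},
                             {"x": 10.4, "y": 3.1, "v": 1.0}))   # -> True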
  • Figure 11 shows a schematic diagram of multiple R-Cs and multiple C-Rs.
  • the post-projection radar target tracking result 1001 and the post-projection radar target tracking result 1002 both overlap with the camera target tracking result 1003 .
  • the overlap ratio of the projected radar target tracking result 1002 and the camera target tracking result 1003 is greater than or equal to the first value, and the overlap ratio of the projected radar target tracking result 1001 and the camera target tracking result 1003 is less than the first value, It can be determined that the projected radar target tracking result 1002 and the camera target tracking result 1003 are the same target, and it can be determined that the projected radar target tracking result 1001 and the camera target tracking result 1003 are not the same target.
  • if the overlap ratio of the projected radar target tracking result 1002 and the camera target tracking result 1003 is greater than or equal to the first value, and the overlap ratio of the projected radar target tracking result 1001 and the camera target tracking result 1003 is also greater than or equal to the first value, a further check is made. If the distance between the position of the target in the projected radar target tracking result 1001 and the position of the target in the camera target tracking result 1003 is less than or equal to the distance threshold, and/or the difference between the speed of the target in the projected radar target tracking result 1001 and the speed of the target in the camera target tracking result 1003 is less than or equal to the speed difference threshold, it is determined that the projected radar target tracking result 1001 and the camera target tracking result 1003 are the same target. Similarly, if the distance between the position of the target in the projected radar target tracking result 1002 and the position of the target in the camera target tracking result 1003 is less than or equal to the distance threshold, and/or the difference between the speed of the target in the projected radar target tracking result 1002 and the speed of the target in the camera target tracking result 1003 is less than or equal to the speed difference threshold, it is determined that the projected radar target tracking result 1002 and the camera target tracking result 1003 are the same target. In other cases, it can be determined that the projected radar target tracking result 1001 and/or the projected radar target tracking result 1002 are not the same target as the camera target tracking result 1003.
  • the camera target tracking result 1004 and the camera target tracking result 1005 overlap with the post-projection radar target tracking result 1006 .
  • whether the camera target tracking result 1004 or the camera target tracking result 1005 is the same target as the projected radar target tracking result 1006 can be determined in a manner similar to that described for the multi-R-C case, which will not be repeated here.
  • FIG. 12 is a schematic flowchart of another target tracking method provided by an embodiment of the present application, and the method includes:
  • S1201 The target tracking device obtains the camera target tracking result.
  • a camera may be set in a place where target tracking is required, the camera may capture images, and the target tracking device may acquire images from the camera.
  • the target tracking device can perform image recognition and other processing on the images obtained from the camera to achieve Bounding Box tracking, and the result of the Bounding Box tracking is used as the camera target tracking result.
  • the camera target tracking result may refer to the target bounding box (Bounding Box) used to frame the target in the camera coordinate system, and the number of target bounding boxes may be one or more.
  • S1202 The target tracking device obtains the radar target tracking result.
  • a radar may be set up in a place where target tracking needs to be performed, the radar can detect the target, and the target tracking device can obtain the data detected by the radar from the radar.
  • the target tracking device can process the data detected by the radar to obtain the point cloud data of the target as the result of the radar target tracking.
  • the radar target tracking result may refer to a point cloud used for calibrating the target, wherein the number of point clouds corresponding to one target may be related to the performance of the radar, and the number of targets may be one or more.
  • S1203 The target tracking device obtains the type information of the target in the radar target tracking result through a point cloud classification algorithm.
  • the target tracking device may determine the type information of the target in the radar tracking result according to the analysis of the radar target tracking result. For example, it may be determined that the targets in the radar tracking result are people and/or vehicles, and the embodiment of the present application does not limit the number of targets and the type information of the targets.
  • S1204 The target tracking device matches the height information of the target according to the type information of the target.
  • the height information of the person may be 1.0-1.8m
  • the height information of the vehicle may be 0.4-1.0m.
  • S1205 The target tracking device performs R-C calibration for different heights on the image domain to obtain transformation matrices corresponding to different heights.
  • the image domain may be an area of the image in the camera coordinate system, and different areas correspond to different height transformation matrices. Subsequently, by identifying the specific area in which the target appears in the image, the corresponding height transformation matrix can be selected for the target to achieve a more accurate tracking effect.
  • S1205 may be a step performed in advance, or it may be understood that S1205 may be placed at any position before, in the middle of, or after S1201-S1204; the execution position of S1205 is not specifically limited in this embodiment of the present application.
  • S1206 The target tracking device projects the target model containing the height information to the image domain (which can be understood as projecting it to the camera coordinate system) through the transformation matrices corresponding to different heights.
  • S1207 The target tracking device associates the camera target tracking result with the projected radar target tracking result.
  • the target jointly determined in the camera target tracking result and the radar target tracking result can be obtained.
  • exemplarily, when target tracking is performed according to the camera, the camera target tracking result 20 is obtained at position A; when target tracking is performed according to the radar, the radar target tracking result 21 is obtained at position A. The target model obtained for the radar target tracking result 21 can be projected to the camera coordinate system as the line segment 23. Because the camera target tracking result 20 and the line segment 23 overlap by a large proportion, it can be considered that the camera target tracking result and the radar target tracking result at position A are the same target, and the camera target tracking result and the radar target tracking result corresponding to the same target are then fused to obtain a more accurate and complete tracking result. Exemplarily, the bottom edge of the camera target tracking result 20 can be pulled down in combination with the length of the line segment 23 to achieve a more accurate determination of the target.
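  • A minimal sketch of this bottom-edge refinement (the box format and coordinate convention are assumptions; y grows downward in image coordinates):

      def refine_bottom_edge(camera_box, segment_top_y, segment_bottom_y):
          """If the projected height line segment reaches below the visual bounding
          box (e.g. legs missed by the detector), pull the box's bottom edge down to
          the lower end of the segment; box format is (x1, y1, x2, y2)."""
          x1, y1, x2, y2 = camera_box
          new_y2 = max(y2, segment_bottom_y)   # pull the bottom edge down if needed
          new_y1 = min(y1, segment_top_y)      # optionally also cover the segment top
          return (x1, new_y1, x2, new_y2)

      print(refine_bottom_edge((100, 80, 160, 200), segment_top_y=85, segment_bottom_y=255))
      # -> (100, 80, 160, 255)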
  • the above method can be used to associate any camera target tracking result with any radar target tracking result, so that the targets jointly tracked in the camera target tracking result and the radar target tracking result can be obtained; the number of such targets may be one or more. It can be understood that if the overlap ratio between a camera target tracking result and a projected radar target tracking result is small, it can be considered that the target tracked by the camera target tracking result and the target tracked by the radar target tracking result are not the same target.
  • S1208 The target tracking device performs target tracking according to the associated result.
  • target tracking may be performed on one or more targets obtained by the above association, respectively, and the specific implementation of the target tracking is not limited in this embodiment of the present application.
  • the basic framework uses a target-level fusion framework with better efficiency and stability (because tracking is performed at the granularity of targets), so higher computational efficiency can be achieved.
  • the dependence on the accuracy of the midpoint of the bottom edge of the bounding box is greatly reduced, and the dependence on the position accuracy of the radar point cloud is greatly reduced.
  • high target tracking accuracy can still be achieved.
  • the radar target tracking result in FIG. 12 may come from an imaging radar. Compared with a millimeter-wave radar, an imaging radar provides more point cloud data; therefore, when an imaging radar is used for target tracking, the size of the target can be obtained based on the larger amount of point cloud data collected by the imaging radar. In S1206, the size of the target can be further projected to the camera coordinate system together with the target model, so that a three-dimensional data relationship including the visual bounding box, the height information, and the size can be obtained in the camera coordinate system. S1207 can then be replaced by performing target association using the visual bounding box, the height information, and the size. Exemplarily, the overlap ratio among the visual bounding box, the height information, and the size can be calculated simultaneously, and when the overlap ratio is greater than or equal to a certain value, the results are associated as the same target. In this way, more accurate target association can be achieved compared with a millimeter-wave radar, and thus more accurate target tracking can be achieved.
  • the above implementing devices include hardware structures and/or software units corresponding to executing the functions.
  • the present application can be implemented in hardware or a combination of hardware and computer software with the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • an embodiment of the present application provides an apparatus for target tracking; the apparatus includes a processor 1300, a memory 1301, and a transceiver 1302.
  • the processor 1300 is responsible for managing the bus architecture and general processing, and the memory 1301 may store data used by the processor 1300 when performing operations.
  • the transceiver 1302 is used to receive and transmit data under the control of the processor 1300 for data communication with the memory 1301 .
  • the bus architecture may include any number of interconnected buses and bridges, in particular one or more processors represented by processor 1300 and various circuits of memory represented by memory 1301 linked together.
  • the bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and, therefore, will not be described further herein.
  • the bus interface provides the interface.
  • the processes disclosed in the embodiments of the present application may be applied to the processor 1300 or implemented by the processor 1300 .
  • each step of the flow of target tracking can be completed by an integrated logic circuit of hardware in the processor 1300 or instructions in the form of software.
  • the processor 1300 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 1301, and the processor 1300 reads the information in the memory 1301, and completes the steps of the signal processing flow in combination with its hardware.
  • the processor 1300 is configured to read the program in the memory 1301 and execute the method flow of S701-S702 shown in FIG. 7 or the method flow of S1201-S1208 shown in FIG. 12.
  • FIG. 14 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • Chip 1400 includes one or more processors 1401 and interface circuits 1402 .
  • the chip 1400 may further include a bus 1403 . in:
  • the processor 1401 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method may be completed by an integrated logic circuit of hardware in the processor 1401 or an instruction in the form of software.
  • the above-mentioned processor 1401 may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an MCU, an MPU, a CPU, or one or more coprocessors.
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the interface circuit 1402 can be used to send or receive data, instructions or information.
  • the processor 1401 can use the data, instructions or other information received by the interface circuit 1402 to process, and can send the processing completion information through the interface circuit 1402.
  • the chip further includes a memory, which may include a read-only memory and a random access memory, and provides operation instructions and data to the processor.
  • a portion of the memory may also include non-volatile random access memory (NVRAM).
  • the memory stores executable software modules or data structures
  • the processor may execute corresponding operations by calling operation instructions stored in the memory (the operation instructions may be stored in the operating system).
  • the chip may be used in the target tracking device involved in the embodiments of the present application.
  • the interface circuit 1402 can be used to output the execution result of the processor 1401 .
  • processor 1401 and the interface circuit 1402 can be implemented by hardware design, software design, or a combination of software and hardware, which is not limited here.
  • an embodiment of the present application provides an apparatus for target tracking, where the apparatus includes a transceiver module 1500 and a processing module 1501 .
  • the transceiver module 1500 is used to obtain the camera target tracking result and the radar target tracking result.
  • the processing module 1501 is used to obtain the target tracking result according to the camera target tracking result and the target model corresponding to the radar target tracking result; wherein, the target model is used to indicate the correlation between the target in the radar target tracking result and the height information of the target .
  • the processing module is also used to obtain the height information of the target according to the type information of the target in the radar target tracking result; the processing module is also used to fuse the height information of the target and the target in the radar target tracking result, Get the target model.
  • the processing module is specifically used to project the target model to the camera coordinate system to obtain the projected radar target tracking result; and obtain the target tracking result according to the camera target tracking result and the projected radar target tracking result.
  • the processing module is specifically configured to convert the target model to the camera coordinate system according to a preset or defined height conversion relationship; wherein, different height information corresponds to different height conversion relationships, and the height conversion relationship uses It is used to convert the target tracking result with height in the radar coordinate system to the camera coordinate system.
  • the height conversion relationship corresponding to the height information is different.
  • the area types include one or more of the following: ground undulating areas, areas with slopes, or areas with flat ground.
  • the processing module is specifically used to determine the target area type corresponding to the target model; according to the height conversion relationship corresponding to the target area type, the target height conversion relationship matching the height information of the target model converts the target The model is transformed to the camera coordinate system.
  • the processing module is specifically configured to determine that the camera target tracking result and the projected radar target tracking result are the same target according to the overlap ratio of the camera target tracking result and the projected radar target tracking result; wherein, The overlap ratio is greater than the first value.
  • the processing module is specifically configured to, when the overlap ratio is greater than the first value, and the position and/or speed of the overlapped target in the camera target tracking result and the post-projection radar target tracking result satisfy a preset In the case of conditions, it is determined that the tracking result of the camera target and the tracking result of the radar target after projection are the same target.
  • the preset conditions include: the position and/or velocity of the overlapping target in the tracking result of the camera target, and the difference between the position and/or velocity of the overlapping target in the tracking result of the radar target is less than a second value .
  • the radar target tracking result comes from an imaging radar; the target model also includes size information of the target.
  • the camera target tracking result includes the target bounding box; the radar target tracking result includes the target point cloud.
  • the functions of the transceiver module 1500 and the processing module 1501 shown in FIG. 15 may be executed by the processor 1300 running a program in the memory 1301 , or executed by the processor 1300 alone.
  • the present application provides a vehicle; the vehicle includes at least one camera 1601, at least one memory 1602, at least one transceiver 1603, at least one processor 1604, and a radar 1605.
  • the camera 1601 is used to acquire an image, and the image is used to obtain the tracking result of the camera target.
  • the radar 1605 is used to obtain the target point cloud, and the target point cloud is used to obtain the radar target tracking result.
  • the memory 1602 is used to store one or more programs and data information; wherein the one or more programs include instructions.
  • the transceiver 1603 is used for data transmission with the communication device in the vehicle and data transmission with the cloud.
  • the processor 1604 is used to obtain the camera target tracking result and the radar target tracking result; obtain the target tracking result according to the camera target tracking result and the target model corresponding to the radar target tracking result; wherein, the target model is used to indicate the radar target tracking result The relationship between the target and the height information of the target.
  • various aspects of the target tracking method provided by the embodiments of the present application may also be implemented in the form of a program product, which includes program code, and when the program code runs on a computer device, all The program code is used to cause the computer device to perform the steps in the method for target tracking according to various exemplary embodiments of the present application described in this specification.
  • the program product may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a program product for object tracking may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may run on a server device.
  • the program product of the present application is not limited thereto. In this document, a readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, carrying readable program code therein. Such propagated data signals may take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a readable signal medium can also be any readable medium, other than a readable storage medium, that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages, such as Java and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user computing device, partly on the user device, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device.
  • the embodiments of the present application also provide a computing-device-readable storage medium for the target tracking method, that is, a medium whose content is not lost after power-off.
  • The storage medium stores a software program, including program code. When the program code is run on a computing device and read and executed by one or more processors, the software program can implement the target tracking solution of any of the above embodiments of the present application.
  • the embodiment of the present application also provides an electronic device.
  • the electronic device includes: a processing module, configured to support the target tracking apparatus in performing the steps in the above embodiments, for example, performing S701.
  • the target tracking device includes but is not limited to the unit modules listed above.
  • the specific functions that can be implemented by the above functional units also include but are not limited to the functions corresponding to the method steps described in the above examples.
  • for a detailed description of the other units of the electronic device, refer to the detailed description of the corresponding method steps, which is not repeated here in this embodiment of the present application.
  • the electronic device involved in the above embodiments may include: a processing module, a storage module and a communication module.
  • the storage module is used to save the program codes and data of the electronic device.
  • the communication module is used to support the communication between the electronic device and other network entities, so as to realize the functions of the electronic device's call, data interaction, Internet access and so on.
  • the processing module is used to control and manage the actions of the electronic device.
  • the processing module may be a processor or a controller.
  • the communication module may be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module may be a memory.
  • the electronic device may further include an input module and a display module.
  • the display module can be a screen or a display.
  • the input module can be a touch screen, a voice input device, or a fingerprint sensor.
  • the present application may also be implemented in hardware and/or software (including firmware, resident software, microcode, and the like). Furthermore, the present application may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium, for use by an instruction execution system or in combination with an instruction execution system.
  • in this context, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present application provide a target tracking method and apparatus, relating to the field of data processing technology, and applicable to security, assisted driving, and autonomous driving. The method includes: acquiring a camera target tracking result and a radar target tracking result; and obtaining a target tracking result according to the camera target tracking result and a target model corresponding to the radar target tracking result, where the target model indicates the relationship between a target in the radar target tracking result and the height information of the target. In the embodiments of the present application, because the target model includes the height information of the target, when the camera target tracking result and the radar target tracking result are associated, the target tracking result monitored by the radar can be combined with the height information of the target, so that the range of the target monitored by the radar is effectively expanded and an accurate target tracking result is obtained through association.

Description

Target tracking method and apparatus
This application claims priority to Chinese Patent Application No. 202010953032.9, entitled "Target Tracking Method and Apparatus", filed with the China National Intellectual Property Administration on September 11, 2020, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of data processing technology, and in particular, to a target tracking method and apparatus.
Background
With the development of society, intelligent terminals such as intelligent transportation devices, smart home devices, and robots are gradually entering people's daily lives. Sensors play a very important role in intelligent terminals. The various sensors installed on intelligent terminals, such as millimeter-wave radars, lidars, imaging radars, ultrasonic radars, and cameras, enable an intelligent terminal to perceive the surrounding environment, collect data, identify and track moving objects, recognize static scenes such as lane lines and signboards, and perform path planning in combination with a navigator and map data. Exemplarily, in fields such as autonomous driving, security, or surveillance, target tracking can be performed based on sensors, and certain policies can be implemented based on the target tracking. For example, in the field of autonomous driving, a driving policy can be formulated based on target tracking; in the field of security or surveillance, an alarm can be raised based on target tracking against unsafe factors such as illegal intrusion.
In the related art, there are methods for target tracking based on a radar and a camera. For example, the position and speed of a target may be detected by a camera and a radar separately, and then an association algorithm is used to confirm detections whose positions and speeds are close in the camera detection and the radar detection as the same target.
However, when confirming the same target, the above technology is prone to mis-association, resulting in low target tracking accuracy.
Summary
Embodiments of the present application provide a target tracking method and apparatus, which can improve the accuracy of target tracking performed by using a radar and a camera.
According to a first aspect, an embodiment of the present application provides a target tracking method, including: acquiring a camera target tracking result and a radar target tracking result; and obtaining a target tracking result according to the camera target tracking result and a target model corresponding to the radar target tracking result, where the target model indicates the relationship between a target in the radar target tracking result and the height information of the target. In this way, because the target model includes the height information of the target, when the camera target tracking result and the radar target tracking result are associated, the target tracking result monitored by the radar can be combined with the height information of the target, so that the range of the target monitored by the radar is effectively expanded and an accurate target tracking result is obtained through association.
In a possible implementation, the method further includes: obtaining the height information of the target according to type information of the target in the radar target tracking result; and fusing the height information of the target with the target in the radar target tracking result to obtain the target model. In this way, a target model capable of representing the position and height of the target can be obtained, and the target model can subsequently be used to obtain an accurate target tracking result through association.
In a possible implementation, there is a pre-defined or preset correspondence between the type information of the target and the height information of the target. In this way, the height information of the target can be conveniently obtained based on the type information of the target.
In a possible implementation, obtaining the target tracking result according to the camera target tracking result and the target model corresponding to the radar target tracking result includes: projecting the target model to the camera coordinate system to obtain a projected radar target tracking result; and obtaining the target tracking result according to the camera target tracking result and the projected radar target tracking result. In this way, an accurate target tracking result can subsequently be obtained in the camera coordinate system based on the camera target tracking result and the projected radar target tracking result.
In a possible implementation, projecting the target model to the camera coordinate system includes: converting the target model to the camera coordinate system according to a preset or defined height conversion relationship, where different height information corresponds to different height conversion relationships, and the height conversion relationship is used to convert a target tracking result with height in the radar coordinate system to the camera coordinate system. In this way, the target model can be conveniently converted to the camera coordinate system based on the height conversion relationship.
In a possible implementation, for different area types, the height conversion relationships corresponding to the same height information are different. Because different areas correspond to different horizontal reference lines, for example, the apparent height of the same target in a low-lying area is usually different from that in a flat area, different height conversion relationships are set for different areas, so that an accurate conversion can be achieved when the height conversion relationship is used to convert a target tracking result with height in the radar coordinate system to the camera coordinate system.
In a possible implementation, the area types include one or more of the following: a ground undulation area, an area with a slope, or a flat ground area. In this way, accurate conversion between coordinate systems can be achieved for common ground types.
In a possible implementation, converting the target model to the camera coordinate system according to the preset or defined height conversion relationship includes: determining a target area type corresponding to the target model; and converting the target model to the camera coordinate system according to the target height conversion relationship that matches the height information of the target model among the height conversion relationships corresponding to the target area type.
In a possible implementation, obtaining the target tracking result according to the camera target tracking result and the projected radar target tracking result includes: determining, according to the overlap ratio of the camera target tracking result and the projected radar target tracking result, that the camera target tracking result and the projected radar target tracking result are the same target, where the overlap ratio is greater than a first value. In this way, the overlap ratio can be used to conveniently and accurately determine that the camera target tracking result and the projected radar target tracking result are the same target.
In a possible implementation, determining, according to the overlap ratio of the camera target tracking result and the projected radar target tracking result, that the camera target tracking result and the projected radar target tracking result are the same target includes: determining that the camera target tracking result and the projected radar target tracking result are the same target when the overlap ratio is greater than the first value and the position and/or speed of the overlapping target in the camera target tracking result and the projected radar target tracking result satisfy a preset condition. In this way, on the basis of calculating the overlap ratio, the position and/or speed of the overlapping target are further combined to determine that the camera target tracking result and the projected radar target tracking result are the same target, so that a more accurate determination can be achieved.
In a possible implementation, the preset condition includes: the difference between the position and/or speed of the overlapping target in the camera target tracking result and the position and/or speed of the overlapping target in the radar target tracking result is less than a second value.
In a possible implementation, the radar target tracking result comes from an imaging radar, and the target model further includes size information of the target. In this way, the overlap ratios among the visual bounding box, the height information, and the size can be calculated simultaneously, and when the overlap ratio is greater than or equal to a certain value, the results are associated as the same target. Because the size is added, more accurate target association can be achieved compared with a millimeter-wave radar, thereby achieving more accurate target tracking.
In a possible implementation, the camera target tracking result includes a target bounding box, and the radar target tracking result includes a target point cloud. In this way, efficient and accurate target tracking can be performed by using the target bounding box and the target point cloud.
According to a second aspect, an embodiment of the present application provides a target tracking apparatus.
The target tracking apparatus may be a vehicle with a target tracking function, or another component with a target tracking function. The target tracking apparatus includes but is not limited to: a vehicle-mounted terminal, a vehicle-mounted controller, a vehicle-mounted module, a vehicle-mounted unit, a vehicle-mounted component, a vehicle-mounted chip, an on-board unit, a vehicle-mounted radar, a vehicle-mounted camera, or another sensor. A vehicle may implement the method provided in the present application by using the vehicle-mounted terminal, the vehicle-mounted controller, the vehicle-mounted module, the vehicle-mounted unit, the vehicle-mounted component, the vehicle-mounted chip, the on-board unit, the vehicle-mounted radar, or the camera.
The target tracking apparatus may be an intelligent terminal, or may be disposed in an intelligent terminal with a target tracking function other than a vehicle, or disposed in a component of the intelligent terminal. The intelligent terminal may be another terminal device such as an intelligent transportation device, a smart home device, or a robot. The target tracking apparatus includes but is not limited to the intelligent terminal, or a controller, a chip, a radar, a camera, or another sensor in the intelligent terminal, as well as other components.
The target tracking apparatus may be a general-purpose device or a dedicated device. In specific implementation, the apparatus may also be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or another device with a processing function. The embodiments of the present application do not limit the type of the target tracking apparatus.
The target tracking apparatus may alternatively be a chip or a processor with a processing function, and the target tracking apparatus may include at least one processor. The processor may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. The chip or processor with a processing function may be disposed in a sensor, or may not be disposed in the sensor but disposed at a receiving end of the signal output by the sensor. The processor includes but is not limited to at least one of a central processing unit (CPU), a graphics processing unit (GPU), a micro control unit (MCU), a micro processor unit (MPU), or a coprocessor.
The target tracking apparatus may alternatively be a terminal device, or a chip or a chip system in a terminal device. The target tracking apparatus may include a processing unit and a communication unit. When the target tracking apparatus is a terminal device, the processing unit may be a processor. The target tracking apparatus may further include a storage unit, and the storage unit may be a memory. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the terminal device implements the target tracking method described in the first aspect or any one of the possible implementations of the first aspect. When the target tracking apparatus is a chip or a chip system in a terminal device, the processing unit may be a processor. The processing unit executes the instructions stored in the storage unit, so that the terminal device implements the target tracking method described in the first aspect or any one of the possible implementations of the first aspect. The storage unit may be a storage unit in the chip (for example, a register or a cache), or may be a storage unit in the terminal device located outside the chip (for example, a read-only memory or a random access memory).
Exemplarily, the communication unit is configured to acquire a camera target tracking result and a radar target tracking result; and the processing unit is configured to obtain a target tracking result according to the camera target tracking result and a target model corresponding to the radar target tracking result, where the target model indicates the relationship between a target in the radar target tracking result and the height information of the target.
In a possible implementation, the processing unit is further configured to obtain the height information of the target according to the type information of the target in the radar target tracking result; and the processing unit is further configured to fuse the height information of the target with the target in the radar target tracking result to obtain the target model.
In a possible implementation, there is a pre-defined or preset correspondence between the type information of the target and the height information of the target.
In a possible implementation, the processing unit is specifically configured to project the target model to the camera coordinate system to obtain a projected radar target tracking result; and obtain the target tracking result according to the camera target tracking result and the projected radar target tracking result.
In a possible implementation, the processing unit is specifically configured to convert the target model to the camera coordinate system according to a preset or defined height conversion relationship, where different height information corresponds to different height conversion relationships, and the height conversion relationship is used to convert a target tracking result with height in the radar coordinate system to the camera coordinate system.
In a possible implementation, for different area types, the height conversion relationships corresponding to the same height information are different.
In a possible implementation, the area types include one or more of the following: a ground undulation area, an area with a slope, or a flat ground area.
In a possible implementation, the processing unit is specifically configured to determine a target area type corresponding to the target model; and convert the target model to the camera coordinate system according to the target height conversion relationship that matches the height information of the target model among the height conversion relationships corresponding to the target area type.
In a possible implementation, the processing unit is specifically configured to determine, according to the overlap ratio of the camera target tracking result and the projected radar target tracking result, that the camera target tracking result and the projected radar target tracking result are the same target, where the overlap ratio is greater than a first value.
In a possible implementation, the processing unit is specifically configured to determine that the camera target tracking result and the projected radar target tracking result are the same target when the overlap ratio is greater than the first value and the position and/or speed of the overlapping target in the camera target tracking result and the projected radar target tracking result satisfy a preset condition.
In a possible implementation, the preset condition includes: the difference between the position and/or speed of the overlapping target in the camera target tracking result and the position and/or speed of the overlapping target in the radar target tracking result is less than a second value.
In a possible implementation, the radar target tracking result comes from an imaging radar, and the target model further includes size information of the target.
In a possible implementation, the camera target tracking result includes a target bounding box, and the radar target tracking result includes a target point cloud.
第三方面,本申请实施例还提供一种传感器系统,用于为车辆提供目标跟踪功能。其包含至少一个本申请上述实施例提到的目标跟踪装置,以及,摄像头和雷达等其他传感器,该系统内的至少一个传感器装置可以集成为一个整机或设备,或者该系统内的至少一个传感器装置也可以独立设置为元件或装置。
第四方面,本申请实施例还提供一种系统,应用于无人驾驶或智能驾驶中,其包含至少一个本申请上述实施例提到的目标跟踪装置、摄像头、雷达等其他传感器中的至少一个,该系统内的至少一个装置可以集成为一个整机或设备,或者该系统内的至少一个装置也可以独立设置为元件或装置。
进一步,上述任一系统可以与车辆的中央控制器进行交互,为所述车辆驾驶的决策或控制提供探测和/或融合信息。
第五方面,本申请实施例还提供一种终端,所述终端包括至少一个本申请上述实施例提到的目标跟踪装置或上述任一系统。进一步,所述终端可以为智能家居设备、智能制造设备、智能工业设备、智能运输设备(含无人机、车辆等)等。
第六方面,本申请实施例还提供一种芯片,包括至少一个处理器和接口;接口,用于为至少一个处理器提供程序指令或者数据;至少一个处理器用于执行程序指令,以实现第一方面或第一方面可能的实现方式中任一方法。
第七方面,本申请实施例提供一种目标跟踪装置,包括,至少一个处理器,用于调用存储器中的程序,以实现第一方面或第一方面任意可能的实现方式中的任一方法。
第八方面,本申请实施例提供一种目标跟踪装置,包括:至少一个处理器和接口电路,接口电路用于为至少一个处理器提供信息输入和/或信息输出;至少一个处理器用于运行代码指令,以实现第一方面或第一方面任意可能的实现方式中的任一方法。
第九方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质存储有指令,当指令被执行时,实现第一方面或第一方面任意可能的实现方式中的任一方法。
应当理解的是,本申请的第二方面至第九方面与本申请的第一方面的技术方案相对应,各方面及对应的可行实施方式所取得的有益效果相似,不再赘述。
附图说明
图1为一种根据视觉边界框和雷达点云确定目标的示意图;
图2为本申请实施例提供的根据视觉边界框和雷达点云确定目标的示意图;
图3为本申请实施例提供的车辆100的功能框图;
图4为图3中的计算机***的结构示意图;
图5为本申请实施例提供的一种芯片硬件结构的示意图;
图6为本申请实施例提供的一种应用场景示意图;
图7为本申请实施例提供的一种目标跟踪方法流程示意图;
图8为本申请实施例提供的一种概率高度示意图;
图9为本申请实施例提供的一种高度标定示意图;
图10为本申请实施例提供的一种不同区域类型示意图;
图11为本申请实施例提供的一种目标关联示意图;
图12为本申请实施例提供的另一种目标跟踪方法流程示意图;
图13为本申请实施例提供的一种目标跟踪装置的结构示意图;
图14为本申请实施例提供的一种芯片的结构示意图;
图15为本申请实施例提供的另一种目标跟踪装置的结构示意图;
图16为本申请实施例提供的一种车辆的结构示意图。
具体实施方式
为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。例如,第一值和第二值仅仅是为了区分不同的值,并不对其先后顺序进行限定。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
需要说明的是,本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
基于雷达的目标跟踪和/或基于摄像头的目标跟踪是可能的实现目标跟踪的方式。
雷达(radar)可以是基于无线电探测的设备。雷达可以测量空中、地面及水上目标的位置,也可能称为无线电定位。示例性的,雷达可以利用定向天线向空中发出无线电波,电波遇到目标后,反射回来被雷达所接受,通过测量电波在空中传播所经历的时间以获得目标的距离数据,根据天线波束指向以确定目标的角度数据,进而实现目标跟踪。通常的,雷达能够得到精准的速度位置信息,拥有较长的视野。但是在杂波环境中,雷达会因为杂波的影响,导致目标跟踪的效果较差。
摄像头(camera)可以将景物通过镜头生成的光学图像投射到图像传感器表面上,然后转为电信号,经过数模转换后变为数字图像信号,数字图像信号可以在数字信号处理(digital signal processing,DSP)芯片中加工处理。利用摄像头拍摄的得到的图像可以进行目标分类,以及对目标的位置和/或速度进行检测,进而实现目标跟踪。但是在光线较弱的环境中,摄像头拍摄的图像效果可能较差,导致目标跟踪的效果较差。
将基于雷达的目标跟踪得到的结果与基于摄像头的目标跟踪得到的结果融合(后续简称雷达摄像头融合),可以发挥雷达和摄像头各自的优势,实现较为准确的目标跟踪。雷达摄像头融合的实现中,可能包括目标级雷达摄像头融合(object-level data fusion)方法和测量级雷达摄像头融合(data-level data fusion)方法。
可能的实现方式中,目标级雷达摄像头融合方法包括:利用摄像头得到目标的视觉边界框(bounding box),通过摄像头坐标(也可以称为视觉坐标)和雷达坐标(也可以称为俯视图坐标)的转换矩阵转换该视觉边界框,得到在雷达坐标下目标的位置和速度;利用雷达检测点云得到目标,并得到雷达坐标下目标的位置和速度;利用与目标的位置和速度相关的关联算法,对雷达检测得到的目标和摄像头检测得到的目标 进行关联,确认同一目标;对目标进行状态估计可以得到融合后的目标位置和速度。
可能的实现方式中,测量级雷达摄像头融合方法包括:利用雷达监测目标的点云(或称为雷达点云或点云数据等),将雷达检测的点云投影到摄像头坐标系下;利用摄像头得到目标的视觉边界框以及关联算法,将雷达点云的投影和摄像头得到的视觉边界框进行关联,确认同一目标;对目标进行状态估计得到融合后的目标位置和速度。
然而,上述的实现方式中,目标级融合方法和测量级融合方法在确认同一目标时,对摄像头得到的目标与雷达得到的目标进行关联时需要利用位置信息,基于摄像头得到的目标的位置信息通常依赖视觉边界框的底边的精度,而因为天气、环境等原因使得视觉边界框底边的精度可能不高,基于雷达得到的目标的位置信息通常依赖目标点云,而杂波或地面起伏等环境中,目标点云的精度可能不高,从而容易产生误关联现象。
示例性的,图1示出了一种根据视觉边界框和雷达点云确定目标的示意图。如图1所示,可能由于人的腿部颜色与地面颜色相近等,视觉边界框10框定该人时,视觉边界框10的底边框定在人的上半身,而雷达点云11则可能检测到人的下半身(例如脚),那么在进行目标融合时,因为视觉边界框10与雷达点云11所在的位置差距较远,可能不会将视觉边界框10框定的目标与雷达点云11确定的目标关联为同一目标,导致产生误关联。
基于此,本申请实施例的目标跟踪方法中,在关联摄像头目标跟踪的结果和雷达目标跟踪的结果时,在雷达目标跟踪的结果中引入了目标的高度信息,例如,得到用于指示雷达目标跟踪结果中的目标及目标的高度信息的关联关系的目标模型,将摄像头目标跟踪结果和雷达目标跟踪结果进行关联时,可以根据摄像头目标跟踪结果和目标模型,得到目标跟踪结果。因为目标模型中包括了目标的高度信息,能够有效将雷达监测的目标的范围扩大,进而可以关联得到准确的目标跟踪结果。
示例性的,图2示出了本申请实施例的一种视觉边界框和雷达点云确定目标的示意图。如图2所示,可能由于人的腿部颜色与地面颜色相近等,视觉边界框20框定该人时,视觉边界框20的底边框定在人的上半身,而雷达点云21则可能检测到人的下半身(例如脚),但是本申请实施例中,引入了人的高度信息,例如可以确定用于表示高度信息的线段23,因为视觉边界框20与用于表示高度信息的线段23存在较多重叠部分,则很大可能将视觉边界框20框定的目标与雷达点云21确定的目标关联为同一目标,从而本申请实施例中,不再依赖视觉边界框底边的精度,也不依赖于雷达点云的精度,无论在光线较差的环境(例如夜晚)、或视觉边界框底边精度较低(例如只框定到人的上半身)、或杂波环境中雷达监测到不准确的点云数据,都可以基于高度信息和视觉边界框关联到准确的目标,提升关联目标的准确性和稳定性。
可能的实现方式中,本申请实施例的目标跟踪方法可以应用于自动驾驶、安防或监控等场景。例如,在自动驾驶场景,可以基于本申请实施例的目标跟踪方法,实现对障碍物等目标的跟踪,进而基于目标跟踪制定自动驾驶策略等。例如,在安防或监控场景,可以基于本申请实施例的目标跟踪方法,实现对人物等目标的跟踪,进而基于目标跟踪对非法入侵等不安全因素进行告警等。
示例性的,在自动驾驶场景中,本申请实施例的目标跟踪方法可以应用于车辆, 或车辆中的芯片等。例如,图3示出了本申请实施例提供的车辆100的功能框图。在一个实施例中,将车辆100配置为完全或部分地自动驾驶模式。例如,当车辆100配置为部分地自动驾驶模式时,车辆100在处于自动驾驶模式时还可通过人为操作来确定车辆及其周边环境的当前状态,例如确定周边环境中的至少一个其他车辆的可能行为,并确定该其他车辆执行可能行为的可能性相对应的置信水平,基于所确定的信息来控制车辆100。例如,在车辆100处于完全地自动驾驶模式中时,可以将车辆100置为不需要与人交互,自动执行驾驶相关操作。
车辆100可包括各种子系统,例如行进系统102、传感器系统104、控制系统106、一个或多个外围设备108以及电源110、计算机系统112和用户接口116。可选地,车辆100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。
行进系统102可包括为车辆100提供动力运动的组件。在一个实施例中,行进系统102可包括引擎118、能量源119、传动装置120和车轮/轮胎121。引擎118可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎118将能量源119转换成机械能量。
能量源119的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源119也可以为车辆100的其他***提供能量。
传动装置120可以将来自引擎118的机械动力传送到车轮121。传动装置120可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置120还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮121的一个或多个轴。
传感器系统104可包括感测关于车辆100周边的环境的信息的若干个传感器。例如,传感器系统104可包括定位系统122(定位系统可以是GPS系统,也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)124、雷达126、激光测距仪128以及相机130。传感器系统104还可包括被监视车辆100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是自主车辆100的安全操作的关键功能。
定位系统122可用于估计车辆100的地理位置。IMU 124用于基于惯性加速度来感测车辆100的位置和朝向变化。在一个实施例中,IMU 124可以是加速度计和陀螺仪的组合。
雷达126可利用无线电信号来感测车辆100的周边环境内的物体。在一些实施例中,除了感测物体以外,雷达126还可用于感测物体的速度和/或前进方向。
激光测距仪128可利用激光来感测车辆100所位于的环境中的物体。在一些实施例中,激光测距仪128可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他***组件。
相机130可用于捕捉车辆100的周边环境的多个图像。相机130可以是静态相机或视频相机。
控制系统106为控制车辆100及其组件的操作。控制系统106可包括各种元件,其中包括转向系统132、油门134、制动单元136、传感器融合算法138、计算机视觉系统140、路线控制系统142以及障碍物避免系统144。
转向系统132可操作来调整车辆100的前进方向。例如在一个实施例中可以为方向盘系统。
油门134用于控制引擎118的操作速度并进而控制车辆100的速度。
制动单元136用于控制车辆100减速。制动单元136可使用摩擦力来减慢车轮121。在其他实施例中,制动单元136可将车轮121的动能转换为电流。制动单元136也可采取其他形式来减慢车轮121转速从而控制车辆100的速度。
计算机视觉系统140可以操作来处理和分析由相机130捕捉的图像以便识别车辆100周边环境中的物体和/或特征。所述物体和/或特征可包括交通信号、道路边界和障碍物。计算机视觉系统140可使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统140可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。
路线控制系统142用于确定车辆100的行驶路线。在一些实施例中,路线控制系统142可结合来自传感器138、全球定位系统(global positioning system,GPS)122和一个或多个预定地图的数据以为车辆100确定行驶路线。
障碍物规避系统144用于识别、评估和避开或者以其他方式越过车辆100的环境中的潜在障碍物。
当然,在一个实例中,控制系统106可以增加或替换地包括除了所示出和描述的那些以外的组件。或者也可以减少一部分上述示出的组件。
车辆100通过外围设备108与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备108可包括无线通信系统146、车载电脑148、麦克风150和/或扬声器152。
在一些实施例中,外围设备108提供车辆100的用户与用户接口116交互的手段。例如,车载电脑148可向车辆100的用户提供信息。用户接口116还可操作车载电脑148来接收用户的输入。车载电脑148可以通过触摸屏进行操作。在其他情况中,外围设备108可提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风150可从车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器152可向车辆100的用户输出音频。
可能的实现方式中,在车载电脑148的显示屏中,还可以显示根据本申请实施例的目标跟踪算法跟踪得到的目标,使得用户可以在显示屏中感知车辆周围的环境。
无线通信系统146可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统146可使用3G蜂窝通信,例如码分多址(code division multiple access,CDMA)、EVDO、全球移动通信系统(global system for mobile communications,GSM)/通用分组无线服务(general packet radio service,GPRS),或者4G蜂窝通信,例如LTE,或者5G蜂窝通信。无线通信系统146可利用无线保真(wireless-fidelity,WiFi)与无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统146可利用红外链路、蓝牙或紫蜂协议(ZigBee)与设备直接通信。其他无线协议,例如各种车辆通信系统,例如,无线通信系统146可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备,这些设备可包括车辆和/或路边台站之间的公共和/或私有数据通信。
电源110可向车辆100的各种组件提供电力。在一个实施例中,电源110可以为可再充电锂离子或铅酸电池。这种电池的一个或多个电池组可被配置为电源为车辆100的各种组件提供电力。在一些实施例中,电源110和能量源119可一起实现,例如一些全电动车中那样。
车辆100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个处理器113,处理器113执行存储在例如数据存储装置114这样的非暂态计算机可读介质中的指令115。计算机系统112还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
处理器113可以是任何常规的处理器,诸如商业可获得的中央处理器(central processing unit,CPU)。替选地,该处理器可以是诸如用于供专门应用的集成电路(application specific integrated circuit,ASIC)或其它基于硬件的处理器的专用设备。尽管图3功能性地图示了处理器、存储器、和在相同块中的计算机系统112的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机、或存储器。例如,存储器可以是硬盘驱动器或位于不同于计算机的外壳内的其它存储介质。因此,对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,数据存储装置114可包含指令115(例如,程序逻辑),指令115可被处理器113执行来执行车辆100的各种功能,包括以上描述的那些功能。数据存储装置114也可包含额外的指令,包括向推进系统102、传感器系统104、控制系统106和外围设备108中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令115以外,数据存储装置114还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100和计算机***112使用。
用户接口116,用于向车辆100的用户提供信息或从其接收信息。可选地,用户接口116可包括在外围设备108的集合内的一个或多个输入/输出设备,例如无线通信系统146、车载电脑148、麦克风150和扬声器152。
计算机系统112可基于从各种子系统(例如,行进系统102、传感器系统104和控制系统106)以及从用户接口116接收的输入来控制车辆100的功能。例如,计算机系统112可利用来自控制系统106的输入以便控制转向单元132来避免由传感器系统104和障碍物避免系统144检测到的障碍物。在一些实施例中,计算机系统112可操作来对车辆100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。例如,数据存储装置114可以部分或完全地与车辆100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图3不应理解为对本申请实施例的限制。
在道路行进的自动驾驶汽车,如上面的车辆100,可以根据本申请实施例的目标跟踪方法跟踪其周围环境内的物体以确定自身对当前速度或行驶路线的调整等。该物体可以是其它车辆、交通控制设备、或者其它类型的物体。
除了提供调整自动驾驶汽车的速度或行驶路线的指令之外,计算设备还可以提供修改车辆100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持与自动驾驶汽车附近的障碍物(例如,道路上的相邻车道中的车辆)的安全横向和纵向距离。
上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
图4为图3中的计算机系统112的结构示意图。如图4所示,计算机系统112包括处理器113,处理器113和系统总线105耦合。处理器113可以是一个或者多个处理器,其中每个处理器都可以包括一个或多个处理器核。显示适配器(video adapter)107,显示适配器107可以驱动显示器109,显示器109和系统总线105耦合。系统总线105通过总线桥111和输入输出(I/O)总线耦合。I/O接口115和I/O总线耦合。I/O接口115和多种I/O设备进行通信,比如输入设备117(如:键盘,鼠标,触摸屏等),多媒体盘(media tray)121(例如,CD-ROM,多媒体接口等),收发器123(可以发送和/或接收无线电通信信号),摄像头155(可以捕捉静态和动态数字视频图像)和外部USB接口125。其中,可选地,和I/O接口115相连接的接口可以是通用串行总线(universal serial bus,USB)接口。
其中,处理器113可以是任何传统处理器,包括精简指令集计算(“RISC”)处理器、复杂指令集计算(“CISC”)处理器或上述的组合。可选地,处理器可以是诸如专用集成电路(“ASIC”)的专用装置。可选地,处理器113可以是神经网络处理器或者是神经网络处理器和上述传统处理器的组合。
可选地,在本文所述的各种实施例中,计算机系统可位于远离自动驾驶车辆的地方,并且可与自动驾驶车辆无线通信。在其它方面,本文所述的一些过程在设置在自动驾驶车辆内的处理器上执行,其它由远程处理器执行,包括采取执行单个操纵所需的动作。
计算机系统112可以通过网络接口129和软件部署服务器149通信。网络接口129是硬件网络接口,比如,网卡。网络127可以是外部网络,比如因特网,也可以是内部网络,比如以太网或者虚拟私人网络(VPN)。可选地,网络127还可以是无线网络,比如WiFi网络,蜂窝网络等。
硬盘驱动接口131和系统总线105耦合。硬盘驱动接口131和硬盘驱动器133相连接。系统内存135和系统总线105耦合。运行在系统内存135的软件可以包括计算机系统112的操作系统(operating system,OS)137和应用程序143。
操作系统包括Shell 139和内核(kernel)141。Shell 139是介于使用者和操作系统的内核(kernel)间的一个接口。shell是操作系统最外面的一层。shell管理使用者与操作系统之间的交互:等待使用者的输入,向操作系统解释使用者的输入,并且处理各种各样的操作系统的输出结果。
内核141由操作系统中用于管理存储器、文件、外设和系统资源的那些部分组成,直接与硬件交互。操作系统的内核141通常运行进程,并提供进程间的通信,提供CPU时间片管理、中断、内存管理、IO管理等等。
应用程序143包括控制汽车自动驾驶相关的程序,比如,管理自动驾驶的汽车和路上障碍物交互的程序,控制自动驾驶汽车路线或者速度的程序,控制自动驾驶汽车和路上其他自动驾驶汽车交互的程序。应用程序143也存在于软件部署服务器(deploying server)149的系统上。在一个实施例中,在需要执行应用程序143时,计算机系统可以从deploying server 149下载应用程序143。
传感器153和计算机系统关联。传感器153用于探测计算机系统112周围的环境。举例来说,传感器153可以探测动物,汽车,障碍物和人行横道等,进一步传感器还可以探测上述动物,汽车,障碍物和人行横道等物体周围的环境,比如:动物周围的环境,例如,动物周围出现的其他动物,天气条件,周围环境的光亮度等。可选地,如果计算机系统112位于自动驾驶的汽车上,传感器可以是摄像头,红外线感应器,化学检测器,麦克风等。
图5为本申请实施例提供的一种芯片硬件结构的示意图。如图5所示,该芯片可以包括神经网络处理器50。该芯片可以应用于图3所示的车辆中,或图4所示的计算机系统中。
神经网络处理器50可以是神经网络处理器(neural network processing unit,NPU),张量处理器(tensor processing unit,TPU),或者图形处理器(graphics processing unit,GPU)等一切适合用于大规模异或运算处理的处理器。以NPU为例:NPU可以作为协处理器挂载到主CPU(host CPU)上,由主CPU为其分配任务。NPU的核心部分为运算电路503,通过控制器504控制运算电路503提取存储器(501和502)中的矩阵数据并进行乘加运算。
在一些实现中,运算电路503内部包括多个处理单元(process engine,PE)。在一些实现中,运算电路503是二维脉动阵列。运算电路503还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路503是通用的矩阵处理器。
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路503从权重存储器502中取矩阵B的权重数据,并缓存在运算电路503中的每一个PE上。运算电路503从输入存储器501中取矩阵A的输入数据,根据矩阵A的输入数据与矩阵B的权重数据进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)508中。
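为便于理解上述乘加运算的过程,下面给出一段示意性的Python(NumPy)代码,模拟按块取出矩阵A的输入数据与矩阵B的权重数据、并将部分结果累加到累加器中的计算流程;其中的分块大小、矩阵规模等均为便于说明而假设的数值,并非对运算电路503实际实现方式的限定。

```python
import numpy as np

def tiled_matmul(a, b, tile=4):
    """示意:按块计算C = A @ B,并在累加器中累加各块的部分结果。"""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    acc = np.zeros((m, n))                # 类比累加器508中保存的部分结果
    for start in range(0, k, tile):       # 每次取出一块输入数据与权重数据
        a_tile = a[:, start:start + tile]
        b_tile = b[start:start + tile, :]
        acc += a_tile @ b_tile            # 乘加后累加
    return acc

a = np.random.rand(8, 16)                 # 输入矩阵A(类比输入存储器501中的数据)
b = np.random.rand(16, 8)                 # 权重矩阵B(类比权重存储器502中的数据)
assert np.allclose(tiled_matmul(a, b), a @ b)
```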
统一存储器506用于存放输入数据以及输出数据。权重数据直接通过存储单元访问控制器(direct memory access controller,DMAC)505,被搬运到权重存储器502中。输入数据也通过DMAC被搬运到统一存储器506中。
总线接口单元(bus interface unit,BIU)510,用于DMAC和取指存储器(instruction fetch buffer)509的交互;总线接口单元510还用于取指存储器509从外部存储器获取指令;总线接口单元510还用于存储单元访问控制器505从外部存储器获取输入矩阵A或者权重矩阵B的原数据。
DMAC主要用于将外部存储器DDR中的输入数据搬运到统一存储器506中,或将权重数据搬运到权重存储器502中,或将输入数据搬运到输入存储器501中。
向量计算单元507包括多个运算处理单元,在需要的情况下,对运算电路503的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。向量计算单元507主要用于神经网络中非卷积层,或全连接层(fully connected layers,FC)的计算,具体可以处理:Pooling(池化),Normalization(归一化)等的计算。例如,向量计算单元507可以将非线性函数应用到运算电路503的输出,例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元507生成归一化的值、合并值,或二者均有。
在一些实现中,向量计算单元507将经处理的向量存储到统一存储器506。在一些实现中,经向量计算单元507处理过的向量能够用作运算电路503的激活输入。
控制器504连接的取指存储器(instruction fetch buffer)509,用于存储控制器504使用的指令;
统一存储器506,输入存储器501,权重存储器502以及取指存储器509均为On-Chip存储器。外部存储器独立于该NPU硬件架构。
示例性的,在安防或监控场景中,本申请实施例的目标跟踪方法可以应用于电子设备。电子设备可以是具有计算能力的终端设备或服务器或芯片等。终端设备可以包括手机、电脑或平板等。例如,图6示出了本申请实施例的目标跟踪方法应用于安防或监控的场景示意图。
如图6所示,在安防或监控场景中,可以包括雷达601、摄像头602和电子设备603。雷达601和摄像头602可以设置在电线杆等位置,使得雷达601和摄像头602具有较好的视野。雷达601和摄像头602分别可以与电子设备603通信。雷达601测量的点云数据以及摄像头602采集的图像可以传输至电子设备603,电子设备603进而可以基于雷达601的点云数据以及摄像头602采集的图像,利用本申请实施例的目标跟踪方法,实现对例如人物604的跟踪。
可能的实现方式中,在电子设备603检测到人物604非法进入不安全领域时,可以通过屏幕显示示警、通过语音示警和/或通过警示设备示警等,本申请实施例对此不作具体限定。
下面对本申请实施例中所描述的词汇进行说明。可以理解,该说明是为更加清楚的解释本申请实施例,并不必然构成对本申请实施例的限定。
本申请实施例所描述的摄像头目标跟踪结果可以包括:对摄像头采集的图像进行目标框定得到的目标边界框(或称为视觉边界框等),或其他用于标定目标的数据,等。可能的实现方式中,摄像头目标跟踪结果还可以包括下述的一种或多种:目标的 位置或速度等,摄像头目标跟踪结果的数量可以为一个或多个,本申请实施例对摄像头目标跟踪结果的具体内容和数量不作具体限定。
本申请实施例所描述的雷达目标跟踪结果可以包括:雷达采集的目标点云,或其他用于标定目标的数据,等。可能的实现方式中,雷达目标跟踪结果还可以包括下述的一种或多种:目标的位置或速度等,雷达目标跟踪结果的数量可以为一个或多个,本申请实施例对雷达目标跟踪结果的具体内容和数量不作具体限定。
本申请实施例所描述的雷达可以包括:毫米波雷达或成像雷达(image radar)等。其中,成像雷达相较于毫米波雷达能得到更多的点云数据,因此在采用成像雷达进行目标跟踪时,可以基于成像雷达采集到的较多的点云数据得到目标的尺寸,进而结合目标尺寸进行雷达摄像头融合,得到相较于毫米波雷达更为准确的目标跟踪。
本申请实施例所描述的摄像头目标跟踪结果可以是在摄像头坐标系中标定的目标跟踪结果。本申请实施例所描述的雷达目标跟踪结果可以是在雷达坐标系中标定的目标跟踪结果。
本申请实施例所描述的摄像头坐标系可以是以摄像头为中心的坐标系,例如,摄像机坐标系中,摄像机在原点,x轴向右,z轴向前(朝向屏幕内或摄像机方向),y轴向上(不是世界的上方而是摄像机本身的上方)。可能的实现中,摄像头坐标系也可能称为视觉坐标系。
本申请实施例所描述的雷达坐标系可以是以雷达为中心的坐标系。可能的实现中,雷达坐标系也可能称为俯视图坐标系或鸟瞰图(bird eye view,BEV)坐标系等。
下面以具体地实施例对本申请的技术方案以及本申请的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以独立实现,也可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
图7为本申请实施例提供的一种目标跟踪方法的流程示意图,如图7所示,该方法包括:
S701:获取摄像头目标跟踪结果以及雷达目标跟踪结果。
本申请实施例中,摄像头可以用于拍摄图像,雷达可以用于检测得到点云数据。摄像头、雷达以及用于执行目标跟踪方法的设备,三者可以是合设在一个设备中,也可以分别独立,也可以两两合设于一个设备中,本申请实施例对此不作具体限定。
可能的实现方式中,摄像头可以具备计算能力,则摄像头可以根据拍摄图像得到摄像头目标跟踪结果,并将摄像头目标跟踪结果发送给用于执行目标跟踪方法的设备。
可能的实现方式中,雷达可以具备计算能力,则雷达可以根据点云数据得到雷达目标跟踪结果,并将雷达目标跟踪结果发送给用于执行目标跟踪方法的设备。
可能的实现方式中,用于执行目标跟踪方法的设备可以从摄像头获取拍摄图像,从雷达获取点云数据,进而用于执行目标跟踪方法的设备可以根据拍摄图像得到摄像头目标跟踪结果,以及根据点云数据得到雷达目标跟踪结果。
可能的理解中,摄像头目标跟踪结果可以是采用可能的摄像头跟踪算法等得到的目标跟踪结果,雷达目标跟踪结果可以是采用可能的雷达跟踪算法等得到的目标跟踪结果,本申请实施例对获取摄像头目标跟踪结果以及雷达目标跟踪结果的具体方式不作限定。
S702:根据摄像头目标跟踪结果以及雷达目标跟踪结果对应的目标模型,得到目标跟踪结果;其中,目标模型用于指示雷达目标跟踪结果中的目标及目标的高度信息的关联关系。
本申请实施例所描述的雷达目标跟踪结果的目标模型,用于指示雷达目标跟踪结果中的目标及该目标的高度信息的关联关系。示例性的,目标模型可以是在雷达坐标系中,结合目标的高度信息,以及目标的位置信息等融合得到的模型。可能的理解方式中,本申请实施例可以将雷达目标跟踪结果中零散、数量较少的点云数据,扩展到具有较大覆盖范围的高度信息的目标模型。
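示例性的,下面给出一段示意性的Python代码,用于说明将雷达目标跟踪结果中的目标(点云)与目标的高度信息融合为目标模型的一种可能方式;其中以点云质心作为目标位置仅为示意,函数名与数据结构均为假设,并非对目标模型具体形式的限定。

```python
import numpy as np

def build_target_model(point_cloud, height):
    """示意:融合雷达目标跟踪结果中的目标(点云)与目标的高度信息,得到目标模型。
    这里简单地以点云质心作为目标位置,目标模型表示为"位置 + 竖直高度线段"。"""
    pts = np.asarray(point_cloud, dtype=float)   # 每行为雷达坐标系下的一个点(x, y)
    x, y = pts.mean(axis=0)
    return {"position": (float(x), float(y)), "height": float(height)}

model = build_target_model([(10.1, 3.0), (10.2, 3.1), (9.9, 2.9)], 1.7)
print(model)   # {'position': (10.06..., 3.0), 'height': 1.7}
```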
摄像头目标跟踪结果通常与目标的形状相关,例如前述的摄像头目标跟踪结果可以包括用于框定目标的目标边界框,且本申请实施例中雷达目标跟踪结果对应的目标模型与目标的高度相关,能够有效将雷达监测的目标的范围扩大,则根据摄像头目标跟踪结果以及雷达目标跟踪结果对应的目标模型进行目标关联时,摄像头目标跟踪结果以及雷达目标跟踪结果对应的目标模型的关联范围可以有效扩大,进而可以关联得到准确的目标跟踪结果。
本申请实施例所描述的目标跟踪结果可以包括下述的一种或多种:目标的类型、位置或速度等,目标的数量可以为一个或多个,本申请实施例对目标跟踪结果的具体内容和数量不作具体限定。
综上所述,本申请实施例的目标跟踪方法中,在关联摄像头目标跟踪的结果和雷达目标跟踪的结果时,在雷达目标跟踪的结果中引入了目标的高度信息,具体的,可以得到用于指示雷达目标跟踪结果中的目标及目标的高度信息的关联关系的目标模型,将摄像头目标跟踪结果和雷达目标跟踪结果进行关联时,可以根据摄像头目标跟踪结果和目标模型,得到目标跟踪结果。因为目标模型中包括了目标的高度信息,能够有效将雷达监测的目标的范围扩大,进而可以关联得到准确的目标跟踪结果。
在图7对应的实施例的基础上,可能的实现方式中,S702之前还可以包括:根据雷达目标跟踪结果中目标的类型信息获取目标的高度信息;融合目标的高度信息和雷达目标跟踪结果中的目标,得到目标模型。
示例性的,可以根据通常的雷达分类算法(例如RD-map或者微多普勒谱等,本申请实施例对雷达分类算法不作具体限定),对雷达检测得到的雷达目标跟踪结果中的目标进行分类。例如,可以根据雷达分类算法对目标分类,得到分类目标的类型信息包括:车辆(car)、行人(pedestrian)、动物(animal)或自行车(cycle)等。
根据目标的类型信息可以确定目标的高度信息。例如,可以根据目标的类型信息估计目标的高度信息。或者,例如,可以预先定义或预先设置目标的类型信息与目标的高度信息之间的对应关系,从而在确定目标的类型信息后,可以在该对应关系中匹配相应的高度信息。其中,高度信息可以是具体的高度值,也可以是高度区间,比如,该对应关系可以包括:车辆高度(car height)0.8-1.2米(meter,简称m),行人高度(ped height)1.0-1.8m,动物高度(animal height)0.4-1.0m。
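示例性的,目标的类型信息与目标的高度信息之间预先设置的对应关系,可以用如下示意性的Python代码表示;其中的类型名称与高度区间沿用上文示例,仅为便于理解的假设。

```python
# 示意:预先设置的"类型-高度"对应关系,数值沿用上文示例,并非限定
TYPE_HEIGHT_RANGE = {
    "car": (0.8, 1.2),         # 车辆高度,单位:米
    "pedestrian": (1.0, 1.8),  # 行人高度
    "animal": (0.4, 1.0),      # 动物高度
}

def get_height_info(target_type):
    """根据目标的类型信息,在对应关系中匹配目标的高度信息(这里返回高度区间)。"""
    return TYPE_HEIGHT_RANGE.get(target_type)

print(get_height_info("pedestrian"))   # (1.0, 1.8)
```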
目标的类型信息与目标的高度信息之间的对应关系可以基于高斯分布或统计或机器学习等方式得到,本申请不做具体限定。示例性的,图8示出了基于高斯分布的目标类型-概率高度对应关系示意图。如图8所示,高度分布一、高度分布二和高度分布三分别代表不同目标类型对应的概率高度分布。
在图7对应的实施例的基础上,可能的实现方式中,S702包括:将目标模型投影到摄像头坐标系,得到投影后雷达目标跟踪结果;根据摄像头目标跟踪结果和投影后雷达目标跟踪结果,得到目标跟踪结果。
本申请实施例中,因为目标模型中包含了目标的高度信息,因此,在将目标模型投影到摄像头坐标系时,可以理解为在摄像头坐标系的一维的投影平面中,引入二维的高度信息,则可以根据摄像头目标跟踪结果(例如目标边界框)和投影后雷达目标跟踪结果(例如代表高度和位置的线段),确定摄像头目标跟踪结果与投影后雷达目标跟踪结果中共同确定的目标,得到目标跟踪结果。
可能的实现方式中,将目标模型投影到摄像头坐标系,包括:根据预先设置或者定义的高度转换关系将目标模型转换到摄像头坐标系。可能的实现方式中,高度转换关系可以预先基于实验等进行设置或定义,在得到目标模型后,可以匹配该目标模型对应的高度转换关系,进而将目标模型转换到摄像头坐标系。
本申请实施例所描述的高度转换关系用于将雷达坐标系中具备高度的目标跟踪结果转换到摄像头坐标系。不同的高度信息对应不同的高度转换关系。可能的实现方式中,高度转换关系可以包括高度转换矩阵或高度转换矩阵集合等,本申请实施例对高度转换关系不作具体限定。
示例性的,图9示出了一种目标模型的高度转换关系标定示意图。如图9所示,目标模型可以是具有高度的线段,假设目标模型设置在地面上,在摄像头坐标系中,线段的不同位置(例如两端位置,或中间任意位置等)可以对应不同的高度转换矩阵,例如高度转换矩阵可以与目标到摄像头坐标系原点的距离d和夹角φ相关,可以对该线段的多个位置分别构建高度转换矩阵,则多个高度转换矩阵组成的高度转换矩阵集合(或称为高度矩阵序列)可以用于该线段向摄像头坐标系的转换。
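示例性的,高度转换关系在实际中可以由标定得到。下面给出一段基于针孔相机模型的示意性Python代码,说明如何将雷达坐标系中位于某一位置、具有一定高度的目标模型投影为摄像头坐标系(图像)中的竖直线段;其中的内参矩阵、外参以及坐标轴约定均为假设,并非实际标定结果。

```python
import numpy as np

# 示意性的相机内参与雷达到摄像头坐标系的外参(实际由标定得到,数值均为假设)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # 旋转:假设两坐标系坐标轴已对齐
t = np.array([0.0, 1.2, 0.0])          # 平移:假设摄像头安装高度约1.2米

def project_point(p_radar):
    """将雷达坐标系中的三维点投影到图像平面,返回像素坐标(u, v)。"""
    p_cam = R @ p_radar + t
    uv = K @ p_cam
    return uv[:2] / uv[2]

def project_height_segment(x, y, height):
    """将位于(x, y)、高度为height的目标模型投影为图像中的竖直线段(底端、顶端)。"""
    bottom = project_point(np.array([x, 0.0, y]))    # 地面点
    top = project_point(np.array([x, -height, y]))   # 顶部点(假设y轴向下为正)
    return bottom, top

print(project_height_segment(1.0, 20.0, 1.7))
```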
可能的实现方式中,对应不同的区域类型,高度信息对应的高度转换关系不同。
本申请实施例所描述的区域类型可以用于描述目标所处区域的地面类型。例如,区域类型可以包括下述的一种或多种:地面起伏区域(例如草地或起伏路面等)、具有坡度的区域(例如斜坡等)、或地面平坦区域(例如平坦路面等)。在不同的区域类型中,目标所处的地平面可能不同,目标在不同区域中相对于摄像头坐标系的原点的高度可能不同,因此,对位于不同区域的相同目标,如果均采用相同的高度转换关系,可能会导致转换得到的高度与目标相对于摄像头坐标系的原点的高度不符的情况,进而可能导致后续进行雷达摄像头融合时不准确。
基于此,本申请实施例中对应不同的区域类型,高度信息对应的高度转换关系不同,从而可以根据各种区域类型的高度转换关系对目标模型进行准确的转换。
示例性的,图10示出了在一个场景中包括多种区域类型的示意图。例如,区域一代表草地,区域二代表斜坡,区域三代表平坦路面,同一高度信息在区域一、区域二和区域三中分别对应有不同的高度转换关系。在将目标模型转换到摄像头坐标系时,可以确定目标模型对应的目标区域类型(例如区域一、区域二或区域三);根据目标区域类型所对应的高度转换关系中,与目标模型的高度信息匹配的目标高度转换关系将目标模型转换到摄像头坐标系。这样可以利用各区域的高度转换关系准确的将目标模型转换到摄像头坐标系中。
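示例性的,下面的示意性Python代码说明了如何在目标区域类型所对应的高度转换关系中,选择与目标模型的高度信息匹配的目标高度转换关系;其中用简单的矩阵代替实际标定得到的高度转换矩阵,区域类型与数值均为假设。

```python
import numpy as np

# 示意:不同区域类型、不同高度信息对应不同的高度转换矩阵(数值均为假设,实际由标定得到)
HEIGHT_TRANSFORMS = {
    ("flat", 1.0): np.eye(3),                    # 地面平坦区域
    ("flat", 1.8): np.diag([1.0, 0.95, 1.0]),
    ("slope", 1.8): np.diag([1.0, 0.90, 1.0]),   # 具有坡度的区域
    ("rough", 1.8): np.diag([1.0, 1.05, 1.0]),   # 地面起伏区域
}

def pick_transform(region_type, height):
    """在目标区域类型对应的高度转换关系中,选择与目标模型高度信息最接近的一项。"""
    candidates = [(abs(h - height), m)
                  for (r, h), m in HEIGHT_TRANSFORMS.items() if r == region_type]
    return min(candidates, key=lambda item: item[0])[1]

m = pick_transform("slope", 1.7)   # 为位于坡度区域、高度约1.7米的目标模型选择转换矩阵
print(m)
```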
可能的实现方式中,根据摄像头目标跟踪结果和投影后雷达目标跟踪结果,得到目标跟踪结果,可以包括:采用任意的关联算法,计算摄像头目标跟踪结果和投影后雷达目标跟踪结果的关联度,将关联度高的摄像头目标跟踪结果和投影后雷达目标跟踪结果确定为同一目标。关联算法例如包括下述的一种或多种:全局最近邻算法(global nearest neighbor,GNN)、概率数据关联(probabilistic data association,PDA)联合概率数据关联(joint probabilistic data association,JPDA)或交并比(intersection over union,IoU)等。
示例性的,可以根据摄像头目标跟踪结果和投影后雷达目标跟踪结果的交叠比例(或称为交并比),确定摄像头目标跟踪结果和投影后雷达目标跟踪结果为同一目标;其中,交叠比例大于第一值。
摄像头目标跟踪结果和投影后雷达目标跟踪结果交叠的部分越多(或理解为交叠比例越大),越能表明摄像头目标跟踪结果和投影后雷达目标跟踪结果指向的是同一目标,因此,可以在摄像头目标跟踪结果和投影后雷达目标跟踪结果的交叠比例大于或等于第一值时,确定摄像头目标跟踪结果和投影后雷达目标跟踪结果为同一目标,进行关联。示例性的,第一值可以为0.5-1之间的任意值,本申请实施例对第一值不作具体限定。可以理解,通常IoU计算中,第一值具备置信度分布,具有稳定性,因此,采用IoU计算进行关联,第一值可以不需要人为调节,提升本申请实施例的关联计算的通用性。
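示例性的,摄像头目标跟踪结果(目标边界框)与投影后雷达目标跟踪结果(高度线段)之间的交叠比例,可以按如下示意性Python代码计算;其中将交叠比例简化为竖直方向上的类交并比,第一值取0.5,均为便于说明的假设。

```python
def segment_box_overlap(seg_u, seg_top_v, seg_bottom_v, box):
    """示意:计算投影后高度线段与目标边界框在竖直方向上的交叠比例(类IoU)。
    box为(u_min, v_min, u_max, v_max),线段以图像列坐标seg_u与上下端行坐标表示。"""
    u_min, v_min, u_max, v_max = box
    if not (u_min <= seg_u <= u_max):             # 线段不在边界框的水平范围内
        return 0.0
    inter = max(0.0, min(v_max, seg_bottom_v) - max(v_min, seg_top_v))
    union = (v_max - v_min) + (seg_bottom_v - seg_top_v) - inter
    return inter / union if union > 0 else 0.0

FIRST_VALUE = 0.5    # 第一值,此处示意取0.5
ratio = segment_box_overlap(320, 100, 380, (300, 120, 340, 400))
print(ratio, ratio > FIRST_VALUE)    # 约0.87,判定为同一目标
```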
在一种可能的实现方式中,在摄像头目标跟踪结果的数量为一个,投影后雷达目标跟踪结果的数量为一个时,可以在该摄像头目标跟踪结果和该投影后雷达目标跟踪结果的交叠比例大于第一值时,确定该摄像头目标跟踪结果和该投影后雷达目标跟踪结果为同一目标。
在另一种可能的实现方式中,在摄像头目标跟踪结果的数量为多个,投影后雷达目标跟踪结果的数量为多个的情况下,可以将一个摄像头目标跟踪结果和一个投影后雷达目标跟踪结果组对,并分别计算每对摄像头目标跟踪结果和投影后雷达目标跟踪结果的交叠比例,将交叠比例大于或等于第一值的每对摄像头目标跟踪结果和投影后雷达目标跟踪结果确定为同一目标。
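示例性的,在摄像头目标跟踪结果和投影后雷达目标跟踪结果均为多个的情况下,可以按如下示意性Python代码进行两两组对与关联;这里采用简单的贪心匹配仅作说明,实际也可以采用GNN、JPDA等关联算法。

```python
def associate(camera_results, radar_results, overlap_fn, first_value=0.5):
    """示意:对多个摄像头目标跟踪结果与多个投影后雷达目标跟踪结果两两组对,
    将交叠比例大于或等于第一值的配对关联为同一目标(简单贪心匹配)。"""
    pairs = []
    for i, cam in enumerate(camera_results):
        for j, rad in enumerate(radar_results):
            ratio = overlap_fn(cam, rad)
            if ratio >= first_value:
                pairs.append((ratio, i, j))
    pairs.sort(reverse=True)                  # 交叠比例高的配对优先关联
    used_c, used_r, matches = set(), set(), []
    for ratio, i, j in pairs:
        if i not in used_c and j not in used_r:
            matches.append((i, j, ratio))
            used_c.add(i)
            used_r.add(j)
    return matches
```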
可能的实现方式中,如果摄像头目标跟踪结果和投影后雷达目标跟踪结果的交叠比例小于或等于第一值,则认为该摄像头目标跟踪结果和投影后雷达目标跟踪结果对应的不是同一目标。
可以理解,在交叠比例等于第一值时,可以根据实际应用场景设定,确定摄像头目标跟踪结果和投影后雷达目标跟踪结果为同一目标,或根据实际应用场景设定,确定摄像头目标跟踪结果和投影后雷达目标跟踪结果不为同一目标,本申请实施例对此不作具体限定。
可能的实现方式中,可能存在多个摄像头目标跟踪结果与一个投影后雷达目标跟踪结果存在交叠(后续简称为多C-R关联)的情况,或者一个摄像头目标跟踪结果与多个投影后雷达目标跟踪结果存在交叠(后续简称为多R-C关联)的情况。如果多C-R关联或多R-C关联中,计算得到的两个交叠比例均大于或等于第一值,则可能将多个摄像头目标跟踪结果误关联为同一目标,或者将多个投影后雷达目标跟踪结果误关联为同一目标,则可以进一步根据交叠目标在摄像头目标跟踪结果的位置和/或速度,与交叠目标在雷达目标跟踪结果的位置和/或速度的情况,进一步判定摄像头目标跟踪结果和投影后雷达目标跟踪结果是否为同一目标。
示例性的,可以在交叠比例大于第一值,且摄像头目标跟踪结果和投影后雷达目标跟踪结果中的交叠目标的位置和/或速度满足预设条件的情况下,确定摄像头目标跟踪结果和投影后雷达目标跟踪结果为同一目标。例如,预设条件包括:交叠目标在摄像头目标跟踪结果的位置和/或速度,与交叠目标在雷达目标跟踪结果的位置和/或速度的差异小于第二值。
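示例性的,结合交叠比例与交叠目标的位置和/或速度进行判断的过程,可以用如下示意性Python代码表示;其中的位置差、速度差门限(对应第二值)以及数据结构均为假设。

```python
def is_same_target(overlap_ratio, cam_obj, radar_obj,
                   first_value=0.5, pos_gate=1.0, vel_gate=1.0):
    """示意:交叠比例大于第一值,且交叠目标在两种跟踪结果中的位置/速度差异
    小于第二值(此处用pos_gate、vel_gate分别示意)时,判定为同一目标。"""
    if overlap_ratio <= first_value:
        return False
    dp = ((cam_obj["pos"][0] - radar_obj["pos"][0]) ** 2 +
          (cam_obj["pos"][1] - radar_obj["pos"][1]) ** 2) ** 0.5
    dv = abs(cam_obj["vel"] - radar_obj["vel"])
    return dp < pos_gate and dv < vel_gate

cam = {"pos": (10.2, 3.1), "vel": 1.2}
rad = {"pos": (10.0, 3.0), "vel": 1.1}
print(is_same_target(0.8, cam, rad))   # True
```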
例如,图11示出了多R-C和多C-R的示意图。
如图11所示,在多R-C中,投影后雷达目标跟踪结果1001和投影后雷达目标跟踪结果1002,均与摄像头目标跟踪结果1003存在交叠。
那么,如果投影后雷达目标跟踪结果1002和摄像头目标跟踪结果1003的交叠比例大于或等于第一值,且投影后雷达目标跟踪结果1001和摄像头目标跟踪结果1003的交叠比例小于第一值,可以确定投影后雷达目标跟踪结果1002和摄像头目标跟踪结果1003为同一目标,确定投影后雷达目标跟踪结果1001和摄像头目标跟踪结果1003不为同一目标。
如果投影后雷达目标跟踪结果1002和摄像头目标跟踪结果1003的交叠比例大于或等于第一值,且投影后雷达目标跟踪结果1001和摄像头目标跟踪结果1003的交叠比例大于或等于第一值,则可以进一步判断投影后雷达目标跟踪结果1001中目标的位置与摄像头目标跟踪结果1003中目标的位置的距离是否大于距离阈值,和/或,判断投影后雷达目标跟踪结果1002中目标的位置与摄像头目标跟踪结果1003中目标的位置的距离是否大于距离阈值,和/或,判断投影后雷达目标跟踪结果1001中目标的速度与摄像头目标跟踪结果1003中目标的速度的差值是否大于速度差阈值,和/或,判断投影后雷达目标跟踪结果1002中目标的速度与摄像头目标跟踪结果1003中目标的速度的差值是否大于速度差阈值。进而,可以在投影后雷达目标跟踪结果1001中目标的位置与摄像头目标跟踪结果1003中目标的位置的距离小于或等于距离阈值,和/或,投影后雷达目标跟踪结果1001中目标的速度与摄像头目标跟踪结果1003中目标的速度的差值小于或等于速度差阈值时,确定投影后雷达目标跟踪结果1001和摄像头目标跟踪结果1003为同一目标。可以在投影后雷达目标跟踪结果1002中目标的位置与摄像头目标跟踪结果1003中目标的位置的距离小于或等于距离阈值,和/或,投影后雷达目标跟踪结果1002中目标的速度与摄像头目标跟踪结果1003中目标的速度的差值小于或等于速度差阈值时,确定投影后雷达目标跟踪结果1002和摄像头目标跟踪结果1003为同一目标。其他情况可以确定投影后雷达目标跟踪结果1001和/或投影后雷达目标跟踪结果1002,与摄像头目标跟踪结果1003不为同一目标。
相似的,如图11所示,在多C-R中,摄像头目标跟踪结果1004和摄像头目标跟踪结果1005,均与投影后雷达目标跟踪结果1006存在交叠。可以采用类似于多R-C中记载的方式判断摄像头目标跟踪结果1004或摄像头目标跟踪结果1005是否与投影后雷达目标跟踪结果1006为同一目标,在此不再赘述。
示例性的,以执行本申请实施例的目标跟踪方法的设备(后续简称目标跟踪设备)、摄像头和雷达为三个独立的设备为例,结合图12对本申请实施例的目标跟踪方法进行详细说明,如图12所示,图12为本申请实施例提供的另一种目标跟踪方法的流程示意图,该方法包括:
S1201:目标跟踪设备获取摄像头目标跟踪结果。
一种可能的场景中,可以在需要进行目标跟踪的场所中设置摄像头,摄像头可以拍摄图像,目标跟踪设备可以从摄像头获取图像。
目标跟踪设备可以对从摄像头获取的图像进行图像识别等处理,实现Bounding Box跟踪,Bounding Box跟踪的结果作为摄像头目标跟踪结果。
可能的理解方式中,摄像头目标跟踪结果可以指摄像头坐标系下,用于框定目标的目标边界框(Bounding Box),目标边界框的数量可以是一个也可以是多个。
S1202:目标跟踪设备获取雷达目标跟踪结果。
一种可能的场景中,可以在需要进行目标跟踪的场所中设置雷达,雷达可以探测目标,目标跟踪设备可以从雷达获取雷达探测得到的数据。
目标跟踪设备可以对雷达探测得到的数据进行处理,得到目标的点云数据,作为雷达目标跟踪结果。
可能的理解方式中,雷达目标跟踪结果可以指用于标定目标的点云,其中一个目标对应的点云的数量可以与雷达的性能等相关,目标的数量可以是一个或多个。
S1203:目标跟踪设备通过点云分类算法,得到雷达目标跟踪结果中目标的类型信息。
本申请实施例的分类算法和目标的类型信息可以参照词汇说明部分的记载,在此不再赘述。
本申请实施例中,目标跟踪设备可以根据对雷达目标跟踪结果的分析,确定雷达跟踪结果中的目标的类型信息。例如,可能确定雷达跟踪结果中的目标为人和/或车辆等,本申请实施例对目标的数量以及目标的类型信息不作限定。
S1204:目标跟踪设备根据目标的类型信息,匹配目标的高度信息。
示例性的,以目标的数量为两个,两个目标的类型分别为人和车辆为例,人的高度信息可以为1.0-1.8m,车辆的高度信息可以为0.8-1.2m。
S1205:目标跟踪设备对图像域(image domain)进行不同高度的RC标定,得到不同高度对应的转换矩阵。
本申请实施例,图像域可以为摄像头坐标系中图像中的区域,在不同区域中,对应不同的高度转换矩阵,这样,后续可以通过识别目标在图像中的具体区域,为目标选择对应的高度转换矩阵,达到更准确的跟踪效果。
需要说明的是,S1205可以是预先进行的步骤,或可以理解为S1205可以设置于S1201-S1204前面或中间或后面的任意位置,本申请实施例对S1205的执行顺序不作具体限定。
S1206:目标跟踪设备将包含高度信息的目标模型,通过不同高度对应的转换矩阵,投影到图像域(可以理解为投影到摄像头坐标系)。
本申请实施例S1206,可以参照前述实施例的记载,在此不再赘述。
S1207:目标跟踪设备将摄像头目标跟踪结果与投影后的雷达目标跟踪结果进行关联。
本申请实施例的具体关联方法可以参照前述实施例的记载,在此不再赘述。
可能的实现中,经过S1207的关联,可以得到摄像头目标跟踪结果与雷达目标跟踪结果中共同确定的目标。
例如,结合图2示例,对场所中的A位置,依据摄像头进行目标跟踪时,在A位置得到摄像头目标跟踪结果20;依据雷达进行目标跟踪时,在A位置处得到雷达目标跟踪结果21;基于本案的目标模型,在雷达目标跟踪结果投影到摄像头坐标系时,摄像头坐标系中可以投影到线段23,因为摄像头目标跟踪结果20和线段23交叠的比例较大,则可以认为A位置的摄像头目标跟踪结果与雷达目标跟踪结果为同一目标,进而融合该同一目标对应的摄像头目标跟踪结果和雷达目标跟踪结果,得到更加准确、完整的跟踪结果。或者比如可以结合线段23的长度将摄像头目标跟踪结果20的底边下拉,实现更加准确的目标确定等。
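示例性的,关联为同一目标后,结合线段长度将视觉边界框底边下拉的处理,可以用如下示意性Python代码表示;坐标均为图像像素坐标,数值仅为示意。

```python
def refine_box_bottom(box, seg_bottom_v):
    """示意:确认为同一目标后,结合投影线段的底端将视觉边界框的底边下拉,
    以弥补底边只框定到目标上半部分的情况。"""
    u_min, v_min, u_max, v_max = box
    return (u_min, v_min, u_max, max(v_max, seg_bottom_v))

print(refine_box_bottom((300, 120, 340, 320), 395))   # (300, 120, 340, 395)
```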
可以理解,如果摄像头目标跟踪结果的数量为多个,雷达目标跟踪结果为多个,可以分别采用上述方法,对任一个摄像头目标跟踪结果和任一个雷达目标跟踪结果进行关联,从而可以得到摄像头目标跟踪结果与雷达目标跟踪结果中共同跟踪的目标,目标的数量可以是一个或多个。可以理解,如果摄像头目标跟踪结果与投影后的雷达目标跟踪结果交叠比例较小,则可以认为摄像头目标跟踪结果跟踪的目标与雷达目标跟踪结果跟踪的目标不是同一目标。在摄像头目标跟踪结果的数量为多个,雷达目标跟踪结果为多个的场景中,如果存在其中一个摄像头目标跟踪结果与任一个投影后的雷达目标跟踪结果的交叠比例都较小,可以判断该其中一个摄像头目标跟踪结果发生错误,后续可以不执行对该其中一个摄像头目标跟踪结果对应的目标的跟踪;相似的,如果存在任一个目标跟踪结果与其中一个投影后的雷达目标跟踪结果的交叠比例都较小,可以判断该其中一个投影后的雷达目标跟踪结果发生错误,后续可以不执行对该其中一个投影后的雷达目标跟踪结果对应的目标的跟踪。
S1208:目标跟踪设备根据关联后的结果进行目标跟踪。
示例性的,可以对上述关联得到的一个或多个目标分别进行目标跟踪,本申请实施例对进行目标跟踪的具体实现不作限定。
可能的理解方式中,本申请实施例中虽然使用类似特征级的关联方式(例如利用bounding box和高度信息),但基本框架使用的是效率和稳定性更好的目标级融合框架(因为是以目标为粒度进行跟踪),因此能有更高的计算效率。
且本申请实施例中,结合了目标的高度信息,对bounding box底边中点的精度依赖性大幅降低,对雷达点云中的位置精度依赖性大幅降低,在夜晚、或光线较弱、或杂波环境复杂或起伏地面的情况下,依然能达到较高的目标跟踪准确度。
参照图12,在一种可能的实现方式中,图12中的雷达目标跟踪结果可以来自于成像雷达,成像雷达相较于毫米波雷达具有更多的点云数据,因此在采用成像雷达进行目标跟踪时,可以基于成像雷达采集到的较多的点云数据得到目标的尺寸,进而在S1206中,可以进一步将目标的尺寸和目标模型都投影到摄像头坐标系,在摄像头坐标系中可以得到包括视觉边界框、高度信息和尺寸的类似三维数据关系,S1207可以替换为利用视觉边界框、高度信息和尺寸进行目标关联,示例性的,可以同时计算视觉边界框、高度信息和尺寸之间的交叠比例,在交叠比例大于或等于一定值时,关联为同一目标。本申请实施例中因为增加了尺寸,可以得到相较于毫米波雷达能够实现更准确的目标关联,进而实现更为准确的目标跟踪。
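示例性的,在雷达目标跟踪结果来自成像雷达、目标模型还包括尺寸信息的情况下,投影后可以得到近似的矩形区域,此时可以直接计算其与视觉边界框的二维交并比,如下示意性Python代码所示;其中的坐标与阈值均为假设。

```python
def rect_iou(a, b):
    """示意:成像雷达可提供目标尺寸,目标模型投影后近似为矩形(u_min, v_min, u_max, v_max),
    可直接与视觉边界框计算二维交并比。"""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(rect_iou((300, 120, 340, 400), (305, 150, 345, 405)) > 0.5)   # True
```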
通过上述对本申请方案的介绍,可以理解的是,上述实现各设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件单元。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
如图13所示,本申请实施例提供一种目标跟踪的装置,该目标跟踪的装置包括处理器1300、存储器1301和收发机1302;
处理器1300负责管理总线架构和通常的处理,存储器1301可以存储处理器1300在执行操作时所使用的数据。收发机1302用于在处理器1300的控制下接收和发送数据,并与存储器1301进行数据通信。
总线架构可以包括任意数量的互联的总线和桥,具体由处理器1300代表的一个或多个处理器和存储器1301代表的存储器的各种电路链接在一起。总线架构还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路链接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口提供接口。处理器1300负责管理总线架构和通常的处理,存储器1301可以存储处理器1300在执行操作时所使用的数据。
本申请实施例揭示的流程,可以应用于处理器1300中,或者由处理器1300实现。在实现过程中,目标跟踪的流程的各步骤可以通过处理器1300中的硬件的集成逻辑电路或者软件形式的指令完成。处理器1300可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1301,处理器1300读取存储器1301中的信息,结合其硬件完成信号处理流程的步骤。
本申请实施例一种可选的方式中,所述处理器1300用于读取存储器1301中的程序并执行如图7所示的S701-S702中的方法流程或如图12所示的S1201-S1208中的方法流程。
图14为本申请实施例提供的一种芯片的结构示意图。芯片1400包括一个或多个处理器1401以及接口电路1402。可选的,所述芯片1400还可以包含总线1403。其中:
处理器1401可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1401中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1401可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件、MCU、MPU、CPU或者协处理器中的一个或多个。可以实现或者执行本申请实施例中的公开的各方法、步骤。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
接口电路1402可以用于数据、指令或者信息的发送或者接收,处理器1401可以利用接口电路1402接收的数据、指令或者其它信息,进行加工,可以将加工完成信息通过接口电路1402发送出去。
可选的,芯片还包括存储器,存储器可以包括只读存储器和随机存取存储器,并向处理器提供操作指令和数据。存储器的一部分还可以包括非易失性随机存取存储器(NVRAM)。
可选的,存储器存储了可执行软件模块或者数据结构,处理器可以通过调用存储器存储的操作指令(该操作指令可存储在操作系统中),执行相应的操作。
可选的,芯片可以使用在本申请实施例涉及的目标跟踪装置中。可选的,接口电路1402可用于输出处理器1401的执行结果。关于本申请的一个或多个实施例提供的目标跟踪方法可参考前述各个实施例,这里不再赘述。
需要说明的,处理器1401、接口电路1402各自对应的功能既可以通过硬件设计实现,也可以通过软件设计来实现,还可以通过软硬件结合的方式来实现,这里不作限制。
如图15所示,本申请实施例提供一种目标跟踪的装置,所述装置包括收发模块1500和处理模块1501。
所述收发模块1500,用于获取摄像头目标跟踪结果以及雷达目标跟踪结果。
所述处理模块1501,用于根据摄像头目标跟踪结果以及雷达目标跟踪结果对应的目标模型,得到目标跟踪结果;其中,目标模型用于指示雷达目标跟踪结果中的目标及目标的高度信息的关联关系。
一种可能的实现方式中,处理模块,还用于根据雷达目标跟踪结果中目标的类型信息获取目标的高度信息;处理模块,还用于融合目标的高度信息和雷达目标跟踪结果中的目标,得到目标模型。
在一种可能的实现方式中,目标的类型信息与目标的高度信息之间存在预先定义或者预先设置的对应关系。
在一种可能的实现方式中,处理模块,具体用于将目标模型投影到摄像头坐标系,得到投影后雷达目标跟踪结果;根据摄像头目标跟踪结果和投影后雷达目标跟踪结果,得到目标跟踪结果。
在一种可能的实现方式中,处理模块,具体用于根据预先设置或者定义的高度转换关系将目标模型转换到摄像头坐标系;其中,不同的高度信息对应不同的高度转换关系,高度转换关系用于将雷达坐标系中具备高度的目标跟踪结果转换到摄像头坐标系。
在一种可能的实现方式中,对应不同的区域类型,高度信息对应的高度转换关系不同。
在一种可能的实现方式中,区域类型包括下述的一种或多种:地面起伏区域、具有坡度的区域或地面平坦区域。
在一种可能的实现方式中,处理模块,具体用于确定目标模型对应的目标区域类型;根据目标区域类型所对应的高度转换关系中,与目标模型的高度信息匹配的目标高度转换关系将目标模型转换到摄像头坐标系。
在一种可能的实现方式中,处理模块,具体用于根据摄像头目标跟踪结果和投影后雷达目标跟踪结果的交叠比例,确定摄像头目标跟踪结果和投影后雷达目标跟踪结果为同一目标;其中,交叠比例大于第一值。
在一种可能的实现方式中,处理模块,具体用于在交叠比例大于第一值,且摄像头目标跟踪结果和投影后雷达目标跟踪结果中的交叠目标的位置和/或速度满足预设条件的情况下,确定摄像头目标跟踪结果和投影后雷达目标跟踪结果为同一目标。
在一种可能的实现方式中,预设条件包括:交叠目标在摄像头目标跟踪结果的位置和/或速度,与交叠目标在雷达目标跟踪结果的位置和/或速度的差异小于第二值。
在一种可能的实现方式中,雷达目标跟踪结果来自成像雷达;目标模型还包括目标的尺寸信息。
在一种可能的实现方式中,摄像头目标跟踪结果包括目标边界框;雷达目标跟踪结果包括目标点云。
可能的实现方式中,上述图15所示的收发模块1500和处理模块1501的功能可以由处理器1300运行存储器1301中的程序执行,或者由处理器1300单独执行。
如图16所示,本申请提供一种车辆,所述车辆包括至少一个摄像器1601,至少一个存储器1602,至少一个收发器1603,至少一个处理器1604,以及雷达1605。
所述摄像器1601,用于获取图像,图像用于得到摄像头目标跟踪结果。
所述雷达1605,用于获取目标点云,目标点云用于得到雷达目标跟踪结果。
所述存储器1602,用于存储一个或多个程序以及数据信息;其中所述一个或多个程序包括指令。
所述收发器1603,用于与所述车辆中的通讯设备进行数据传输,以及用于与云端进行数据传输。
所述处理器1604,用于获取摄像头目标跟踪结果以及雷达目标跟踪结果;根据摄像头目标跟踪结果以及雷达目标跟踪结果对应的目标模型,得到目标跟踪结果;其中,目标模型用于指示雷达目标跟踪结果中的目标及目标的高度信息的关联关系。
在一些可能的实施方式中,本申请实施例提供的目标跟踪的方法的各个方面还可以实现为一种程序产品的形式,其包括程序代码,当所述程序代码在计算机设备上运 行时,所述程序代码用于使所述计算机设备执行本说明书中描述的根据本申请各种示例性实施方式的目标跟踪的方法中的步骤。
所述程序产品可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。
根据本申请的实施方式的用于目标跟踪的程序产品,其可以采用便携式紧凑盘只读存储器(CD-ROM)并包括程序代码,并可以在服务器设备上运行。然而,本申请的程序产品不限于此,在本文件中,可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被通信传输、装置或者器件使用或者与其结合使用。
可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了可读程序代码。这种传播的数据信号可以采用多种形式,包括——但不限于——电磁信号、光信号或上述的任意合适的组合。可读信号介质还可以是可读存储介质以外的任何可读介质,该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
可读介质上包含的程序代码可以用任何适当的介质传输,包括——但不限于——无线、有线、光缆、RF等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言的任意组合来编写用于执行本申请操作的程序代码,所述程序设计语言包括面向对象的程序设计语言—诸如Java、C++等,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。在涉及远程计算设备的情形中,远程计算设备可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算设备,或者,可以连接到外部计算设备。
本申请实施例针对目标跟踪的方法还提供一种计算设备可读存储介质,即断电后内容不丢失。该存储介质中存储软件程序,包括程序代码,当所述程序代码在计算设备上运行时,该软件程序在被一个或多个处理器读取并执行时可实现本申请实施例上面任何一种目标跟踪的方案。
本申请实施例还提供一种电子设备,在采用对应各个功能划分各个功能模块的情况下,该电子设备包括:处理模块,用于支持目标跟踪装置执行上述实施例中的步骤,例如可以执行S701至S702的操作,或者本申请实施例所描述的技术的其他过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
当然,目标跟踪装置包括但不限于上述所列举的单元模块。并且,上述功能单元的具体所能够实现的功能也包括但不限于上述实例所述的方法步骤对应的功能,电子 设备的其他单元的详细描述可以参考其所对应方法步骤的详细描述,本申请实施例这里不予赘述。
在采用集成的单元的情况下,上述实施例中所涉及的电子设备可以包括:处理模块、存储模块和通信模块。存储模块,用于保存电子设备的程序代码和数据。该通信模块用于支持电子设备与其他网络实体的通信,以实现电子设备的通话,数据交互,Internet访问等功能。
其中,处理模块用于对电子设备的动作进行控制管理。处理模块可以是处理器或控制器。通信模块可以是收发器、RF电路或通信接口等。存储模块可以是存储器。
进一步的,该电子设备还可以包括输入模块和显示模块。显示模块可以是屏幕或显示器。输入模块可以是触摸屏,语音输入装置,或指纹传感器等。
以上参照示出根据本申请实施例的方法、装置(系统)和/或计算机程序产品的框图和/或流程图描述本申请。应理解,可以通过计算机程序指令来实现框图和/或流程图示图的一个块以及框图和/或流程图示图的块的组合。可以将这些计算机程序指令提供给通用计算机、专用计算机的处理器和/或其它可编程数据处理装置,以产生机器,使得经由计算机处理器和/或其它可编程数据处理装置执行的指令创建用于实现框图和/或流程图块中所指定的功能/动作的方法。
相应地,还可以用硬件和/或软件(包括固件、驻留软件、微码等)来实施本申请。更进一步地,本申请可以采取计算机可使用或计算机可读存储介质上的计算机程序产品的形式,其具有在介质中实现的计算机可使用或计算机可读程序代码,以由指令执行系统来使用或结合指令执行系统而使用。在本申请上下文中,计算机可使用或计算机可读介质可以是任意介质,其可以包含、存储、通信、传输、或传送程序,以由指令执行系统、装置或设备使用,或结合指令执行系统、装置或设备使用。
本申请结合多个流程图详细描述了多个实施例,但应理解,这些流程图及其相应的实施例的相关描述仅为便于理解而示例,不应对本申请构成任何限定。各流程图中的每一个步骤并不一定是必须要执行的,例如有些步骤是可以跳过的。并且,各个步骤的执行顺序也不是固定不变的,也不限于图中所示,各个步骤的执行顺序应以其功能和内在逻辑确定。
本申请描述的多个实施例之间可以任意组合或步骤之间相互交叉执行,各个实施例的执行顺序和各个实施例的步骤之间的执行顺序均不是固定不变的,也不限于图中所示,各个实施例的执行顺序和各个实施例的各个步骤的交叉执行顺序应以其功能和内在逻辑确定。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本申请的示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包括这些改动和变型在内。

Claims (30)

  1. 一种目标跟踪方法,其特征在于,包括:
    获取摄像头目标跟踪结果以及雷达目标跟踪结果;
    根据所述摄像头目标跟踪结果以及所述雷达目标跟踪结果对应的目标模型,得到目标跟踪结果;
    其中,所述目标模型用于指示所述雷达目标跟踪结果中的目标及所述目标的高度信息的关联关系。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    根据所述雷达目标跟踪结果中目标的类型信息获取所述目标的高度信息;
    融合所述目标的高度信息和所述雷达目标跟踪结果中的所述目标,得到所述目标模型。
  3. 根据权利要求2所述的方法,其特征在于,所述目标的类型信息与所述目标的高度信息之间存在预先定义或者预先设置的对应关系。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述根据所述摄像头目标跟踪结果以及所述雷达目标跟踪结果对应的目标模型,得到目标跟踪结果,包括:
    将所述目标模型投影到摄像头坐标系,得到投影后雷达目标跟踪结果;
    根据所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果,得到目标跟踪结果。
  5. 根据权利要求4所述的方法,其特征在于,所述将所述目标模型投影到所述摄像头坐标系,包括:
    根据预先设置或者定义的高度转换关系将所述目标模型转换到摄像头坐标系;其中,不同的所述高度信息对应不同的所述高度转换关系,所述高度转换关系用于将雷达坐标系中具备高度的目标跟踪结果转换到所述摄像头坐标系。
  6. 根据权利要求5所述的方法,其特征在于,对应不同的区域类型,所述高度信息对应的高度转换关系不同。
  7. 根据权利要求6所述的方法,其特征在于,所述区域类型包括下述的一种或多种:地面起伏区域、具有坡度的区域或地面平坦区域。
  8. 根据权利要求6或7所述的方法,其特征在于,所述根据预先设置或者定义的高度转换关系将所述目标模型转换到摄像头坐标系,包括:
    确定所述目标模型对应的目标区域类型;
    根据所述目标区域类型所对应的高度转换关系中,与所述目标模型的高度信息匹配的目标高度转换关系将所述目标模型转换到摄像头坐标系。
  9. 根据权利要求4至8中任一项所述的方法,其特征在于,所述根据所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果,得到目标跟踪结果,包括:
    根据所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果的交叠比例,确定所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果为同一目标;其中,所述交叠比例大于第一值。
  10. 根据权利要求9所述的方法,其特征在于,在根据所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果的交叠比例,确定所述摄像头目标跟踪结果和所述投 影后雷达目标跟踪结果为同一目标,包括:
    在所述交叠比例大于所述第一值,且所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果中的交叠目标的位置和/或速度满足预设条件的情况下,确定所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果为同一目标。
  11. 根据权利要求10所述的方法,其特征在于,所述预设条件包括:
    所述交叠目标在所述摄像头目标跟踪结果的位置和/或速度,与所述交叠目标在所述雷达目标跟踪结果的位置和/或速度的差异小于第二值。
  12. 根据权利要求1至11中任一项所述的方法,其特征在于,所述雷达目标跟踪结果来自成像雷达;所述目标模型还包括所述目标的尺寸信息。
  13. 根据权利要求1至12中任一项所述的方法,其特征在于,所述摄像头目标跟踪结果包括目标边界框;所述雷达目标跟踪结果包括目标点云。
  14. 一种目标跟踪装置,其特征在于,包括:
    通信单元,用于获取摄像头目标跟踪结果以及雷达目标跟踪结果;
    处理单元,用于根据所述摄像头目标跟踪结果以及所述雷达目标跟踪结果对应的目标模型,得到目标跟踪结果;其中,所述目标模型用于指示所述雷达目标跟踪结果中的目标及所述目标的高度信息的关联关系。
  15. 根据权利要求14所述的装置,其特征在于,
    所述处理单元,还用于根据所述雷达目标跟踪结果中目标的类型信息获取所述目标的高度信息;
    所述处理单元,还用于融合所述目标的高度信息和所述雷达目标跟踪结果中的所述目标,得到所述目标模型。
  16. 根据权利要求15所述的装置,其特征在于,所述目标的类型信息与所述目标的高度信息之间存在预先定义或者预先设置的对应关系。
  17. 根据权利要求14至16中任一项所述的装置,其特征在于,所述处理单元,具体用于将所述目标模型投影到摄像头坐标系,得到投影后雷达目标跟踪结果;根据所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果,得到目标跟踪结果。
  18. 根据权利要求17所述的装置,其特征在于,所述处理单元,具体用于根据预先设置或者定义的高度转换关系将所述目标模型转换到摄像头坐标系;其中,不同的所述高度信息对应不同的所述高度转换关系,所述高度转换关系用于将雷达坐标系中具备高度的目标跟踪结果转换到所述摄像头坐标系。
  19. 根据权利要求18所述的装置,其特征在于,对应不同的区域类型,所述高度信息对应的高度转换关系不同。
  20. 根据权利要求19所述的装置,其特征在于,所述区域类型包括下述的一种或多种:地面起伏区域、具有坡度的区域或地面平坦区域。
  21. 根据权利要求19或20所述的装置,其特征在于,所述处理单元,具体用于确定所述目标模型对应的目标区域类型;根据所述目标区域类型所对应的高度转换关系中,与所述目标模型的高度信息匹配的目标高度转换关系将所述目标模型转换到摄像头坐标系。
  22. 根据权利要求17至21中任一项所述的装置,其特征在于,所述处理单元, 具体用于根据所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果的交叠比例,确定所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果为同一目标;其中,所述交叠比例大于第一值。
  23. 根据权利要求22所述的装置,其特征在于,所述处理单元,具体用于在所述交叠比例大于所述第一值,且所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果中的交叠目标的位置和/或速度满足预设条件的情况下,确定所述摄像头目标跟踪结果和所述投影后雷达目标跟踪结果为同一目标。
  24. 根据权利要求23所述的装置,其特征在于,所述预设条件包括:所述交叠目标在所述摄像头目标跟踪结果的位置和/或速度,与所述交叠目标在所述雷达目标跟踪结果的位置和/或速度的差异小于第二值。
  25. 根据权利要求14至24中任一项所述的装置,其特征在于,所述雷达目标跟踪结果来自成像雷达;所述目标模型还包括所述目标的尺寸信息。
  26. 根据权利要求14至25中任一项所述的装置,其特征在于,所述摄像头目标跟踪结果包括目标边界框;所述雷达目标跟踪结果包括目标点云。
  27. 一种目标跟踪装置,其特征在于,包括:至少一个处理器,用于调用存储器中的程序,以执行权利要求1至13中任一项所述的方法。
  28. 一种目标跟踪装置,其特征在于,包括:至少一个处理器和接口电路,所述接口电路用于为所述至少一个处理器提供信息输入和/或信息输出,所述至少一个处理器用于执行权利要求1至13中任一项所述的方法。
  29. 一种芯片,其特征在于,包括至少一个处理器和接口;
    所述接口,用于为所述至少一个处理器提供程序指令或者数据;
    所述至少一个处理器用于执行所述程序指令,以实现如权利要求1至13中任一项所述的方法。
  30. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有指令,当所述指令被执行时,使得计算机执行如权利要求1至13中任一项所述的方法。
PCT/CN2021/113337 2020-09-11 2021-08-18 目标跟踪方法及装置 WO2022052765A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21865821.9A EP4206731A4 (en) 2020-09-11 2021-08-18 TARGET TRACKING METHOD AND DEVICE
US18/181,204 US20230204755A1 (en) 2020-09-11 2023-03-09 Target tracking method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010953032.9A CN114167404A (zh) 2020-09-11 2020-09-11 目标跟踪方法及装置
CN202010953032.9 2020-09-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/181,204 Continuation US20230204755A1 (en) 2020-09-11 2023-03-09 Target tracking method and apparatus

Publications (1)

Publication Number Publication Date
WO2022052765A1 true WO2022052765A1 (zh) 2022-03-17

Family

ID=80476064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113337 WO2022052765A1 (zh) 2020-09-11 2021-08-18 目标跟踪方法及装置

Country Status (4)

Country Link
US (1) US20230204755A1 (zh)
EP (1) EP4206731A4 (zh)
CN (1) CN114167404A (zh)
WO (1) WO2022052765A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740465A (zh) * 2022-03-18 2022-07-12 四川九洲防控科技有限责任公司 雷达航迹快速起批方法、装置、存储介质及电子设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103117B (zh) * 2022-06-20 2024-03-26 四川新视创伟超高清科技有限公司 基于二维坐标投影的运动目标快速追踪方法
CN117968665A (zh) * 2024-03-28 2024-05-03 杭州计算机外部设备研究所(中国电子科技集团公司第五十二研究所) 一种目标融合方法及***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007212418A (ja) * 2006-02-13 2007-08-23 Alpine Electronics Inc 車載レーダ装置
CN109816702A (zh) * 2019-01-18 2019-05-28 苏州矽典微智能科技有限公司 一种多目标跟踪装置和方法
CN110163885A (zh) * 2018-02-12 2019-08-23 杭州海康威视数字技术股份有限公司 一种目标跟踪方法及装置
CN110208793A (zh) * 2019-04-26 2019-09-06 纵目科技(上海)股份有限公司 基于毫米波雷达的辅助驾驶***、方法、终端和介质
CN110246159A (zh) * 2019-06-14 2019-09-17 湖南大学 基于视觉和雷达信息融合的3d目标运动分析方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2639781A1 (en) * 2012-03-14 2013-09-18 Honda Motor Co., Ltd. Vehicle with improved traffic-object position detection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007212418A (ja) * 2006-02-13 2007-08-23 Alpine Electronics Inc 車載レーダ装置
CN110163885A (zh) * 2018-02-12 2019-08-23 杭州海康威视数字技术股份有限公司 一种目标跟踪方法及装置
CN109816702A (zh) * 2019-01-18 2019-05-28 苏州矽典微智能科技有限公司 一种多目标跟踪装置和方法
CN110208793A (zh) * 2019-04-26 2019-09-06 纵目科技(上海)股份有限公司 基于毫米波雷达的辅助驾驶***、方法、终端和介质
CN110246159A (zh) * 2019-06-14 2019-09-17 湖南大学 基于视觉和雷达信息融合的3d目标运动分析方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4206731A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114740465A (zh) * 2022-03-18 2022-07-12 四川九洲防控科技有限责任公司 雷达航迹快速起批方法、装置、存储介质及电子设备

Also Published As

Publication number Publication date
EP4206731A1 (en) 2023-07-05
US20230204755A1 (en) 2023-06-29
CN114167404A (zh) 2022-03-11
EP4206731A4 (en) 2024-02-21

Similar Documents

Publication Publication Date Title
CN110543814B (zh) 一种交通灯的识别方法及装置
WO2022027304A1 (zh) 一种自动驾驶车辆的测试方法及装置
WO2022001773A1 (zh) 轨迹预测方法及装置
WO2022052765A1 (zh) 目标跟踪方法及装置
WO2022104774A1 (zh) 目标检测方法和装置
US20220215639A1 (en) Data Presentation Method and Terminal Device
CN113792566A (zh) 一种激光点云的处理方法及相关设备
WO2021218693A1 (zh) 一种图像的处理方法、网络的训练方法以及相关设备
CN113498529B (zh) 一种目标跟踪方法及其装置
WO2022142839A1 (zh) 一种图像处理方法、装置以及智能汽车
WO2022156309A1 (zh) 一种轨迹预测方法、装置及地图
WO2022051951A1 (zh) 车道线检测方法、相关设备及计算机可读存储介质
US20220309806A1 (en) Road structure detection method and apparatus
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
US20230399023A1 (en) Vehicle Driving Intention Prediction Method, Apparatus, and Terminal, and Storage Medium
CN112810603B (zh) 定位方法和相关产品
CN115546781A (zh) 一种点云数据的聚类方法以及装置
WO2021000787A1 (zh) 道路几何识别方法及装置
CN114255275A (zh) 一种构建地图的方法及计算设备
CN117077073A (zh) 一种多模态数据的处理方法及相关装置
WO2022022284A1 (zh) 目标物的感知方法及装置
WO2022068643A1 (zh) 多任务部署的方法及装置
WO2022033089A1 (zh) 确定检测对象的三维信息的方法及装置
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
CN115082886B (zh) 目标检测的方法、装置、存储介质、芯片及车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865821

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021865821

Country of ref document: EP

Effective date: 20230328

NENP Non-entry into the national phase

Ref country code: DE