WO2020244467A1 - A motion state estimation method and apparatus (一种运动状态估计方法及装置) - Google Patents

A motion state estimation method and apparatus

Info

Publication number
WO2020244467A1
WO2020244467A1 · PCT/CN2020/093486 · CN2020093486W
Authority
WO
WIPO (PCT)
Prior art keywords
measurement data
sensor
reference object
target reference
measurement
Prior art date
Application number
PCT/CN2020/093486
Other languages
English (en)
French (fr)
Inventor
王建国 (Wang Jianguo)
王绪振 (Wang Xuzhen)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP20818619.7A (published as EP3964863A4)
Publication of WO2020244467A1
Priority to US17/542,699 (published as US20220089166A1)

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • B60W40/105Speed
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50Systems of measurement based on relative movement of target
    • G01S13/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/589Velocity or trajectory determination systems; Sense-of-movement determination systems measuring the velocity vector
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • B60W40/107Longitudinal acceleration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass initial alignment, calibration or starting-up of inertial devices
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50Systems of measurement based on relative movement of target
    • G01S13/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/60Velocity or trajectory determination systems; Sense-of-movement determination systems wherein the transmitter and receiver are mounted on the moving object, e.g. for determining ground speed, drift angle, ground track
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/02Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
    • G01S15/50Systems of measurement, based on relative movement of the target
    • G01S15/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S15/588Velocity or trajectory determination systems; Sense-of-movement determination systems measuring the velocity vector
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/02Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
    • G01S15/50Systems of measurement, based on relative movement of the target
    • G01S15/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S15/60Velocity or trajectory determination systems; Sense-of-movement determination systems wherein the transmitter and receiver are mounted on the moving object, e.g. for determining ground speed, drift angle, ground track
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems

Definitions

  • the invention relates to the technical field of Internet of Vehicles, in particular to a method and device for estimating the motion state of a sensor.
  • ADAS: Advanced Driving Assistant System
  • AD: Autonomous Driving
  • sensors such as radar sensors, ultrasonic sensors, and vision sensors are usually configured to sense the surrounding environment and target information.
  • the information obtained by the above-mentioned sensors can be used to classify, identify and track the surrounding environment and objects. Further, the above information can be used to evaluate the surrounding environment situation and plan control.
  • the track information of the tracked target can be used as an input for vehicle planning control, improving the efficiency and safety of vehicle planning control.
  • the above-mentioned sensor platform can be a vehicle-mounted, ship-borne, airborne or space-borne system, etc. The movement of the sensor platform has an impact on the realization of the above-mentioned classification, identification, and tracking functions.
  • the sensor will move with the movement of the vehicle, and a target (such as a target vehicle) in the sensor's field of view will also move; after the two motion states are superimposed on each other, the movement of the target observed from the sensor is irregular.
  • vehicle a is equipped with a radar, sonar, or ultrasonic sensor to measure the position and speed information of the target vehicle
  • vehicle b is the target vehicle
  • vehicle a goes straight, and vehicle b turns right
  • the running track of vehicle b observed by the sensor on vehicle a is an irregular track. Therefore, estimating the movement state of the sensor and compensating for its influence can effectively improve the accuracy of the target track.
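As a minimal illustration of this compensation idea (a hedged sketch, not taken from the patent itself: the function name and the 2-D setup are assumptions), once the sensor's own velocity is known, a target's velocity over the ground can be recovered by adding the sensor velocity back to the measured relative velocity:

```python
import numpy as np

def compensate_target_velocity(v_rel, v_sensor):
    """Recover a target's over-the-ground velocity by adding the
    sensor's own velocity back to the measured relative velocity."""
    return np.asarray(v_rel, dtype=float) + np.asarray(v_sensor, dtype=float)

# The sensor vehicle drives straight at 10 m/s along x; a target that is
# actually stationary then appears to approach at -10 m/s along x.
v_ground = compensate_target_velocity([-10.0, 0.0], [10.0, 0.0])
```

With the sensor motion compensated, the stationary target's irregular apparent track collapses back to a zero ground velocity.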
  • Ways to obtain the movement state of the sensor include: (1) using a Global Navigation Satellite System (GNSS), such as Global Positioning System (GPS) satellites, for positioning: by measuring the distances from multiple satellites to the vehicle's receiver, the specific position of the vehicle can be calculated, and the motion state of the vehicle can be obtained from the specific positions at multiple consecutive times.
  • GNSS Global Navigation Satellite System
  • GPS Global Positioning System
  • (2) An inertial measurement unit (IMU) can measure the three-axis attitude angles and acceleration of the vehicle; the IMU estimates the motion state of the vehicle from the measured acceleration and attitude angles.
  • The IMU has the disadvantage of error accumulation and is easily affected by electromagnetic interference. It can be seen that the error in the motion state of the vehicle measured by the prior art is relatively large, and how to obtain a more accurate sensor motion state is a technical problem being studied by those skilled in the art.
  • the embodiment of the invention discloses a method and a device for motion state estimation, which can obtain a more accurate motion state of the first sensor.
  • an embodiment of the present application provides a motion state estimation method, which includes:
  • each of the measurement data includes at least speed measurement information
  • the motion state of the first sensor is obtained, and the motion state includes at least the velocity vector of the first sensor.
  • a plurality of measurement data are obtained by the first sensor, each including at least speed measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the speed of that relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object has various spatial distributions relative to the sensor and, in particular, different geometric relationships with the first sensor, so that the speed measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • the measurement data corresponding to the target reference object, especially the geometric relationship and the quantity of the target reference object relative to the sensor, can be effectively used, and the influence of measurement error or interference can be effectively reduced, so this method of determining the motion state achieves higher accuracy; in addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be obtained.
  • the target reference object is an object that is stationary relative to a reference frame.
  • after the plurality of measurement data are obtained by the first sensor, each of the measurement data including at least speed measurement information, and before the motion state of the first sensor is obtained according to the measurement data corresponding to the target reference object among the multiple measurement data, the method further includes:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data.
  • the feature of the target reference object includes the geometric feature and/or reflection characteristics of the target reference object.
  • after the plurality of measurement data are obtained by the first sensor, each of the measurement data including at least speed measurement information, and before the motion state of the first sensor is obtained according to the measurement data corresponding to the target reference object among the multiple measurement data, the method further includes:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data of the first sensor.
  • the measurement data corresponding to the target reference object is determined from the data of the first sensor according to data from a second sensor.
  • determining the measurement data corresponding to the target reference object among the measurement data includes:
  • determining, through the spatial correspondence between the data of the two sensors, the measurement data corresponding to the target reference object in the data of the first sensor.
  • obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the multiple measurement data includes:
  • a least square LS estimation and/or sequential block filtering method is adopted to obtain the motion state of the first sensor. It can be understood that the use of LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as speed) of the first sensor.
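The LS estimate named here can be sketched in 2-D as follows. For reference objects that are stationary in the reference frame, the radial speed a moving sensor measures at azimuth θ is ṙ = −[cos θ, sin θ]·v_sensor, so stacking one such equation per detection and solving in the least-squares sense yields the sensor velocity. This is a hedged illustration under those assumptions; the function name and simulated values are not from the patent.

```python
import numpy as np

def ls_sensor_velocity(azimuths, radial_speeds):
    """Least-squares estimate of the 2-D sensor velocity from radial
    speeds of stationary reference points.

    Model: for a stationary reference at azimuth theta,
        r_dot = -[cos(theta), sin(theta)] @ v_sensor
    """
    H = np.column_stack([np.cos(azimuths), np.sin(azimuths)])  # direction cosines
    v, *_ = np.linalg.lstsq(H, -np.asarray(radial_speeds), rcond=None)
    return v

# Simulate: sensor moves at (12, 3) m/s; stationary references at 8 azimuths.
rng = np.random.default_rng(0)
theta = np.linspace(-1.0, 1.0, 8)
v_true = np.array([12.0, 3.0])
r_dot = -(np.column_stack([np.cos(theta), np.sin(theta)]) @ v_true)
r_dot += 0.01 * rng.standard_normal(theta.size)  # small measurement noise
v_hat = ls_sensor_velocity(theta, r_dot)
```

The spread of azimuths keeps the measurement matrix well conditioned, which is exactly the condition-number point made above.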
  • obtaining the motion state of the first sensor by applying least squares (LS) estimation and/or sequential block filtering to the measurement data corresponding to the target reference object among the plurality of measurement data includes:
  • Sequential filtering is performed according to the M radial velocity vectors corresponding to the target reference object and the corresponding measurement matrices to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurement values in the measurement data corresponding to the target reference object, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
  • ⁇ m,i is the i-th azimuth angle measurement data in the m-th group of measurement data of the target reference object
  • the sequential filtering formula is as follows: v̂_m = v̂_{m−1} + G_m(ṽ_m − H_m·v̂_{m−1}), where ṽ_m is the m-th radial velocity vector and H_m is the corresponding measurement matrix
  • G_m is the gain matrix, for example G_m = P_{m−1}·H_m^T·(H_m·P_{m−1}·H_m^T + R_m)^{−1}, with covariance update P_m = (I − G_m·H_m)·P_{m−1}
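A hedged sketch of sequential block filtering over M groups of radial-velocity measurements, using the standard gain-matrix form (the function name, the noise covariance R, and the simulated geometry are assumptions for illustration, not the patent's concrete design):

```python
import numpy as np

def sequential_filter(v0, P0, blocks, R):
    """Sequentially refine a velocity estimate with M measurement
    blocks (v_m, H_m), one group at a time:
        G_m = P H_m^T (H_m P H_m^T + R)^-1   # gain matrix
        v   = v + G_m (v_m - H_m v)          # state update
        P   = (I - G_m H_m) P                # covariance update
    """
    v, P = np.asarray(v0, dtype=float), np.asarray(P0, dtype=float)
    I = np.eye(len(v))
    for v_m, H_m in blocks:
        S = H_m @ P @ H_m.T + R              # innovation covariance
        G = P @ H_m.T @ np.linalg.inv(S)
        v = v + G @ (v_m - H_m @ v)
        P = (I - G @ H_m) @ P
    return v, P

# Demo: true sensor velocity (12, 3) m/s observed through two blocks of
# direction-cosine measurement rows (noise-free for clarity).
v_true = np.array([12.0, 3.0])
blocks = []
for th in (np.linspace(-1.0, 1.0, 4), np.linspace(-0.5, 1.5, 4)):
    H = -np.column_stack([np.cos(th), np.sin(th)])
    blocks.append((H @ v_true, H))
R = 1e-6 * np.eye(4)
v_hat, _ = sequential_filter([0.0, 0.0], 100.0 * np.eye(2), blocks, R)
```

Processing the blocks sequentially avoids inverting one large stacked system and lets each new group of measurements refine the running estimate.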
  • an embodiment of the present application provides a motion state estimation device, which includes a processor, a memory, and a first sensor, wherein the memory is used to store program instructions, and the processor is used to call the program instructions to Do the following:
  • each of the measurement data includes at least speed measurement information
  • the motion state of the first sensor is obtained, and the motion state includes at least the velocity vector of the first sensor.
  • a plurality of measurement data are obtained through the first sensor, each including at least speed measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the speed of that relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object has various spatial distributions relative to the sensor and, in particular, different geometric relationships with the first sensor, so that the speed measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • the measurement data corresponding to the target reference object, especially the geometric relationship and the quantity of the target reference object relative to the sensor, can be effectively used, and the influence of measurement error or interference can be effectively reduced, so this method of determining the motion state achieves higher accuracy; in addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be obtained.
  • the target reference object is an object that is stationary relative to a reference frame.
  • the processor is further configured to:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data.
  • the feature of the target reference object includes the geometric feature and/or reflection characteristics of the target reference object.
  • the processor is further configured to:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data of the first sensor.
  • the measurement data corresponding to the target reference object is determined from the data of the first sensor according to data from a second sensor.
  • the measurement data corresponding to the target reference object is determined among the measurement data, specifically:
  • determining, through the spatial correspondence between the data of the two sensors, the measurement data corresponding to the target reference object in the data of the first sensor.
  • the motion state of the first sensor is obtained according to the measurement data corresponding to the target reference object among the plurality of measurement data, specifically:
  • a least square LS estimation and/or sequential block filtering method is adopted to obtain the motion state of the first sensor. It can be understood that the use of LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as speed) of the first sensor.
  • applying least squares (LS) estimation and/or sequential block filtering to the measurement data corresponding to the target reference object among the plurality of measurement data to obtain the motion state of the first sensor, specifically:
  • Sequential filtering is performed according to the M radial velocity vectors corresponding to the target reference object and the corresponding measurement matrices to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurement values in the measurement data corresponding to the target reference object, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
  • ⁇ m,i is the i-th azimuth measurement data in the m-th group of measurement data of the target reference object
  • ⁇ mi is the i-th pitch angle measurement data in the m-th group of measurement data of the target reference object
  • the sequential filtering formula is as follows: v̂_m = v̂_{m−1} + G_m(ṽ_m − H_m·v̂_{m−1}), where ṽ_m is the m-th radial velocity vector and H_m is the corresponding measurement matrix
  • G_m is the gain matrix, for example G_m = P_{m−1}·H_m^T·(H_m·P_{m−1}·H_m^T + R_m)^{−1}, with covariance update P_m = (I − G_m·H_m)·P_{m−1}
  • an embodiment of the present application provides a motion state estimation device, which includes some or all units for executing the first aspect or the method described in any possible implementation of the first aspect.
  • a plurality of measurement data are obtained by the first sensor, each including at least speed measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the speed of that relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object has various spatial distributions relative to the sensor and, in particular, different geometric relationships with the first sensor, so that the speed measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • the measurement data corresponding to the target reference object, especially the geometric relationship and the quantity of the target reference object relative to the sensor, can be effectively used, and the influence of measurement error or interference can be effectively reduced, so this method of determining the motion state achieves higher accuracy; in addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be obtained. Further, it can be understood that the use of LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as speed) of the first sensor.
  • Figure 1 is a schematic diagram of a vehicle motion scene in the prior art
  • Fig. 2 is a schematic diagram of the movement state of a target object detected by radar in the prior art
  • FIG. 3 is a schematic flowchart of a motion state estimation method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the distribution of measurement data detected by radar provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a picture taken by a camera according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a scene where a target reference object is mapped from pixel coordinates to radar coordinates according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of a scene in which the motion state of a detected target is compensated based on the motion state of the ego vehicle according to an embodiment of the present invention
  • FIG. 8 is a schematic structural diagram of a motion state estimation device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of another motion state estimation device provided by an embodiment of the present application.
  • the execution subject of the method may be a sensor system or a fusion sensing system or a planning/control system integrated with the above systems, such as a driving assistance or an automatic driving system. It can be in the form of software or hardware (such as a motion state estimation device connected or integrated with the corresponding sensor through a wireless or wired connection).
  • the following different execution steps can be implemented in a centralized or distributed manner.
  • the method includes but is not limited to the following steps:
  • Step S301 Obtain a plurality of measurement data through the first sensor, where each measurement data includes at least speed measurement information.
  • the first sensor may be a radar, sonar, or ultrasonic sensor, or a direction-finding sensor device capable of measuring frequency shift.
  • such a direction-finding sensor device obtains radial speed information by measuring the frequency shift of the received signal relative to a known frequency.
  • the first sensor may be a vehicle-mounted, ship-borne, airborne, or satellite-borne sensor.
  • the sensor may be a sensor used to perceive the environment or a target on a system such as a vehicle, a ship, an aircraft, or a drone.
  • for safe and reliable driving, the vehicle is usually equipped with one or more of the above-mentioned sensors to measure the state of the surrounding environment or objects (including the state of motion), and the result of processing the measurement data serves as a reference basis for planning and control.
  • the physical composition of the first sensor can be one or more physical sensors; each physical sensor can measure the azimuth angle, pitch angle, and radial velocity respectively, or the azimuth angle, pitch angle, and radial velocity can be derived from the measurement data of multiple physical sensors, which is not limited here.
  • the measurement data includes at least speed measurement information
  • the speed measurement information may be radial speed measurement information, such as the radial speed of a surrounding environment object or target relative to the sensor, and the measurement data may also include angle measurement information, such as target The azimuth and/or pitch angle measurement information relative to the sensor; it may also include the distance measurement information of the target relative to the sensor.
  • the measurement data may also include the direction cosine information of the surrounding environment object or target relative to the sensor; the above measurement data may also be information obtained after transforming the sensor's original measurement data, for example, the direction cosine information may be obtained from the azimuth and/or pitch angle of the target relative to the sensor, or from the rectangular-coordinate position and distance of the target.
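The azimuth/pitch-to-direction-cosine transformation mentioned here can be sketched as follows. The axis convention (x forward, y left, z up) and the function name are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def direction_cosine(azimuth, elevation=0.0):
    """Unit line-of-sight vector from azimuth and (optional) pitch angle,
    assuming x forward, y left, z up."""
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

u = direction_cosine(0.0, 0.0)  # line of sight straight ahead
```

Rows of such unit vectors are exactly what the measurement matrices in the estimation steps below are built from.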
  • a radar or sonar sensor is taken as an example.
  • the sensor may periodically or non-periodically transmit signals and obtain measurement data from the received echo signals.
  • the transmit signal may be a chirp signal
  • the distance information of the target can be obtained by the time delay of the echo signal
  • the radial velocity information between the target and the sensor can be obtained from the phase difference between multiple echo signals, and the angle of the target relative to the sensor, such as the azimuth and/or elevation angle information, can be obtained from the geometry of the sensor's multiple transmit and/or receive antenna arrays. It is understandable that, due to the diversity of surrounding objects or targets, the sensor can obtain multiple measurement data for subsequent use.
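The delay-to-range and phase-difference-to-radial-velocity relations described above can be sketched with the standard chirp-radar formulas (these are textbook relations, not equations quoted from the patent; the 77 GHz wavelength and chirp period in the demo are assumptions):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def range_from_delay(tau_s):
    """Target range from the round-trip delay of the echo: R = c*tau/2."""
    return C * tau_s / 2.0

def radial_velocity_from_phase(delta_phi, wavelength_m, chirp_period_s):
    """Radial velocity from the phase difference between two successive
    chirp echoes, using delta_phi = 4*pi*v_r*T_c / lambda."""
    return delta_phi * wavelength_m / (4.0 * math.pi * chirp_period_s)

# Round trip: a 10 m/s target at ~77 GHz (lambda ~ 3.9 mm), 50 us chirps.
lam, t_c = 3.9e-3, 50e-6
dphi = 4.0 * math.pi * 10.0 * t_c / lam
v_r = radial_velocity_from_phase(dphi, lam, t_c)
r = range_from_delay(1e-6)  # a 1 us delay corresponds to ~150 m
```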
  • Figure 4 illustrates the spatial location distribution of multiple measurement data obtained by the radar sensor in one frame, and the location of each measurement data is the location corresponding to the location information (distance and azimuth) contained in the measurement data point.
  • Step S302 Obtain the motion state of the first sensor according to the measurement data corresponding to the target reference object in the plurality of measurement data, where the motion state at least includes the velocity vector of the first sensor.
  • the target reference object may be an object or target that is stationary relative to a reference system; for example, taking a vehicle-mounted or drone airborne sensor as an example, the reference system may be a geodetic coordinate system, or an inertial coordinate system that moves at a constant speed relative to the ground,
  • the target reference object may be an object in the surrounding environment, such as guardrails, road edges, light poles, buildings, etc.
  • the target reference object may be a surface buoy, a lighthouse, a shore or island building, etc.
  • the target reference object may be a reference object, such as a spacecraft, that is stationary or moving at a constant speed relative to a star or satellite.
  • the measurement data corresponding to the target reference object may be obtained from the multiple measurement data according to the characteristics of the target reference object.
  • the feature of the target reference object may be a geometric feature of the target reference object, such as a curve feature (for example a straight line, a circular arc, or a spiral curve), or a reflection feature, such as a radar cross-section (RCS).
  • for example, the target reference object is a guardrail or a road edge as shown in Figure 5; the target reference object has obvious geometric characteristics, that is, its data lie on a straight line or a spiral curve.
  • using feature recognition techniques such as the Hough transform, the data of the target reference object can be separated from the multiple measurement data.
  • taking a straight-line curb/guardrail as an example, the process of obtaining it is as follows:
  • r_k and θ_k are the k-th distance and azimuth angle measured by the radar, with corresponding rectangular coordinates x_k = r_k·cos θ_k and y_k = r_k·sin θ_k.
  • ρ and φ_i are the Hough transform space parameters; for different φ_i, different values ρ_i = x_k·cos φ_i + y_k·sin φ_i can be obtained, where typical φ_i are discrete values between 0 and π.
  • ρ_i here is usually quantized.
  • for each measurement, the counts or weights of the corresponding parameter pairs (ρ_i, φ_i) are accumulated.
  • one or more parameter pairs corresponding to count or weight peaks exceeding a threshold T can then be obtained, where the count is an integer.
  • T is a threshold, which can be obtained based on the distance or azimuth accuracy, or on the quantization interval or resolution of the above parameters ρ and φ_i.
  • Hough transform can also identify target reference objects with other geometric characteristics, such as arcs, spirals, etc., which are not listed here.
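The straight-line accumulation steps above can be sketched as follows. This is a minimal illustration: the grid resolutions, the threshold, and the synthetic points are assumptions for the example, not values from the patent.

```python
# Minimal Hough-transform separation: each radar point (r_k, theta_k) is
# converted to Cartesian coordinates, each discretized angle phi votes
# rho = x*cos(phi) + y*sin(phi), and accumulator cells whose count reaches
# a threshold are taken as straight-line target reference parameters.
import math
from collections import Counter

def hough_lines(measurements, n_phi=180, rho_step=0.5, threshold=5):
    """measurements: list of (r_k, theta_k) polar radar detections.
    Returns (phi, rho) pairs whose accumulator count >= threshold."""
    acc = Counter()
    for r, theta in measurements:
        x, y = r * math.cos(theta), r * math.sin(theta)
        for i in range(n_phi):                     # phi quantized over [0, pi)
            phi = math.pi * i / n_phi
            rho = x * math.cos(phi) + y * math.sin(phi)
            acc[(i, round(rho / rho_step))] += 1   # vote for this (phi, rho) cell
    return [(math.pi * i / n_phi, q * rho_step)
            for (i, q), c in acc.items() if c >= threshold]

# Points on the vertical line x = 2 (a guardrail-like feature) plus one outlier.
pts = [(math.hypot(2.0, y), math.atan2(y, 2.0)) for y in range(6)] + [(9.0, 1.0)]
lines = hough_lines(pts, threshold=5)
print(lines)  # contains a cell near (phi ~ 0, rho ~ 2): the line x = 2
```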
  • the measurement data corresponding to the target reference object may be obtained from multiple measurement data of the first sensor according to the data of the second sensor.
  • the second sensor may be a visual sensor such as a camera or a camera sensor, or an imaging sensor such as an infrared sensor or a lidar sensor.
  • the second sensor can measure the target reference object within the detection range of the first sensor, including the surrounding environment, objects, or targets.
  • the second sensor can be installed on the same platform as the first sensor, with its data transmitted within the platform; it can also be installed on a different platform, with measurement data exchanged through communication channels, for example installed on roadside infrastructure or on other vehicle-mounted or airborne systems that send or receive measurement data or other auxiliary information, such as transformation parameter information, through the cloud.
  • taking the second sensor as a camera or camera module as an example, the camera or camera module can be used to capture images or videos within the detection range of a radar, sonar, or ultrasonic sensor; the images or videos may cover part or all of the detection range of the first sensor.
  • the image can be one frame or multiple frames.
  • Fig. 5 is a picture displayed in a video image captured by a camera within a detection range of a radar sensor in an embodiment of the present application.
  • the target reference object can be determined.
  • the target reference object can be an object that is stationary relative to the reference system.
  • the reference frame may be the earth, etc., as mentioned above;
  • the target reference object can be identified based on traditional classification or recognition methods or machine learning methods, such as parameter regression, support vector machines, or image segmentation; it can also be identified in the second sensor's measurement data, such as video or images, based on artificial intelligence (Artificial Intelligence, AI) techniques such as deep learning with deep neural networks.
  • one or more object types can also be designated as the target reference object according to the application scenario of the sensor; for example, one or more of road edges, roadside signs, trees, and buildings can be designated as the target reference object, and the stored pixel features of the target reference object can be compared against the measurement data of the second sensor, such as images or video, to search for the same or similar pixel features. If such pixel features are found, the target reference object is considered to exist in the image or video, and its position in the image or video is then determined. In short, the target reference object in the above-mentioned image can be found by storing the features of the target reference object (including but not limited to pixel features) and then using feature comparison.
  • the obtaining the measurement data corresponding to the target reference object from the multiple measurement data of the first sensor according to the measurement data of the second sensor may include:
  • the measurement data of the first sensor corresponding to the target reference object is determined by mapping the measurement data of the first sensor and the measurement data of the second sensor to a common space.
  • the space of the measurement data of the first sensor may be a space referenced by the coordinate system of the first sensor; the space of the measurement data of the second sensor may be the coordinate of the second sensor Is the reference space;
  • the common space may be a space referenced by the coordinate system of the sensor platform where the two sensors are located, for example, it may be a vehicle coordinate system, a ship coordinate system, or an aircraft coordinate system; it may also be a geodetic coordinate system or Coordinate system with reference to a certain star, planet or satellite, etc.
  • the measurement data of the first sensor and the measurement data of the second sensor are mapped to a common space.
  • for example, the installation positions of the first sensor, such as a radar, and of the second sensor, such as a camera, in the vehicle coordinate system can be determined in advance, and the measurement data of the first sensor and the measurement data of the second sensor can be mapped to the vehicle coordinate system accordingly.
  • the movement state of the sensor can be determined according to the multiple measurement data of the first sensor and the measurement data corresponding to the target reference object.
  • the target reference object is a stationary object relative to the geodetic coordinate system
  • the sensor platform is moving, so the target reference object measured by the sensor is moving rather than stationary relative to the sensor platform or the sensor. It is understandable that after the measurement data of the target reference object is obtained separately, the motion state of the target reference object can be obtained based on the measurement data of the target reference object, or the motion state of the sensor can be obtained equivalently.
  • the implementation process is described as follows.
  • the following takes the first sensor as a radar and the second sensor as a camera as an example to illustrate the implementation process, and the specific sensor is not limited here.
  • the multiple measurement data obtained by the radar and the target reference object data obtained by the camera may be mapped to the same coordinate space; the same coordinate space may be a two-dimensional or multi-dimensional coordinate space, optionally,
  • the multiple measurement data obtained by the radar may be mapped to the image coordinate system where the target reference object obtained by the camera is located, or the target reference object obtained by the camera may be mapped to the multiple measurement data obtained by the radar.
  • the target reference object can be the road edges 601, 602, 603;
  • Figure 6 illustrates the scene in which the multiple measurement data obtained by the radar are mapped from the radar coordinate system to the image coordinate system where the target reference object (represented by the thick black lines) is located.
  • the projection mapping relationship of the measurement data obtained by the radar from the radar coordinate system to the image coordinate system is shown in formula 1-1:
  • z_1·[u, v, 1]^T = A·B·[x, y, z, 1]^T, 1-1
  • where A is the internal parameter matrix of the camera (or camera module), which is determined by the camera itself and determines the mapping relationship between the pixel coordinate system and the phase plane coordinate system; B is the external parameter matrix, which is determined by the relative position relationship between the camera and the radar and determines the mapping relationship between the phase plane coordinate system and the radar coordinate system; z_1 is the depth of field information; (u, v) are the coordinates of the target reference object in the pixel coordinate system.
  • the internal parameter matrix and external parameter matrix can be respectively:
  • A = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]], B = [R T],
  • where f_x and f_y are the focal lengths in pixels, (u_0, v_0) is the principal point, and R and T represent the relative rotation and relative offset between the radar coordinate system and the image coordinate system.
  • for scenes with lens distortion, the image can be further corrected; this is prior art and is not described further here.
  • the position data measured by radar is usually in the form of polar coordinates or spherical coordinates, which can be converted to rectangular coordinates first, and then mapped to the image plane coordinate system using the above formula 1-1.
  • the distance and azimuth in the aforementioned radar data can be converted into rectangular coordinates x and y; the distance, azimuth, and elevation angle in the aforementioned radar measurement data can be converted into rectangular coordinates x, y, and z.
  • other mapping rules may also exist, and they are not enumerated here.
  • the position measurement data in the radar measurement data is transformed to the image coordinate system to obtain the corresponding pixel position (u, v).
  • the pixel position can be used to determine whether the corresponding radar data is the radar measurement data of the target reference object.
  • deep learning on images or video can be used for target detection, image segmentation, semantic segmentation, or instance segmentation, and a mathematical representation of the target reference object can be established, for example represented by a bounding box. It can then be determined whether the pixel corresponding to the radar measurement data falls within the pixel range of the target reference object, so as to determine whether the corresponding radar measurement data corresponds to the target reference object.
  • for example, the frame (bounding box) of one of the target reference objects can be represented by the interval described by the following F_1 inequalities:
  • u_min ≤ u ≤ u_max, v_min ≤ v ≤ v_max.
  • if the pixel (u, v) corresponding to the radar measurement data satisfies the above inequalities, the measurement belongs to the data corresponding to the target reference object; otherwise it is not the data corresponding to the target reference object.
  • similarly, the frame of one of the target reference objects can be represented by an interval described by F_2 inequalities.
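The full chain described above, from a polar radar measurement to a pixel and a bounding-box membership test, can be sketched as follows. The intrinsic matrix, the rotation/translation, and the box limits are made-up example values, not calibration data from the patent.

```python
# Sketch of formula 1-1: convert a polar radar detection to Cartesian
# coordinates, project it into the image via z1*[u, v, 1]^T = A*B*[x, y, z, 1]^T,
# and test the resulting pixel against a bounding box (the F_1 inequalities).
import math

def project_radar_point(r, azimuth, A, R, T):
    """Map a radar measurement (range, azimuth) to pixel coordinates (u, v)."""
    x, y, z = r * math.cos(azimuth), r * math.sin(azimuth), 0.0  # polar -> Cartesian
    # Apply extrinsics (rotation R, translation T), then intrinsics A.
    cam = [sum(R[i][j] * p for j, p in enumerate((x, y, z))) + T[i] for i in range(3)]
    pix = [sum(A[i][j] * cam[j] for j in range(3)) for i in range(3)]
    z1 = pix[2]                       # depth-of-field term z1
    return pix[0] / z1, pix[1] / z1

def in_box(u, v, u_min, u_max, v_min, v_max):
    """F_1 inequalities: the pixel belongs to the target reference object's box."""
    return u_min <= u <= u_max and v_min <= v <= v_max

A = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]  # example intrinsics
R = [[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], [1.0, 0.0, 0.0]]        # radar -> camera axes
T = [0.0, 0.0, 0.0]
u, v = project_radar_point(20.0, 0.1, A, R, T)
print(in_box(u, v, 0, 640, 0, 480))  # True for this example point
```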
  • after the above processing, the multiple measurement data measured by the radar and the target reference object sensed by the camera are in the same coordinate space. Therefore, the target reference object can be detected, identified, or segmented based on the image or video, thereby effectively confirming the radar measurement data corresponding to the target reference object.
  • the motion state of the first sensor can be determined, where the motion state at least includes a velocity vector
  • the measurement data of the first sensor includes at least speed information, for example, the speed information is radial speed information. Further, the measurement data may also include azimuth angle and/or pitch angle information or direction cosine information.
  • the velocity vector of the first sensor can be estimated according to the following measurement equation:
  • ṙ_k = h_k·v_s + ṽ_k, 1-5
  • where v_s is the velocity vector of the first sensor; equivalently, the equation can be written in terms of the relative velocity v = v_s − v_T, where v_T is the velocity vector of the target reference object (equation 1-6); since the target reference object is stationary, v_T = 0 and the two forms coincide. The following takes 1-5 as an example; the result can equivalently be obtained based on 1-6, so it is not detailed here.
  • ṙ_k is the k-th radial velocity measurement data and ṽ_k is the corresponding measurement error, with mean 0 and variance σ_k²; its value depends on the performance of the first sensor.
  • for a two-dimensional velocity vector, v_s and h_k can be respectively:
  • v_s = [v_{s,x}, v_{s,y}]^T, h_k = [Λ_x, Λ_y],
  • where v_{s,x} and v_{s,y} are the two components of the velocity vector of the first sensor, and [·]^T represents the transposition of a matrix or vector;
  • Λ_x and Λ_y are direction cosines, which can be directly measured by the first sensor or calculated by the following formulas:
  • Λ_x = cos θ_k, Λ_y = sin θ_k,
  • where θ_k is the azimuth angle.
  • for a three-dimensional velocity vector, v_s and h_k can be respectively:
  • v_s = [v_{s,x}, v_{s,y}, v_{s,z}]^T, h_k = [Λ_x, Λ_y, Λ_z],
  • where v_{s,x}, v_{s,y} and v_{s,z} are the three components of the velocity vector of the first sensor, and [·]^T represents the transposition of a matrix or vector;
  • Λ_x, Λ_y and Λ_z are direction cosines, which can be directly measured by the first sensor or calculated by the following formulas:
  • Λ_x = cos φ_k cos θ_k,
  • Λ_y = cos φ_k sin θ_k,
  • Λ_z = sin φ_k, 1-14
  • where θ_k is the azimuth angle and φ_k is the pitch angle.
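The direction-cosine relations above can be sketched numerically as follows (the velocity value is an assumed example, and the sign convention of the radial velocity is taken as positive along h_k):

```python
# Sketch of the direction cosines from azimuth theta_k and pitch phi_k, and
# the radial-velocity measurement model rdot_k = h_k . v (noise-free).
import math

def direction_cosines(theta, phi=0.0):
    """h_k = [cos(phi)cos(theta), cos(phi)sin(theta), sin(phi)]."""
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi))

def radial_velocity(v, theta, phi=0.0):
    """Predicted radial velocity rdot_k = h_k . v for velocity vector v."""
    return sum(h * c for h, c in zip(direction_cosines(theta, phi), v))

# A sensor-relative velocity of 10 m/s along x gives rdot = 10*cos(theta)
# at zero pitch; at theta = 60 degrees this is 5 m/s.
print(round(radial_velocity((10.0, 0.0, 0.0), math.pi / 3), 3))  # 5.0
```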
  • the motion state of the first sensor can be determined.
  • Several optional implementation schemes are listed below for understanding.
  • the motion state of the first sensor may be obtained based on least squares (Least Squares, LS) estimation and/or sequential block filtering.
  • Solution 1: Obtain the motion state of the first sensor based on least squares (LS) estimation.
  • the least squares estimate of the velocity vector of the first sensor can be obtained based on the first radial velocity vector and its corresponding measurement matrix.
  • the least squares estimate of the velocity vector is:
  • v̂_{s,LS} = (H^T·H + R)^{-1}·H^T·ṙ,
  • where R is a positive semi-definite or positive definite matrix used for regularization, for example R = λ·I, where I is a unit matrix and λ is a non-negative number, for example λ ≥ 0;
  • the first radial velocity vector ṙ = [ṙ_1, …, ṙ_{N_1}]^T is composed of N_1 radial velocity measurement values from the measurement data corresponding to the target reference object; the matrix H is the measurement matrix corresponding to the first radial velocity vector, and N_1 is a positive integer greater than 1.
  • the corresponding measurement error vector is composed of the corresponding radial velocity measurement errors, as mentioned above; correspondingly, the measurement matrix H = [h_1^T, …, h_{N_1}^T]^T is composed of the row vectors h_k.
  • for a two-dimensional velocity vector, the rows of the radial velocity measurement matrix are h_k = [cos θ_k, sin θ_k], where θ_k is the measured azimuth angle, and N_1 ≥ 2.
  • for a three-dimensional velocity vector, the rows of the radial velocity measurement matrix are h_k = [cos φ_k cos θ_k, cos φ_k sin θ_k, sin φ_k], where θ_k is the azimuth measurement value and φ_k is the pitch angle measurement value, and N_1 ≥ 3.
  • the radial velocity measurement matrix in the above measurement equation can also be obtained from the direction cosines; for a 2-dimensional velocity vector, N_1 ≥ 2; for a 3-dimensional velocity vector, N_1 ≥ 3. The components of the direction cosine vectors are as described above and are not repeated here.
  • the above azimuth angles θ_k (and pitch angles φ_k) should be chosen so that the distances between them are as large as possible, in order to obtain a more accurate least squares estimate: choosing angles that are as far apart as possible makes the condition number of the measurement matrix as small as possible, that is, each radial velocity component in the radial velocity vector makes the corresponding column vectors of the measurement matrix as orthogonal to each other as possible.
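The 2-D least squares estimate above can be sketched as follows, taking λ = 0 (no regularization) and solving the 2×2 normal equations directly; the true velocity and the azimuths are synthetic example values:

```python
# Sketch of the LS velocity estimate: rows h_k = [cos(theta_k), sin(theta_k)],
# solve (H^T H) v = H^T rdot for the 2-D case via the normal equations.
import math

def ls_velocity_2d(azimuths, radial_speeds):
    """LS estimate of (v_x, v_y) from N1 >= 2 (azimuth, radial speed) pairs."""
    sxx = sxy = syy = bx = by = 0.0
    for t, rd in zip(azimuths, radial_speeds):
        cx, cy = math.cos(t), math.sin(t)
        sxx += cx * cx; sxy += cx * cy; syy += cy * cy  # entries of H^T H
        bx += cx * rd; by += cy * rd                     # entries of H^T rdot
    det = sxx * syy - sxy * sxy   # nonzero when azimuths are well spread
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# Noise-free synthetic data: true velocity (8, -3) m/s observed at widely
# spread azimuths (large spread keeps the condition number of H small).
az = [0.0, math.pi / 3, 2 * math.pi / 3]
rd = [8.0 * math.cos(t) - 3.0 * math.sin(t) for t in az]
vx, vy = ls_velocity_2d(az, rd)
print(round(vx, 6), round(vy, 6))  # 8.0 -3.0
```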
  • Solution 2 Obtain the motion state of the first sensor based on sequential block filtering:
  • the motion state of the first sensor can be obtained based on sequential block filtering according to the M radial velocity vectors and their corresponding measurement matrices, where each radial velocity vector used by the sequential filtering is composed of K radial velocity measurement data corresponding to the target reference object.
  • the m-th estimation formula of sequential filtering is as follows:
  • v̂_m = v̂_{m-1} + G_m·(ṙ_{m,K} − H_{m,K}·v̂_{m-1}),
  • where G_m is the gain matrix; ṙ_{m,K} is composed of K radial velocity measurement values, and H_{m,K} is composed of the K corresponding radial velocity measurement matrix rows, as described above.
  • the gain matrix may be:
  • G_m = P_{m-1}·H_{m,K}^T·(H_{m,K}·P_{m-1}·H_{m,K}^T + R_{m,K})^{-1},
  • where P_{m-1} is the estimation error covariance matrix of the (m−1)-th step, updated as P_m = (I − G_m·H_{m,K})·P_{m-1};
  • R_{m,K} is the covariance matrix of the radial velocity vector measurement error, for example R_{m,K} = diag(σ_{m,1}², …, σ_{m,K}²);
  • Q is the preset velocity estimation covariance matrix, which can be used as the initial value P_0 = Q.
  • Solution 3 Obtain the motion state of the first sensor based on least squares and sequential block filtering
  • the measurement data of the target reference object corresponding to the first sensor can be divided into two parts: the first part of the data is used to obtain the least squares estimate of the velocity vector of the first sensor; the second part of the data is used to obtain the sequential block filtering estimate of the velocity vector of the first sensor, with the least squares estimate of the velocity vector of the first sensor used as the initial value of the sequential block filtering.
  • v̂_m is the m-th sequential block filtering estimate of the sensor velocity; I_K is the K×K unit matrix.
  • the above ṙ_{m,K} and H_{m,K} can be different for different m; the value of K can be the same or different for different m and can be selected according to the situation. The above sequential filtering estimation can effectively reduce the influence of measurement noise, thereby improving the estimation accuracy for the sensor.
  • alternatively, the movement velocity of the target reference object relative to the sensor can be obtained first, and the movement velocity of the sensor can then be obtained from the relationship between the two: since the target reference object is stationary, the sensor velocity is the negative of that relative velocity; ṙ_{m,K}, H_{m,K}, and P_{m-1} are as described above.
  • for example, for K = 2, the m-th radial velocity vector is expressed as ṙ_{m,2} = [ṙ_{m,1}, ṙ_{m,2}]^T, where ṙ_{m,1} and ṙ_{m,2} are respectively the first and second radial velocity measurement data in the m-th group of measurement data of the target reference object; the corresponding measurement matrix is composed of the corresponding rows h_{m,1} and h_{m,2}.
  • in general, the m-th radial velocity vector is expressed as ṙ_{m,K} = [ṙ_{m,1}, …, ṙ_{m,K}]^T, where ṙ_{m,i} is the i-th radial velocity measurement data in the m-th group of measurement data of the target reference object; the corresponding measurement matrix is composed of the corresponding rows h_{m,i}, i = 1, …, K.
  • the selection of M sets of measurement data should make the condition number of the measurement matrix corresponding to each set of measurement data as small as possible;
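The sequential update above can be sketched as follows, specialized to a 2-D velocity with K = 1 measurement per block to keep the linear algebra explicit; the initial covariance Q, the measurement variance, and the synthetic measurements are assumed example values:

```python
# Sketch of sequential filtering: v_m = v_{m-1} + G_m (rdot - h v_{m-1}) with
# gain G_m = P h^T / (h P h^T + var) and update P_m = (I - G_m h) P_{m-1}.
import math

def sequential_update(v, P, theta, rdot, var):
    """One sequential step with a single radial-velocity measurement (K = 1)."""
    h = (math.cos(theta), math.sin(theta))
    Ph = (P[0][0] * h[0] + P[0][1] * h[1], P[1][0] * h[0] + P[1][1] * h[1])
    s = h[0] * Ph[0] + h[1] * Ph[1] + var          # innovation variance
    G = (Ph[0] / s, Ph[1] / s)                     # gain vector G_m
    innov = rdot - (h[0] * v[0] + h[1] * v[1])     # rdot - h v_{m-1}
    v = (v[0] + G[0] * innov, v[1] + G[1] * innov)
    # Covariance update P_m = (I - G h) P_{m-1}, written out for the 2x2 case.
    P = [[(1 - G[0] * h[0]) * P[0][0] - G[0] * h[1] * P[1][0],
          (1 - G[0] * h[0]) * P[0][1] - G[0] * h[1] * P[1][1]],
         [-G[1] * h[0] * P[0][0] + (1 - G[1] * h[1]) * P[1][0],
          -G[1] * h[0] * P[0][1] + (1 - G[1] * h[1]) * P[1][1]]]
    return v, P

# Start from the preset covariance Q = 100*I and filter noise-free
# measurements of the true velocity (8, -3) m/s at spread azimuths.
v, P = (0.0, 0.0), [[100.0, 0.0], [0.0, 100.0]]
for t in [0.0, math.pi / 3, 2 * math.pi / 3, math.pi / 2]:
    rdot = 8.0 * math.cos(t) - 3.0 * math.sin(t)
    v, P = sequential_update(v, P, t, rdot, 0.01)
print(round(v[0], 2), round(v[1], 2))  # converges close to 8.0 -3.0
```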
  • the motion state of the first sensor may further include the position of the first sensor in addition to the velocity vector of the first sensor.
  • the position of the first sensor can be obtained according to the movement speed and the time interval by using a designated time starting point as a reference.
  • the aforementioned motion speed estimate of the first sensor may be provided as the motion speed estimate of other sensors.
  • for example, the other sensor is a sensor located on the same platform as the first sensor, such as a camera, vision sensor, or imaging sensor installed in the same vehicle as the radar/sonar/ultrasonic sensor; an effective speed estimate is thereby provided for the other sensors.
  • the movement state of the target object may be compensated according to the movement state of the first sensor to obtain the movement state of the target object relative to the geodetic coordinate system.
  • the target object may be a detection vehicle, an obstacle, a person, an animal, or other objects.
  • in the corresponding figure, the lower left picture is the motion state (such as position) of the first sensor obtained above; the right picture is the motion state (such as position) of the target object detected by the detection device; the upper left picture is the motion state (position) of the target object relative to the geodetic coordinate system, obtained by compensating the motion state of the target object detected by the detection device according to the motion state of the first sensor.
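The compensation step above can be sketched as follows (plain tuples stand in for the motion states; the numeric values are assumed examples, and an additive velocity composition is assumed):

```python
# Sketch of compensating a target's sensor-relative motion by the sensor's own
# motion to recover the target's motion in the geodetic coordinate system.
def compensate(relative_velocity, sensor_velocity):
    """Geodetic-frame velocity = velocity relative to sensor + sensor velocity."""
    return tuple(r + s for r, s in zip(relative_velocity, sensor_velocity))

# A target measured at (-2, 0) m/s relative to a sensor moving at (15, 0) m/s
# is actually moving at (13, 0) m/s over the ground.
print(compensate((-2.0, 0.0), (15.0, 0.0)))  # (13.0, 0.0)
```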
  • the above description concerns measuring the motion state of the first sensor. Since the motion state of other devices on the platform is the same as or similar to the motion state of the first sensor, the above estimation of the motion state of the first sensor can equivalently be taken as an estimate of the motion state of those other devices. Therefore, a solution that estimates the motion state of another device using the foregoing principle also falls within the protection scope of the embodiments of the present invention.
  • in the embodiment of the present application, a plurality of measurement data is obtained by the first sensor, each of which includes at least speed measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor includes measurement information about the speed of that relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object has a varied spatial distribution relative to the sensor, and in particular different geometric relationships with the first sensor, so that the speed measurement data give different measurement equations, which in particular reduces the condition number of the measurement matrix in the measurement equation; moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • in this way, the measurement data corresponding to the target reference object, in particular the geometric relationship and the quantity of the target reference object relative to the sensor, can be used effectively, and the influence of measurement error or interference can be effectively reduced, so that this method of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor using a single frame of data, so good real-time performance can be obtained. Further, the use of LS estimation and/or sequential filtering estimation can further improve the estimation accuracy of the motion state (such as speed) of the first sensor.
  • Figure 8 is a schematic structural diagram of a motion state estimation device 80 provided by an embodiment of the present invention.
  • the device 80 may be a sensor system, a fusion sensing system, or a planning/control system integrating the foregoing systems, such as an assisted driving or automatic driving system, and can be software or hardware.
  • the device can be installed or integrated on equipment such as vehicles, ships, airplanes, or drones, or can be installed or connected to the cloud.
  • the device may include an obtaining unit 801 and an estimation unit 802, where:
  • the obtaining unit 801 is configured to obtain a plurality of measurement data through the first sensor, where each measurement data includes at least speed measurement information;
  • the estimating unit 802 is configured to obtain the motion state of the first sensor according to the measurement data corresponding to the target reference object among the multiple measurement data, where the motion state at least includes the velocity vector of the first sensor.
  • the target reference object is an object that is stationary relative to the reference system.
  • the reference system may be the earth or earth coordinate system or an inertial coordinate system relative to the earth.
  • further, each measurement data includes at least speed measurement information; before the obtaining of the motion state of the first sensor according to the measurement data corresponding to the target reference object among the multiple measurement data, the method further includes:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data.
  • the characteristics of the target reference object include geometric characteristics and/or reflection characteristics of the target reference object.
  • further, each measurement data includes at least speed measurement information; before the obtaining of the motion state of the first sensor according to the measurement data corresponding to the target reference object among the multiple measurement data, the method further includes:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data of the first sensor.
  • the determining measurement data corresponding to the target reference object from a plurality of measurement data of the first sensor according to the data of the second sensor includes:
  • mapping the measurement data of the first sensor and the measurement data of the second sensor to a common space, and determining, in that space, the measurement data of the first sensor corresponding to the target reference object.
  • the obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object in the multiple measurement data includes:
  • a least squares (LS) estimation and/or sequential block filtering method is adopted to obtain the motion state of the first sensor. It can be understood that the use of LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as speed) of the first sensor.
  • the obtaining of the motion state of the first sensor by the method of least squares (LS) estimation and/or sequential block filtering, according to the measurement data corresponding to the target reference object among the plurality of measurement data, includes:
  • performing sequential filtering according to the M radial velocity vectors corresponding to the target reference object and the corresponding measurement matrices to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurement values in the measurement data corresponding to the target reference object among the plurality of measurement data, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
  • θ_{m,i} is the i-th azimuth measurement data in the m-th group of measurement data of the target reference object;
  • φ_{m,i} is the i-th pitch angle measurement data in the m-th group of measurement data of the target reference object;
  • the sequential filtering formula is as follows:
  • G m is the gain matrix
  • each unit may also correspond to the corresponding description of the method embodiment shown in FIG. 3.
  • FIG. 9 is a schematic structural diagram of a motion state estimation device 90 provided by an embodiment of the present invention.
  • the device 90 includes a processor 901, a memory 902, and a first sensor 903.
  • the processor 901, the memory 902, and the first sensor 903 are connected to each other through the bus 904.
  • the memory 902 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM); the memory 902 is used to store related program instructions and data.
  • the first sensor 903 is used to collect measurement data.
  • the processor 901 may be one or more central processing units (CPUs); when the processor 901 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
  • the processor 901 in the apparatus 90 is configured to read the program instructions stored in the memory 902 and perform the following operations:
  • obtaining a plurality of measurement data through the first sensor, where each measurement data includes at least velocity measurement information;
  • obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the plurality of measurement data, where the motion state includes at least the velocity vector of the first sensor.
  • a plurality of measurement data is obtained through the first sensor, and the motion state is obtained from the measurement data therein that corresponds to the target reference object, the measurement data including at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of the relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can thus be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved.
  • the target reference object is an object that is stationary relative to the reference system.
  • the reference system may be the earth or the earth coordinate system, or an inertial coordinate system relative to the earth.
  • the processor 901 is further configured to:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data.
  • the characteristics of the target reference object include geometric characteristics and/or reflection characteristics of the target reference object.
  • the processor 901 is further configured to:
  • the measurement data corresponding to the target reference object is determined from the plurality of measurement data of the first sensor.
  • the determining measurement data corresponding to the target reference object from a plurality of measurement data of the first sensor according to the data of the second sensor is specifically:
  • mapping the measurement data of the first sensor to the space of the measurement data of the second sensor, or mapping the measurement data of the second sensor to the space of the measurement data of the first sensor, or mapping both to a common space; and determining, through the space and according to the target reference object determined from the measurement data of the second sensor, the measurement data corresponding to the target reference object in the first sensor.
  • the obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the multiple measurement data is specifically:
  • applying least squares (LS) estimation and/or sequential block filtering to obtain the motion state of the first sensor. It can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
  • the obtaining the motion state of the first sensor by least squares (LS) estimation and/or sequential block filtering according to the measurement data corresponding to the target reference object among the plurality of measurement data is specifically:
  • performing sequential filtering according to the M radial velocity vectors corresponding to the target reference object and the corresponding measurement matrices to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurement values in the measurement data corresponding to the target reference object among the plurality of measurement data, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
  • ⁇ m,i is the i-th azimuth angle measurement data in the m-th group of measurement data of the target reference object
  • the sequential filtering formula is as follows:
  • G m is the gain matrix
  • the implementation of each operation may also correspond to the corresponding description of the method embodiment shown in FIG. 3.
  • a plurality of measurement data is obtained through the first sensor, and the measurement data corresponding to the target reference object includes at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of the relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can thus be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
  • An embodiment of the present invention also provides a chip system, the chip system includes at least one processor, a memory, and an interface circuit.
  • the memory, the interface circuit, and the at least one processor are interconnected through wires, and program instructions are stored in the at least one memory; when the program instructions are executed by the processor, the method flow shown in FIG. 3 is implemented.
  • the embodiment of the present invention also provides a computer-readable storage medium that stores instructions; when the instructions run on a processor, the method flow shown in FIG. 3 is implemented.
  • the embodiment of the present invention also provides a computer program product, when the computer program product runs on a processor, the method flow shown in FIG. 3 is implemented.
  • a plurality of measurement data is obtained through the first sensor, and the motion state is obtained according to the measurement data corresponding to the target reference object, the measurement data including at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of the relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object.
  • the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation.
  • the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can thus be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
  • the process can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments.
  • the aforementioned storage media include media that can store program code, such as ROM, RAM, magnetic disks, or optical discs.


Abstract

A motion state estimation method and apparatus, relating to the fields of wireless communication and autonomous/intelligent driving. The method includes: S301: obtaining a plurality of measurement data through a first sensor, where each measurement data includes at least velocity measurement information; S302: obtaining the motion state of the first sensor according to the measurement data corresponding to a target reference object among the plurality of measurement data, the motion state including at least the velocity vector of the first sensor. With this method, a more accurate motion state of the first sensor can be obtained, further improving the capability of automated driving or advanced driver-assistance systems (ADAS), and the method can be applied to the Internet of Vehicles, for example V2X (vehicle-to-everything), LTE-V (long-term evolution for vehicles), and V2V (vehicle-to-vehicle).

Description

Motion state estimation method and apparatus
This application claims priority to Chinese Patent Application No. 201910503710.9, filed with the China National Intellectual Property Administration on June 6, 2019 and entitled "Motion state estimation method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the technical field of the Internet of Vehicles, and in particular to a method and apparatus for estimating the motion state of a sensor.
Background
An advanced driving assistance system (ADAS) or autonomous driving (AD) system is usually equipped with a variety of sensors, such as radar sensors, ultrasonic sensors, and vision sensors, to perceive the surrounding environment and target information. Using the information acquired by these sensors, functions such as classification, recognition, and tracking of the surrounding environment and objects can be implemented; further, situation assessment of the surrounding environment and planning control can be implemented based on this information. For example, the track information of a tracked target can be used as an input to vehicle planning and control, improving the efficiency and safety of vehicle planning and control. The platform carrying these sensors may be a vehicle-mounted, ship-borne, airborne, or satellite-borne system, and the motion of the sensor platform affects the implementation of the above classification, recognition, and tracking functions. Specifically, taking a vehicle-mounted system as an example, a sensor moves with the vehicle on which it is installed, and a target in the sensor's field of view (such as a target vehicle) also moves; after the two motion states are superimposed, the motion of the target observed from the sensor is irregular. Taking a radar, sonar, or ultrasonic sensor as an example, in the scenario shown in FIG. 1, vehicle a is equipped with a radar, sonar, or ultrasonic sensor for measuring the position and velocity of a target vehicle; vehicle b is the target vehicle; vehicle a drives straight ahead while vehicle b turns right. As can be seen from FIG. 2, the track of vehicle b observed by the sensor on vehicle a is irregular. Therefore, estimating the motion state of the sensor and compensating for the influence of the sensor's motion can effectively improve the accuracy of the target track.
Ways of obtaining the motion state of a sensor include: (1) Positioning with a global navigation satellite system (GNSS), for example Global Positioning System (GPS) satellites. By measuring the distances from multiple satellites to the receiver of the ego vehicle, the specific position of the vehicle can be calculated, and the motion state of the vehicle can be obtained from its positions at multiple consecutive instants. However, the accuracy of civil GNSS is low, generally on the order of meters, so large errors usually exist. (2) An inertial measurement unit (IMU) can measure the three-axis attitude angles and the acceleration of the ego vehicle, and the motion state of the vehicle is estimated from the measured acceleration and attitude angles. However, an IMU suffers from error accumulation and is susceptible to electromagnetic interference. It can be seen that the motion state of the ego vehicle measured in the prior art has large errors, and how to obtain a more accurate sensor motion state is a technical problem being studied by those skilled in the art.
Summary
Embodiments of the present invention disclose a motion state estimation method and apparatus, which can obtain a more accurate motion state of a first sensor.
According to a first aspect, an embodiment of this application provides a motion state estimation method, including:
obtaining a plurality of measurement data through a first sensor, where each of the measurement data includes at least velocity measurement information;
obtaining the motion state of the first sensor according to the measurement data corresponding to a target reference object among the plurality of measurement data, where the motion state includes at least the velocity vector of the first sensor.
In the above method, a plurality of measurement data is obtained through the first sensor, and the motion state is obtained from the measurement data therein that corresponds to the target reference object, the measurement data including at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation. Therefore, with the method of the present invention, the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved.
With reference to the first aspect, in a first possible implementation of the first aspect, the target reference object is an object that is stationary relative to a reference system.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a second possible implementation of the first aspect, after the obtaining a plurality of measurement data through a first sensor, where each of the measurement data includes at least velocity measurement information, and before the obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the plurality of measurement data, the method further includes:
determining, according to a feature of the target reference object, the measurement data corresponding to the target reference object from the plurality of measurement data.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a third possible implementation of the first aspect, the feature of the target reference object includes a geometric feature and/or a reflection feature of the target reference object.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a fourth possible implementation of the first aspect, after the obtaining a plurality of measurement data through a first sensor, and before the obtaining the motion state of the first sensor, the method further includes:
determining, according to data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a fifth possible implementation of the first aspect, the determining, according to data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor includes:
mapping the measurement data of the first sensor to the space of the measurement data of the second sensor, or
mapping the measurement data of the second sensor to the space of the measurement data of the first sensor, or
mapping the measurement data of the first sensor and the measurement data of the second sensor to a common space, and
determining, through the space and according to the target reference object determined from the measurement data of the second sensor, the measurement data corresponding to the target reference object in the first sensor.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a sixth possible implementation of the first aspect, the obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the plurality of measurement data includes:
obtaining the motion state of the first sensor by least squares (LS) estimation and/or sequential block filtering based on the measurement data corresponding to the target reference object among the plurality of measurement data. It can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a seventh possible implementation of the first aspect, the obtaining the motion state of the first sensor by least squares (LS) estimation and/or sequential block filtering according to the measurement data corresponding to the target reference object among the plurality of measurement data includes:
performing sequential filtering according to M radial velocity vectors corresponding to the target reference object and their corresponding measurement matrices to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurement values from the measurement data corresponding to the target reference object among the plurality of measurement data, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in an eighth possible implementation of the first aspect:
the velocity vector of the first sensor is a two-dimensional vector, K = 2, and the measurement matrix is

H_{m,2} = \begin{bmatrix} \cos\theta_{m,1} & \sin\theta_{m,1} \\ \cos\theta_{m,2} & \sin\theta_{m,2} \end{bmatrix},

where θ_{m,i} is the i-th azimuth measurement in the m-th group of measurement data of the target reference object, i = 1, 2;
or, the velocity vector of the first sensor is a three-dimensional vector, K = 3, and the measurement matrix is

H_{m,3} = \begin{bmatrix} \cos\varphi_{m,1}\cos\theta_{m,1} & \cos\varphi_{m,1}\sin\theta_{m,1} & \sin\varphi_{m,1} \\ \cos\varphi_{m,2}\cos\theta_{m,2} & \cos\varphi_{m,2}\sin\theta_{m,2} & \sin\varphi_{m,2} \\ \cos\varphi_{m,3}\cos\theta_{m,3} & \cos\varphi_{m,3}\sin\theta_{m,3} & \sin\varphi_{m,3} \end{bmatrix},

where θ_{m,i} is the i-th azimuth measurement and φ_{m,i} is the i-th pitch-angle measurement in the m-th group of measurement data of the target reference object, i = 1, 2, 3.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a ninth possible implementation of the first aspect, the sequential filtering formulas are as follows:

\hat{v}_m = \hat{v}_{m-1} + G_m\left(\tilde{v}_{r,m,K} - H_{m,K}\hat{v}_{m-1}\right)
G_m = P_{m,1|0}H_{m,K}^{\mathrm T}\left(H_{m,K}P_{m,1|0}H_{m,K}^{\mathrm T} + R_{m,K}\right)^{-1}
P_{m,1|0} = P_{m-1,1|1}
P_{m,1|1} = \left(I - G_{m-1}H_{m-1,K}\right)P_{m,1|0}

where \hat{v}_m is the velocity vector estimate of the m-th filtering step, G_m is the gain matrix, \tilde{v}_{r,m,K} is the m-th radial velocity vector measurement, and R_{m,K} is the m-th radial velocity vector measurement error covariance matrix, m = 1, 2, …, M.
According to a second aspect, an embodiment of this application provides a motion state estimation apparatus, including a processor, a memory, and a first sensor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions to perform the following operations:
obtaining a plurality of measurement data through the first sensor, where each of the measurement data includes at least velocity measurement information;
obtaining the motion state of the first sensor according to the measurement data corresponding to a target reference object among the plurality of measurement data, where the motion state includes at least the velocity vector of the first sensor.
In the above apparatus, a plurality of measurement data is obtained through the first sensor, and the motion state is obtained from the measurement data therein that corresponds to the target reference object, the measurement data including at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation. Therefore, the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the apparatus can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved.
With reference to the second aspect, in a first possible implementation of the second aspect, the target reference object is an object that is stationary relative to a reference system.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a second possible implementation of the second aspect, after the obtaining a plurality of measurement data through the first sensor, where each of the measurement data includes at least velocity measurement information, and before the obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the plurality of measurement data, the processor is further configured to:
determine, according to a feature of the target reference object, the measurement data corresponding to the target reference object from the plurality of measurement data.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a third possible implementation of the second aspect, the feature of the target reference object includes a geometric feature and/or a reflection feature of the target reference object.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a fourth possible implementation of the second aspect, after the obtaining a plurality of measurement data through the first sensor, and before the obtaining the motion state of the first sensor, the processor is further configured to:
determine, according to data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a fifth possible implementation of the second aspect, the determining, according to data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor is specifically:
mapping the measurement data of the first sensor to the space of the measurement data of the second sensor, or
mapping the measurement data of the second sensor to the space of the measurement data of the first sensor, or
mapping the measurement data of the first sensor and the measurement data of the second sensor to a common space, and
determining, through the space and according to the target reference object determined from the measurement data of the second sensor, the measurement data corresponding to the target reference object in the first sensor.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a sixth possible implementation of the second aspect, the obtaining the motion state of the first sensor according to the measurement data corresponding to the target reference object among the plurality of measurement data is specifically:
obtaining the motion state of the first sensor by least squares (LS) estimation and/or sequential block filtering based on the measurement data corresponding to the target reference object among the plurality of measurement data. It can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a seventh possible implementation of the second aspect, the obtaining the motion state of the first sensor by least squares (LS) estimation and/or sequential block filtering according to the measurement data corresponding to the target reference object among the plurality of measurement data is specifically:
performing sequential filtering according to M radial velocity vectors corresponding to the target reference object and their corresponding measurement matrices to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurement values from the measurement data corresponding to the target reference object among the plurality of measurement data, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in an eighth possible implementation of the second aspect:
the velocity vector of the first sensor is a two-dimensional vector, K = 2, and the measurement matrix is

H_{m,2} = \begin{bmatrix} \cos\theta_{m,1} & \sin\theta_{m,1} \\ \cos\theta_{m,2} & \sin\theta_{m,2} \end{bmatrix},

where θ_{m,i} is the i-th azimuth measurement in the m-th group of measurement data of the target reference object, i = 1, 2;
or, the velocity vector of the first sensor is a three-dimensional vector, K = 3, and the measurement matrix is

H_{m,3} = \begin{bmatrix} \cos\varphi_{m,1}\cos\theta_{m,1} & \cos\varphi_{m,1}\sin\theta_{m,1} & \sin\varphi_{m,1} \\ \cos\varphi_{m,2}\cos\theta_{m,2} & \cos\varphi_{m,2}\sin\theta_{m,2} & \sin\varphi_{m,2} \\ \cos\varphi_{m,3}\cos\theta_{m,3} & \cos\varphi_{m,3}\sin\theta_{m,3} & \sin\varphi_{m,3} \end{bmatrix},

where θ_{m,i} is the i-th azimuth measurement and φ_{m,i} is the i-th pitch-angle measurement in the m-th group of measurement data of the target reference object, i = 1, 2, 3.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a ninth possible implementation of the second aspect, the sequential filtering formulas are as follows:

\hat{v}_m = \hat{v}_{m-1} + G_m\left(\tilde{v}_{r,m,K} - H_{m,K}\hat{v}_{m-1}\right)
G_m = P_{m,1|0}H_{m,K}^{\mathrm T}\left(H_{m,K}P_{m,1|0}H_{m,K}^{\mathrm T} + R_{m,K}\right)^{-1}
P_{m,1|0} = P_{m-1,1|1}
P_{m,1|1} = \left(I - G_{m-1}H_{m-1,K}\right)P_{m,1|0}

where \hat{v}_m is the velocity vector estimate of the m-th filtering step, G_m is the gain matrix, \tilde{v}_{r,m,K} is the m-th radial velocity vector measurement, and R_{m,K} is the m-th radial velocity vector measurement error covariance matrix, m = 1, 2, …, M.
According to a third aspect, an embodiment of this application provides a motion state estimation apparatus, including some or all of the units configured to perform the method described in the first aspect or any possible implementation of the first aspect.
By implementing the embodiments of the present invention, a plurality of measurement data is obtained through the first sensor, and the motion state is obtained from the measurement data therein that corresponds to the target reference object, the measurement data including at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation. Therefore, the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
Brief Description of the Drawings
The following describes the accompanying drawings used in the embodiments of the present invention.
FIG. 1 is a schematic diagram of a vehicle motion scenario in the prior art;
FIG. 2 is a schematic diagram of the motion state of a target object detected by a radar in the prior art;
FIG. 3 is a schematic flowchart of a motion state estimation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the spatial distribution of measurement data detected by a radar according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a picture captured by a camera according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a scenario in which a target reference object is mapped from pixel coordinates to radar coordinates according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a scenario in which the motion state of a detected target is compensated based on the motion state of the radar according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a motion state estimation apparatus according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of another motion state estimation apparatus according to an embodiment of this application.
Detailed Description
The following describes the embodiments of the present invention with reference to the accompanying drawings of the embodiments of the present invention.
Referring to FIG. 3, FIG. 3 shows a motion state estimation method according to an embodiment of the present invention. The execution body of the method may be a sensor system, a fusion perception system, or a planning/control system integrating such systems, such as a driver-assistance or autonomous driving system; it may be in the form of software or hardware (for example, a motion state estimation apparatus connected to or integrated with the corresponding sensor wirelessly or by wire). The following execution steps may be implemented in a centralized or distributed manner.
The method includes but is not limited to the following steps:
Step S301: Obtain a plurality of measurement data through a first sensor, where each of the measurement data includes at least velocity measurement information.
Specifically, the first sensor may be a radar, sonar, or ultrasonic sensor, or a direction-finding sensor capable of measuring a frequency shift; such a sensor obtains radial velocity information by measuring the frequency shift of the received signal relative to a known frequency. The first sensor may be vehicle-mounted, ship-borne, airborne, or satellite-borne, for example a sensor for perceiving the environment or targets on a system such as a vehicle, ship, aircraft, or unmanned aerial vehicle. For example, in driver-assistance or autonomous driving scenarios, a vehicle is usually equipped with one or more of the above sensors for safe and reliable driving, to measure the state (including the motion state) of the surrounding environment or objects, and the processing results of the measurement data serve as a reference for planning and control.
It should be further noted that the first sensor here may physically consist of one or more physical sensors; for example, each physical sensor may separately measure the azimuth angle, the pitch angle, and the radial velocity, or the azimuth angle, pitch angle, and radial velocity may be derived from the measurement data of multiple physical sensors, which is not limited here.
The measurement data includes at least velocity measurement information. The velocity measurement information may be radial velocity measurement information, for example the radial velocity of surrounding objects or targets relative to the sensor. The measurement data may further include angle measurement information, such as azimuth and/or pitch-angle measurements of a target relative to the sensor, and may also include distance measurement information of the target relative to the sensor. In addition, the measurement data may include direction cosine information of surrounding objects or targets relative to the sensor. The above measurement information may also be obtained by transforming the raw sensor measurements; for example, the direction cosine information can be obtained from the azimuth and/or pitch angle of the target relative to the sensor, or from the Cartesian position and distance measurements of the target.
In the embodiments of this application, taking a radar or sonar sensor as an example, the sensor may transmit signals periodically or aperiodically and obtain measurement data from the received echo signals. For example, the transmitted signal may be a chirp (linear frequency-modulated) signal; the distance of a target can be obtained from the time delay of the echo, the radial velocity between the target and the sensor can be obtained from the phase differences among multiple echo signals, and the angle of the target relative to the sensor, such as the azimuth and/or pitch angle, can be obtained through the geometry of the sensor's multiple transmit and/or receive antenna arrays. It can be understood that, because of the diversity of surrounding objects or targets, the sensor can obtain multiple measurement data for subsequent use. FIG. 4 illustrates the spatial distribution of multiple measurement data obtained by a radar sensor in one frame; the position of each measurement datum is the position corresponding to the position information (distance and azimuth) contained in that measurement point.
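As a small downstream illustration, polar measurements of the kind just described, distance, azimuth, and radial velocity, can be converted to Cartesian positions to produce a spatial plot like FIG. 4. This is a generic sketch, not part of the disclosed method; the tuple layout and function name are assumptions.

```python
import math

def to_cartesian(detections):
    """Convert radar detections given as (range, azimuth, radial_speed)
    tuples into (x, y, radial_speed) points for a spatial plot."""
    return [(r * math.cos(theta), r * math.sin(theta), vr)
            for r, theta, vr in detections]

# Two detections: one straight ahead at 10 m, one at 90 degrees at 5 m.
points = to_cartesian([(10.0, 0.0, -3.2), (5.0, math.pi / 2, -1.1)])
```

Each resulting point carries its radial-speed measurement along, so stationary-object filtering can be applied afterwards.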
Step S302: Obtain the motion state of the first sensor according to the measurement data corresponding to a target reference object among the plurality of measurement data, where the motion state includes at least the velocity vector of the first sensor.
The target reference object may be an object or target that is stationary relative to a reference system. Taking a vehicle-mounted or UAV-borne sensor as an example, the reference system may be the geodetic coordinate system, or an inertial coordinate system moving at constant velocity relative to the earth, and the target reference object may be an object in the surrounding environment, such as a guardrail, road edge, lamp pole, or building. For a ship-borne sensor, the target reference object may be a buoy, a lighthouse, or a structure on the shore or an island. For a satellite-borne sensor, the target reference object may be a reference object that is stationary or moves at constant velocity relative to a star or satellite, such as a spacecraft.
In a first optional solution, the measurement data corresponding to the target reference object can be obtained from the plurality of measurement data according to a feature of the target reference object.
The feature of the target reference object may be a geometric feature, such as a curve feature like a straight line, circular arc, or clothoid (Euler spiral), or a reflection feature such as the radar cross-section (RCS).
Taking the radar measurement data of FIG. 4 as an example, the radar measurements include distance, azimuth, and radial velocity information. The target reference object is the guardrail or road edge shown in FIG. 5, which has obvious geometric features: its data form a straight line or a clothoid. Feature recognition techniques such as the Hough transform can separate the data of the target reference object from the plurality of measurement data.
Taking the Hough transform identifying a target reference object with straight-line geometry as an example, the process of extracting the road edge/guardrail is as follows:
Transform the multiple radar distance and azimuth measurements into the Hough transform space, for example based on the following formula:

\rho_i = r_k \cos\left(\theta_k - \varphi_i\right),

where r_k and θ_k are the k-th distance and azimuth measured by the radar, and φ_i and ρ_i are the Hough transform space parameters. Different values of φ_i yield different values of ρ_i; typically φ_i takes discrete values between 0 and π. In addition, it should be noted that ρ_i is usually obtained by quantizing r_k cos(θ_k − φ_i).
For multiple different radar measurements r_k and θ_k, the counts or weights of the corresponding parameter pairs φ_i and ρ_i can be accumulated.
The parameters corresponding to one or more peaks are then obtained in the Hough transform space; that is, using the counts or weights of the different parameters φ_i and ρ_i in the Hough transform space, the parameter pairs (φ_{i*}, ρ_{i*}) corresponding to one or more count or weight peaks can be obtained, where i* is an integer index.
The measurement data corresponding to the target reference object are obtained from the parameters of the one or more peaks, for example the measurements that satisfy or approximately satisfy the equation

r_k \cos\left(\theta_k - \varphi_{i^*}\right) = \rho_{i^*},

or the inequality

\left| r_k \cos\left(\theta_k - \varphi_{i^*}\right) - \rho_{i^*} \right| \le T_\rho,

where T_ρ is a threshold that can be derived from the distance or azimuth resolution, or from the quantization intervals or resolutions of the parameters φ_i and ρ_i.
The Hough transform can also identify target reference objects with other geometric features, such as circular arcs and clothoids, which are not enumerated here.
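The accumulation-and-peak-picking procedure above can be sketched as follows. This is a minimal illustration in pure Python; the grid parameters (`n_phi`, `rho_step`, `min_votes`) are assumptions chosen for the example, not values from the disclosure.

```python
import math
from collections import Counter

def hough_lines(measurements, n_phi=180, rho_step=0.5, min_votes=3):
    """Accumulate radar (range, azimuth) measurements in (phi, rho) space
    using rho = r * cos(theta - phi); a peak cell corresponds to a straight
    feature such as a guardrail. Returns the indices of measurements that
    voted for any cell reaching min_votes."""
    phis = [math.pi * i / n_phi for i in range(n_phi)]
    votes = Counter()
    cells = {}  # (i_phi, i_rho) -> indices of contributing measurements
    for k, (r, theta) in enumerate(measurements):
        for i, phi in enumerate(phis):
            i_rho = round(r * math.cos(theta - phi) / rho_step)
            votes[(i, i_rho)] += 1
            cells.setdefault((i, i_rho), []).append(k)
    picked = set()
    for cell, n in votes.items():
        if n >= min_votes:
            picked.update(cells[cell])
    return picked

# Four collinear points on the line y = 2 (phi = pi/2, rho = 2) plus an outlier.
pts = [(math.hypot(x, 2.0), math.atan2(2.0, x)) for x in (1.0, 3.0, 5.0, 7.0)]
pts.append((1.0, 0.0))  # outlier on the x axis
inliers = hough_lines(pts)
```

The four collinear detections share the peak cell near (φ, ρ) = (π/2, 2) and are returned as guardrail candidates, while the isolated outlier never reaches the vote threshold.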
In a second optional solution, the measurement data corresponding to the target reference object may be obtained from the plurality of measurement data of the first sensor according to the data of a second sensor.
Specifically, the second sensor may be a vision sensor such as a camera or camera sensor, or an imaging sensor such as an infrared sensor or a lidar sensor.
The second sensor can measure the target reference object within the detection range of the first sensor, including the surrounding environment, objects, or targets.
Specifically, the second sensor may be installed on the same platform as the first sensor, with data transmitted on that platform; or it may be installed on a different platform and exchange measurement data through a communication pipeline, for example installed at the roadside or on another vehicle-mounted or airborne system, sending or receiving measurement data or other auxiliary information, such as transformation parameters, through the cloud. Taking a camera or camera module as the second sensor as an example, the camera or camera module can capture images or video within the detection range of the radar, sonar, or ultrasonic sensor, covering part or all of that range; the image may be a single frame or multiple frames. FIG. 5 shows a picture displayed in a video image captured by a camera within the detection range of the radar sensor in an embodiment of this application.
The target reference object can be determined according to the measurement data of the second sensor; for example, the target reference object may be an object stationary relative to the reference system.
Optionally, the reference system may be the earth, as described above.
Optionally, the target reference object may be recognized based on traditional classification or recognition methods or machine learning methods, such as parameter regression, support vector machines, or image segmentation; it may also be recognized in the measurement data of the second sensor, such as video or images, based on artificial intelligence (AI) techniques such as deep learning, for example deep neural networks.
Optionally, one or more scene elements may be designated as target reference objects according to the application scenario of the sensor. For example, one or more of road edges, roadside signs, trees, and buildings are designated as target reference objects, and the pixel features of these target reference objects are stored in advance; pixel features identical or similar to the stored ones are searched for in the measurement data of the second sensor, such as images or video, and if found, the target reference object is considered present in the image or video, and its position in the image or video is determined. In short, the target reference object in the image can be found by storing features of the target reference object (including but not limited to pixel features) and then performing feature matching.
The obtaining, according to the measurement data of the second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor may include:
mapping the measurement data of the first sensor to the space of the measurement data of the second sensor, or
mapping the measurement data of the second sensor to the space of the measurement data of the first sensor, or
mapping the measurement data of the first sensor and the measurement data of the second sensor to a common space; and
determining, through the space and according to the target reference object determined from the measurement data of the second sensor, the measurement data corresponding to the target reference object in the first sensor.
Optionally, the space of the measurement data of the first sensor may be a space referenced to the coordinate system of the first sensor, and the space of the measurement data of the second sensor may be a space referenced to the coordinate system of the second sensor.
The common space may be a space referenced to the coordinate system of the platform on which the two sensors are located, for example a vehicle, ship, or aircraft coordinate system; it may also be the geodetic coordinate system or a coordinate system referenced to a star, planet, or satellite. Optionally, for mapping the measurement data of the first and second sensors to a common space, taking the vehicle coordinate system as an example, the installation positions of the first sensor (such as the radar) and the second sensor (such as the camera) in the vehicle coordinate system can be calibrated in advance, and the measurement data of both sensors are then mapped into the vehicle coordinate system.
The motion state of the sensor can be determined from the measurement data of the first sensor that correspond to the target reference object.
It should be noted that if the target reference object is an object stationary relative to the geodetic coordinate system, then, because the sensor platform is moving, the target reference object measured by the sensor is moving rather than stationary relative to the sensor platform or the sensor. It can be understood that after the measurement data of the target reference object are separated out, the motion state of the target reference object, or equivalently the motion state of the sensor, can be obtained based on those measurement data, as described below.
The following takes the first sensor being a radar and the second sensor being a camera as an example to describe the implementation; the specific sensors are not limited here.
Specifically, the plurality of measurement data obtained by the radar and the target reference object data obtained by the camera can first be mapped into the same coordinate space, which may be a two-dimensional or higher-dimensional coordinate space. Optionally, the radar measurements may be mapped into the image coordinate system containing the target reference object obtained by the camera, or the target reference object obtained by the camera may be mapped into the radar coordinate system of the radar measurements, or both may be mapped into another common coordinate space. As shown in FIG. 6, the target reference objects may be road edges 601, 602, and 603; FIG. 6 illustrates the scenario of mapping the radar measurements from the radar coordinate system into the image coordinate system where the target reference objects (indicated by thick black lines) are located.
Optionally, the projection mapping of the radar measurement data from the radar coordinate system to the image coordinate system is given by formula 1-1:

z_1 \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,B \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}. \quad (1\text{-}1)

In formula 1-1, A is the intrinsic matrix of the camera (or camera module), determined by the camera itself, which defines the mapping from the pixel coordinate system to the image-plane coordinate system; B is the extrinsic matrix, determined by the relative position of the camera and the radar, which defines the mapping from the image-plane coordinate system to the radar coordinate system; z_1 is the depth. (x, y, z) are the coordinates in the radar coordinate system (z = 0 if the vertical dimension is ignored), and (u, v) are the coordinates of the target reference object in the pixel coordinate system.
For example, for a distortion-free scenario, the intrinsic matrix A is determined by the focal length f, and the extrinsic matrix is

B = \begin{bmatrix} R & T \end{bmatrix}, \quad (1\text{-}2)

where R and T denote the relative rotation and relative translation between the radar coordinate system and the image coordinate system. Distorted scenarios can be further corrected, which is prior art and is not elaborated here.
The position data measured by the radar are usually in polar or spherical coordinate form; they can first be converted to Cartesian coordinates and then mapped to the image-plane coordinate system using formula 1-1. For example, the distance and azimuth in the radar data can be converted into Cartesian coordinates x and y, and the distance, azimuth, and pitch angle can be converted into x, y, and z.
It can be understood that other mapping rules may also exist; they are not enumerated here.
Using the projection transform 1-1, the position measurements in the radar data are transformed into the image coordinate system to obtain the corresponding pixel positions (u, v), which can be used to determine whether the corresponding radar data are radar measurements of the target reference object.
Specifically, using deep learning on images or video, target detection, image segmentation, semantic segmentation, or instance segmentation can be performed, and a mathematical representation of the target reference object, for example a bounding box, can be established. It can then be determined whether the pixel corresponding to a radar measurement falls within the pixel range of the target reference object, and hence whether that radar measurement corresponds to the target reference object.
As one implementation, the bounding box of a target reference object can be represented by the region described by the following F_1 inequalities:

a_i u + b_i v \le c_i, \quad i = 1, 2, \ldots, F_1; \quad (1\text{-}3)

typically F_1 = 4. If the pixel (u, v) corresponding to a radar measurement satisfies the above inequalities, the measurement belongs to the data of the target reference object; otherwise it does not.
As another implementation, the bounding box of a target reference object can be represented by the region described by F_2 interval inequalities:

c_i \le a_i u + b_i v \le d_i, \quad i = 1, 2, \ldots, F_2; \quad (1\text{-}4)

typically F_2 = 2. If the pixel (u, v) corresponding to a radar measurement satisfies the above inequalities, the measurement belongs to the data of the target reference object; otherwise it does not.
The specific implementations of target detection, image segmentation, semantic segmentation, or instance segmentation, of obtaining the mathematical representation of the target reference object, and of judging whether radar measurements belong to the target reference object are not limited here.
Through the above projection mapping, the radar measurements and the target reference object perceived by the camera are in the same coordinate space, so the target reference object can be detected, recognized, or segmented based on the image or video, and the radar measurements corresponding to the target reference object can be effectively confirmed.
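The projection of formula 1-1 and the bounding-box test of formulas 1-3/1-4 can be sketched together as follows. The calibration values here are toy assumptions (axes aligned, focal length 100 px), not a real camera model, and the axis-aligned box is the special case of the interval inequalities.

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def radar_to_pixel(x, y, z, A, B):
    """Project a radar-frame point into pixel coordinates (u, v) following
    z1 * [u, v, 1]^T = A * B * [x, y, z, 1]^T."""
    p = mat_vec(A, mat_vec(B, [x, y, z, 1.0]))
    return p[0] / p[2], p[1] / p[2]

def in_box(u, v, box):
    """Membership test for an axis-aligned bounding box, a special case of
    the interval inequalities c_i <= a_i*u + b_i*v <= d_i."""
    u_min, u_max, v_min, v_max = box
    return u_min <= u <= u_max and v_min <= v <= v_max

# Toy calibration: camera aligned with the radar frame, focal length 100 px,
# principal point at (320, 240).
A = [[100.0, 0.0, 320.0], [0.0, 100.0, 240.0], [0.0, 0.0, 1.0]]
B = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
u, v = radar_to_pixel(1.0, 0.5, 10.0, A, B)
hit = in_box(u, v, (300.0, 340.0, 230.0, 250.0))
```

A radar detection whose projected pixel lands inside the box of a recognized reference object (a guardrail segment, say) would then be tagged as target-reference-object data.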
The motion state of the first sensor, which includes at least the velocity vector, can be determined from the measurement data corresponding to the target reference object.
The measurement data of the first sensor include at least velocity information, for example radial velocity information; further, the measurement data may also include azimuth and/or pitch-angle information or direction cosine information.
Specifically, the velocity vector of the first sensor can be estimated from the following measurement equation:

\tilde{v}_{r,k} = -\mathbf{h}_k \mathbf{v}_s + \tilde{e}_{r,k}, \quad (1\text{-}5)

or equivalently,

\tilde{v}_{r,k} = \mathbf{h}_k \mathbf{v}_T + \tilde{e}_{r,k}, \quad (1\text{-}6)

where v_s is the velocity vector of the first sensor and v_T is the velocity vector of the target reference object; for the above target reference object, v_s = −v_T.
Therefore, the velocity vector v_s of the first sensor can be obtained directly from formula 1-5, or equivalently the velocity vector v_T of the target reference object can be obtained from formula 1-6 and v_s obtained through v_s = −v_T. The following takes 1-5 as an example; the equivalent derivation based on 1-6 follows correspondingly and is not elaborated here.
Here \tilde{v}_{r,k} is the k-th radial velocity measurement and \tilde{e}_{r,k} is the corresponding measurement error, with mean 0 and variance \sigma_{r,k}^2, whose value depends on the performance of the first sensor.
Taking a two-dimensional velocity vector as an example, v_s and h_k may respectively be

\mathbf{v}_s = \left[v_{s,x}\; v_{s,y}\right]^{\mathrm T}, \quad (1\text{-}7)
\mathbf{h}_k = \left[\Lambda_x\; \Lambda_y\right], \quad (1\text{-}8)

where v_{s,x} and v_{s,y} are the two components of the first sensor's velocity vector, [\,]^T denotes the transpose of a matrix or vector, and Λ_x and Λ_y are direction cosines, which can be measured directly by the first sensor or computed as

\Lambda_x = \cos\theta_k, \quad \Lambda_y = \sin\theta_k, \quad (1\text{-}9)

where θ_k is the azimuth; or

\Lambda_x = x_k / r_k, \quad \Lambda_y = y_k / r_k, \quad (1\text{-}10)

where r_k is obtained from the distance measurement, or computed as

r_k = \sqrt{x_k^2 + y_k^2}. \quad (1\text{-}11)

Taking a three-dimensional velocity vector as an example, v_s and h_k may respectively be

\mathbf{v}_s = \left[v_{s,x}\; v_{s,y}\; v_{s,z}\right]^{\mathrm T}, \quad (1\text{-}12)
\mathbf{h}_k = \left[\Lambda_x\; \Lambda_y\; \Lambda_z\right], \quad (1\text{-}13)

where v_{s,x}, v_{s,y}, and v_{s,z} are the three components of the first sensor's velocity vector, and the direction cosines Λ_x, Λ_y, and Λ_z can be measured directly by the first sensor or computed as

\Lambda_x = \cos\varphi_k \cos\theta_k, \quad \Lambda_y = \cos\varphi_k \sin\theta_k, \quad \Lambda_z = \sin\varphi_k, \quad (1\text{-}14)

where θ_k is the azimuth and φ_k is the pitch angle; or

\Lambda_x = x_k / r_k, \quad \Lambda_y = y_k / r_k, \quad \Lambda_z = z_k / r_k, \quad (1\text{-}15)

where r_k is obtained from the distance measurement, or computed as

r_k = \sqrt{x_k^2 + y_k^2 + z_k^2}. \quad (1\text{-}16)
Using the above measurement equations, the motion state of the first sensor can be determined from the measurement data corresponding to the target reference object. Several optional implementations are given below for ease of understanding.
Specifically, the motion state of the first sensor can be obtained based on least squares (LS) estimation and/or sequential block filtering.
Solution 1: Obtain the motion state of the first sensor based on least squares (LS) estimation.
Specifically, the least squares estimate of the velocity vector of the first sensor can be obtained based on a first radial velocity vector and its corresponding measurement matrix. Optionally, the least squares estimate of the velocity vector is

\hat{\mathbf{v}}_s^{LS} = -\tilde{H}_{N_1}^{\dagger}\,\tilde{\mathbf{v}}_{r,N_1}, \quad (1\text{-}17)

or

\hat{\mathbf{v}}_s^{LS} = -\left(\tilde{H}_{N_1}^{\mathrm T}\tilde{H}_{N_1}\right)^{-1}\tilde{H}_{N_1}^{\mathrm T}\,\tilde{\mathbf{v}}_{r,N_1}, \quad (1\text{-}18)

where \hat{\mathbf{v}}_s^{LS} is the sensor least squares estimate; or

\hat{\mathbf{v}}_s^{RLS} = -\tilde{H}_{N_1}^{\mathrm T}\left(\tilde{H}_{N_1}\tilde{H}_{N_1}^{\mathrm T} + R\right)^{-1}\tilde{\mathbf{v}}_{r,N_1}, \quad (1\text{-}19)

where \hat{\mathbf{v}}_s^{RLS} is the sensor regularized least squares estimate, and R is a positive semi-definite or positive definite matrix used for regularization, for example

R = \alpha \cdot I, \quad (1\text{-}20)

where I is the identity matrix of order N_1 and α is a non-negative or positive constant determined by a scale factor γ ≥ 0 (formula 1-21).
Here the first radial velocity vector \tilde{\mathbf{v}}_{r,N_1} is a vector composed of N_1 radial velocity measurement values from the N_1 measurement data corresponding to the target reference object, and \tilde{H}_{N_1} is the measurement matrix corresponding to the first radial velocity vector \tilde{\mathbf{v}}_{r,N_1}, with N_1 a positive integer greater than 1.
The first radial velocity vector \tilde{\mathbf{v}}_{r,N_1} and the corresponding measurement matrix \tilde{H}_{N_1} satisfy the following measurement equation:

\tilde{\mathbf{v}}_{r,N_1} = -\tilde{H}_{N_1}\mathbf{v}_s + \tilde{\mathbf{e}}_{N_1}. \quad (1\text{-}22)
Specifically, the first radial velocity vector can be written as

\tilde{\mathbf{v}}_{r,N_1} = \left[\tilde{v}_{r,1}\; \tilde{v}_{r,2}\; \cdots\; \tilde{v}_{r,N_1}\right]^{\mathrm T},

where \tilde{v}_{r,i_1} denotes the i_1-th radial velocity measurement corresponding to the target reference object, and \tilde{\mathbf{e}}_{N_1} is the corresponding measurement error vector composed of the radial velocity measurement errors, as described above. Correspondingly, the measurement matrix can be written as

\tilde{H}_{N_1} = \left[\mathbf{h}_1^{\mathrm T}\; \mathbf{h}_2^{\mathrm T}\; \cdots\; \mathbf{h}_{N_1}^{\mathrm T}\right]^{\mathrm T}.

Optionally, taking a first sensor that obtains azimuth and radial velocity measurements as an example, the radial velocity measurement matrix \tilde{H}_{N_1} is composed of rows

\left[\cos\tilde{\theta}_{i_1}\;\; \sin\tilde{\theta}_{i_1}\right],

where \tilde{\theta}_{i_1} is an azimuth measurement value and N_1 ≥ 2.
Optionally, taking a first sensor that obtains azimuth, pitch-angle, and radial velocity measurements as an example, the radial velocity measurement matrix \tilde{H}_{N_1} is composed of rows

\left[\cos\tilde{\varphi}_{i_1}\cos\tilde{\theta}_{i_1}\;\; \cos\tilde{\varphi}_{i_1}\sin\tilde{\theta}_{i_1}\;\; \sin\tilde{\varphi}_{i_1}\right],

where \tilde{\theta}_{i_1} is an azimuth measurement value, \tilde{\varphi}_{i_1} is a pitch-angle measurement value, and N_1 ≥ 3.
Similarly, the radial velocity measurement matrix \tilde{H}_{N_1} in the above measurement equation can also be obtained from direction cosines, being composed of the direction cosine vectors \mathbf{h}_{i_1}; for a two-dimensional velocity vector, N_1 ≥ 2, and for a three-dimensional velocity vector, N_1 ≥ 3. The components of each direction cosine vector are as described above and are not repeated here.
As one implementation, the angle measurements \tilde{\theta}_{i_1}, or \tilde{\theta}_{i_1} and \tilde{\varphi}_{i_1}, should be chosen so that they are spaced as far apart from each other as possible, which yields a more accurate least squares estimate: angle choices with large mutual spacing make the condition number of the above measurement matrix as small as possible.
Optionally, the radial velocity components of the radial velocity vector are chosen such that the column vectors of the corresponding measurement matrix are as mutually orthogonal as possible.
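For the two-dimensional case, the least-squares solution of the kind in formula 1-18 can be illustrated by solving the 2x2 normal equations directly. The synthetic azimuths and noise-free radial speeds below are assumptions for the example, not data from the disclosure.

```python
import math

def ls_sensor_velocity(azimuths, radial_speeds):
    """2-D least-squares estimate of the sensor velocity v_s from radial
    speeds of stationary reference points: v_r,k = -h_k . v_s with
    h_k = [cos(theta_k), sin(theta_k)]. Solves H^T H v = -H^T v_r by Cramer."""
    sxx = sxy = syy = bx = by = 0.0
    for th, vr in zip(azimuths, radial_speeds):
        cx, cy = math.cos(th), math.sin(th)
        sxx += cx * cx; sxy += cx * cy; syy += cy * cy
        bx += -cx * vr; by += -cy * vr
    det = sxx * syy - sxy * sxy
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# Sensor moving with v_s = (10, 0): a stationary point at azimuth theta
# is measured with radial speed v_r = -(10 * cos(theta) + 0 * sin(theta)).
thetas = [0.0, math.pi / 6, math.pi / 3, math.pi / 2]
vrs = [-(10.0 * math.cos(t)) for t in thetas]
vx, vy = ls_sensor_velocity(thetas, vrs)
```

With noise-free measurements the estimate recovers (10, 0) exactly; with noisy data the same normal equations give the LS fit, and widely spaced azimuths keep the system well conditioned, as the text notes.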
Solution 2: Obtain the motion state of the first sensor based on sequential block filtering.
Specifically, the motion state of the first sensor can be obtained by sequential block filtering based on M radial velocity vectors and their corresponding measurement matrices, where each step of the sequential filtering uses a radial velocity vector corresponding to the target reference object composed of K radial velocity measurements.
Optionally, the m-th estimation formula of the sequential filtering is

\hat{\mathbf{v}}_m = \hat{\mathbf{v}}_{m-1} + G_m\left(\tilde{\mathbf{v}}_{r,m,K} - H_{m,K}\hat{\mathbf{v}}_{m-1}\right), \quad (1\text{-}23)

where G_m is the gain matrix, \tilde{\mathbf{v}}_{r,m,K} is composed of K radial velocity measurement values, and H_{m,K} is composed of K radial velocity measurement (direction cosine) rows, as described above. For two-dimensional velocity vector estimation, K ≥ 2; for three-dimensional velocity vector estimation, K ≥ 3.
Optionally, the gain matrix may be

G_m = P_{m,1|0}H_{m,K}^{\mathrm T}\left(H_{m,K}P_{m,1|0}H_{m,K}^{\mathrm T} + R_{m,K}\right)^{-1}, \quad (1\text{-}24)

where R_{m,K} is the radial velocity vector measurement error covariance matrix, for example a diagonal matrix of the radial velocity measurement error variances (formula 1-25), and

P_{m,1|1} = \left(I - G_{m-1}H_{m-1,K}\right)P_{m,1|0}, \quad (1\text{-}26)
P_{m,1|0} = P_{m-1,1|1}. \quad (1\text{-}27)

Optionally, as one implementation, the initial estimate and its covariance P_{0,1|1} = P_0 can be obtained from prior information:

P_0 = Q, \quad (1\text{-}28)

with the initial velocity estimate \hat{\mathbf{v}}_0 given by prior information (formula 1-29), where Q is a preset velocity estimate covariance matrix.
Solution 3: Obtain the motion state of the first sensor based on least squares and sequential block filtering.
Specifically, the measurement data of the first sensor corresponding to the target reference object can be divided into two parts: the first part is used to obtain the least squares estimate of the first sensor's velocity vector, and the second part is used to obtain the sequential block filtering estimate of the first sensor's velocity vector; the least squares estimate serves as the initial value of the sequential block filtering.
Optionally, as one implementation, the initial estimate and its covariance P_{0,1|1} = P_0 can be obtained from the least squares estimate:

P_0 = P_{LS}, \quad (1\text{-}30)

with initial value \hat{\mathbf{v}}_0 = \hat{\mathbf{v}}_s^{LS} (formula 1-31); or from the regularized least squares estimate:

P_0 = P_{RLS}, \quad (1\text{-}32)

with initial value \hat{\mathbf{v}}_0 = \hat{\mathbf{v}}_s^{RLS} (formula 1-33), where P_{LS} and P_{RLS} are the error covariance matrices of the LS and regularized LS estimates respectively, \hat{\mathbf{v}}_m is the m-th sequential block filtering value of the sensor velocity, and I_K is the K×K identity matrix.
Optionally, \tilde{\mathbf{v}}_{r,m,K} and H_{m,K} may differ for different m; the value of K may be the same or different for different m and can be chosen case by case. The above sequential filtering estimation can effectively reduce the influence of measurement noise and thereby improve the accuracy of the sensor estimate.
It should be noted that the motion velocity of the target reference object may be obtained first, and the motion velocity of the sensor obtained from the following relations:

\hat{\mathbf{v}}_s = -\hat{\mathbf{v}}_T, \quad \hat{\mathbf{v}}_T^{LS} = \left(\tilde{H}_{N_1}^{\mathrm T}\tilde{H}_{N_1}\right)^{-1}\tilde{H}_{N_1}^{\mathrm T}\,\tilde{\mathbf{v}}_{r,N_1},

where \hat{\mathbf{v}}_T^{LS} is the least squares estimate of the target reference object's velocity; or

\hat{\mathbf{v}}_s = -\hat{\mathbf{v}}_T, \quad \hat{\mathbf{v}}_T^{RLS} = \tilde{H}_{N_1}^{\mathrm T}\left(\tilde{H}_{N_1}\tilde{H}_{N_1}^{\mathrm T} + R\right)^{-1}\tilde{\mathbf{v}}_{r,N_1},

where \hat{\mathbf{v}}_T^{RLS} is the regularized least squares estimate of the target reference object's velocity; or

\hat{\mathbf{v}}_{T,m} = \hat{\mathbf{v}}_{T,m-1} + G_m\left(\tilde{\mathbf{v}}_{r,m,K} - H_{m,K}\hat{\mathbf{v}}_{T,m-1}\right),

where \tilde{\mathbf{v}}_{r,m,K}, H_{m,K}, and P_{m-1} are as described above.
Taking sensor azimuth and radial velocity measurements with K = 2 as an example, the m-th second radial velocity vector is expressed as

\tilde{\mathbf{v}}_{r,m,2} = \left[\tilde{v}_{r,m,1}\; \tilde{v}_{r,m,2}\right]^{\mathrm T},

where \tilde{v}_{r,m,1} and \tilde{v}_{r,m,2} are respectively the first and second radial velocity measurements in the m-th group of measurement data of the target reference object; the corresponding measurement matrix is

H_{m,2} = \begin{bmatrix} \cos\theta_{m,1} & \sin\theta_{m,1} \\ \cos\theta_{m,2} & \sin\theta_{m,2} \end{bmatrix},

where θ_{m,i}, i = 1, 2, is the i-th azimuth measurement in the m-th group of measurement data of the target reference object.
Similarly, taking sensor azimuth, pitch-angle, and radial velocity measurements with K = 3 as an example, the m-th second radial velocity vector is expressed as

\tilde{\mathbf{v}}_{r,m,3} = \left[\tilde{v}_{r,m,1}\; \tilde{v}_{r,m,2}\; \tilde{v}_{r,m,3}\right]^{\mathrm T},

where \tilde{v}_{r,m,i} is the i-th radial velocity measurement in the m-th group of measurement data of the target reference object; the corresponding measurement matrix is

H_{m,3} = \begin{bmatrix} \cos\varphi_{m,1}\cos\theta_{m,1} & \cos\varphi_{m,1}\sin\theta_{m,1} & \sin\varphi_{m,1} \\ \cos\varphi_{m,2}\cos\theta_{m,2} & \cos\varphi_{m,2}\sin\theta_{m,2} & \sin\varphi_{m,2} \\ \cos\varphi_{m,3}\cos\theta_{m,3} & \cos\varphi_{m,3}\sin\theta_{m,3} & \sin\varphi_{m,3} \end{bmatrix},

where θ_{m,i}, i = 1, 2, 3, is the i-th azimuth measurement and φ_{m,i}, i = 1, 2, 3, is the i-th pitch-angle measurement in the m-th group of measurement data of the target reference object.
As one implementation, the M groups of measurement data should be chosen such that the condition number of the measurement matrix corresponding to each group is as small as possible.
As one implementation, θ_{m,i}, i = 1, 2, or (θ_{m,i}, φ_{m,i}), i = 1, 2, 3, should be chosen such that the column vectors of the corresponding measurement matrix are as mutually orthogonal as possible.
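A helper that builds these measurement matrices from angle measurements might look like the following sketch; the function name and the list-of-rows representation are assumptions for illustration.

```python
import math

def measurement_matrix(azimuths, elevations=None):
    """Build the K x 2 (or, with pitch angles, K x 3) measurement matrix of
    direction cosines from azimuth (and optional pitch-angle) measurements."""
    if elevations is None:
        return [[math.cos(t), math.sin(t)] for t in azimuths]
    return [[math.cos(p) * math.cos(t), math.cos(p) * math.sin(t), math.sin(p)]
            for t, p in zip(azimuths, elevations)]

# K = 2 with azimuths 0 and 90 degrees: orthogonal columns, condition number 1.
H2 = measurement_matrix([0.0, math.pi / 2])
# K = 3 with a point straight up as the third measurement.
H3 = measurement_matrix([0.0, math.pi / 2, 0.0], [0.0, 0.0, math.pi / 2])
```

The first example shows the ideal geometry the text recommends: azimuths spaced 90 degrees apart give mutually orthogonal columns and the smallest possible condition number.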
Optionally, in addition to the velocity vector of the first sensor, the motion state of the first sensor may further include the position of the first sensor. For example, with a specified time origin as reference, the position of the first sensor can be obtained from the motion velocity and the time interval.
After the motion state of the first sensor is obtained, various controls can be executed based on that motion state; which controls are executed is not limited here.
Optionally, the above motion velocity estimate of the first sensor can be provided as the motion velocity estimate of other sensors located on the same platform as the first sensor, for example a camera, vision sensor, or imaging sensor installed on the same vehicle as the radar/sonar/ultrasonic sensor, thereby providing an effective velocity estimate for those sensors.
Optionally, the motion state of a target object can be compensated according to the motion state of the first sensor, to obtain the motion state of the target object relative to the geodetic coordinate system. In the embodiments of this application, the target object may be a detected vehicle, obstacle, person, animal, or other object.
As shown in FIG. 7, the lower-left diagram is the motion state (such as position) of the first sensor obtained above, the right diagram is the motion state (such as position) of the target object detected by the detection apparatus, and the upper-left diagram is the motion state (position) of the target object relative to the geodetic coordinate system, obtained by compensating the detected motion state of the target object with the motion state of the first sensor.
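In the simplest case where both quantities are expressed in the same frame, the compensation step illustrated by FIG. 7 amounts to adding the estimated sensor velocity back to the relative velocity measured for the target. A minimal sketch (the 2-D tuple layout is an assumption):

```python
def compensate(target_v_rel, sensor_v):
    """Convert a target velocity measured relative to the sensor platform
    into the ground frame by adding the estimated sensor velocity."""
    return tuple(a + b for a, b in zip(target_v_rel, sensor_v))

# Target appears to move at (-2, 1) relative to a sensor moving at (10, 0).
v_ground = compensate((-2.0, 1.0), (10.0, 0.0))
```

After compensation, the irregular apparent track of FIG. 2 becomes the target's regular ground-frame track, which improves subsequent tracking accuracy.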
The above describes measuring the motion state of the first sensor. In practice, other devices (such as other sensors) may exist on the platform where the first sensor is located, and the motion states of those devices are the same as or close to that of the first sensor; therefore, estimating the motion state of the first sensor is equivalent to estimating the motion state of those other devices. A solution that estimates the motion state of such other devices using the above principles therefore also falls within the protection scope of the embodiments of the present invention.
In the method described in FIG. 3, a plurality of measurement data is obtained through the first sensor, and the motion state is obtained from the measurement data therein that corresponds to the target reference object, the measurement data including at least velocity measurement information. Since there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor may include measurement information on the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object usually has a varied spatial distribution relative to the sensor, and in particular a different geometric relationship with the first sensor for each measurement, so that the velocity measurement data satisfy different measurement equations; in particular, the condition number of the measurement matrix in the measurement equation is reduced. Moreover, the measurement data corresponding to the target reference object are numerous, which effectively reduces the influence of noise or interference on the motion state estimation. Therefore, with the method of the present invention, the measurement data corresponding to the target reference object, in particular the geometric relationship and the number of the target reference objects relative to the sensor, can be used effectively, and the influence of measurement error or interference can be reduced, so that this way of determining the motion state achieves higher accuracy. In addition, the method can obtain the motion estimate of the sensor from a single frame of data, so good real-time performance can be achieved. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (such as the velocity) of the first sensor.
The foregoing describes the method of the embodiments of the present invention in detail; the apparatus of the embodiments of the present invention is provided below.
请参见图8,图8是本发明实施例提供的一种运动状态估计装置80的结构示意图,可选的,该装置80可以是传感器***、融合感知***或者集成上述***的规划/控制***如辅助驾驶或者自动驾驶***等,可以是软件或者硬件。可选的,所述装置可以安装或者集成在车辆、轮船、飞机或者无人机等设备上,也可以安装或者连接于云端。该装置可以包括获取单元801和估计单元802,其中:
获取单元801用于通过第一传感器获得多个测量数据,其中每个所述测量数据至少包括速度测量信息;
估计单元802用于根据所述多个测量数据中与目标参照物对应的测量数据,得到第一 传感器的运动状态,所述运动状态至少包括第一传感器的速度矢量。
在上述装置中,通过第一传感器获得多个测量数据,根据其中与目标参照物对应的测量数据,所述测量数据至少包含速度测量信息。由于第一传感器与该目标参照物之间存在相对运动,第一传感器的测量数据可以包含对所述相对运动的速度的测量信息,因此,可以基于与目标参照物对应的测量数据,得到第一传感器的运动状态。此外,通常所述目标参照物空间相对传感器的分布多样,特别是与所述第一传感器有不同的几何关系,从而使得所述速度测量数据与所述第一传感器有不同的测量方程,特别是降低测量方程中的测量矩阵的条件数;而且,通过所述与目标参照物对应的测量数据众多,从而有效降低噪声或者干扰对运动状态估计的影响。因此,利用本发明所述的方法,可以有效利用与目标参照物对应的测量数据,特别是目标参照物相对于传感器的几何关系以及数量,有效降低测量误差或者干扰的影响,从而使得这种确定运动状态的方式获得更高的精度;此外,所述方法利用单帧数据即可得到传感器的运动估计,从而可以获得很好的实时性。
在一种可能的实现方式中,所述目标参照物为相对于参考系静止的物体。所述参考系可以是大地或者大地坐标系或者相对于大地的惯性坐标系。
在又一种可能的实现方式中,所述通过第一传感器获得多个测量数据,其中每个所述测量数据至少包括速度测量信息之后,所述根据所述多个测量数据中与目标参照物对应的测量数据,得到第一传感器的运动状态之前,还包括:
根据所述目标参照物的特征,从所述多个测量数据中确定与所述目标参照物对应的测量数据。
在又一种可能的实现方式中,所述目标参照物的特征包括所述目标参照物的几何特征和/或反射特征。
在又一种可能的实现方式中,所述通过第一传感器获得多个测量数据,其中每个所述测量数据至少包括速度测量信息之后,所述根据所述多个测量数据中与目标参照物对应的测量数据,得到第一传感器的运动状态之前,还包括:
根据第二传感器的数据,从所述第一传感器的多个测量数据中确定与所述目标参照物对应的测量数据。
在又一种可能的实现方式中,所述根据第二传感器的数据,从所述第一传感器的多个测量数据中确定与所述目标参照物对应的测量数据,包括:
将所述第一传感器的测量数据映射到第二传感器的测量数据的空间,或者,
将所述第二传感器的测量数据映射到所述第一传感器的测量数据的空间,或者,
将所述第一传感器的测量数据和第二传感器的测量数据映射到一个公共的空间,
根据由所述第二传感器测量数据确定的目标参照物,通过所述空间,确定所述第一传感器中目标参照物对应的测量数据。
在又一种可能的实现方式中,所述根据所述多个测量数据中与目标参照物对应的测量数据,得到第一传感器的运动状态,包括:
根据所述多个测量数据中与目标参照物对应的测量数据采用最小二乘LS估计和/或序贯分块滤波的方式,得到所述第一传感器的运动状态。可以理解,采用LS估计和/或序贯滤波估计的方式能够更有效地提高第一传感器运动状态(如速度)的估计精度。
在又一种可能的实现方式中,所述根据所述多个测量数据中与目标参照物对应的测量数据采用最小二乘LS估计和/或序贯分块滤波的方式,得到所述第一传感器的运动状态,包括:
根据对所述目标参照物对应的M个径向速度矢量及其对应的测量矩阵,进行序贯滤波,得到所述第一传感器的运动估计值,其中M≥2;所述径向速度矢量由所述多个测量数据中与目标参照物对应的测量数据中的K个径向速度测量值组成,所述对应的测量矩阵由K个方向余矢量组成,其中K≥1。
在又一种可能的实现方式中:
所述第一传感器的运动速度矢量为二维矢量,K=2,所述径向速度矢量的测量矩阵为:
Figure PCTCN2020093486-appb-000096
其中,θ m,i分别为所述目标参照物的第m组测量数据中第i个方位角测量数据,i=1,2;
或者,所述第一传感器的运动速度矢量为三维矢量,K=3,所述径向速度矢量的测量矩阵为:
Figure PCTCN2020093486-appb-000097
其中,θ m,i为所述目标参照物的第m组测量数据中第i个方位角测量数据,φ mi为所述目标参照物的第m组测量数据中第i个俯仰角测量数据,i=1,2,3。
In yet another possible implementation, the formulas of the sequential filtering are as follows:

$$\hat{\mathbf{v}}_{m,1|0}=\hat{\mathbf{v}}_{m-1,1|1}$$
$$\hat{\mathbf{v}}_{m,1|1}=\hat{\mathbf{v}}_{m,1|0}+\mathbf{G}_m\left(\tilde{\mathbf{v}}_{r,m}-\mathbf{H}_{m,K}\hat{\mathbf{v}}_{m,1|0}\right)$$
$$\mathbf{P}_{m,1|0}=\mathbf{P}_{m-1,1|1}$$
$$\mathbf{P}_{m,1|1}=(\mathbf{I}-\mathbf{G}_m\mathbf{H}_{m,K})\mathbf{P}_{m,1|0}$$

where $\hat{\mathbf{v}}_{m,1|1}$ is the velocity vector estimate of the m-th filtering step, $\mathbf{G}_m$ is the gain matrix, $\tilde{\mathbf{v}}_{r,m}$ is the m-th radial velocity vector measurement, and $\mathbf{R}_{m,K}$ is the measurement error covariance matrix of the m-th radial velocity vector, m = 1, 2, …, M.
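The recursion above can be sketched numerically as follows (one assumption: the gain is taken in the standard Kalman form G_m = P_{m,1|0} Hᵀ (H P_{m,1|0} Hᵀ + R)⁻¹, which the text does not spell out beyond naming G_m as the gain matrix):

```python
import numpy as np

def sequential_filter(v0, P0, radial_vectors, H_list, R_list):
    """Sequentially fuse M radial velocity vectors into a velocity estimate.
    v0, P0: initial state and covariance; each iteration performs the
    prediction (identity, since the velocity is treated as constant within
    the frame) and the measurement update of the recursion above."""
    v = np.asarray(v0, float)
    P = np.asarray(P0, float)
    I = np.eye(len(v))
    for vr, H, R in zip(radial_vectors, H_list, R_list):
        G = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # gain matrix G_m (assumed Kalman form)
        v = v + G @ (vr - H @ v)                      # state update
        P = (I - G @ H) @ P                           # covariance update
    return v, P
```

Processing the M radial velocity vectors block by block in this way avoids inverting one large stacked system, while converging to the same batch estimate for a linear model.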
It should be noted that, for the implementation of each unit, reference may also be made to the corresponding description of the method embodiment shown in FIG. 3.
In the apparatus 80 depicted in FIG. 8, a plurality of measurement data, each including at least velocity measurement information, are obtained through the first sensor, and the motion state of the first sensor is derived from the measurement data among them that correspond to a target reference object. Because there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor can contain measurement information about the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object is usually distributed over diverse spatial positions relative to the sensor and, in particular, has different geometric relationships with the first sensor, so that the velocity measurement data give rise to different measurement equations for the first sensor; in particular, this reduces the condition number of the measurement matrix in the measurement equations. Moreover, because the measurement data corresponding to the target reference object are numerous, the influence of noise or interference on the motion state estimation is effectively reduced. Therefore, with the method of the present invention, the measurement data corresponding to the target reference object, in particular the geometric relationship of the target reference object relative to the sensor and the quantity of such data, can be used effectively to reduce the influence of measurement errors or interference, so that this way of determining the motion state achieves higher accuracy. Furthermore, the method obtains the motion estimate of the sensor from a single frame of data, and therefore achieves good real-time performance. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (for example, the velocity) of the first sensor.
Referring to FIG. 9, FIG. 9 is a schematic diagram of a motion state estimation apparatus 90 according to an embodiment of the present invention. The apparatus 90 includes a processor 901, a memory 902, and a first sensor 903, which are interconnected through a bus 904.
The memory 902 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and the memory 902 is configured to store related program instructions and data. The first sensor 903 is configured to collect measurement data.
The processor 901 may be one or more central processing units (CPUs). When the processor 901 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 901 in the apparatus 90 is configured to read the program instructions stored in the memory 902 and perform the following operations:
obtaining a plurality of measurement data through the first sensor 903, where each of the measurement data includes at least velocity measurement information;
obtaining a motion state of the first sensor based on the measurement data, among the plurality of measurement data, corresponding to a target reference object, where the motion state includes at least a velocity vector of the first sensor.
In the above apparatus, a plurality of measurement data, each including at least velocity measurement information, are obtained through the first sensor, and the motion state of the first sensor is derived from the measurement data among them that correspond to a target reference object. Because there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor can contain measurement information about the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object is usually distributed over diverse spatial positions relative to the sensor and, in particular, has different geometric relationships with the first sensor, so that the velocity measurement data give rise to different measurement equations for the first sensor; in particular, this reduces the condition number of the measurement matrix in the measurement equations. Moreover, because the measurement data corresponding to the target reference object are numerous, the influence of noise or interference on the motion state estimation is effectively reduced. Therefore, with the method of the present invention, the measurement data corresponding to the target reference object, in particular the geometric relationship of the target reference object relative to the sensor and the quantity of such data, can be used effectively to reduce the influence of measurement errors or interference, so that this way of determining the motion state achieves higher accuracy. Furthermore, the method obtains the motion estimate of the sensor from a single frame of data, and therefore achieves good real-time performance.
In a possible implementation, the target reference object is an object that is stationary relative to a reference frame.
Optionally, the reference frame may be the ground, a geodetic coordinate system, or an inertial coordinate system relative to the ground.
In yet another possible implementation, after the obtaining a plurality of measurement data through the first sensor, where each of the measurement data includes at least velocity measurement information, and before the obtaining a motion state of the first sensor based on the measurement data corresponding to a target reference object among the plurality of measurement data, the processor 901 is further configured to:
determine, based on features of the target reference object, the measurement data corresponding to the target reference object from the plurality of measurement data.
In yet another possible implementation, the features of the target reference object include geometric features and/or reflection features of the target reference object.
In yet another possible implementation, after the obtaining a plurality of measurement data through the first sensor, where each of the measurement data includes at least velocity measurement information, and before the obtaining a motion state of the first sensor based on the measurement data corresponding to a target reference object among the plurality of measurement data, the processor 901 is further configured to:
determine, based on data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor.
In yet another possible implementation, the determining, based on data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor is specifically:
mapping the measurement data of the first sensor into the space of the measurement data of the second sensor; or
mapping the measurement data of the second sensor into the space of the measurement data of the first sensor; or
mapping the measurement data of the first sensor and the measurement data of the second sensor into a common space; and
determining, in that space, the measurement data of the first sensor corresponding to the target reference object, based on the target reference object determined from the measurement data of the second sensor.
In yet another possible implementation, the obtaining a motion state of the first sensor based on the measurement data corresponding to the target reference object among the plurality of measurement data is specifically:
obtaining the motion state of the first sensor from the measurement data corresponding to the target reference object among the plurality of measurement data by means of least-squares (LS) estimation and/or sequential block filtering. It can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (for example, the velocity) of the first sensor.
In yet another possible implementation, the obtaining the motion state of the first sensor from the measurement data corresponding to the target reference object among the plurality of measurement data by means of LS estimation and/or sequential block filtering is specifically:
performing sequential filtering on M radial velocity vectors corresponding to the target reference object and their corresponding measurement matrices, to obtain the motion estimate of the first sensor, where M ≥ 2; each radial velocity vector is composed of K radial velocity measurements from the measurement data corresponding to the target reference object, and the corresponding measurement matrix is composed of K direction cosine vectors, where K ≥ 1.
In yet another possible implementation:
the velocity vector of the first sensor is a two-dimensional vector, K = 2, and its measurement matrix is:

$$\mathbf{H}_{m,2}=\begin{bmatrix}\cos\theta_{m,1}&\sin\theta_{m,1}\\\cos\theta_{m,2}&\sin\theta_{m,2}\end{bmatrix}$$

where θ_{m,i} is the i-th azimuth measurement in the m-th group of measurement data of the target reference object, i = 1, 2;
or, the velocity vector of the first sensor is a three-dimensional vector, K = 3, and its measurement matrix is:

$$\mathbf{H}_{m,3}=\begin{bmatrix}\cos\theta_{m,1}\cos\varphi_{m,1}&\sin\theta_{m,1}\cos\varphi_{m,1}&\sin\varphi_{m,1}\\\cos\theta_{m,2}\cos\varphi_{m,2}&\sin\theta_{m,2}\cos\varphi_{m,2}&\sin\varphi_{m,2}\\\cos\theta_{m,3}\cos\varphi_{m,3}&\sin\theta_{m,3}\cos\varphi_{m,3}&\sin\varphi_{m,3}\end{bmatrix}$$

where θ_{m,i} is the i-th azimuth measurement and φ_{m,i} is the i-th elevation measurement in the m-th group of measurement data of the target reference object, i = 1, 2, 3.
In yet another possible implementation, the formulas of the sequential filtering are as follows:

$$\hat{\mathbf{v}}_{m,1|0}=\hat{\mathbf{v}}_{m-1,1|1}$$
$$\hat{\mathbf{v}}_{m,1|1}=\hat{\mathbf{v}}_{m,1|0}+\mathbf{G}_m\left(\tilde{\mathbf{v}}_{r,m}-\mathbf{H}_{m,K}\hat{\mathbf{v}}_{m,1|0}\right)$$
$$\mathbf{P}_{m,1|0}=\mathbf{P}_{m-1,1|1}$$
$$\mathbf{P}_{m,1|1}=(\mathbf{I}-\mathbf{G}_m\mathbf{H}_{m,K})\mathbf{P}_{m,1|0}$$

where $\hat{\mathbf{v}}_{m,1|1}$ is the velocity vector estimate of the m-th filtering step, $\mathbf{G}_m$ is the gain matrix, $\tilde{\mathbf{v}}_{r,m}$ is the m-th radial velocity vector measurement, and $\mathbf{R}_{m,K}$ is the measurement error covariance matrix of the m-th radial velocity vector, m = 1, 2, …, M.
It should be noted that, for the implementation of each operation, reference may also be made to the corresponding description of the method embodiment shown in FIG. 3.
In the apparatus 90 depicted in FIG. 9, a plurality of measurement data, each including at least velocity measurement information, are obtained through the first sensor, and the motion state of the first sensor is derived from the measurement data among them that correspond to a target reference object. Because there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor can contain measurement information about the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object is usually distributed over diverse spatial positions relative to the sensor and, in particular, has different geometric relationships with the first sensor, so that the velocity measurement data give rise to different measurement equations for the first sensor; in particular, this reduces the condition number of the measurement matrix in the measurement equations. Moreover, because the measurement data corresponding to the target reference object are numerous, the influence of noise or interference on the motion state estimation is effectively reduced. Therefore, with the method of the present invention, the measurement data corresponding to the target reference object, in particular the geometric relationship of the target reference object relative to the sensor and the quantity of such data, can be used effectively to reduce the influence of measurement errors or interference, so that this way of determining the motion state achieves higher accuracy. Furthermore, the method obtains the motion estimate of the sensor from a single frame of data, and therefore achieves good real-time performance. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (for example, the velocity) of the first sensor.
An embodiment of the present invention further provides a chip system. The chip system includes at least one processor, a memory, and an interface circuit, where the memory, the interface circuit, and the at least one processor are interconnected through lines, and the memory stores program instructions; when the program instructions are executed by the processor, the method procedure shown in FIG. 3 is implemented.
An embodiment of the present invention further provides a computer-readable storage medium storing instructions which, when run on a processor, implement the method procedure shown in FIG. 3.
An embodiment of the present invention further provides a computer program product which, when run on a processor, implements the method procedure shown in FIG. 3.
In summary, by implementing the embodiments of the present invention, a plurality of measurement data, each including at least velocity measurement information, are obtained through the first sensor, and the motion state of the first sensor is derived from the measurement data among them that correspond to a target reference object. Because there is relative motion between the first sensor and the target reference object, the measurement data of the first sensor can contain measurement information about the velocity of this relative motion; therefore, the motion state of the first sensor can be obtained based on the measurement data corresponding to the target reference object. In addition, the target reference object is usually distributed over diverse spatial positions relative to the sensor and, in particular, has different geometric relationships with the first sensor, so that the velocity measurement data give rise to different measurement equations for the first sensor; in particular, this reduces the condition number of the measurement matrix in the measurement equations. Moreover, because the measurement data corresponding to the target reference object are numerous, the influence of noise or interference on the motion state estimation is effectively reduced. Therefore, with the method of the present invention, the measurement data corresponding to the target reference object, in particular the geometric relationship of the target reference object relative to the sensor and the quantity of such data, can be used effectively to reduce the influence of measurement errors or interference, so that this way of determining the motion state achieves higher accuracy. Furthermore, the method obtains the motion estimate of the sensor from a single frame of data, and therefore achieves good real-time performance. Further, it can be understood that LS estimation and/or sequential filtering estimation can more effectively improve the estimation accuracy of the motion state (for example, the velocity) of the first sensor.
A person of ordinary skill in the art can understand that all or part of the procedures of the foregoing method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Claims (21)

  1. A motion state estimation method, comprising:
    obtaining a plurality of measurement data through a first sensor, wherein each of the measurement data comprises at least velocity measurement information;
    obtaining a motion state of the first sensor based on measurement data, among the plurality of measurement data, corresponding to a target reference object, wherein the motion state comprises at least a velocity vector of the first sensor.
  2. The method according to claim 1, wherein the target reference object is an object that is stationary relative to a reference frame.
  3. The method according to claim 1 or 2, wherein after the obtaining a plurality of measurement data through a first sensor, wherein each of the measurement data comprises at least velocity measurement information, and before the obtaining a motion state of the first sensor based on the measurement data corresponding to a target reference object among the plurality of measurement data, the method further comprises:
    determining, based on features of the target reference object, the measurement data corresponding to the target reference object from the plurality of measurement data.
  4. The method according to claim 3, wherein the features of the target reference object comprise geometric features and/or reflection features of the target reference object.
  5. The method according to claim 1 or 2, wherein after the obtaining a plurality of measurement data through a first sensor, wherein each of the measurement data comprises at least velocity measurement information, and before the obtaining a motion state of the first sensor based on the measurement data corresponding to a target reference object among the plurality of measurement data, the method further comprises:
    determining, based on measurement data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor.
  6. The method according to claim 5, wherein the determining, based on data of the second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor comprises:
    mapping the measurement data of the first sensor into the space of the measurement data of the second sensor; or
    mapping the measurement data of the second sensor into the space of the measurement data of the first sensor; or
    mapping the measurement data of the first sensor and the measurement data of the second sensor into a common space; and
    determining, in that space, the measurement data of the first sensor corresponding to the target reference object, based on the target reference object determined from the measurement data of the second sensor.
  7. The method according to any one of claims 1 to 6, wherein the obtaining a motion state of the first sensor based on the measurement data corresponding to the target reference object among the plurality of measurement data comprises:
    obtaining the motion state of the first sensor from the measurement data corresponding to the target reference object among the plurality of measurement data by means of least-squares (LS) estimation and/or sequential block filtering.
  8. The method according to claim 7, wherein the obtaining the motion state of the first sensor from the measurement data corresponding to the target reference object among the plurality of measurement data by means of LS estimation and/or sequential block filtering comprises:
    performing sequential filtering on M radial velocity vectors corresponding to the target reference object and their corresponding measurement matrices, to obtain the motion estimate of the first sensor, wherein M ≥ 2; each radial velocity vector is composed of K radial velocity measurements from the measurement data corresponding to the target reference object, and the corresponding measurement matrix is composed of K direction cosine vectors, wherein K ≥ 1.
  9. The method according to claim 8, wherein:
    the velocity vector of the first sensor is a two-dimensional vector, K = 2, and the measurement matrix corresponding to the radial velocity vector is:

    $$\mathbf{H}_{m,2}=\begin{bmatrix}\cos\theta_{m,1}&\sin\theta_{m,1}\\\cos\theta_{m,2}&\sin\theta_{m,2}\end{bmatrix}$$

    where θ_{m,i} is the i-th azimuth measurement in the m-th group of measurement data of the target reference object, i = 1, 2;
    or, the velocity vector of the first sensor is a three-dimensional vector, K = 3, and the measurement matrix corresponding to the radial velocity vector is:

    $$\mathbf{H}_{m,3}=\begin{bmatrix}\cos\theta_{m,1}\cos\varphi_{m,1}&\sin\theta_{m,1}\cos\varphi_{m,1}&\sin\varphi_{m,1}\\\cos\theta_{m,2}\cos\varphi_{m,2}&\sin\theta_{m,2}\cos\varphi_{m,2}&\sin\varphi_{m,2}\\\cos\theta_{m,3}\cos\varphi_{m,3}&\sin\theta_{m,3}\cos\varphi_{m,3}&\sin\varphi_{m,3}\end{bmatrix}$$

    where θ_{m,i} is the i-th azimuth measurement and φ_{m,i} is the i-th elevation measurement in the m-th group of measurement data of the target reference object, i = 1, 2, 3; m = 1, 2, …, M.
  10. The method according to claim 9, wherein the formulas of the sequential filtering are as follows:

    $$\hat{\mathbf{v}}_{m,1|0}=\hat{\mathbf{v}}_{m-1,1|1}$$
    $$\hat{\mathbf{v}}_{m,1|1}=\hat{\mathbf{v}}_{m,1|0}+\mathbf{G}_m\left(\tilde{\mathbf{v}}_{r,m}-\mathbf{H}_{m,K}\hat{\mathbf{v}}_{m,1|0}\right)$$
    $$\mathbf{P}_{m,1|0}=\mathbf{P}_{m-1,1|1}$$
    $$\mathbf{P}_{m,1|1}=(\mathbf{I}-\mathbf{G}_m\mathbf{H}_{m,K})\mathbf{P}_{m,1|0}$$

    where $\hat{\mathbf{v}}_{m,1|1}$ is the velocity vector estimate of the m-th filtering step, $\mathbf{G}_m$ is the gain matrix, $\tilde{\mathbf{v}}_{r,m}$ is the m-th radial velocity vector measurement, and $\mathbf{R}_{m,K}$ is the measurement error covariance matrix of the m-th radial velocity vector, m = 1, 2, …, M.
  11. A motion state estimation apparatus, comprising a processor, a memory, and a first sensor, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions to perform the following operations:
    obtaining a plurality of measurement data through the first sensor, wherein each of the measurement data comprises at least velocity measurement information;
    obtaining a motion state of the first sensor based on measurement data, among the plurality of measurement data, corresponding to a target reference object, wherein the motion state comprises at least a velocity vector of the first sensor.
  12. The apparatus according to claim 11, wherein the target reference object is an object that is stationary relative to a reference frame.
  13. The apparatus according to claim 11 or 12, wherein after the obtaining a plurality of measurement data through the first sensor, wherein each of the measurement data comprises at least velocity measurement information, and before the obtaining a motion state of the first sensor based on the measurement data corresponding to a target reference object among the plurality of measurement data, the processor is further configured to:
    determine, based on features of the target reference object, the measurement data corresponding to the target reference object from the plurality of measurement data.
  14. The apparatus according to claim 13, wherein the features of the target reference object comprise geometric features and/or reflection features of the target reference object.
  15. The apparatus according to claim 11 or 12, wherein after the obtaining a plurality of measurement data through the first sensor, wherein each of the measurement data comprises at least velocity measurement information, and before the obtaining a motion state of the first sensor based on the measurement data corresponding to a target reference object among the plurality of measurement data, the processor is further configured to:
    determine, based on measurement data of a second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor.
  16. The apparatus according to claim 15, wherein the determining, based on data of the second sensor, the measurement data corresponding to the target reference object from the plurality of measurement data of the first sensor is specifically:
    mapping the measurement data of the first sensor into the space of the measurement data of the second sensor; or
    mapping the measurement data of the second sensor into the space of the measurement data of the first sensor; or
    mapping the measurement data of the first sensor and the measurement data of the second sensor into a common space; and
    determining, in that space, the measurement data of the first sensor corresponding to the target reference object, based on the target reference object determined from the measurement data of the second sensor.
  17. The apparatus according to any one of claims 11 to 16, wherein the obtaining a motion state of the first sensor based on the measurement data corresponding to the target reference object among the plurality of measurement data is specifically:
    obtaining the motion state of the first sensor from the measurement data corresponding to the target reference object among the plurality of measurement data by means of least-squares (LS) estimation and/or sequential block filtering.
  18. The apparatus according to claim 17, wherein the obtaining the motion state of the first sensor from the measurement data corresponding to the target reference object among the plurality of measurement data by means of LS estimation and/or sequential block filtering is specifically:
    performing sequential filtering on M radial velocity vectors corresponding to the target reference object and their corresponding measurement matrices, to obtain the motion estimate of the first sensor, wherein M ≥ 2; each radial velocity vector is composed of K radial velocity measurements from the measurement data corresponding to the target reference object, and the corresponding measurement matrix is composed of K direction cosine vectors, wherein K ≥ 1.
  19. The apparatus according to claim 18, wherein:
    the velocity vector of the first sensor is a two-dimensional vector, K = 2, and the measurement matrix corresponding to the radial velocity vector is:

    $$\mathbf{H}_{m,2}=\begin{bmatrix}\cos\theta_{m,1}&\sin\theta_{m,1}\\\cos\theta_{m,2}&\sin\theta_{m,2}\end{bmatrix}$$

    where θ_{m,i} is the i-th azimuth measurement in the m-th group of measurement data of the target reference object, i = 1, 2;
    or, the velocity vector of the first sensor is a three-dimensional vector, K = 3, and the measurement matrix corresponding to the radial velocity vector is:

    $$\mathbf{H}_{m,3}=\begin{bmatrix}\cos\theta_{m,1}\cos\varphi_{m,1}&\sin\theta_{m,1}\cos\varphi_{m,1}&\sin\varphi_{m,1}\\\cos\theta_{m,2}\cos\varphi_{m,2}&\sin\theta_{m,2}\cos\varphi_{m,2}&\sin\varphi_{m,2}\\\cos\theta_{m,3}\cos\varphi_{m,3}&\sin\theta_{m,3}\cos\varphi_{m,3}&\sin\varphi_{m,3}\end{bmatrix}$$

    where θ_{m,i} is the i-th azimuth measurement and φ_{m,i} is the i-th elevation measurement in the m-th group of measurement data of the target reference object, i = 1, 2, 3; m = 1, 2, …, M.
  20. The apparatus according to claim 19, wherein the formulas of the sequential filtering are as follows:

    $$\hat{\mathbf{v}}_{m,1|0}=\hat{\mathbf{v}}_{m-1,1|1}$$
    $$\hat{\mathbf{v}}_{m,1|1}=\hat{\mathbf{v}}_{m,1|0}+\mathbf{G}_m\left(\tilde{\mathbf{v}}_{r,m}-\mathbf{H}_{m,K}\hat{\mathbf{v}}_{m,1|0}\right)$$
    $$\mathbf{P}_{m,1|0}=\mathbf{P}_{m-1,1|1}$$
    $$\mathbf{P}_{m,1|1}=(\mathbf{I}-\mathbf{G}_m\mathbf{H}_{m,K})\mathbf{P}_{m,1|0}$$

    where $\hat{\mathbf{v}}_{m,1|1}$ is the velocity vector estimate of the m-th filtering step, $\mathbf{G}_m$ is the gain matrix, $\tilde{\mathbf{v}}_{r,m}$ is the m-th radial velocity vector measurement, and $\mathbf{R}_{m,K}$ is the measurement error covariance matrix of the m-th radial velocity vector, m = 1, 2, …, M.
  21. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store a computer program which, when run on a computer, implements the method according to any one of claims 1 to 10.
PCT/CN2020/093486 2019-06-06 2020-05-29 Motion state estimation method and apparatus WO2020244467A1

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20818619.7A EP3964863A4 (en) 2019-06-06 2020-05-29 METHOD AND DEVICE FOR MOTION STATE ESTIMATION
US17/542,699 US20220089166A1 (en) 2019-06-06 2021-12-06 Motion state estimation method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910503710.9A CN112050830B 2019-06-06 Motion state estimation method and apparatus
CN201910503710.9 2019-06-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/542,699 Continuation US20220089166A1 (en) 2019-06-06 2021-12-06 Motion state estimation method and apparatus

Publications (1)

Publication Number Publication Date
WO2020244467A1

Family

ID=73608994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093486 WO2020244467A1 Motion state estimation method and apparatus

Country Status (4)

Country Link
US (1) US20220089166A1
EP (1) EP3964863A4
CN (1) CN112050830B
WO (1) WO2020244467A1

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11872994B2 (en) * 2021-10-30 2024-01-16 Zoox, Inc. Estimating vehicle velocity

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6300896B1 (en) * 1998-12-17 2001-10-09 Daimlerchrysler Ag Use of a device in a vehicle, using which the environment of the vehicle can be identified by means of radar beams
CN101320089A (zh) * 2007-06-05 2008-12-10 通用汽车环球科技运作公司 用于车辆动力估计的雷达、激光雷达和摄像机增强的方法
US20100321235A1 (en) * 2009-06-23 2010-12-23 Symeo Gmbh Imaging Method Utilizing a Synthetic Aperture, Method for Determining a Relative Velocity Between a Wave-Based Sensor and an Object, or Apparatus for Carrying Out the Methods
CN104040369A (zh) * 2012-01-05 2014-09-10 罗伯特·博世有限公司 用于在车辆中不依赖于车轮的速度测量的方法和设备
CN107132542A (zh) * 2017-05-02 2017-09-05 北京理工大学 一种基于光学和多普勒雷达的小天体软着陆自主导航方法
CN108663676A (zh) * 2018-07-25 2018-10-16 中联天通科技(北京)有限公司 一种新型组合导航中毫米波测速雷达***
CN110554376A (zh) * 2018-05-30 2019-12-10 福特全球技术公司 用于运载工具的雷达测程法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2764075B1 (fr) * 1997-05-30 1999-08-27 Thomson Csf Procede de recalage de navigation d'un mobile au moyen d'une cartographie radar de zones de terrain a relief accentue
ITRM20070399A1 (it) * 2007-07-19 2009-01-20 Consiglio Nazionale Ricerche Metodo di elaborazione di dati rilevati mediante radar ad apertura sintetica (synthetic aperture radar - sar) e relativo sistema di telerilevamento.
JP6629242B2 (ja) * 2014-05-28 2020-01-15 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. マルチチャネルppg信号を使用するモーションアーチファクト低減
JP6425130B2 (ja) * 2014-12-18 2018-11-21 パナソニックIpマネジメント株式会社 レーダ装置及びレーダ状態推定方法
KR102460043B1 (ko) * 2016-06-17 2022-11-03 로베르트 보쉬 게엠베하 차량의 적응형 순항 제어를 위한 추월 가속 지원
EP3349033A1 (en) * 2017-01-13 2018-07-18 Autoliv Development AB Enhanced object detection and motion estimation for a vehicle environment detection system
CN108872975B (zh) * 2017-05-15 2022-08-16 蔚来(安徽)控股有限公司 用于目标跟踪的车载毫米波雷达滤波估计方法、装置及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3964863A4

Also Published As

Publication number Publication date
CN112050830B 2023-06-02
EP3964863A1 2022-03-09
CN112050830A 2020-12-08
EP3964863A4 2022-06-15
US20220089166A1 2022-03-24

Similar Documents

Publication Publication Date Title
JP7398506B2 Method and system for generating and using localization reference data
CN111326023B UAV route early-warning method, apparatus, device, and storage medium
EP3137850B1 Method and system for determining a position relative to a digital map
US20190101649A1 Systems, devices, and methods for autonomous vehicle localization
US11525682B2 Host vehicle position estimation device
CN111391823A Multi-layer map construction method for automatic parking scenarios
Quist et al. Radar odometry on fixed-wing small unmanned aircraft
CN110926474A Satellite/vision/laser integrated UAV positioning and navigation method for urban canyon environments
CN112558023A Sensor calibration method and apparatus
CN110889808A Positioning method, apparatus, device, and storage medium
CN108844538B Vision/inertial-navigation-based UAV obstacle-avoidance waypoint generation method
JP2023525927A Vehicle position estimation system and method
Ivancsits et al. Visual navigation system for small unmanned aerial vehicles
WO2020244467A1 Motion state estimation method and apparatus
CN110865394A Target classification system based on lidar data and data processing method thereof
Quist et al. Radar odometry on small unmanned aircraft
Quist UAV navigation and radar odometry
CN115965847 Three-dimensional object detection method and system with multi-modal feature fusion across viewpoints
Iannucci et al. Cross-Modal Localization: Using automotive radar for absolute geolocation within a map produced with visible-light imagery
US11288520B2 Systems and methods to aggregate and distribute dynamic information of crowdsourcing vehicles for edge-assisted live map service
CN112050829B Motion state determination method and apparatus
CN113917875 Open general-purpose intelligent controller, method, and storage medium for autonomous unmanned systems
US20170023941A1 Methods and apparatus for positioning aircraft based on images of mobile targets
CN113470342B Ego-motion estimation method and apparatus
Jiménez et al. LiDAR-based SLAM algorithm for indoor scenarios

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20818619; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2020818619; Country of ref document: EP; Effective date: 20211202)