CN115039095A - Target tracking method and target tracking device


Info

Publication number: CN115039095A
Application number: CN202080095340.0A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 周鹏, 冯源, 张欢, 李选富, 吴祖光, 李维
Current assignee: Huawei Technologies Co Ltd
Original assignee: Huawei Technologies Co Ltd
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/17: Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method


Abstract

A target tracking method and a target tracking apparatus in the field of artificial intelligence are disclosed. The method comprises: acquiring an actual observed quantity of a moving object collected by a sensor; when the acquisition time of the actual observed quantity is earlier than the update time of the current state quantity of the moving object, obtaining an estimated observed quantity according to a state transition model corresponding to the moving object; and updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the update time. Target tracking can thus be achieved even when the acquisition times of the observed quantities are out of order, and the timestamp of the output state quantity is prevented from jumping out of order with the timestamps of the observed quantities.

Description

Target tracking method and target tracking device

Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly, to a target tracking method and a target tracking apparatus.
Background
The target tracking method can be applied to many scenarios, such as target detection, lane line detection, and positioning. A common target tracking method performs Bayesian estimation using the measurement data of a sensor in combination with a state transition model of the tracked object. Target tracking may be implemented, for example, using a Bayesian filter: the measurement data of the sensor are input into the Bayesian filter, which outputs an estimate of the state of the tracked object. However, when there are multiple sensors, the measurement data of one sensor may arrive at the Bayesian filter later than it should because of the transmission path or the like. FIG. 1 is a schematic diagram of the temporal misordering of measurement data input into a Bayesian filter. The abscissa in FIG. 1 shows real time; the data transmitted by a sensor comprises a timestamp (header: Xms) and measurement data (data), where the timestamp indicates that the measurement data was measured or acquired at time X ms. The measurement data measured by sensor 0 at 1 ms is input into the Bayesian filter at 3 ms, while the measurement data measured by sensor 1 at 0 ms is input into the Bayesian filter at 4 ms, so the measurement data arrive at the Bayesian filter out of time order. In this case, the Bayesian filter first outputs a state quantity with a timestamp of 1 ms calculated from the measurement data measured at 1 ms, and then outputs a state quantity with a timestamp of 0 ms calculated from the measurement data measured at 0 ms; the timestamp of the output state quantity therefore jumps back and forth as the timestamps of the measurement data arrive out of order.
Two approaches are generally taken to solve the above problem. One is to check whether the timestamp of the measurement data is later than the timestamp of the most recently updated state quantity in the Bayesian filter, and to input into the Bayesian filter only measurement data whose acquisition time is later than that update time. However, this approach simply discards observations, which lowers the confidence of the output result. The other is to extrapolate the late observation to the current time. However, extrapolation in the observation space relies on strong assumptions and tends to enlarge the error.
Disclosure of Invention
The present application provides a target tracking method and a target tracking apparatus, which can achieve target tracking even when the acquisition times of the observed quantities are out of order, and which prevent the timestamp of the state quantity from jumping out of order with the timestamps of the observed quantities.
In a first aspect, a target tracking method is provided, including: acquiring the actual observed quantity of a moving object acquired by a sensor; under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object, the estimated observed quantity is obtained according to a state transition model corresponding to the moving object; and updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the updating time.
The observed quantity may include the speed of the moving object, the acceleration of the moving object, the position of the moving object, or the like. For example, the observed quantity may include the position and/or speed of the moving object measured by a millimeter wave radar.
The state quantity may include the speed of the moving object, the acceleration of the moving object, the position of the moving object, or the like.
The update time of the current state quantity of the moving object can be understood as the time when the state quantity of the moving object is updated last time.
Optionally, the state transition model of the moving object comprises a kinematic model of the moving object. For example, the kinematic model may include a uniform linear motion model or the like.
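For illustration, a uniform linear motion model corresponds to a simple linear state transition matrix. The following Python/NumPy sketch is a minimal example and not code from the application; the state layout [px, py, vx, vy] and all names are assumptions.

```python
import numpy as np

def constant_velocity_F(dt: float) -> np.ndarray:
    """State transition matrix of a uniform linear motion model.

    Assumed state layout: [px, py, vx, vy]. Position advances by
    velocity * dt; velocity is constant.
    """
    return np.array([
        [1.0, 0.0, dt,  0.0],
        [0.0, 1.0, 0.0, dt ],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Example: propagate a state 0.1 s forward.
x = np.array([0.0, 0.0, 2.0, 1.0])     # at the origin, moving at (2, 1) m/s
x_next = constant_velocity_F(0.1) @ x  # -> [0.2, 0.1, 2.0, 1.0]
```

In such a model a negative dt yields the backward transition used later when a state is transferred from the update time back to an earlier acquisition time.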
According to the scheme of the embodiment of the present application, the state transition is performed according to the state transition model corresponding to the moving object so as to update the current state quantity, and an accurate latest state quantity corresponding to the update time can thus be obtained. Meanwhile, the updated state quantity corresponds to the latest update time rather than to the acquisition time, so the output state quantities do not become disordered or jump in time.
With reference to the first aspect, in some implementations of the first aspect, obtaining the estimated observation according to the state transition model corresponding to the moving object includes: and obtaining a likelihood function of the observed quantity corresponding to the acquisition time according to a state transition model corresponding to the moving object, wherein the estimated observed quantity is determined according to the likelihood function of the observed quantity corresponding to the acquisition time, the state transition model is determined according to a first probability density function, and the first probability density function is a probability density function for transferring the current state quantity to the state quantity corresponding to the acquisition time.
With reference to the first aspect, in some implementations of the first aspect, updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the update time includes: updating the probability density function of the current state quantity according to the likelihood function of the observed quantity corresponding to the acquisition time; and determining the state quantity corresponding to the updating time according to the probability density function of the updated state quantity.
With reference to the first aspect, in certain implementations of the first aspect, the likelihood function of the observed quantity corresponding to the acquisition time satisfies:

$$g(z_{k-1} \mid X_k') = \int g(z_{k-1} \mid X)\, f_{k-1|k}(X \mid X_k')\, dX$$

where $g(z_{k-1} \mid X_k')$ denotes the likelihood function of the observed quantity $z_{k-1}$ corresponding to time $t_{k-1}$; $X$ denotes the state quantity updated at time $t_{k-1}$; $X_k'$ denotes the state quantity updated at time $t_k$ in the absence of $z_{k-1}$; $g(z_{k-1} \mid X)$ denotes the likelihood function of $z_{k-1}$ in the case where $z_{k-1}$ is used to update the state quantity at time $t_{k-1}$; and $f_{k-1|k}(X \mid X_k')$ is the first probability density function, namely the probability density function of transferring the state quantity $X_k'$ updated at time $t_k$ to the state quantity updated at time $t_{k-1}$.
With reference to the first aspect, in certain implementations of the first aspect, the updated probability density function of the state quantity satisfies:

$$f_{k|k}(X_k \mid z^k) = \frac{g(z_{k-1} \mid X_k)\, f'_{k|k}(X_k \mid z_k, z^{k-2})}{\int g(z_{k-1} \mid X)\, f'_{k|k}(X \mid z_k, z^{k-2})\, dX}$$

where $f_{k|k}(X_k \mid z^k)$ denotes the probability density function of the state quantity $X_k$ updated at time $t_k$; $f'_{k|k}(X_k' \mid z_k, z^{k-2})$ denotes the probability density function of the state quantity $X_k'$ updated at time $t_k$ in the absence of $z_{k-1}$; $z_k$ denotes the observed quantity collected at time $t_k$; $z^k$ denotes the set of observations collected up to time $t_k$, $\{z_1, z_2, \ldots, z_{k-2}, z_{k-1}, z_k\}$; and $z^{k-2}$ denotes the set of observations collected up to time $t_{k-2}$, $\{z_1, z_2, \ldots, z_{k-2}\}$.
With reference to the first aspect, in some implementations of the first aspect, updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the update time includes:
determining the expectation of the state quantity corresponding to the updating moment according to the actual observed quantity and the estimated observed quantity;
taking the expectation of the state quantity corresponding to the updating time as the state quantity corresponding to the updating time;
the expectation of the state quantity corresponding to the updating moment is related to a Kalman gain value, the Kalman gain value is related to a first covariance, an observation matrix of the acquisition moment, a covariance of the observation matrix and a variance of the observation quantity of the acquisition moment, and the first covariance refers to the covariance of the state quantity transferred from the updating moment to the acquisition moment.
With reference to the first aspect, in certain implementations of the first aspect, the Kalman gain value satisfies:

$$K = P_{k-1|k}\, H_{k-1}^{\mathrm{T}}\, \mathrm{Var}(z_{k-1})^{-1}$$

$\mathrm{Var}(z_{k-1})$ satisfies:

$$\mathrm{Var}(z_{k-1}) = H_{k-1}\, P_{k-1|k}\, H_{k-1}^{\mathrm{T}} + R_{k-1}$$

$P_{k-1|k}$ satisfies:

$$P_{k-1|k} = F_{k-1|k}\, P'_{k|k}\, F_{k-1|k}^{\mathrm{T}} + Q_k$$

where $P_{k-1|k}$ denotes the covariance of the state quantity transferred from the update time $t_k$ to the acquisition time $t_{k-1}$; $P'_{k|k}$ denotes the covariance of the state quantity updated at time $t_k$ in the absence of the observed quantity $z_{k-1}$ acquired at time $t_{k-1}$; $F_{k-1|k}$ denotes the state transition matrix from time $t_k$ to time $t_{k-1}$; $Q_k$ denotes the covariance of the prediction matrix; $H_{k-1}$ denotes the observation matrix at time $t_{k-1}$; $\mathrm{Var}(z_{k-1})$ denotes the variance of the observed quantity $z_{k-1}$ acquired at time $t_{k-1}$; and $R_{k-1}$ denotes the covariance of the observation matrix.
With reference to the first aspect, in certain implementations of the first aspect, the expectation of the state quantity corresponding to the update time satisfies:

$$x_{k|k} = x'_{k|k} + K\,\left(z_{k-1} - \hat{z}_{k-1}\right)$$

where $x_{k|k}$ denotes the expectation of the state quantity updated at time $t_k$, $K$ denotes the Kalman gain value, and $\hat{z}_{k-1}$ denotes the estimated observed quantity at time $t_{k-1}$.
With reference to the first aspect, in certain implementations of the first aspect, the estimated value of the observed quantity corresponding to the acquisition time satisfies:

$$\hat{z}_{k-1} = H_{k-1}\, F_{k-1|k}\, x'_{k|k}$$

where $\hat{z}_{k-1}$ denotes the estimated observed quantity at time $t_{k-1}$; $H_{k-1}$ denotes the observation matrix at time $t_{k-1}$; $F_{k-1|k}$ denotes the state transition matrix from time $t_k$ to time $t_{k-1}$; and $x'_{k|k}$ denotes the expectation of the state quantity updated at time $t_k$ in the absence of the observed quantity $z_{k-1}$ acquired at time $t_{k-1}$.
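Taken together, the formulas above admit a compact implementation. The following Python/NumPy sketch is illustrative only, assumes a linear-Gaussian model, and covers only the expectation update quoted above; all function and variable names are assumptions, not the application's.

```python
import numpy as np

def oosm_update(x_kk, P_kk, z_late, F_back, H, Q, R):
    """Fold a late observation z_{k-1} into the state kept at update time t_k.

    x_kk, P_kk : expectation x'_{k|k} and covariance P'_{k|k} at t_k,
                 computed without the late observation.
    z_late     : the late observation z_{k-1} acquired at t_{k-1}.
    F_back     : state transition matrix F_{k-1|k} from t_k back to t_{k-1}.
    H          : observation matrix H_{k-1} at the acquisition time.
    Q, R       : process and observation noise covariances (Q_k, R_{k-1}).
    Returns the corrected expectation x_{k|k}, still stamped with t_k.
    """
    # P_{k-1|k}: covariance of the state transferred from t_k to t_{k-1}.
    P_retro = F_back @ P_kk @ F_back.T + Q
    # Estimated observation at t_{k-1}.
    z_hat = H @ (F_back @ x_kk)
    # Var(z_{k-1}) and the Kalman gain K.
    S = H @ P_retro @ H.T + R
    K = P_retro @ H.T @ np.linalg.inv(S)
    # x_{k|k} = x'_{k|k} + K (z_{k-1} - z_hat)
    return x_kk + K @ (z_late - z_hat)
```

Note that the result remains stamped with the update time $t_k$ rather than the acquisition time $t_{k-1}$, which is precisely how the scheme avoids out-of-order timestamps.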
With reference to the first aspect, in some implementation manners of the first aspect, obtaining the estimated observed quantity according to a state transition model corresponding to the moving object when the acquisition time of the actual observed quantity is earlier than the update time of the current state quantity of the moving object includes: and under the condition that the acquisition time of the observed quantity is earlier than the updating time of the current state quantity of the moving object and the time difference between the acquisition time and the updating time is less than or equal to the threshold value, obtaining the estimated observed quantity according to the state transition model corresponding to the moving object.
According to the scheme of the embodiment of the present application, the state transition is performed, and the state quantity then updated, only when the time difference between the acquisition time and the update time is less than or equal to the threshold. This avoids updating the state quantity with an observed quantity whose time difference is too large, which would reduce the confidence of the updated state quantity.
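A minimal sketch of this gating rule, with an assumed threshold value and assumed names:

```python
def accept_late_observation(t_acquire: float, t_update: float,
                            max_delay: float = 0.5) -> bool:
    """Gate an observation by the age of its timestamp.

    An in-sequence observation is always usable; a late one is accepted
    only if it is at most max_delay seconds older than the last update.
    """
    if t_acquire >= t_update:
        return True
    return (t_update - t_acquire) <= max_delay
```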
In a second aspect, a target tracking apparatus is provided, which includes an obtaining module and a processing module, where the obtaining module is configured to: acquiring the actual observed quantity of a moving object acquired by a sensor; the processing module is used for: under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object, the estimated observed quantity is obtained according to a state transition model corresponding to the moving object; and updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the updating time.
According to the scheme of the embodiment of the present application, the state transition is performed according to the state transition model corresponding to the moving object so as to update the current state quantity, and an accurate latest state quantity corresponding to the update time can thus be obtained. Meanwhile, the updated state quantity corresponds to the latest update time rather than to the acquisition time, so the output state quantities do not become disordered or jump in time.
With reference to the second aspect, in some implementations of the second aspect, the processing module is configured to: and obtaining a likelihood function of the observed quantity corresponding to the acquisition time according to a state transition model corresponding to the moving object, wherein the estimated observed quantity is determined according to the likelihood function of the observed quantity corresponding to the acquisition time, the state transition model is determined according to a first probability density function, and the first probability density function is a probability density function for transferring the current state quantity to the state quantity corresponding to the acquisition time.
With reference to the second aspect, in some implementations of the second aspect, the processing module is configured to: updating the probability density function of the current state quantity according to the likelihood function of the observed quantity corresponding to the acquisition time; and determining the state quantity corresponding to the updating time according to the probability density function of the updated state quantity.
With reference to the second aspect, in some implementations of the second aspect, the likelihood function of the observed quantity corresponding to the acquisition time satisfies:

$$g(z_{k-1} \mid X_k') = \int g(z_{k-1} \mid X)\, f_{k-1|k}(X \mid X_k')\, dX$$

where $g(z_{k-1} \mid X_k')$ denotes the likelihood function of the observed quantity $z_{k-1}$ corresponding to time $t_{k-1}$; $X$ denotes the state quantity updated at time $t_{k-1}$; $X_k'$ denotes the state quantity updated at time $t_k$ in the absence of $z_{k-1}$; $g(z_{k-1} \mid X)$ denotes the likelihood function of $z_{k-1}$ in the case where $z_{k-1}$ is used to update the state quantity at time $t_{k-1}$; and $f_{k-1|k}(X \mid X_k')$ is the first probability density function, namely the probability density function of transferring the state quantity $X_k'$ updated at time $t_k$ to the state quantity updated at time $t_{k-1}$.
With reference to the second aspect, in some implementations of the second aspect, the updated probability density function of the state quantity satisfies:

$$f_{k|k}(X_k \mid z^k) = \frac{g(z_{k-1} \mid X_k)\, f'_{k|k}(X_k \mid z_k, z^{k-2})}{\int g(z_{k-1} \mid X)\, f'_{k|k}(X \mid z_k, z^{k-2})\, dX}$$

where $f_{k|k}(X_k \mid z^k)$ denotes the probability density function of the state quantity $X_k$ updated at time $t_k$; $f'_{k|k}(X_k' \mid z_k, z^{k-2})$ denotes the probability density function of the state quantity $X_k'$ updated at time $t_k$ in the absence of $z_{k-1}$; $z_k$ denotes the observed quantity collected at time $t_k$; $z^k$ denotes the set of observations collected up to time $t_k$, $\{z_1, z_2, \ldots, z_{k-2}, z_{k-1}, z_k\}$; and $z^{k-2}$ denotes the set of observations collected up to time $t_{k-2}$, $\{z_1, z_2, \ldots, z_{k-2}\}$.
With reference to the second aspect, in some implementations of the second aspect, the processing module is configured to: determining the expectation of the state quantity corresponding to the updating moment according to the actual observed quantity and the estimated observed quantity; taking the expectation of the state quantity corresponding to the updating time as the state quantity corresponding to the updating time; the expectation of the state quantity corresponding to the updating moment is related to a Kalman gain value, the Kalman gain value is related to a first covariance, an observation matrix of the acquisition moment, a covariance of the observation matrix and a variance of the observation quantity of the acquisition moment, and the first covariance refers to the covariance of the state quantity transferred from the updating moment to the acquisition moment.
With reference to the second aspect, in certain implementations of the second aspect, the Kalman gain value satisfies:

$$K = P_{k-1|k}\, H_{k-1}^{\mathrm{T}}\, \mathrm{Var}(z_{k-1})^{-1}$$

$\mathrm{Var}(z_{k-1})$ satisfies:

$$\mathrm{Var}(z_{k-1}) = H_{k-1}\, P_{k-1|k}\, H_{k-1}^{\mathrm{T}} + R_{k-1}$$

$P_{k-1|k}$ satisfies:

$$P_{k-1|k} = F_{k-1|k}\, P'_{k|k}\, F_{k-1|k}^{\mathrm{T}} + Q_k$$

where $P_{k-1|k}$ denotes the covariance of the state quantity transferred from the update time $t_k$ to the acquisition time $t_{k-1}$; $P'_{k|k}$ denotes the covariance of the state quantity updated at time $t_k$ in the absence of the observed quantity $z_{k-1}$ acquired at time $t_{k-1}$; $F_{k-1|k}$ denotes the state transition matrix from time $t_k$ to time $t_{k-1}$; $Q_k$ denotes the covariance of the prediction matrix; $H_{k-1}$ denotes the observation matrix at time $t_{k-1}$; $\mathrm{Var}(z_{k-1})$ denotes the variance of the observed quantity $z_{k-1}$ acquired at time $t_{k-1}$; and $R_{k-1}$ denotes the covariance of the observation matrix.
With reference to the second aspect, in some implementations of the second aspect, the expectation of the state quantity corresponding to the update time satisfies:

$$x_{k|k} = x'_{k|k} + K\,\left(z_{k-1} - \hat{z}_{k-1}\right)$$

where $x_{k|k}$ denotes the expectation of the state quantity updated at time $t_k$, $K$ denotes the Kalman gain value, and $\hat{z}_{k-1}$ denotes the estimated observed quantity at time $t_{k-1}$.
With reference to the second aspect, in some implementations of the second aspect, the estimated value of the observed quantity corresponding to the acquisition time satisfies:

$$\hat{z}_{k-1} = H_{k-1}\, F_{k-1|k}\, x'_{k|k}$$

where $\hat{z}_{k-1}$ denotes the estimated observed quantity at time $t_{k-1}$; $H_{k-1}$ denotes the observation matrix at time $t_{k-1}$; $F_{k-1|k}$ denotes the state transition matrix from time $t_k$ to time $t_{k-1}$; and $x'_{k|k}$ denotes the expectation of the state quantity updated at time $t_k$ in the absence of the observed quantity $z_{k-1}$ acquired at time $t_{k-1}$.
With reference to the second aspect, in some implementations of the second aspect, the processing module is configured to: and under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object and the time difference between the acquisition time and the updating time is less than or equal to the threshold value, obtaining the estimated observed quantity according to the state transition model corresponding to the moving object.
It will be appreciated that the extensions, definitions, explanations and clarifications of the relevant content in the above-described first aspect also apply to the same content in the second aspect.
In a third aspect, an apparatus for tracking a target is provided, the apparatus comprising: a memory for storing a program; a processor for executing the memory-stored program, the processor being adapted to perform the method of the first aspect when the memory-stored program is executed.
In a fourth aspect, there is provided a computer program product comprising: computer program code for causing a computer to perform the method of the first aspect described above when the computer program product is run on a computer.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the method of the first aspect described above.
It is to be understood that, in the present application, the method of the first aspect may specifically refer to the method of the first aspect or the method of any one of the various implementations of the first aspect.
Drawings
FIG. 1 is a schematic diagram of the temporal misordering of measurement data input into a Bayesian filter;
FIG. 2 is a schematic structural diagram of a vehicle according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a computer system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an application of a cloud-side command autonomous driving vehicle according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a target tracking apparatus according to an embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of a target tracking method provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another object tracking device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another object tracking device provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a functional block diagram of the vehicle 100 according to the embodiment of the present application.
Where the vehicle 100 may be a manually driven vehicle, or the vehicle 100 may be configured to be in a fully or partially autonomous driving mode.
In one example, the vehicle 100, while in the autonomous driving mode, may control itself: it may determine the current state of the vehicle and its surroundings, determine a possible behavior of at least one other vehicle in the surroundings, determine a confidence level corresponding to the likelihood that the other vehicle performs the possible behavior, and control the vehicle 100 based on the determined information. While in the autonomous driving mode, the vehicle 100 may be placed into operation without human interaction.
Various subsystems may be included in the vehicle 100, such as a travel system 110, a sensing system 120, a control system 130, one or more peripherals 140, as well as a power supply 160, a computer system 150, and a user interface 170.
Alternatively, vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, each of the sub-systems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
For example, the travel system 110 may include components for providing powered motion to the vehicle 100. In one embodiment, the travel system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels 114/tires. Wherein the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or other type of engine combination; for example, a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 111 may convert the energy source 113 into mechanical energy.
Illustratively, the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 113 may also provide energy to other systems of the vehicle 100.
For example, the transmission 112 may include a gearbox, a differential, and a drive shaft; wherein the transmission 112 may transmit mechanical power from the engine 111 to the wheels 114.
In one embodiment, the transmission 112 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more shafts that may be coupled to one or more wheels 114.
For example, the sensing system 120 may include several sensors that sense information about the environment surrounding the vehicle 100.
For example, the sensing system 120 may include a positioning system 121 (e.g., a GPS system, a beidou system, or other positioning system), an inertial measurement unit 122 (IMU), a radar 123, a laser range finder 124, and a camera 125. The sensing system 120 may also include sensors of internal systems of the monitored vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect the object and its corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function of the safe operation of the autonomous vehicle 100.
The positioning system 121 may be used, among other things, to estimate the geographic location of the vehicle 100. The IMU122 may be used to sense position and orientation changes of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU122 may be a combination of an accelerometer and a gyroscope.
For example, the radar 123 may utilize radio signals to sense objects within the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing objects, radar 123 may also be used to sense the speed and/or heading of an object.
For example, the laser rangefinder 124 may utilize a laser to sense objects in the environment in which the vehicle 100 is located. In some embodiments, laser rangefinder 124 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
Illustratively, the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100. For example, the camera 125 may be a still camera or a video camera.
As shown in fig. 2, the control system 130 is for controlling the operation of the vehicle 100 and its components. Control system 130 may include various elements, such as may include a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
For example, the steering system 131 may be operable to adjust the heading of the vehicle 100; in one embodiment, it may be a steering wheel system. The throttle 132 may be used to control the operating speed of the engine 111 and thus the speed of the vehicle 100.
For example, the brake unit 133 may be used to control the vehicle 100 to decelerate; the brake unit 133 may use friction to slow the wheel 114. In other embodiments, the brake unit 133 may convert the kinetic energy of the wheel 114 into an electrical current. The braking unit 133 may take other forms to slow the rotational speed of the wheels 114 to control the speed of the vehicle 100.
As shown in FIG. 2, the computer vision system 134 may be operable to process and analyze images captured by the camera 125 to identify objects and/or features in the environment surrounding the vehicle 100. Such objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 134 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 134 may be used to map an environment, track objects, estimate the speed of objects, and so forth.
For example, route control system 135 may be used to determine a travel route for vehicle 100. In some embodiments, route control system 135 may combine data from sensors, GPS, and one or more predetermined maps to determine a travel route for vehicle 100.
As shown in fig. 2, obstacle avoidance system 136 may be used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of vehicle 100.
In one example, the control system 130 may additionally or alternatively include components other than those shown and described. Or may reduce some of the components shown above.
As shown in fig. 2, the vehicle 100 may interact with external sensors, other vehicles, other computer systems, or users through peripherals 140; the peripheral devices 140 may include, among other things, a wireless communication system 141, an in-vehicle computer 142, a microphone 143, and/or a speaker 144.
In some embodiments, the peripheral device 140 may provide a means for the vehicle 100 to interact with the user interface 170. For example, the in-vehicle computer 142 may provide information to a user of the vehicle 100. The user interface 170 may also operate the in-vehicle computer 142 to receive user input; the in-vehicle computer 142 may be operated through a touch screen. In other cases, the peripheral device 140 may provide a means for the vehicle 100 to communicate with other devices located within the vehicle. For example, the microphone 143 may receive audio (e.g., voice commands or other audio input) from a user of the vehicle 100. Similarly, the speaker 144 may output audio to a user of the vehicle 100.
As depicted in fig. 2, the wireless communication system 141 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 141 may use 3G cellular communication, such as Code Division Multiple Access (CDMA), EVDO, or global system for mobile communications (GSM)/General Packet Radio Service (GPRS); 4G cellular communication, such as Long Term Evolution (LTE); or 5G cellular communication. The wireless communication system 141 may also communicate with a Wireless Local Area Network (WLAN) using wireless fidelity (WiFi).
In some embodiments, the wireless communication system 141 may communicate directly with devices using an infrared link, bluetooth, or ZigBee protocols (ZigBee); other wireless protocols, such as various vehicular communication systems, for example, the wireless communication system 141 may include one or more Dedicated Short Range Communications (DSRC) devices that may include public and/or private data communications between vehicular and/or roadside stations.
As shown in fig. 2, a power supply 160 may provide power to various components of the vehicle 100. In one embodiment, power source 160 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100. In some embodiments, the power source 160 and the energy source 113 may be implemented together, such as in some all-electric vehicles.
Illustratively, some or all of the functionality of the vehicle 100 may be controlled by a computer system 150, wherein the computer system 150 may include at least one processor 151, the processor 151 executing instructions 153 stored in a non-transitory computer readable medium, for example, in a memory 152. The computer system 150 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
For example, the processor 151 may be any conventional processor, such as a commercially available CPU.
Alternatively, the processor may be a dedicated device such as an ASIC or other hardware-based processor. Although fig. 2 functionally illustrates a processor, memory, and other elements of a computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering and deceleration components, may each have their own processor that performs only computations related to the component-specific functions.
In various aspects described herein, the processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to execute a single maneuver.
In some embodiments, the memory 152 may contain instructions 153 (e.g., program logic), which instructions 153 may be executed by the processor 151 to perform various functions of the vehicle 100, including those described above. The memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 110, the sensing system 120, the control system 130, and the peripheral devices 140, for example.
Illustratively, in addition to instructions 153, memory 152 may also store data such as road maps, route information, location, direction, speed of the vehicle, and other such vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
As shown in fig. 2, user interface 170 may be used to provide information to and receive information from a user of vehicle 100. Optionally, the user interface 170 may include one or more input/output devices within the collection of peripheral devices 140, such as a wireless communication system 141, an in-vehicle computer 142, a microphone 143, and a speaker 144.
In embodiments of the present application, the computer system 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (e.g., the travel system 110, the sensing system 120, and the control system 130) and from the user interface 170. For example, the computer system 150 may utilize inputs from the control system 130 in order to control the brake unit 133 to avoid obstacles detected by the sensing system 120 and the obstacle avoidance system 136. In some embodiments, the computer system 150 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the vehicle 100. For example, the memory 152 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 2 should not be construed as limiting the embodiment of the present application.
Alternatively, the vehicle 100 may be an autonomous automobile traveling on a road, and objects within its surrounding environment may be identified to determine an adjustment to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and based on the respective characteristics of the object, such as its current speed, acceleration, separation from the vehicle, etc., may be used to determine the speed at which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 100 or a computing device associated with the vehicle 100 (e.g., the computer system 150, the computer vision system 134, the memory 152 of fig. 2) may predict behavior of the identified objects based on characteristics of the identified objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
Optionally, each identified object is dependent on the behavior of each other, and therefore, it is also possible to consider all identified objects together to predict the behavior of a single identified object. The vehicle 100 is able to adjust its speed based on the predicted behaviour of said identified object. In other words, the autonomous vehicle is able to determine that the vehicle will need to adjust (e.g., accelerate, decelerate, or stop) to a steady state based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 in the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so forth.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may provide instructions to modify the steering angle of the vehicle 100 to cause the autonomous vehicle to follow a given trajectory and/or to maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., cars in adjacent lanes on the road).
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, an amusement car, a playground vehicle, construction equipment, a trolley, a golf cart, a train, a trolley, etc., and the embodiment of the present invention is not particularly limited.
In one possible implementation, the vehicle 100 shown in fig. 2 may be an autonomous vehicle, and the autonomous system is described in detail below.
Fig. 3 is a schematic diagram of an automatic driving system provided in an embodiment of the present application.
The autopilot system shown in fig. 3 includes a computer system 201, wherein the computer system 201 includes a processor 203 coupled to a system bus 205. The processor 203 may be one or more processors, each of which may include one or more processor cores. A display adapter 207 (video adapter) may drive a display 209, which is coupled to the system bus 205. The system bus 205 may be coupled to an input/output (I/O) bus 213 through a bus bridge 211, and an I/O interface 215 may be coupled to the I/O bus. The I/O interface 215 communicates with various I/O devices, such as an input device 217 (e.g., keyboard, mouse, touch screen, etc.) and a media tray 221 (e.g., CD-ROM, multimedia interface, etc.). A transceiver 223 may send and/or receive radio communication signals, and a camera 255 may capture digital video images of scenes and motion. Among the interfaces connected to the I/O interface 215 may be a USB port 225.
The processor 203 may be any conventional processor, such as a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, or a combination thereof.
Alternatively, the processor 203 may be a dedicated device such as an Application Specific Integrated Circuit (ASIC); the processor 203 may be a neural network processor or a combination of a neural network processor and a conventional processor as described above.
Optionally, in various embodiments described herein, the computer system 201 may be located remotely from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle. In other aspects, some processes described herein are executed on a processor disposed within an autonomous vehicle, others being executed by a remote processor, including taking the actions required to perform a single maneuver.
Computer system 201 may communicate with software deploying server 249 via network interface 229. The network interface 229 may be a hardware network interface, such as a network card. The network 227 may be an external network, such as the internet, or an internal network, such as an ethernet or a Virtual Private Network (VPN). Optionally, the network 227 may also be a wireless network, such as a wifi network, a cellular network, or the like.
As shown in FIG. 3, a hard drive interface 231 is coupled to the system bus 205 and may be coupled to a hard drive 233, and a system memory 235 is coupled to the system bus 205. The data running in the system memory 235 may include an operating system 237 and application programs 243. The operating system 237 may include a shell 239 and a kernel 241. The shell 239 is an interface between the user and the kernel of the operating system; it can be the outermost layer of the operating system and may manage the interaction between the user and the operating system, such as waiting for user input, interpreting the user input to the operating system, and processing the output results of the operating system. The kernel 241 may consist of those portions of the operating system used to manage memory, files, peripherals, and system resources; interacting directly with the hardware, the operating system kernel typically runs processes and provides inter-process communication, CPU time-slice management, interrupts, memory management, I/O management, and the like. The application programs 243 include programs related to controlling the automatic driving of a vehicle, such as programs that manage the interaction of an automatically driven vehicle with obstacles on the road, programs that control the route or speed of an automatically driven vehicle, and programs that control the interaction of an automatically driven vehicle with other automatically driven vehicles on the road. Application programs 243 also exist on the system of a software deploying server 249. In one embodiment, the computer system 201 may download an application program from the software deploying server 249 when the autopilot-related program 247 needs to be executed.
Illustratively, a sensor 253 can be associated with the computer system 201, and the sensor 253 can be used to detect the environment surrounding the computer 201.
For example, the sensor 253 can detect animals, cars, obstacles, crosswalks, etc., and further the sensor can detect the environment around the objects such as the animals, cars, obstacles, crosswalks, etc., such as: the environment surrounding the animal, e.g., other animals present around the animal, weather conditions, ambient light brightness, etc.
Alternatively, if the computer system 201 is located on an autonomous vehicle, the sensor may be a camera, infrared sensor, chemical detector, microphone, or the like.
Illustratively, the sensor 253 may be plural. The plurality of sensors may be configured to detect a position of an obstacle around the vehicle, the position of the obstacle being derived based on data acquired by the plurality of sensors. Specifically, obtaining the position of the obstacle based on the data acquired by the plurality of sensors may be implemented by the target tracking method according to the embodiment of the present application.
In one example, the computer system 150 shown in FIG. 2 may also receive information from, or transfer information to, other computer systems. Alternatively, sensor data collected from the sensing system 120 of the vehicle 100 may be transferred to another computer for processing of this data.
For example, as shown in fig. 4, data from computer system 312 may be transmitted via a network to cloud-side server 320 for further processing. The network and intermediate nodes may include various configurations and protocols, including the internet, world wide web, intranets, virtual private networks, wide area networks, local area networks, private networks using proprietary communication protocols of one or more companies, ethernet, WiFi, and HTTP, as well as various combinations of the foregoing; such communications may be by any device capable of communicating data to and from other computers, such as modems and wireless interfaces.
In one example, server 320 may comprise a server having multiple computers, such as a load balancing server farm, that exchange information with different nodes of a network for the purpose of receiving, processing, and transmitting data from computer system 312. The server may be configured similar to computer system 312, with processor 330, memory 340, instructions 350, and data 360.
Illustratively, the data 360 of the server 320 may include information regarding the road conditions surrounding the vehicle. For example, the server 320 may receive, detect, store, update, and transmit information related to vehicle road conditions.
The relevant content of the bayesian filter is described in detail below.
A state space refers to the set of state variables that describe all possible states of a system. An automobile can be regarded as a system, and the user's operation of the automobile can be regarded as an input variable: when an operation signal is input, it clearly affects variables such as the speed, acceleration, and angular velocity of the automobile, and the affected variables can be regarded as state-variable components of the system.
Observation refers to the process of obtaining state variable estimates, either directly or indirectly through some measurement means.
Bayesian estimation means that, starting from any time k-1, the prior probability distribution of the state at the next time k is computed, which is called prediction; then, after the observation at time k is obtained, the prior estimate from the prediction step is corrected to obtain the posterior estimate of the state at time k, which is called updating.
The Bayesian recursive estimation is applied to actual engineering and is called Bayesian filtering.
The Bayesian filter is divided into a prediction part and an updating part.
The prediction (predict) process satisfies formula (1):

$$f_{k+1|k}(X \mid z^k) = \int f_{k+1|k}(X \mid X')\, f_{k|k}(X' \mid z^k)\, dX' \qquad (1)$$

The update process satisfies formula (2) and formula (3):

$$f_{k+1|k+1}(X \mid z^{k+1}) = \frac{f_{k+1}(z_{k+1} \mid X)\, f_{k+1|k}(X \mid z^k)}{f_{k+1}(z_{k+1} \mid z^k)} \qquad (2)$$

$$f_{k+1}(z_{k+1} \mid z^k) = \int f_{k+1}(z_{k+1} \mid X)\, f_{k+1|k}(X \mid z^k)\, dX \qquad (3)$$

where X is the state quantity, i.e. the output quantity of the Bayesian filter, and the space where the state quantity lies is the state space; z is the observed quantity, i.e. the input quantity of the Bayesian filter, and the space where the observed quantity lies is the observation space; $z_{k+1}$ is the observation at time k+1, $z^k = \{z_0, z_1, \ldots, z_k\}$ is the set of observations up to time k; and $f(\cdot)$ is a probability density function.
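In the linear-Gaussian case, formulas (1) to (3) reduce to the familiar Kalman filter recursion. The following Python/NumPy sketch illustrates that special case under this assumption; it is not code from the application.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Formula (1) in the linear-Gaussian case: propagate the state PDF."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Formulas (2) and (3) in the linear-Gaussian case: fold in observation z."""
    S = H @ P @ H.T + R              # covariance of the predicted observation
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)      # posterior expectation
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```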
Fig. 5 is a schematic diagram of a target tracking device according to an embodiment of the present application. The target tracking device 402 may be implemented in a computer system 401. The target tracking device includes a prediction unit 410 and an update unit 420. The prediction unit 410 includes a determination module 411, and the update unit 420 includes an observation transfer module 422.
Further, the prediction unit 410 may also include a state transition module 412. Further, the updating unit 420 may further include an updating module 421.
In order to better understand the implementation process of the target tracking method according to the embodiment of the present application, the functions of the respective modules in fig. 5 are briefly described below.
The determining module 411 is configured to determine whether the acquisition time of the observed quantity is earlier than the update time of the current state quantity, that is, to determine whether the observed quantity is a late observed quantity. The current state quantity refers to the most recently updated state quantity. Specifically, when the acquisition time of the observed quantity is earlier than the update time of the latest state quantity, the observed quantity is a late observed quantity.
It should be understood that, in the embodiment of the present application, the observed quantity acquired by the sensor is the actual observed quantity.
The state transition module 412 is configured to, when the acquisition time of the observed quantity is not earlier than the update time of the current state quantity, obtain a predicted value for the present update from the last updated state quantity. The concrete process satisfies formula (1).
The updating module 421 is configured to update the state quantity according to the predicted value of the state quantity updated this time obtained by the state transition module 412. The concrete process satisfies formula (2) and formula (3).
The observation transfer module 422 is configured to, when the acquisition time of the observed quantity is earlier than the update time of the current state quantity, perform a state transfer on the observed quantity to obtain an estimated observed quantity. The observation transfer module 422 is further configured to update the current state quantity according to the actual observed quantity and the estimated observed quantity, and to use the updated state quantity as the state quantity corresponding to the update time.
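A rough structural sketch of the device in FIG. 5, in Python; the dispatch logic and all names are assumptions based on the module descriptions above:

```python
class TargetTracker:
    """Skeleton mirroring FIG. 5: a prediction unit (determination module 411
    and state transition module 412) and an update unit (update module 421
    and observation transfer module 422)."""

    def __init__(self):
        self.t_update = float("-inf")  # update time of the current state quantity

    def on_observation(self, z, t_acquire):
        if t_acquire >= self.t_update:       # determination module 411
            self.predict(t_acquire)          # state transition module 412, formula (1)
            self.update(z)                   # update module 421, formulas (2)-(3)
            self.t_update = t_acquire        # state is stamped with t_acquire
        else:
            self.transfer_update(z, t_acquire)  # observation transfer module 422
            # self.t_update is unchanged: the state stays stamped t_update

    def predict(self, t): ...
    def update(self, z): ...
    def transfer_update(self, z, t): ...
```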
The target tracking method 500 of the embodiment of the present application is described in detail below with reference to fig. 6. Fig. 6 is a schematic flow chart diagram of a target tracking method 500 according to an embodiment of the present application. The method shown in fig. 6 may be performed by the target tracking apparatus in the embodiment of the present application. The method 500 includes steps S510 to S560. Step S510 to step S560 will be described in detail below.
And S510, acquiring the actual observed quantity of the moving object acquired by the sensor. It should be understood that, in the embodiment of the present application, the observed quantity acquired by the sensor is an actual observed quantity, and the actual observed quantity may also be referred to as "observed quantity" in the embodiment of the present application. The observed quantity may include the speed of the moving object, the acceleration of the moving object, or the position of the moving object, etc. For example, the observed quantity may include the position and/or speed of the moving object measured by the millimeter wave radar, and the like.
S520, judging whether the acquisition time of the observed quantity is earlier than the updating time of the current state quantity of the moving object.
If the collection time of the observed quantity is not earlier than the update time of the current state quantity of the moving object, step S530 is executed. If the acquisition time of the observed quantity is earlier than the update time of the current state quantity of the moving object, step S540 is executed.
That the acquisition time of the observed quantity is earlier than the update time of the current state quantity of the moving object can be understood as follows: the observed quantity should originally have been used to update the state quantity of the moving object, but arrived too late to actually be used in that update.
The time of acquisition of the observation may be indicated by a timestamp of the observation.
The state quantity may include the speed of the moving object, the acceleration of the moving object, the position of the moving object, or the like. For example, when the method 500 is used to track the position of a target, the state quantity may be the result of the tracking, which may be the position of a moving object.
The update time of the current state quantity of the moving object can be understood as the time when the state quantity of the moving object is updated last time.
The state quantity of the moving object and the update time of the state quantity can be saved in a tracking list in the Bayesian filter.
Specifically, it is determined whether the acquisition time of the observed quantity is earlier than the update time of the current state quantity of the moving object, which may be determined whether the timestamp of the observed quantity is earlier than the latest update time in the tracking list.
And S530, updating the current state quantity of the moving object, and taking the updated state quantity as the state quantity corresponding to the acquisition time of the observed quantity.
Specifically, the current state quantity of the moving object may be updated by a bayesian filter. For example, the current state quantity of the moving object may be updated according to the above-described formula (1), formula (2), and formula (3).
And S540, judging whether the time difference between the acquisition time of the observed quantity and the updating time of the current state quantity of the moving object is greater than a first threshold value.
If the time difference is greater than a first threshold, the observation is discarded. That is, the state quantity is not updated by the input observed quantity. This can avoid a decrease in the confidence of the updated state quantity due to the state quantity being updated by the observed quantity when the time difference is too large.
If the time difference is less than or equal to the first threshold, step S550 is performed.
Alternatively, in step S540, it may be determined whether a time difference between the acquisition time of the observed quantity and the update time of the current state quantity of the moving object is greater than or equal to a first threshold.
If the time difference is greater than or equal to a first threshold, the observation is discarded. That is, the state quantity is not updated by the input observed quantity. If the time difference is less than the first threshold, step S550 is performed.
It should be noted that step S520, step S530, and step S540 are optional steps, and the method 500 in the embodiment of the present application may perform step S550 after step S510.
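For illustration only (not part of the claimed method), the routing logic of steps S520 to S550 can be sketched in Python as follows; the Track structure, the function name, and the threshold value are assumptions introduced here for clarity:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Track:
    state: Any          # current state quantity X_k' of the moving object
    update_time: float  # t_k, the time of the most recent state update

FIRST_THRESHOLD = 0.5   # assumed value of the first threshold, in seconds

def route_observation(track: Track, z: Any, t_z: float) -> str:
    """Route one observation by comparing its timestamp t_z with the
    track's latest update time (steps S520, S530, S540, S550)."""
    if t_z >= track.update_time:
        # S530: in-order observation -- ordinary Bayesian filter update;
        # the updated state is stamped with the acquisition time t_z.
        return "in-order update (S530)"
    if track.update_time - t_z > FIRST_THRESHOLD:
        # S540: acquisition time too stale -- discard the observation so
        # the confidence of the state quantity is not degraded.
        return "discard (S540)"
    # S550/S560: out-of-order but recent enough -- retrodiction update;
    # the updated state keeps the timestamp of the update time t_k.
    return "out-of-order update (S550/S560)"
```

For example, with update_time = 10.0 and t_z = 9.8, the observation is routed to the out-of-order branch described in steps S550 and S560 below.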
S550: When the acquisition time of the observed quantity is earlier than the update time of the current state quantity of the moving object, obtain the estimated observed quantity according to the state transition model corresponding to the moving object.
When the method 500 is used to track the position of a target, the state transition model may be a kinematic model of a moving object. For example, the kinematic model may include a uniform linear motion model or the like.
Specifically, a likelihood function of the observed quantity corresponding to the acquisition time is obtained according to the state transition model corresponding to the moving object, and the estimated observed quantity is determined from this likelihood function. The state transition model is determined according to a first probability density function, namely the probability density function for transferring the current state quantity to the state quantity corresponding to the acquisition time.
Step S550 is explained below with reference to the formula.
The observed quantity z_k acquired at time t_k is input to a Bayesian filter, and the state quantity is updated based on z_k to obtain X_k' as the current state quantity. Accordingly, the latest update time becomes t_k; that is, the update time of the current state quantity X_k' is t_k. After t_k, the observed quantity z_{k-1} acquired at the earlier time t_{k-1} is input to the Bayesian filter. In other words, the Bayesian filter receives z_{k-1}, whose acquisition time is earlier than the update time of the current state quantity, after the state quantity has already been updated with z_k.

If the observed quantity z_{k-1} had been input to the Bayesian filter earlier than the observed quantity z_k, the likelihood function at time t_{k-1} would simply be g(z_{k-1} | X_{k-1}).

If the observed quantity z_{k-1} is input to the Bayesian filter no earlier than the observed quantity z_k, the state transition model from time t_k to time t_{k-1} is f_{k-1|k}(X | X_k'), the state transition being a Markov transition. f_{k-1|k}(X | X_k') is the first probability density function, i.e., the probability density function for transferring the state quantity X_k' updated at t_k to the state quantity updated at t_{k-1}.
The likelihood function of the observed quantity corresponding to the acquisition time satisfies:

g(z_{k-1} | X_k') = ∫ g(z_{k-1} | X) f_{k-1|k}(X | X_k') dX

where g(z_{k-1} | X_k') denotes the likelihood function of the observed quantity z_{k-1} corresponding to time t_{k-1}, X denotes the state quantity updated at t_{k-1}, X_k' denotes the state quantity updated at t_k in the absence of z_{k-1}, and g(z_{k-1} | X) denotes the likelihood function with respect to z_{k-1} when z_{k-1} is used to update the state quantity at t_{k-1}.
S560: Update the current state quantity according to the actual observed quantity and the estimated observed quantity, and take the updated state quantity as the state quantity corresponding to the update time.
Specifically, the probability density function of the current state quantity may be updated according to the likelihood function of the observed quantity corresponding to the acquisition time, and the state quantity corresponding to the update time is then determined from the updated probability density function.
Step S560 is described below with reference to the formula.
The probability density function of the updated state quantity satisfies:

f_{k|k}(X_k | z^k) = g(z_{k-1} | X_k') f'_{k|k}(X_k' | z_k, z^{k-2}) / ∫ g(z_{k-1} | X) f'_{k|k}(X | z_k, z^{k-2}) dX

where f_{k|k}(X_k | z^k) denotes the probability density function of the state quantity X_k updated at t_k, and f'_{k|k}(X_k' | z_k, z^{k-2}) denotes the probability density function of the state quantity X_k' updated at t_k in the absence of z_{k-1}, i.e., the posterior probability corresponding to the current state quantity X_k'. z_k denotes the observed quantity acquired at time t_k, z^k denotes the set of observations acquired over the k acquisition times {z_1, z_2, ..., z_{k-2}, z_{k-1}, z_k}, and z^{k-2} denotes the set of observations acquired over the first k-2 acquisition times {z_1, z_2, ..., z_{k-2}}.
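As a numerical illustration of the two formulas above, the retrodicted likelihood and the updated probability density function can be evaluated on a one-dimensional grid. This is a sketch only: the Gaussian shapes, the static backward transition, and every numeric value below are assumptions introduced here, not taken from the embodiment.

```python
import numpy as np

# 1-D grid over candidate values of the state X.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gauss(u, mean, var):
    return np.exp(-0.5 * (u - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

z_prev, r = 4.0, 0.25   # out-of-order observation z_{k-1} and its noise variance
q = 0.1                 # assumed variance of the backward transition f_{k-1|k}

# f'_{k|k}(X | z_k, z^{k-2}): density of the current state absent z_{k-1}.
f_prior = gauss(x, 3.0, 1.0)

# Retrodicted likelihood g(z_{k-1} | X_k'): integral over X of
# g(z_{k-1} | X) * f_{k-1|k}(X | X_k'), with an assumed static transition
# f_{k-1|k}(X | X_k') = N(X; X_k', q).
g_retro = np.array([np.sum(gauss(z_prev, x, r) * gauss(x, xk, q)) * dx
                    for xk in x])

# Updated density f_{k|k}: retrodicted likelihood times prior, normalized
# by the integral in the denominator.
f_post = g_retro * f_prior
f_post /= np.sum(f_post) * dx

print(x[np.argmax(f_post)])  # mode of the updated density
```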
Further, Kalman filtering is an implementation form of the Bayesian filter; step S560 is described below taking the Kalman filter as an example.
Step S560 may include determining an expectation of a state quantity corresponding to the update time according to the actual observed quantity and the estimated observed quantity; the expectation of the state quantity corresponding to the update time is taken as the state quantity corresponding to the update time.
The expectation of the state quantity corresponding to the update time is related to the Kalman gain value. The Kalman gain value is related to the first covariance, the observation matrix at the acquisition time, the covariance of the observation matrix, and the variance of the observed quantity at the acquisition time. The first covariance refers to the covariance of the state quantity transferred from the update time to the acquisition time.
Optionally, the Kalman gain value satisfies:

K'_k = P_{k-1|k} H_{k-1}^T [Var(z_{k-1})]^{-1}

Var(z_{k-1}) satisfies:

Var(z_{k-1}) = H_{k-1} P_{k-1|k} H_{k-1}^T + R_{k-1}

P_{k-1|k} satisfies:

P_{k-1|k} = F_{k-1|k} P'_{k|k} F_{k-1|k}^T + Q_k

where P_{k-1|k} denotes the covariance of the state quantity transferred from the update time t_k to the acquisition time t_{k-1}, P'_{k|k} denotes the covariance of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, Q_k denotes the covariance of the prediction matrix, H_{k-1} denotes the observation matrix at t_{k-1}, Var(z_{k-1}) denotes the variance of the observed quantity z_{k-1} acquired at t_{k-1}, and R_{k-1} denotes the covariance of the observation matrix.
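A minimal matrix sketch of the three relations above, assuming the matrix forms just defined (the function name and argument names are introduced here and are not part of the embodiment):

```python
import numpy as np

def retrodiction_gain(P_prior, F_back, Q, H, R):
    """Compute P_{k-1|k}, Var(z_{k-1}), and K'_k for the late update.

    P_prior : P'_{k|k}, covariance at t_k absent z_{k-1}
    F_back  : F_{k-1|k}, state transition matrix from t_k to t_{k-1}
    Q       : Q_k, covariance of the prediction matrix
    H       : H_{k-1}, observation matrix at t_{k-1}
    R       : R_{k-1}, covariance of the observation matrix
    """
    P_back = F_back @ P_prior @ F_back.T + Q   # P_{k-1|k}
    S = H @ P_back @ H.T + R                   # Var(z_{k-1})
    K = P_back @ H.T @ np.linalg.inv(S)        # K'_k
    return K, S, P_back
```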
The expectation of the state quantity corresponding to the update time satisfies:

x_{k|k} = x'_{k|k} + K'_k (z_{k-1} − ẑ_{k-1})

where x_{k|k} denotes the expectation of the state quantity updated at t_k, and ẑ_{k-1} denotes the estimated observed quantity at time t_{k-1}. The expectation x_{k|k} of the state quantity corresponding to the update time is taken as the state quantity corresponding to the update time.
Optionally, the estimated value of the observed quantity corresponding to the acquisition time satisfies:

ẑ_{k-1} = H_{k-1} F_{k-1|k} x'_{k|k}

where ẑ_{k-1} denotes the estimated observed quantity at t_{k-1}, H_{k-1} denotes the observation matrix at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, and x'_{k|k} denotes the expectation of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}.

The estimated observed quantity ẑ_{k-1} can be understood as follows: the expectation x'_{k|k} of the state quantity is transferred to time t_{k-1} through the state transition matrix F_{k-1|k}, and the expectation of the observed quantity at t_{k-1} is then obtained through the observation matrix H_{k-1} and taken as the estimated observed quantity at t_{k-1}.
Further, the covariance of the state quantity updated at t_k is calculated. The covariance of the state quantity updated at t_k satisfies:

P_{k|k} = (I − K'_k H_{k-1}) P_{k-1|k}

where P_{k|k} denotes the covariance of the state quantity updated at t_k.
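Chaining the expectation, the estimated observation, and the covariance relations of step S560 gives the following end-to-end sketch. It is an illustration under the same assumed matrix forms as above; the function name and signature are introduced here, not taken from the embodiment.

```python
import numpy as np

def out_of_order_update(x_prior, P_prior, z_prev, F_back, Q, H, R):
    """Apply one out-of-order observation z_{k-1} to the state at t_k.

    x_prior, P_prior : x'_{k|k} and P'_{k|k}, expectation and covariance
                       of the state updated at t_k absent z_{k-1}
    Returns x_{k|k} and P_{k|k}, stamped with the update time t_k.
    """
    P_back = F_back @ P_prior @ F_back.T + Q            # P_{k-1|k}
    S = H @ P_back @ H.T + R                            # Var(z_{k-1})
    K = P_back @ H.T @ np.linalg.inv(S)                 # K'_k
    z_hat = H @ F_back @ x_prior                        # estimated observation
    x_post = x_prior + K @ (z_prev - z_hat)             # x_{k|k}
    P_post = (np.eye(P_back.shape[0]) - K @ H) @ P_back # P_{k|k}
    return x_post, P_post
```

Note that, consistent with the scheme of this embodiment, the returned state carries the timestamp of the update time t_k rather than that of the late observation.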
According to the scheme of the embodiment of the present application, state transition is performed according to the state transition model corresponding to the moving object and the current state quantity is updated, so that an accurate state quantity corresponding to the latest update time is obtained rather than a state quantity corresponding to the acquisition time, and out-of-order jumps of the output state quantity in time are avoided.
Step S550 and step S560 are described below, taking as an example the case where the method 500 is applied to tracking a target. R represents the distance between the sensor and the target, and x = R denotes a one-dimensional state vector.
In the absence of the observed quantity z_{k-1} acquired at t_{k-1}, the state quantity x_k updated at t_k satisfies:

x_k ~ N(x; x'_{k|k}, P'_{k|k})

that is, the distance between the sensor and the target is a random variable with expectation x'_{k|k} and variance P'_{k|k}. Here x'_{k|k} denotes the expectation of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}, and P'_{k|k} denotes the covariance of the state quantity updated at t_k in the absence of z_{k-1}.
For example, x'_{k|k} may be 3, P'_{k|k} may be 1, and the observed quantity z_{k-1} acquired at t_{k-1} may be 4. For illustration, take F_{k-1|k} = 1, Q_k = 0, H_{k-1} = 1, and R_{k-1} = 0 (assumed parameter values, chosen so that they are consistent with the covariance result P_{k|k} = 0 below). In this case, P_{k-1|k} satisfies:

P_{k-1|k} = F_{k-1|k} P'_{k|k} F_{k-1|k}^T + Q_k = 1 × 1 × 1 + 0 = 1

The variance Var(z_{k-1}) of the observed quantity z_{k-1} acquired at t_{k-1} satisfies:

Var(z_{k-1}) = H_{k-1} P_{k-1|k} H_{k-1}^T + R_{k-1} = 1 × 1 × 1 + 0 = 1

The Kalman gain value satisfies:

K'_k = P_{k-1|k} H_{k-1}^T [Var(z_{k-1})]^{-1} = 1 × 1 / 1 = 1

The estimated observed quantity at t_{k-1} satisfies:

ẑ_{k-1} = H_{k-1} F_{k-1|k} x'_{k|k} = 1 × 1 × 3 = 3

The expectation x_{k|k} of the state quantity updated at t_k satisfies:

x_{k|k} = x'_{k|k} + K'_k (z_{k-1} − ẑ_{k-1}) = 3 + 1 × (4 − 3) = 4

where x_{k|k} is the output result, i.e., the state quantity corresponding to time t_k.

The covariance P_{k|k} of the state quantity updated at t_k satisfies:

P_{k|k} = (I − K'_k H_{k-1}) P_{k-1|k} = (1 − 1 × 1) × 1 = 0
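The worked numbers can be checked with plain scalar arithmetic. The parameter values F_{k-1|k} = 1, Q_k = 0, H_{k-1} = 1, and R_{k-1} = 0 are the assumed choices stated above, not values given by the embodiment:

```python
# Assumed example parameters: F = 1, Q = 0, H = 1, R = 0 (all scalars).
x_prior, P_prior, z_prev = 3.0, 1.0, 4.0   # x'_{k|k}, P'_{k|k}, z_{k-1}
F, Q, H, R = 1.0, 0.0, 1.0, 0.0

P_back = F * P_prior * F + Q               # P_{k-1|k} = 1
S = H * P_back * H + R                     # Var(z_{k-1}) = 1
K = P_back * H / S                         # K'_k = 1
z_hat = H * F * x_prior                    # estimated observation = 3
x_post = x_prior + K * (z_prev - z_hat)    # x_{k|k} = 4
P_post = (1.0 - K * H) * P_back            # P_{k|k} = 0
print(x_post, P_post)                      # 4.0 0.0
```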
it should be understood that the above examples are intended to assist those skilled in the art in understanding the embodiments of the present application and are not intended to limit the embodiments of the present application to the particular values or particular scenarios illustrated. It will be apparent to those skilled in the art from this disclosure that various equivalent modifications or changes may be made, and such modifications or changes are intended to fall within the scope of the embodiments of the present application.
The target tracking method according to the embodiment of the present application is described in detail above with reference to fig. 6; the apparatus according to the embodiment of the present application is described in detail below with reference to fig. 7 to 8. It should be understood that the target tracking apparatus in the embodiment of the present application may execute the target tracking method in the embodiment of the present application; for the specific working processes of the products described below, reference may be made to the corresponding processes in the foregoing method embodiment.
Fig. 7 is a schematic block diagram of a target tracking device provided in an embodiment of the present application. It should be understood that the target tracking apparatus 1000 may perform the target tracking method shown in fig. 6. The target tracking apparatus 1000 includes: an acquisition unit 1010 and a processing unit 1020.
The obtaining unit 1010 is configured to obtain an actual observed amount of the moving object collected by the sensor. The processing unit 1020 is configured to obtain an estimated observed quantity according to a state transition model corresponding to the moving object when the acquisition time of the actual observed quantity is earlier than the update time of the current state quantity of the moving object; and updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the updating time.
Optionally, the processing unit 1020 is configured to: and obtaining a likelihood function of the observed quantity corresponding to the acquisition time according to a state transition model corresponding to the moving object, wherein the estimated observed quantity is determined according to the likelihood function of the observed quantity corresponding to the acquisition time, the state transition model is determined according to a first probability density function, and the first probability density function is a probability density function for transferring the current state quantity to the state quantity corresponding to the acquisition time.
Optionally, the processing unit 1020 is configured to: updating the probability density function of the current state quantity according to the likelihood function of the observed quantity corresponding to the acquisition time; and determining the state quantity corresponding to the updating time according to the probability density function of the updated state quantity.
Optionally, the likelihood function of the observed quantity corresponding to the acquisition time satisfies:

g(z_{k-1} | X_k') = ∫ g(z_{k-1} | X) f_{k-1|k}(X | X_k') dX

where g(z_{k-1} | X_k') denotes the likelihood function of the observed quantity z_{k-1} corresponding to time t_{k-1}, X denotes the state quantity updated at t_{k-1}, X_k' denotes the state quantity updated at t_k in the absence of z_{k-1}, g(z_{k-1} | X) denotes the likelihood function with respect to z_{k-1} when z_{k-1} is used to update the state quantity at t_{k-1}, and f_{k-1|k}(X | X_k') is the first probability density function, i.e., the probability density function for transferring the state quantity X_k' updated at t_k to the state quantity updated at t_{k-1}.
Optionally, the updated probability density function of the state quantity satisfies:

f_{k|k}(X_k | z^k) = g(z_{k-1} | X_k') f'_{k|k}(X_k' | z_k, z^{k-2}) / ∫ g(z_{k-1} | X) f'_{k|k}(X | z_k, z^{k-2}) dX

where f_{k|k}(X_k | z^k) denotes the probability density function of the state quantity X_k updated at t_k, f'_{k|k}(X_k' | z_k, z^{k-2}) denotes the probability density function of the state quantity X_k' updated at t_k in the absence of z_{k-1}, z_k denotes the observed quantity acquired at time t_k, z^k denotes the set of observations acquired over the k acquisition times {z_1, z_2, ..., z_{k-2}, z_{k-1}, z_k}, and z^{k-2} denotes the set of observations acquired over the first k-2 acquisition times {z_1, z_2, ..., z_{k-2}}.
Optionally, the processing unit 1020 is configured to: determining the expectation of the state quantity corresponding to the updating moment according to the actual observed quantity and the estimated observed quantity; taking the expectation of the state quantity corresponding to the updating time as the state quantity corresponding to the updating time; the expectation of the state quantity corresponding to the updating moment is related to a Kalman gain value, the Kalman gain value is related to a first covariance, an observation matrix of the acquisition moment, a covariance of the observation matrix and a variance of the observation quantity of the acquisition moment, and the first covariance refers to the covariance of the state quantity transferred from the updating moment to the acquisition moment.
Optionally, the Kalman gain value satisfies:

K'_k = P_{k-1|k} H_{k-1}^T [Var(z_{k-1})]^{-1}

Var(z_{k-1}) satisfies:

Var(z_{k-1}) = H_{k-1} P_{k-1|k} H_{k-1}^T + R_{k-1}

P_{k-1|k} satisfies:

P_{k-1|k} = F_{k-1|k} P'_{k|k} F_{k-1|k}^T + Q_k

where P_{k-1|k} denotes the covariance of the state quantity transferred from the update time t_k to the acquisition time t_{k-1}, P'_{k|k} denotes the covariance of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, Q_k denotes the covariance of the prediction matrix, H_{k-1} denotes the observation matrix at t_{k-1}, Var(z_{k-1}) denotes the variance of the observed quantity z_{k-1} acquired at t_{k-1}, and R_{k-1} denotes the covariance of the observation matrix.
Optionally, the expectation of the state quantity corresponding to the update time satisfies:

x_{k|k} = x'_{k|k} + K'_k (z_{k-1} − ẑ_{k-1})

where x_{k|k} denotes the expectation of the state quantity updated at t_k, and ẑ_{k-1} denotes the estimated observed quantity at time t_{k-1}.
Optionally, the estimated value of the observed quantity corresponding to the acquisition time satisfies:

ẑ_{k-1} = H_{k-1} F_{k-1|k} x'_{k|k}

where ẑ_{k-1} denotes the estimated observed quantity at t_{k-1}, H_{k-1} denotes the observation matrix at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, and x'_{k|k} denotes the expectation of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}.
Optionally, the processing unit 1020 is configured to: and under the condition that the acquisition time of the observed quantity is earlier than the updating time of the current state quantity of the moving object and the time difference between the acquisition time and the updating time is less than or equal to the threshold value, obtaining the estimated observed quantity according to the state transition model corresponding to the moving object.
The target tracking apparatus 1000 is embodied in the form of functional units. The term "unit" herein may be implemented in software and/or hardware, and is not particularly limited thereto.
For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implement the above-described functions. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality.
Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 8 is a schematic hardware structure diagram of a target tracking apparatus according to an embodiment of the present application.
As shown in fig. 8, the target tracking apparatus 1200 (the target tracking apparatus 1200 may be specifically a computer device) includes a memory 1201, a processor 1202, a communication interface 1203, and a bus 1204. The memory 1201, the processor 1202, and the communication interface 1203 are communicatively connected to each other through a bus 1204.
The memory 1201 may be a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 1201 may store a program, and when the program stored in the memory 1201 is executed by the processor 1202, the processor 1202 is configured to perform the steps of the object tracking method of the embodiment of the present application, for example, perform the steps shown in fig. 6.
It should be understood that the target tracking device shown in the embodiment of the present application may be a server, for example, a server in a cloud, or may also be a chip configured in the server in the cloud.
The processor 1202 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the target tracking method of the embodiment of the present application.
The processor 1202 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the object tracking method of the present application may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 1202.
The processor 1202 may also be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory 1201; the processor 1202 reads the information in the memory 1201 and, in combination with its hardware, completes the functions required to be executed by the units included in the target tracking apparatus shown in fig. 7, or executes the target tracking method shown in fig. 6 of the method embodiment of the present application.
The communication interface 1203 enables communication between the object tracking apparatus 1200 and other devices or communication networks using transceiver means such as, but not limited to, a transceiver.
The bus 1204 may include a pathway to transfer information between the various components of the target tracking apparatus 1200 (e.g., the memory 1201, the processor 1202, the communication interface 1203).
It should be noted that although the target tracking apparatus 1200 described above shows only memories, processors, and communication interfaces, in particular implementations, those skilled in the art will appreciate that the target tracking apparatus 1200 may also include other components necessary to achieve proper operation. Also, those skilled in the art will appreciate that the target tracking apparatus 1200 described above may also include hardware components to implement other additional functions, according to particular needs.
Further, those skilled in the art will appreciate that the target tracking apparatus 1200 described above may also include only those components necessary to implement the embodiments of the present application, and need not include all of the components shown in FIG. 8.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or an access network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

  1. A target tracking method, comprising:
    acquiring the actual observed quantity of a moving object acquired by a sensor;
    under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object, obtaining an estimated observed quantity according to a state transition model corresponding to the moving object;
    and updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the updating time.
  2. The method of claim 1, wherein obtaining the estimated observations from the state transition models corresponding to the moving objects comprises:
    and obtaining a likelihood function of the observed quantity corresponding to the acquisition time according to a state transition model corresponding to the moving object, wherein the estimated observed quantity is determined according to the likelihood function of the observed quantity corresponding to the acquisition time, the state transition model is determined according to a first probability density function, and the first probability density function is a probability density function for transferring the current state quantity to the state quantity corresponding to the acquisition time.
  3. The method according to claim 2, wherein the updating the current state quantity according to the actual observation quantity and the estimated observation quantity, and taking the updated state quantity as the state quantity corresponding to the update time includes:
    updating the probability density function of the current state quantity according to the likelihood function of the observed quantity corresponding to the acquisition time;
    and determining the state quantity corresponding to the updating time according to the probability density function of the updated state quantity.
  4. A method as claimed in claim 2 or 3, wherein the likelihood function for the observation corresponding to the acquisition instant satisfies:
    g(z_{k-1} | X_k') = ∫ g(z_{k-1} | X) f_{k-1|k}(X | X_k') dX;
    wherein g(z_{k-1} | X_k') denotes the likelihood function of the observed quantity z_{k-1} corresponding to time t_{k-1}, X denotes the state quantity updated at t_{k-1}, X_k' denotes the state quantity updated at t_k in the absence of z_{k-1}, g(z_{k-1} | X) denotes the likelihood function with respect to z_{k-1} when z_{k-1} is used to update the state quantity at t_{k-1}, and f_{k-1|k}(X | X_k') is the first probability density function, i.e., the probability density function for transferring the state quantity X_k' updated at t_k to the state quantity updated at t_{k-1}.
  5. The method of claim 4, wherein the updated probability density function of state quantities satisfies:
    f_{k|k}(X_k | z^k) = g(z_{k-1} | X_k') f'_{k|k}(X_k' | z_k, z^{k-2}) / ∫ g(z_{k-1} | X) f'_{k|k}(X | z_k, z^{k-2}) dX;
    wherein f_{k|k}(X_k | z^k) denotes the probability density function of the state quantity X_k updated at t_k, f'_{k|k}(X_k' | z_k, z^{k-2}) denotes the probability density function of the state quantity X_k' updated at t_k in the absence of z_{k-1}, z_k denotes the observed quantity acquired at time t_k, z^k denotes the set of observations acquired over the k acquisition times {z_1, z_2, ..., z_{k-2}, z_{k-1}, z_k}, and z^{k-2} denotes the set of observations acquired over the first k-2 acquisition times {z_1, z_2, ..., z_{k-2}}.
  6. The method according to claim 1, wherein the updating the current state quantity based on the actual observation and the estimated observation, and the setting the updated state quantity as the state quantity corresponding to the update time comprises:
    determining the expectation of the state quantity corresponding to the updating moment according to the actual observed quantity and the estimated observed quantity;
    taking the expectation of the state quantity corresponding to the updating time as the state quantity corresponding to the updating time;
    wherein the expectation of the state quantity corresponding to the update time is related to a Kalman gain value, the Kalman gain value is related to a first covariance, an observation matrix of the acquisition time, a covariance of the observation matrix and a variance of the observation quantity of the acquisition time, and the first covariance refers to a covariance of the state quantity transferred from the update time to the acquisition time.
  7. The method of claim 6, wherein the Kalman gain value satisfies:
    K'_k = P_{k-1|k} H_{k-1}^T [Var(z_{k-1})]^{-1};
    Var(z_{k-1}) satisfies:
    Var(z_{k-1}) = H_{k-1} P_{k-1|k} H_{k-1}^T + R_{k-1};
    P_{k-1|k} satisfies:
    P_{k-1|k} = F_{k-1|k} P'_{k|k} F_{k-1|k}^T + Q_k;
    wherein P_{k-1|k} denotes the covariance of the state quantity transferred from the update time t_k to the acquisition time t_{k-1}, P'_{k|k} denotes the covariance of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, Q_k denotes the covariance of the prediction matrix, H_{k-1} denotes the observation matrix at t_{k-1}, Var(z_{k-1}) denotes the variance of the observed quantity z_{k-1} acquired at t_{k-1}, and R_{k-1} denotes the covariance of the observation matrix.
  8. The method of claim 7, wherein the expectation of the state quantity corresponding to the update time satisfies:
    x_{k|k} = x'_{k|k} + K'_k (z_{k-1} − ẑ_{k-1});
    wherein x_{k|k} denotes the expectation of the state quantity updated at t_k, and ẑ_{k-1} denotes the estimated observed quantity at time t_{k-1}.
  9. The method of any of claims 6-8, wherein the estimating the observation satisfies:
    ẑ_{k-1} = H_{k-1} F_{k-1|k} x'_{k|k};
    wherein ẑ_{k-1} denotes the estimated observed quantity at t_{k-1}, H_{k-1} denotes the observation matrix at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, and x'_{k|k} denotes the expectation of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}.
  10. The method of any one of claims 1 to 9, wherein obtaining the estimated observation quantity according to the state transition model corresponding to the moving object when the acquisition time of the observation quantity is earlier than the update time of the current state quantity of the moving object comprises:
    and under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object, and the time difference between the acquisition time and the updating time is less than or equal to a threshold value, obtaining the estimated observed quantity according to a state transition model corresponding to the moving object.
  11. An object tracking device, comprising an acquisition module and a processing module, wherein,
    the acquisition module is used for: acquiring the actual observed quantity of a moving object acquired by a sensor;
    the processing module is used for:
    under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object, obtaining an estimated observed quantity according to a state transition model corresponding to the moving object;
    and updating the current state quantity according to the actual observed quantity and the estimated observed quantity, and taking the updated state quantity as the state quantity corresponding to the updating time.
  12. The apparatus of claim 11, wherein the processing module is to:
    and obtaining a likelihood function of the observed quantity corresponding to the acquisition time according to a state transition model corresponding to the moving object, wherein the estimated observed quantity is determined according to the likelihood function of the observed quantity corresponding to the acquisition time, the state transition model is determined according to a first probability density function, and the first probability density function is a probability density function for transferring the current state quantity to the state quantity corresponding to the acquisition time.
  13. The apparatus of claim 12, wherein the processing module is to:
    updating the probability density function of the current state quantity according to the likelihood function of the observed quantity corresponding to the acquisition time;
    and determining the state quantity corresponding to the updating time according to the probability density function of the updated state quantity.
  14. The apparatus of claim 12 or 13, wherein the likelihood function for the observation corresponding to the acquisition instant satisfies:
    g(z_{k-1} | X_k') = ∫ g(z_{k-1} | X) f_{k-1|k}(X | X_k') dX;
    wherein g(z_{k-1} | X_k') denotes the likelihood function of the observed quantity z_{k-1} corresponding to time t_{k-1}, X denotes the state quantity updated at t_{k-1}, X_k' denotes the state quantity updated at t_k in the absence of z_{k-1}, g(z_{k-1} | X) denotes the likelihood function with respect to z_{k-1} when z_{k-1} is used to update the state quantity at t_{k-1}, and f_{k-1|k}(X | X_k') is the first probability density function, i.e., the probability density function for transferring the state quantity X_k' updated at t_k to the state quantity updated at t_{k-1}.
  15. The apparatus of claim 14, wherein the updated probability density function of state quantities satisfies:
    f_{k|k}(X_k | z^k) = g(z_{k-1} | X_k') f'_{k|k}(X_k' | z_k, z^{k-2}) / ∫ g(z_{k-1} | X) f'_{k|k}(X | z_k, z^{k-2}) dX;
    wherein f_{k|k}(X_k | z^k) denotes the probability density function of the state quantity X_k updated at t_k, f'_{k|k}(X_k' | z_k, z^{k-2}) denotes the probability density function of the state quantity X_k' updated at t_k in the absence of z_{k-1}, z_k denotes the observed quantity acquired at time t_k, z^k denotes the set of observations acquired over the k acquisition times {z_1, z_2, ..., z_{k-2}, z_{k-1}, z_k}, and z^{k-2} denotes the set of observations acquired over the first k-2 acquisition times {z_1, z_2, ..., z_{k-2}}.
  16. The apparatus of claim 11, wherein the processing module is to:
    determining the expectation of the state quantity corresponding to the updating moment according to the actual observed quantity and the estimated observed quantity;
    taking the expectation of the state quantity corresponding to the updating time as the state quantity corresponding to the updating time;
    wherein the expectation of the state quantity corresponding to the update time is related to a Kalman gain value, the Kalman gain value is related to a first covariance, an observation matrix of the acquisition time, a covariance of the observation matrix and a variance of the observation quantity of the acquisition time, and the first covariance refers to a covariance of the state quantity transferred from the update time to the acquisition time.
  17. The apparatus of claim 16, wherein the kalman gain value satisfies:
    K'_k = P_{k-1|k} H_{k-1}^T [Var(z_{k-1})]^{-1};
    Var(z_{k-1}) satisfies:
    Var(z_{k-1}) = H_{k-1} P_{k-1|k} H_{k-1}^T + R_{k-1};
    P_{k-1|k} satisfies:
    P_{k-1|k} = F_{k-1|k} P'_{k|k} F_{k-1|k}^T + Q_k;
    wherein P_{k-1|k} denotes the covariance of the state quantity transferred from the update time t_k to the acquisition time t_{k-1}, P'_{k|k} denotes the covariance of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, Q_k denotes the covariance of the prediction matrix, H_{k-1} denotes the observation matrix at t_{k-1}, Var(z_{k-1}) denotes the variance of the observed quantity z_{k-1} acquired at t_{k-1}, and R_{k-1} denotes the covariance of the observation matrix.
  18. The apparatus of claim 17, wherein the expectation of the state quantity corresponding to the update time satisfies:
    x_{k|k} = x'_{k|k} + K'_k (z_{k-1} − ẑ_{k-1});
    wherein x_{k|k} denotes the expectation of the state quantity updated at t_k, and ẑ_{k-1} denotes the estimated observed quantity at time t_{k-1}.
  19. The apparatus of any of claims 16-18, wherein the estimated observation satisfies:
    ẑ_{k-1} = H_{k-1} F_{k-1|k} x'_{k|k};
    wherein ẑ_{k-1} denotes the estimated observed quantity at t_{k-1}, H_{k-1} denotes the observation matrix at t_{k-1}, F_{k-1|k} denotes the state transition matrix from time t_k to time t_{k-1}, and x'_{k|k} denotes the expectation of the state quantity updated at t_k in the absence of the observed quantity z_{k-1} acquired at t_{k-1}.
  20. The apparatus of any of claims 11 to 19, wherein the processing module is to:
    and under the condition that the acquisition time of the actual observed quantity is earlier than the updating time of the current state quantity of the moving object, and the time difference between the acquisition time and the updating time is less than or equal to a threshold value, obtaining the estimated observed quantity according to a state transition model corresponding to the moving object.
  21. An object tracking device comprising at least one processor and a memory, the at least one processor coupled to the memory for reading and executing instructions in the memory to perform the method of any of claims 1 to 10.
  22. A computer-readable storage medium, having stored thereon a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 10.
CN202080095340.0A 2020-02-17 2020-02-17 Target tracking method and target tracking device Pending CN115039095A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/075556 WO2021163846A1 (en) 2020-02-17 2020-02-17 Target tracking method and target tracking apparatus

Publications (1)

Publication Number Publication Date
CN115039095A true CN115039095A (en) 2022-09-09

Family

ID=77390277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080095340.0A Pending CN115039095A (en) 2020-02-17 2020-02-17 Target tracking method and target tracking device

Country Status (2)

Country Link
CN (1) CN115039095A (en)
WO (1) WO2021163846A1 (en)


Also Published As

Publication number Publication date
WO2021163846A1 (en) 2021-08-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination