WO2023072269A1 - Object Tracking (对象跟踪) - Google Patents

Object Tracking (对象跟踪)

Info

Publication number
WO2023072269A1
WO2023072269A1 (PCT/CN2022/128396)
Authority
WO
WIPO (PCT)
Prior art keywords
historical
current
neighbor
objects
node
Prior art date
Application number
PCT/CN2022/128396
Other languages
English (en)
French (fr)
Inventor
李经纬
王哲
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023072269A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Definitions

  • This disclosure relates to the field of computer technology, and in particular, to object tracking.
  • Object tracking is a technology that determines the motion trajectories and motion states of objects in images from multiple frames of images. It can be applied to the autonomous driving scenarios of intelligent driving devices, such as autonomous vehicles, vehicles equipped with assisted driving systems, and robots.
  • As the technology matures, intelligent driving devices are being widely deployed. High-precision object tracking is an important part of vehicle intelligence and automation, and it underpins the perception, control, path planning, and other modules of an intelligent driving device.
  • Typically, an intelligent driving device can be equipped with an image acquisition device such as a lidar to locate surrounding objects, track the positions of the recognized objects, and associate temporally consecutive detection results. The object tracking results are used to determine the motion trajectories of the objects and to estimate the motion states of the detected objects, so that the driving route of the intelligent driving device can be predicted accurately.
  • Embodiments of the present disclosure provide at least an object tracking method, an object tracking apparatus, an electronic device, and a storage medium.
  • In a first aspect, an embodiment of the present disclosure provides an object tracking method, including: acquiring position information of multiple current objects detected in a current frame image, and position information of multiple historical objects detected in a historical frame image preceding the current frame image, where the time interval between the acquisition of the historical frame image and the acquisition of the current frame image is less than or equal to a preset time threshold; for each historical object among the multiple historical objects, generating predicted position information of the historical object in the current frame image based on the position information of the historical object in the historical frame image; for each current object among the multiple current objects, determining a neighbor topology graph of the current object based on the position information of the multiple current objects, where the neighbor topology graph of the current object includes a first node representing the position feature of the current object, second nodes representing the position features of the current object's neighbor objects, and the connecting edges between the first node and the second nodes; for each historical object among the multiple historical objects, determining a neighbor topology graph of the historical object based on the predicted position information of the multiple historical objects, where the neighbor topology graph of the historical object includes a third node representing the predicted position feature of the historical object, fourth nodes representing the predicted position features of the historical object's neighbor objects, and the connecting edges between the third node and the fourth nodes; and updating object tracking results based on the neighbor topology graphs of the multiple current objects and the neighbor topology graphs of the multiple historical objects.
  • In a second aspect, an embodiment of the present disclosure further provides an object tracking apparatus, including: an acquisition module, configured to acquire position information of multiple current objects detected in the current frame image, and position information of multiple historical objects detected in a historical frame image preceding the current frame image, where the time interval between the acquisition of the historical frame image and the acquisition of the current frame image is less than or equal to a preset time threshold; a generating module, configured to, for each historical object among the multiple historical objects, generate predicted position information of the historical object in the current frame image based on the position information of the historical object in the historical frame image; a determining module, configured to, for each current object among the multiple current objects, determine a neighbor topology graph of the current object based on the position information of the multiple current objects, where the neighbor topology graph of the current object includes a first node representing the position feature of the current object, second nodes representing the position features of the current object's neighbor objects, and the connecting edges between the first node and the second nodes, and, for each historical object among the multiple historical objects, determine a neighbor topology graph of the historical object based on the predicted position information of the multiple historical objects, where the neighbor topology graph of the historical object includes a third node representing the predicted position feature of the historical object, fourth nodes representing the predicted position features of the historical object's neighbor objects, and the connecting edges between the third node and the fourth nodes; and an update module, configured to update the object tracking results based on the neighbor topology graphs of the multiple current objects and the neighbor topology graphs of the multiple historical objects.
  • In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect.
  • In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the first aspect are performed.
  • FIG. 1 shows a flow chart of an object tracking method provided by an embodiment of the present disclosure;
  • FIG. 2 shows a flow chart of generating a neighbor topology graph provided by an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of an object tracking apparatus provided by an embodiment of the present disclosure;
  • FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • The term "and/or" herein describes three possible relationships; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone.
  • The term "at least one" herein means any one of multiple items, or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set formed by A, B, and C.
  • In the related art, object tracking in dense scenes suffers from low tracking accuracy. In special scenes such as dense crowds, complex occlusion relationships exist between the detected objects, and objects detected in different frames are easily associated incorrectly during tracking, which leads to wrong tracking results and introduces a certain driving risk.
  • In view of this, embodiments of the present disclosure provide an object tracking method, an object tracking apparatus, an electronic device, and a computer-readable storage medium. The position information of the current objects and of the historical objects is used to determine neighbor topology graphs for the current objects and the historical objects, and object tracking is performed using these neighbor topology graphs, which improves the accuracy of object tracking.
  • An embodiment of the present disclosure discloses an object tracking method that can be applied to an electronic device with computing capabilities, such as a server. Specifically, the object tracking method may include the following steps, beginning with acquiring the position information of multiple current objects detected in the current frame image and the position information of multiple historical objects detected in a historical frame image preceding the current frame image, where the time interval between the acquisition of the two frames is less than or equal to a preset time threshold.
  • The current frame image can be collected by an image acquisition device. The image acquisition device can be a monocular camera, a multi-view camera, a lidar, an acoustic radar, or the like, and the collected images can be point cloud data, depth images, ordinary images, and so on.
  • The image acquisition device can be deployed on an intelligent driving device, which can be a self-driving vehicle, a vehicle equipped with an assisted driving system, a robot, or the like. In this embodiment, the image acquisition device is a lidar by way of example: the lidar acquires point cloud data of surrounding objects, and the position information of each detected object is determined from the point cloud data. Besides the coordinates of an object in a preset coordinate system, the position information can also include the attitude information of the object, such as its length, width, height, and deflection (heading) angle.
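  • As a concrete illustration of the position information just described, the sketch below (Python, with assumed field names; the Detection class is not part of the disclosure) holds the 3D box parameters of one detected object and exposes them as the 7-dimensional position feature used throughout this description.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    """One detected object: 3D box center, size, and heading (illustrative)."""
    x: float
    y: float
    z: float
    l: float
    w: float
    h: float
    yaw: float

    def feature(self) -> np.ndarray:
        """7-dimensional position feature (x, y, z, l, w, h, yaw)."""
        return np.array([self.x, self.y, self.z, self.l, self.w, self.h, self.yaw])
```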
  • The current frame image and the historical frame image may be two consecutive frames, i.e., the historical frame image is the frame immediately preceding the current frame image. The current objects are the objects that appear in the detection result of the current frame's point cloud data, and the historical objects are the objects that appear in the detection result of the historical frame's point cloud data.
  • The historical frame image and the current frame image may also be two non-consecutive frames, as long as the time interval between their acquisitions is less than or equal to a preset threshold, which ensures that the two frames can be used for object tracking.
  • In the lidar detection results, the same object may be detected in both the current frame image and the historical frame image. For example, if 60 objects are detected in the historical frame image and 70 objects in the current frame image, 40 objects might be detected in both frames, 20 objects in the historical frame image might not be detected in the current frame image, and 30 objects in the current frame image might not appear in the historical frame image. For an object detected in both frames, its motion path can be determined from its position information in the two frames, and the object can thus be tracked.
  • The position information of a historical object is measured relative to the position of the image acquisition device at the time the historical frame image was collected, and therefore does not directly represent the position of the historical object in the current frame image; for a lidar mounted on a moving intelligent driving vehicle, for instance, the sensor's position changes continuously. The position of the historical object in the current frame image can therefore be predicted from its position information, yielding predicted position information.
  • The predicted position information is a prediction of where the historical object lies in the current frame image. It can be determined using the motion state information of the historical object in the historical frame image and/or the motion state information of the image acquisition device; the motion state information includes, but is not limited to, a position offset.
  • For example, the predicted position information of a historical object in the current frame image may be generated through the following steps: determining, from the position information of the image acquisition device when the current frame image was collected and its position information when the historical frame image was collected, a position offset vector from the position where the historical frame image was collected to the position where the current frame image was collected; and offsetting the position of the historical object by this position offset vector to obtain the predicted position information of the historical object in the current frame image.
  • Typically, the coordinate system of the lidar detection results takes the sensor itself as the origin. Because the intelligent driving device is usually in motion, the image acquisition device mounted on it is also in motion, so the coordinate system of the historical frame's detection results does not coincide with that of the current frame's detection results. However, the position offset vector can be determined from the position of the image acquisition device when the current frame image was collected and its position when the historical frame image was collected, which gives the offset between the coordinate systems of the two frames; the position offset vector (for example, a translation vector and/or a rotation vector) is then applied to the position of the historical object to obtain its predicted position information in the current frame image.
  • In this way, by offsetting the position information of the historical objects with the position offset vector of the vehicle itself, the predicted position information of the historical objects in the current frame image is obtained; a minimal sketch of this compensation step follows.
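```python
import numpy as np

def predict_positions(hist_boxes: np.ndarray,
                      ego_pose_prev: tuple,
                      ego_pose_curr: tuple) -> np.ndarray:
    """Offset historical detections into the current frame's sensor coordinate system.

    hist_boxes: (N, 7) array of (x, y, z, l, w, h, yaw) in the *historical* frame.
    ego_pose_*: (x, y, heading) of the sensor in a common world frame when the
                historical / current frame was captured (an assumed planar pose
                representation; a full 3D transform follows the same pattern).
    Returns an (N, 7) array of predicted boxes in the *current* frame.
    """
    px, py, ph = ego_pose_prev
    cx, cy, ch = ego_pose_curr

    # World-frame positions of the historical boxes.
    cos_p, sin_p = np.cos(ph), np.sin(ph)
    world_x = px + cos_p * hist_boxes[:, 0] - sin_p * hist_boxes[:, 1]
    world_y = py + sin_p * hist_boxes[:, 0] + cos_p * hist_boxes[:, 1]

    # Re-express them in the current sensor frame (rotation + translation offset).
    cos_c, sin_c = np.cos(ch), np.sin(ch)
    dx, dy = world_x - cx, world_y - cy
    pred = hist_boxes.copy()
    pred[:, 0] = cos_c * dx + sin_c * dy
    pred[:, 1] = -sin_c * dx + cos_c * dy
    pred[:, 6] = hist_boxes[:, 6] + (ph - ch)   # heading changes by the relative rotation
    return pred
```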
  • For each current object among the multiple current objects, a neighbor topology graph of the current object is determined based on the position information of the multiple current objects, where the neighbor topology graph of the current object includes a first node representing the position feature of the current object, second nodes representing the position features of the current object's neighbor objects, and the connecting edges between the first node and the second nodes. Likewise, for each historical object among the multiple historical objects, a neighbor topology graph of the historical object is determined based on the predicted position information of the multiple historical objects, where the neighbor topology graph of the historical object includes a third node representing the predicted position feature of the historical object, fourth nodes representing the predicted position features of the historical object's neighbor objects, and the connecting edges between the third node and the fourth nodes.
  • A neighbor topology graph is composed of nodes and connecting edges, and each current object can have its own neighbor topology graph. In the neighbor topology graph of a current object there is a first node representing the position feature of the current object; the first node corresponds to at least one second node, and each second node represents the position feature of a neighbor node of the first node. A node's neighbor nodes are the nodes of the neighbor objects of the object that the node corresponds to, and the position feature can be determined from the position information corresponding to the node. Each historical object can likewise have its own neighbor topology graph, in which a third node represents the predicted position feature of the historical object; the third node corresponds to at least one fourth node, and each fourth node represents the predicted position feature of a neighbor object of the historical object.
  • In one possible implementation, the following steps may be used to determine the neighbor topology graph of a current object: determining the position features of the current objects based on their position information; determining the neighbor objects of the current object based on the position information of the multiple current objects; generating the first node of the current object from its position feature; generating the second nodes from the position features of the current object's neighbor objects; and generating the connecting edges between the first node and the second nodes to obtain the neighbor topology graph of the current object.
  • In this implementation, for each current object, its neighbor objects can be determined from the position information of the multiple current objects. For example, the Euclidean distance between the current object and every other current object can be computed; if the Euclidean distance between the current object and another object is less than or equal to a preset threshold, that other object is taken as a neighbor object of the current object.
  • Meanwhile, the position feature of each current object can be determined from its position information. Specifically, the data of each dimension of the current object's position information can be extracted to obtain the position feature, and then combined into an N-dimensional feature vector corresponding to the position feature. For example, if the position information contains coordinate information, size information, and a deflection angle, the position feature may include the x-, y-, and z-coordinates, the length, width, and height from the size information, and the deflection angle yaw.
  • Then, for each current object, the node corresponding to the current object is generated from its position feature, and the nodes corresponding to its neighbor objects are generated from their position features; in the neighbor topology graph, the information carried by each node matches its corresponding position feature.
  • After the nodes of the neighbor topology graph are generated, the connecting edges between the nodes can be generated. The connecting edges connect the first node to each of its corresponding second nodes, and there are no connecting edges between the second nodes themselves. A connecting edge may be a directed edge pointing from the first node to a second node; if a first node has K second nodes, its neighbor topology graph contains K+1 nodes and K connecting edges.
  • This implementation allows the neighbor topology graph to reflect the position features of the current object and of its neighbor objects, and uses these position features to determine the object tracking results, which improves the accuracy of object tracking.
  • For example, the position feature may be written as $f_i^t = (x, y, z, l, w, h, \mathrm{yaw})$, denoting the center-point coordinates, length, width, height, and heading angle of the $i$-th object, and the position features of an object and its K neighbor objects may be collected into a set $N_i^t = \{f_{i_k}^t\}_{k=0}^{K}$, with the object itself recorded as its own 0-th neighbor. The topological relationship between the 0-th neighbor object and its neighbor objects in this set is represented by a directed graph, giving the neighbor topology graph, denoted $G_i^t = (V_i^t, E_i^t)$. It has K+1 vertices and K directed edges, all of which point from the 0-th neighbor node to the other nodes; because the total number of nodes is K+1, the number of edges is K. Here $V_i^t = \{v_{i_k}\}_{k=0}^{K}$ is the set of nodes, and $v_{i_k}$ denotes the feature of the $i_k$-th vertex. This feature is the position feature: it can be expressed as (x, y, z, l, w, h, yaw), it can be extracted from the raw point cloud data by a neural network, or it can be a combination of the two; for convenience, (x, y, z, l, w, h, yaw) is used as the example here. The set of edges is defined as $E_i^t = \{e_{0k} = f_{\mathrm{diff}}(v_{i_0}, v_{i_k})\}_{k=1}^{K}$, where $f_{\mathrm{diff}}(\cdot)$ is a function that computes a directed edge from a pair of nodes; since nodes are generally N-dimensional vectors, the two vectors can, for example, simply be subtracted to obtain the edge vector.
  • The neighbor topology graph of a historical object can be determined in a similar manner.
  • As shown in the flow chart of FIG. 2 for generating a neighbor topology graph, the nodes are first generated from the position features of the objects and the neighbor objects of each object are determined; the connecting edges are then computed from the position features of the neighbor objects and of the current object; finally, the generated nodes and connecting edges are assembled into the neighbor topology graph. A minimal code sketch of this construction follows.
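```python
import numpy as np

def build_neighbor_graphs(boxes: np.ndarray, dist_thresh: float = 5.0):
    """Build one neighbor topology graph per detection.

    boxes: (N, 7) array of (x, y, z, l, w, h, yaw) position features.
    Neighbors are chosen by a Euclidean-distance threshold on the box centers
    (threshold value is an assumption), and each directed edge is the difference
    of the neighbor node feature and the object's own node feature, which is one
    possible choice of f_diff. The dict-based graph container is illustrative.
    """
    centers = boxes[:, :2]                       # use (x, y) for the distance test
    graphs = []
    for i in range(len(boxes)):
        d = np.linalg.norm(centers - centers[i], axis=1)
        neighbor_idx = np.where((d <= dist_thresh) & (np.arange(len(boxes)) != i))[0]
        nodes = boxes[neighbor_idx]              # the K second (neighbor) nodes
        edges = nodes - boxes[i]                 # K directed edges from node 0 to each neighbor
        graphs.append({
            "center": boxes[i],                  # first node: the object itself
            "neighbor_idx": neighbor_idx,
            "neighbors": nodes,
            "edges": edges,
        })
    return graphs
```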
  • Based on the neighbor topology graphs of the multiple current objects and those of the multiple historical objects, the current objects can be matched with the historical objects to determine the correspondence between each current object and each historical object. The correspondence can be 'matched', 'new', or 'disappeared': if a current object and a historical object are the same object, their correspondence is 'matched'; if a current object is not the same object as any historical object, the current object is a newly appearing object and its correspondence is 'new'; if a historical object is not the same object as any current object, its correspondence with the current objects is 'disappeared'.
  • After the correspondence between the current objects and the historical objects is determined, the tracking result of each object can be determined accordingly and the object tracking results can be updated.
  • Specifically, the similarity between the neighbor topology graph of each current object and the neighbor topology graph of each historical object can be determined, and the object tracking results are then updated based on the determined similarities: the correspondence between each current object and each historical object is determined from the similarities between their neighbor topology graphs, and the object tracking results are updated according to the determined correspondences. In one embodiment, for each current object, the similarity between its neighbor topology graph and the neighbor topology graph of each historical object is determined, and the object tracking result corresponding to the current object is updated based on the similarities.
  • In one possible implementation, the following steps may be used to determine the similarity between the neighbor topology graph of each current object and that of each historical object: for each historical object, when the Euclidean distance between the current object and the historical object is less than or equal to a preset distance threshold, determining, based on the neighbor topology graph of the current object and that of the historical object, a first similarity between the node of the current object and the node of the historical object, and a second similarity between the nodes of the current object's neighbor objects and the nodes of the historical object's neighbor objects; and determining the similarity between the two neighbor topology graphs based on the first similarity and the second similarity.
  • Because the time interval between the current frame image and the historical frame image is short, the position of each object shifts only slightly between the two frames. If a current object and a historical object are the same object, the similarity between their neighbor topology graphs should be high and the Euclidean distance between them should be small. If the Euclidean distance is greater than the preset distance threshold, they are considered not to be the same object and the similarity is set to 0; if the Euclidean distance is less than or equal to the preset distance threshold, the current object and the historical object may be a matching pair.
  • In the latter case, the first similarity between the node of the current object and the node of the historical object can be determined from the two neighbor topology graphs. For example, the difference between the feature vector of the current object's position feature and that of the historical object's position feature can be computed, the norm of the difference taken, and the result multiplied by -1 to obtain the first similarity.
  • Meanwhile, the second similarity between the nodes of the current object's neighbor objects and the nodes of the historical object's neighbor objects can be determined from the neighbor topology graph of the current object and that of the historical object. For example, considering only the neighbor objects in the two graphs (i.e., excluding the nodes of the current object and the historical object themselves), suppose the current object has x neighbors and the historical object has y neighbors; the two groups of neighbor objects form a bipartite graph. Using the same computation as for the first similarity, an x*y neighbor similarity matrix neighbour_matrix is obtained, in which each element is the third similarity between the node of a neighbor object of the current object and the node of a neighbor object of the historical object. The Hungarian matching algorithm, or another matching algorithm, is used to solve this neighbor similarity matrix and obtain an optimal set of matches neighbour_match; for example, the pairs of neighbor nodes with the highest third similarity can be taken as neighbour_match. neighbour_match contains each neighbor object of the current object together with the neighbor object of the historical object matched to it, i.e., it determines whether the neighbor objects of the current object and those of the historical object are the same objects. After neighbour_match is obtained, the third similarities in neighbour_match are summed to obtain the second similarity.
  • After the first similarity and the second similarity between the i-th current object and the j-th historical object are obtained, they can be added to give the similarity similarity_matrix(i, j) between the neighbor topology graphs of the i-th current object and the j-th historical object.
  • By traversing every current object and every historical object in this way, a similarity matrix similarity_matrix is formed. Solving this similarity matrix yields the correspondence between each current object and each historical object, after which the object tracking results can be updated according to the determined correspondences.
  • The similarity matrix similarity_matrix can be solved with algorithms such as greedy nearest-neighbor or Hungarian matching to obtain the correspondence between the current objects and the historical objects; the specific method can be the same as the one used to determine neighbour_match. The obtained correspondences are used as the matching results between the current objects and the historical objects. A sketch of this matching step is given below.
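```python
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE = -1e6   # score for pairs ruled out by the distance gate (assumption; the text
              # above treats such pairs as "not the same object" with similarity 0)

def node_similarity(f_a: np.ndarray, f_b: np.ndarray) -> float:
    """First/third similarity: minus the norm of the feature difference."""
    return -float(np.linalg.norm(f_a - f_b))

def graph_similarity(g_cur: dict, g_hist: dict) -> float:
    """First similarity (center nodes) + second similarity (matched neighbors)."""
    s1 = node_similarity(g_cur["center"], g_hist["center"])
    if len(g_cur["neighbors"]) == 0 or len(g_hist["neighbors"]) == 0:
        return s1
    # x*y neighbor similarity matrix, solved with the Hungarian algorithm.
    nm = np.array([[node_similarity(a, b) for b in g_hist["neighbors"]]
                   for a in g_cur["neighbors"]])
    rows, cols = linear_sum_assignment(-nm)          # maximize total similarity
    s2 = float(nm[rows, cols].sum())                 # sum of the matched third similarities
    return s1 + s2

def match_objects(cur_graphs, hist_graphs, dist_thresh: float = 5.0):
    """Return (current_index, historical_index) pairs of matched objects."""
    S = np.full((len(cur_graphs), len(hist_graphs)), GATE)
    if S.size == 0:
        return []
    for i, gc in enumerate(cur_graphs):
        for j, gh in enumerate(hist_graphs):
            if np.linalg.norm(gc["center"][:2] - gh["center"][:2]) <= dist_thresh:
                S[i, j] = graph_similarity(gc, gh)
    rows, cols = linear_sum_assignment(-S)           # solve the final similarity matrix
    return [(i, j) for i, j in zip(rows, cols) if S[i, j] > GATE]
```

This sketch uses scipy's linear_sum_assignment for both the neighbor matching and the final assignment; a greedy nearest-neighbor solver, as also mentioned above, could be substituted without changing the surrounding logic. Threshold and gating values are assumptions.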
  • Depending on the matching result, different strategies can be used to update the object tracking results.
  • If a current object is matched to a historical object (the correspondence is 'matched'), the position information of the current object is used to update the object tracking result of that historical object. If a current object is not matched to any historical object (the correspondence is 'new'), a new object tracking result is created for the current object, with the position information of the current object as its corresponding tracking result. For each historical object among the multiple historical objects, in response to determining that no current object matches the historical object (the correspondence is 'disappeared'), it is judged, based on the acquisition time of the historical frame image corresponding to the historical object and the current time, whether their difference is greater than or equal to a retention time threshold, i.e., whether the historical object has gone undetected for a period reaching the preset retention time threshold, so as to decide whether to retain or clear the object tracking result of the historical object. The sketch below illustrates these update rules.
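```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Track:
    """Per-object tracking result (illustrative container, not the disclosure's)."""
    track_id: int
    box: np.ndarray            # latest (x, y, z, l, w, h, yaw)
    last_seen: float           # timestamp of the frame in which it was last detected
    history: list = field(default_factory=list)

def update_tracks(tracks, cur_boxes, matches, frame_time,
                  retention_thresh: float = 1.0, next_id: int = 0):
    """matches: list of (current_index, track_index) pairs from the matching step."""
    matched_cur = {i for i, _ in matches}

    for i, j in matches:                       # "matched": refresh the existing track
        tracks[j].history.append(tracks[j].box)
        tracks[j].box = cur_boxes[i]
        tracks[j].last_seen = frame_time

    for i in range(len(cur_boxes)):            # "new": create a track for unmatched objects
        if i not in matched_cur:
            tracks.append(Track(next_id, cur_boxes[i], frame_time))
            next_id += 1

    # "disappeared": keep unmatched tracks only while they are within the retention threshold
    tracks[:] = [t for t in tracks
                 if t.last_seen == frame_time
                 or frame_time - t.last_seen <= retention_thresh]
    return tracks, next_id
```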
  • After the updated object tracking results are obtained, the intelligent driving device carrying the image acquisition device that collected the current frame image and the historical frame image can be controlled based on the updated results, for example by adjusting the driving route or driving speed when a detected object lies on the predicted route.
  • In this way, the position information of the current objects and the position information of the historical objects are used to determine the neighbor topology graphs of the current objects and of the historical objects, and object tracking is performed using these graphs, which improves the accuracy of object tracking.
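  • Finally, a minimal per-frame driver that chains the sketches above into the overall flow of the method; all helper names refer to the illustrative functions defined in the earlier snippets, not to an API from the disclosure.

```python
import numpy as np

def track_frame(tracks, next_id, cur_boxes, ego_pose_prev, ego_pose_curr, frame_time):
    """Run one tracking step: predict, build graphs, match, and update tracks."""
    if tracks:
        hist_boxes = np.stack([t.box for t in tracks])
        hist_pred = predict_positions(hist_boxes, ego_pose_prev, ego_pose_curr)
        matches = match_objects(build_neighbor_graphs(cur_boxes),
                                build_neighbor_graphs(hist_pred))
    else:
        matches = []
    return update_tracks(tracks, cur_boxes, matches, frame_time, next_id=next_id)
```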
  • Corresponding to the object tracking method above, the present disclosure also discloses an object tracking apparatus. Each module of the apparatus can implement each step of the object tracking method of the above embodiments and achieve the same beneficial effects, so the common parts are not repeated here.
  • As shown in FIG. 3, the object tracking apparatus includes:
  • An acquisition module 310, configured to acquire position information of multiple current objects detected in the current frame image, and position information of multiple historical objects detected in a historical frame image preceding the current frame image, where the time interval between the acquisition of the historical frame image and the acquisition of the current frame image is less than or equal to a preset time threshold;
  • A generating module 320, configured to, for each historical object among the multiple historical objects, generate predicted position information of the historical object in the current frame image based on the position information of the historical object in the historical frame image;
  • A determining module 330, configured to, for each current object among the multiple current objects, determine a neighbor topology graph of the current object based on the position information of the multiple current objects, where the neighbor topology graph of the current object includes a first node representing the position feature of the current object, second nodes representing the position features of the current object's neighbor objects, and the connecting edges between the first node and the second nodes; and, for each historical object among the multiple historical objects, determine a neighbor topology graph of the historical object based on the predicted position information of the multiple historical objects, where the neighbor topology graph of the historical object includes a third node representing the predicted position feature of the historical object, fourth nodes representing the predicted position features of the historical object's neighbor objects, and the connecting edges between the third node and the fourth nodes;
  • An update module 340, configured to update the object tracking results based on the neighbor topology graphs of the multiple current objects and the neighbor topology graphs of the multiple historical objects.
  • In one possible implementation, the generating module 320 is specifically configured to: determine, based on the position information of the image acquisition device when the current frame image was collected and its position information when the historical frame image was collected, a position offset vector from the position where the historical frame image was collected to the position where the current frame image was collected; and offset the position of the historical object in the historical frame image by the position offset vector to obtain the predicted position information of the historical object in the current frame image.
  • In one possible implementation, when determining, for each current object among the multiple current objects, the neighbor topology graph of the current object based on the position information of the multiple current objects, the determining module 330 is configured to: determine the position features of the current objects based on their position information; determine the neighbor objects of the current object based on the position information of the multiple current objects; generate the first node of the current object from its position feature; generate the second nodes from the position features of the current object's neighbor objects; and generate the connecting edges between the first node and the second nodes to obtain the neighbor topology graph of the current object.
  • In one possible implementation, the updating module 340 is specifically configured to: for each current object, determine the similarity between the neighbor topology graph of the current object and the neighbor topology graph of each historical object; and update the object tracking result corresponding to the current object based on the similarity.
  • In one possible implementation, when determining the similarity between the neighbor topology graph of the current object and the neighbor topology graph of each historical object, the updating module 340 is configured to: for each historical object, when the Euclidean distance between the current object and the historical object is less than or equal to a preset distance threshold, determine, based on the neighbor topology graph of the current object and that of the historical object, a first similarity between the node of the current object and the node of the historical object, and a second similarity between the nodes of the current object's neighbor objects and the nodes of the historical object's neighbor objects; and determine the similarity between the two neighbor topology graphs based on the first similarity and the second similarity.
  • In one possible implementation, when determining the similarity between the neighbor topology graph of the current object and the neighbor topology graph of each historical object, the updating module 340 is further configured to: for each historical object, when the Euclidean distance between the current object and the historical object is greater than the preset distance threshold, determine the similarity between the neighbor topology graph of the current object and that of the historical object to be 0.
  • In one possible implementation, when determining the first similarity between the node of the current object and the node of the historical object based on the neighbor topology graph of the current object and the neighbor topology graph of the historical object, the updating module 340 is configured to: determine the first similarity based on the difference between the feature vector corresponding to the position feature of the current object and the feature vector corresponding to the position feature of the historical object.
  • In one possible implementation, when determining the second similarity between the nodes of the current object's neighbor objects and the nodes of the historical object's neighbor objects based on the neighbor topology graph of the current object and the neighbor topology graph of the historical object, the updating module 340 is configured to: for a first neighbor object of the current object, determine, based on the two neighbor topology graphs, the third similarity between the first neighbor object and each neighbor object of the historical object, the first neighbor object being any one of the current object's neighbor objects; select, based on the third similarities, a target neighbor object matching the first neighbor object from the neighbor objects of the historical object; and determine the second similarity between the nodes of the current object's neighbor objects and the nodes of the historical object's neighbor objects based on the third similarity between each first neighbor object and its matched target neighbor object.
  • In one possible implementation, when updating the object tracking result corresponding to the current object based on the similarity, the updating module 340 is configured to: determine, for the current object, the matching result between the current object and each historical object based on the similarity; and, in response to determining that the current object is matched to a historical object, update the object tracking result corresponding to that historical object using the position information of the current object.
  • In one possible implementation, when updating the object tracking result corresponding to the current object based on the similarity, the updating module 340 is further configured to: in response to determining that the current object is not matched to any historical object, establish an object tracking result corresponding to the current object using the position information of the current object.
  • In one possible implementation, when updating the object tracking result corresponding to the current object based on the similarity, the updating module 340 is further configured to: for each historical object among the multiple historical objects, in response to determining that no current object matches the historical object, determine whether to retain or clear the object tracking result of the historical object based on the acquisition time of the historical frame image corresponding to the historical object and the current time.
  • In one possible implementation, the apparatus further includes a control module, configured to: control, based on the updated object tracking results, an intelligent driving device carrying the image acquisition device that collected the current frame image and the historical frame image.
  • Corresponding to the object tracking method above, an embodiment of the present disclosure further provides an electronic device 400. As shown in FIG. 4, which is a schematic structural diagram of the electronic device 400 provided by the embodiment of the present disclosure, the electronic device includes:
  • A processor 41, a memory 42, and a bus 43. The memory 42 is used to store execution instructions and includes an internal memory 421 and an external memory 422. The internal memory 421 temporarily stores operation data in the processor 41 as well as data exchanged with the external memory 422, such as a hard disk; the processor 41 exchanges data with the external memory 422 through the internal memory 421. When the electronic device 400 runs, the processor 41 communicates with the memory 42 through the bus 43, so that the processor 41 executes the following instructions:
  • Acquire position information of multiple current objects detected in the current frame image, and position information of multiple historical objects detected in a historical frame image preceding the current frame image, where the time interval between the acquisition of the historical frame image and the acquisition of the current frame image is less than or equal to a preset time threshold; for each historical object among the multiple historical objects, generate predicted position information of the historical object in the current frame image based on the position information of the historical object in the historical frame image; for each current object among the multiple current objects, determine a neighbor topology graph of the current object based on the position information of the multiple current objects, where the neighbor topology graph of the current object includes a first node representing the position feature of the current object, second nodes representing the position features of the current object's neighbor objects, and the connecting edges between the first node and the second nodes; and, for each historical object among the multiple historical objects, determine a neighbor topology graph of the historical object based on the predicted position information of the multiple historical objects, where the neighbor topology graph of the historical object includes a third node representing the predicted position feature of the historical object, fourth nodes representing the predicted position features of the historical object's neighbor objects, and the connecting edges between the third node and the fourth nodes;
  • Update the object tracking results based on the neighbor topology graphs of the multiple current objects and the neighbor topology graphs of the multiple historical objects.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the object tracking method described in the foregoing method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
  • The computer program product of the object tracking method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the object tracking method described in the foregoing method embodiments; reference can be made to those embodiments, and details are not repeated here.
  • An embodiment of the present disclosure further provides a computer program which, when executed by a processor, implements any one of the methods in the preceding embodiments. The computer program product can be implemented by hardware, by software, or by a combination of the two. In one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an object tracking method and apparatus, an electronic device, and a storage medium. The method includes: acquiring position information of multiple current objects detected in a current frame image, and position information of multiple historical objects detected in a historical frame image preceding the current frame image; for each of the multiple historical objects, generating predicted position information of the historical object in the current frame image based on the position information of the historical object in the historical frame image; for each of the multiple current objects, determining a neighbor topology graph of the current object based on the position information of the multiple current objects, and for each of the multiple historical objects, determining a neighbor topology graph of the historical object based on the predicted position information of the multiple historical objects; and updating object tracking results based on the neighbor topology graphs of the multiple current objects and the neighbor topology graphs of the multiple historical objects.

Description

对象跟踪
相关申请的交叉引用
本申请要求在2021年10月29日提交至中国专利局、申请号为CN2021112719237的中国专利申请的优先权,其全部内容通过引用结合在本公开中。
技术领域
本公开涉及计算机技术领域,具体而言,涉及对象跟踪。
背景技术
对象跟踪是一种根据多帧图像确定图像中对象的运动轨迹、运动状态的技术,可以应用于智能行驶装置(如自动驾驶车辆、装有辅助驾驶***的车辆、机器人等)的自动驾驶场景。随着技术水平的提高,智能行驶装置得到广泛应用,而高精度的对象跟踪是车辆智能化、自动化的重要部分,是智能行驶装置感知、控制、路径规划等模块的基础。通常,智能行驶装置可以装载有激光雷达等图像采集装置对其周围的对象进行定位,并对识别到的对象进行位置跟踪,将时序上连续的检测结果进行关联,利用对象跟踪结果确定对象的运动轨迹,对检测到的对象进行运动状态估计,进而准确地预测智能行驶装置的行驶路线。
发明内容
本公开实施例至少提供一种对象跟踪方法、装置电子设备及存储介质。
第一方面,本公开实施例提供了一种对象跟踪方法,包括:获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值;针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息;针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图,其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边;基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
第二方面,本公开实施例还提供一种对象跟踪装置,包括:获取模块,用于获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值;生成模块,用于针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息;确定模块,用于针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;以及针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图,其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边;更新模块,用于基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
第三方面,本公开实施例还提供一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行上述第一方面中的步骤。
第四方面,本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述第一方面中的步骤。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,此处的附图被并入说明书中并构成本说明书中的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。应当理解,以下附图仅示出了本公开的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1示出了本公开实施例所提供的一种对象跟踪方法的流程图;
图2示出了本公开实施例所提供的生成邻居拓扑图的流程图;
图3示出了本公开实施例所提供的一种对象跟踪装置的示意图;
图4示出了本公开实施例所提供的一种电子设备的示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中附图,对本公开实施例中的技术方案进行清楚、完整地描述,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。通常在此处描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此,以下对本公开的实施例的详细描述并非旨在限制本公开要求保护的范围,而是仅仅表示本公开的选定实施例。基于本公开的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
本文中术语“和/或”表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A、同时存在A和B或单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
相关技术中,密集场景下对象跟踪存在跟踪精确度低的缺陷,如在密集人群等特殊场景下,检测到的对象之间存在复杂的遮挡关系,在进行对象跟踪时容易将不同帧内检测到的对象错误关联,导致对象跟踪的结果错误,使行驶存在一定风险。
有鉴于此,本公开实施例提供了一种对象跟踪方法、装置、电子设备及计算机可读存储介质,利用当前对象及历史对象的位置信息确定当前对象及历史对象的邻居拓扑图,利用当前对象的邻居拓扑图及历史对象的邻居拓扑图进行对象跟踪,提高对象跟踪精确度。
下面通过具体的实施例,对本公开公开的对象跟踪方法、装置、电子设备及计算机可读存储介质进行说明。
如图1所示,本公开实施例公开了一种对象跟踪方法,该方法可以应用于具有计算能力的电子设备上,例如服务器等。具体地,该对象跟踪方法可以包括如下步骤:
S110、获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧 图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值。
上述当前帧图像可以为图像采集装置采集得到的,图像采集装置可以为单目摄像机、多目摄像机、激光雷达、声波雷达等装置,采集到的图像可以为点云数据、深度图像、普通图像等。上述图像采集装置可以部署在智能行驶装置上,智能行驶装置可以为自动驾驶车辆、装有辅助驾驶***的车辆或者机器人等,本实施例的图像采集装置以激光雷达为例,激光雷达能够获取到周围物体的点云数据,并根据点云数据确定检测到的各个对象的位置信息,除了对象在预设坐标系下的坐标外,位置信息还可以包括对象的姿态信息,如对象的长、宽、高,以及对象的偏转角等。
上述当前帧图像与历史帧图像可以是连续的两帧图像,也即,历史帧图像为当前帧图像的前一帧图像,上述当前对象为当前帧点云数据的检测结果中出现的对象,上述历史对象则为历史帧点云数据的检测结果中出现的对象。历史帧图像与当前帧图像也可以是非连续的两帧图像,二者采集的时间间隔小于或等于预设阈值,以保证两帧图像能够用于对象跟踪。
在激光雷达的检测结果中,当前帧图像和历史帧图像中可能检测到同一个对象,比如,在历史帧图像中检测到60个对象,在当前帧图像中检测到了70个对象,其中,可能存在40个对象同时在历史帧图像和当前帧图像中被检测到,历史帧图像中有20个对象没有在当前帧图像中被检测到,当前帧图像中有30个对象没有在历史帧图像中检测到,对于在历史帧图像和当前帧图像中同时被检测到的对象,即可根据其在历史帧图像和当前帧图像的位置信息确定其运动路径,对该对象进行跟踪。
为实现对象跟踪,针对每个历史对象,需要先确定该历史对象与当前帧图像中的各个对象之间是否能够匹配,即是否为同一对象。
S120、针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息。
上述历史对象的位置信息是基于图像采集装置采集历史帧图像时的位置测量的,并不能表示历史对象在当前帧图像中的位置信息,例如,对于部署在智能行驶车辆上的激光雷达装置,由于智能行驶车辆处于行驶状态,该激光雷达装置的位置是随时变化的。因此,可以利用历史对象的位置信息对历史对象在当前帧图像中的位置信息进行预测,得到预测位置信息。
上述预测位置信息为历史对象在当前帧图像中的预测结果,可以利用历史对象在历史帧图像中的运动状态信息和/或图像采集装置的运动状态信息确定。所述运动 状态信息包括但不限于位置偏移量等。
示例性的,可以通过以下步骤生成上述历史对象在当前帧图像中的预测位置信息:
基于图像采集装置采集当前帧图像时的位置信息、以及所述图像采集装置采集历史帧图像时的位置信息,确定所述图像采集装置从采集所述历史帧图像的位置到采集所述当前帧图像的位置的位置偏移向量;
基于所述位置偏移向量,对所述历史对象的位置进行偏移,得到所述历史对象在当前帧图像中的预测位置信息。
通常,激光雷达检测结果采用的坐标系是以自身为原点的,由于智能行驶装置通常处于运动状态中,因此,智能行驶装置上的图像采集装置也处于运动状态中,这导致历史帧图像的检测结果与当前帧图像的检测结果的坐标系并不一致。但是可以通过图像采集装置采集当前帧图像时的位置信息,以及在采集历史帧图像时的位置信息,确定位置偏移向量,从而确定两帧图像的坐标系的偏移量,利用位置偏移向量(例如平移向量和/或旋转向量)将历史对象的位置进行偏移,得到历史对象在当前帧图像中的预测位置信息。
这样,通过利用目标车辆的位置偏移向量对历史对象的位置信息进行偏移,可以得到历史对象在当前帧图像中的预测位置信息。
S130、针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;以及,针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图;其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边。
上述邻居拓扑图可以由节点与连接边组成,当前对象中每个对象都可以有其对应的邻居拓扑图,其中,当前对象的邻居拓扑图中存在第一节点,用于表示当前对象的位置特征,该第一节点对应至少一个第二节点,第二节点用于表示该第一节点对应的邻居节点的位置特征,节点的邻居节点即为该节点对应的对象的邻居对象,这里,位置特征可以根据节点对应的位置信息确定。历史对象中每个对象可以有其对应的邻居拓扑图,其中,历史对象的邻居拓扑图中存在第三节点,用于表示历史 对象的预测位置特征,该第三节点对应至少一个第四节点,第四节点用于表示该历史对象的邻居对象的预测位置特征。
一种可能的实施方式中,可以利用以下步骤确定当前对象的邻居拓扑图:
基于所述多个当前对象的位置信息,确定所述多个当前对象中各个当前对象的位置特征;基于所述多个当前对象的位置信息,确定该当前对象的邻居对象;基于该当前对象的位置特征,生成该当前对象的第一节点;基于该当前对象的邻居对象的位置特征,生成该当前对象的邻居对象的第二节点;生成连接所述第一节点与所述第二节点的连接边,得到该当前对象的邻居拓扑图。
该实施方式中,针对每个当前对象,可以根据多个当前对象的位置信息,确定该当前对象的邻居对象,示例性的,可以确定该当前对象与多个当前对象中每个其他对象之间的欧氏距离,若确定该当前对象与该其他对象的欧式距离小于或等于预设阈值,则可以将该其他对象作为上述当前对象的邻居对象。
同时,还可以基于当前对象的位置信息,确定各个当前对象的位置特征,具体的,可以提取当前对象的位置信息中各个维度的数据,得到各个当前对象的位置特征,之后,可以将其组合为一个N维的特征向量,得到位置特征对应的特征向量。示例性的,假设位置信息包含坐标信息、尺寸信息及偏转角,则位置特征可以包括坐标系的x轴特征、y轴特征、z轴特征,以及尺寸信息的长度特征length、宽度特征width、高度特征height,以及偏转角特征yaw。
之后,针对每个当前对象,可以基于该当前对象的位置特征,生成该当前对象对应的节点,再基于该当前对象的邻居对象的位置特征,生成其对应的节点。在邻居拓扑图中,各个节点的位置信息与其对应的位置特征匹配。
在生成邻居拓扑图中的节点后,可以生成连接各个节点的连接边,连接边将第一节点分别与其对应的第二节点连接,第二节点之间不存在连接边,示例性的,连接边可以为从第一节点指向第二节点的有向边,若一第一节点存在K个第二节点,则其对应的邻居拓扑图则包含K+1个节点以及K条连接边。
该实施方式,使得邻居拓扑图能够体现当前对象及当前对象的邻居对象的位置特征,并利用当前对象及当前对象的邻居对象的位置特征确定对象跟踪结果,提高了对象跟踪的精度。
示例性的,可以令 $D^t=\{D_i^t\}_{i=1}^{I_t}$ 表示第t帧的检测结果,其中,$I_t$ 为检测到的对象的数目,$D_i^t$ 为第i个对象的位置信息,记作 $D_i^t=(x,y,z,l,w,h,\mathit{yaw},P_i^t)$,其中 $P_i^t$ 可以为检测结果对应的原始点云数据。可以令位置特征为 $f_i^t=(x,y,z,l,w,h,\mathit{yaw})$,表示第i个对象的中心点坐标、长、宽、高和朝向角;利用 $N_i^t=\{f_{i_0}^t,f_{i_1}^t,\dots,f_{i_K}^t\}$ 表示对象和其邻居对象的位置特征的集合,其中 $f_i^t$ 为对象的位置特征,$f_{i_k}^t$ 是检测结果的对象的第k个邻居对象的位置特征;为了简化记号,我们将对象自身的位置特征 $f_i^t$ 记作其第0个邻居对象的位置特征 $f_{i_0}^t$,则上述集合可以记为 $N_i^t=\{f_{i_k}^t\}_{k=0}^{K}$。集合中第0个邻居对象及其邻居对象之间的拓扑关系用有向图来表示,得到邻居拓扑图,将其记作 $G_i^t=(V_i^t,E_i^t)$,其有K+1个顶点和K条有向边,其中所有边均为从第0个邻居节点指向其他节点的有向边,因为总节点数目为K+1,所以边数目为K。其中 $V_i^t=\{v_{i_k}\}_{k=0}^{K}$ 为节点集合,$v_{i_k}$ 表示第 $i_k$ 个顶点的特征,这里的特征即位置特征,可以用(x,y,z,l,w,h,yaw)表示,也可以用神经网络从原始点云数据 $P_i^t$ 中提取得到,也可以是上述两种特征的组合。这里为了方便,可以以(x,y,z,l,w,h,yaw)为例描述。边的集合定义为 $E_i^t=\{e_{0k}=f_{\mathrm{diff}}(v_{i_0},v_{i_k})\}_{k=1}^{K}$,其中 $f_{\mathrm{diff}}(\cdot)$ 为根据节点计算的有向边的函数。由于节点一般为N维向量,示例性的,可以直接取两个向量做减法,得到连接边的向量 $f_{\mathrm{diff}}(v_{i_0},v_{i_k})$。
相似的,可以利用类似的方式确定历史对象的邻居拓扑图。
如图2所示,为本公开实施例提供的生成邻居拓扑图的流程图,可以先根据各个对象的位置特征,生成各个节点,并分别确定各个对象的邻居对象,根据邻居对象的位置特征及当前对象的位置特征计算连接边,将生成的节点与连接边拼接得到邻居拓扑图。
S140、基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
基于上述多个当前对象的邻居拓扑图与多个历史对象的邻居拓扑图可以进行当前对象与历史对象的匹配,确定各个当前对象与各个历史对象之间的对应关系,示 例性的,对应关系可以包括匹配、新增和消失,其中,若一个当前对象与一个历史对象为同一对象,则该当前对象与该历史对象之间的关系可以为匹配;若一个当前对象与各个历史对象都不为同一对象,则该当前对象为新增对象,其与历史对象的对应关系可以为新增;若一个历史对象与各个当前对象都不为同一对象,则该历史对象的与当前对象的对应关系可以为消失。
在确定当前对象与历史对象之间的对应关系后,即可根据确定的对应关系确定各个对象的跟踪结果,并更新对象跟踪结果。
具体的,可以确定各当前对象的邻居拓扑图与各历史对象的邻居拓扑图之间的相似度;然后基于确定的相似度更新对象跟踪结果。
该实施例中,可以根据各当前对象与各历史对象的邻居拓扑图之间的相似度确定各当前对象与各历史对象的对应关系,再根据确定的对应关系更新对象跟踪结果。
在一个实施例中,可以针对每个当前对象,确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度;基于所述相似度,更新该当前对象对应的所述对象跟踪结果。
一种可能的实施方式中,可以利用以下步骤确定所述多个当前对象中的每个当前对象的邻居拓扑图与各历史对象的邻居拓扑图之间的相似度:
针对每个所述历史对象,在该当前对象与该历史对象之间的欧式距离小于或等于预设距离阈值的情况下,基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的节点与该历史对象的节点的第一相似度、以及该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的第二相似度;
基于所述第一相似度及所述第二相似度,确定该当前对象的邻居拓扑图与该历史对象的邻居拓扑图之间的相似度。
由于当前帧图像与历史帧图像之间的时间间隔较短,各个对象在当前帧图像中的位置和在历史帧图像中的位置之间的位置偏移较小,若一当前对象与一历史对象为同一对象,其邻居拓扑图之间的相似度应当较高,且之间的欧氏距离较小,若其欧式距离大于预设距离阈值,则认为其不为同一对象,将相似度设置为0,若其欧式距离小于或等于预设距离阈值,则可以认为该当前对象与该历史对象可能为匹配关系,此时可以基于该当前对象及该历史对象的邻居拓扑图确定该当前对象的节点与该历史对象的节点的第一相似度,示例性的,可以计算该当前对象的位置特征和该历史对象的位置特征的特征向量差,对确定的差值取模处理,再乘以-1,得到第一相似度
Figure PCTCN2022128396-appb-000017
同时,可以根据该当前对象的邻居拓扑图及该历史对象的邻居拓扑图,确定该当前对象的邻居对象的节点与该历史对象的邻居对象的节点之间的第二相似度。示例性的,对于该当前对象的邻居拓扑图
Figure PCTCN2022128396-appb-000018
和该历史对象的邻居拓扑图
Figure PCTCN2022128396-appb-000019
仅考虑其中的邻居对象(即不考虑
Figure PCTCN2022128396-appb-000020
Figure PCTCN2022128396-appb-000021
),可以假设
Figure PCTCN2022128396-appb-000022
有x个邻居,
Figure PCTCN2022128396-appb-000023
有y个邻居,两组邻居对象构成一个二分图。
利用计算第一相似度的方式,确定相似性矩阵得到一个x*y的邻居相似性矩阵neighbour_matrix,邻居相似性矩阵中各个元素为对应的当前对象的邻居对象的节点与对应的历史对象的邻居对象的节点之间的第三相似度,利用匈牙利匹配算法或其他匹配算法求解该邻居相似性矩阵,得到一组最优的匹配关系neighbour_match,示例性的,可以取第三相似度最高的一对邻居节点作为neighbour_match,neighbour_match中包括该当前对象的邻居对象及与其匹配的该历史对象的邻居对象,即确定该当前对象的各邻居对象与该历史对象的各邻居对象是否为同一对象,在得到neighbour_match之后,可以将neighbour_match中的各第三相似度相加,得到第二相似度。
在得到第i个当前对象与第j个历史对象的第一相似度和第二相似度后,可以将得到的第一相似度及第二相似度相加,得到第i个当前对象与第j个历史对象的邻居拓扑图之间的相似度similarity_matrix(i,j)。
基于上述方法,遍历每个当前对象和每个历史对象后,可以组成相似度矩阵similarity_matrix,通过对相似性矩阵求解,可以得到各个当前对象与各个历史对象之间的对应关系,之后,即可根据确定的对应关系更新对象跟踪结果。
可以利用贪心最近邻、匈牙利匹配等算法对得到的相似度矩阵similarity_matrix进行求解,从而得到当前对象与历史对象之间的对应关系,其具体方法可以与确定neighbour_match的方式相同。
最后,在确定该当前对象与该历史对象之间的对应关系后,可以将得到的对应关系作为该当前对象与该历史对象的匹配结果,针对不同的匹配结果,可以采用不同的方式更新对象跟踪结果。
具体的,针对该当前对象匹配到历史对象(即对应关系为匹配),可以利用该当前对象的位置信息更新该历史对象对应的对象跟踪结果;针对该当前对象未匹配到历史对象(即对应关系为新增),可以新建针对该当前对象的对象跟踪结果,利用该当前对象的位置信息作为其对应的对象跟踪结果;针对所述多个历史对象中的 每个历史对象,响应于确定没有与该历史对象相匹配的当前对象(即对应结果为消失),可以基于该历史对象对应的的历史帧图像的采集时间及当前时间,判断当前时间与历史帧图像的采集时间的差值是否大于或等于保留时间阈值,即是否未检测到该历史对象的时长达到预设的保留时间阈值,从而确定保留或清除该历史对象的对象跟踪结果。
在得到更新后的对象跟踪结果后,即可基于更新后的对象跟踪结果控制装载有采集上述当前帧图像和历史帧图像的图像采集装置的智能行驶装置,比如,在预测行驶路线上存在检测到的对象时调整行驶路线、行驶速度等。
这样,利用当前对象的位置信息及历史对象的位置信息确定当前对象的邻居拓扑图及历史对象的邻居拓扑图,并利用当前对象的邻居拓扑图及历史对象的邻居拓扑图进行对象跟踪,提高对象跟踪精确度。
对应于上述对象跟踪方法,本公开还公开了一种对象跟踪装置,该装置中的各个模块能够实现上述各个实施例的对象跟踪方法中的每个步骤,并且能够取得相同的有益效果,因此,对于相同的部分这里不再进行赘述。具体地,如图3所示,对象跟踪装置包括:
获取模块310,用于获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值;
生成模块320,用于针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息;
确定模块330,针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;以及针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图,其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边;
更新模块340,用于基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
在一种可能的实施方式中,所述生成模块320具体用于:
基于图像采集装置采集当前帧图像时的位置信息、以及所述图像采集装置采集历史帧图像时的位置信息,确定所述图像采集装置从采集所述历史帧图像的位置到采集所述当前帧图像的位置的位置偏移向量;
基于所述位置偏移向量,对该历史对象在所述历史帧图像中的位置进行偏移,得到该历史对象在所述当前帧图像中的所述预测位置信息。
在一种可能的实施方式中,所述确定模块330在针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息,确定该当前对象的邻居拓扑图时,用于:
基于所述多个当前对象的位置信息,确定所述多个当前对象中各个当前对象的位置特征;
基于所述多个当前对象的位置信息,确定该当前对象的邻居对象;
基于该当前对象的位置特征,生成该当前对象的第一节点;
基于该当前对象的邻居对象的位置特征,生成该当前对象的邻居对象的第二节点;
生成连接所述第一节点与所述第二节点的连接边,得到该当前对象的邻居拓扑图。
在一种可能的实施方式中,所述更新模块340具体用于:
针对每个当前对象,确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度;
基于所述相似度,更新该当前对象对应的所述对象跟踪结果。
在一种可能的实施方式中,所述更新模块340在确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度时,用于:
针对每个所述历史对象,在该当前对象与该历史对象之间的欧式距离小于或等于预设距离阈值的情况下,基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的节点与该历史对象的节点的第一相似度、以及该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的第二相似度;
基于所述第一相似度及所述第二相似度,确定该当前对象的邻居拓扑图与该历史对象的邻居拓扑图之间的相似度。
在一种可能的实施方式中,所述更新模块340在确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度时,用于:
针对每个所述历史对象,在该当前对象与该历史对象之间的欧式距离大于预设距离阈值的情况下,确定该当前对象的邻居拓扑图与该历史对象的邻居拓扑图之间的相似度为0。
在一种可能的实施方式中,所述更新模块340在基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的节点与该历史对象的节点的所述第一相似度时,用于:
基于该当前对象的位置特征对应的特征向量与该历史对象的位置特征对应的特征向量之间的差值,确定所述第一相似度。
在一种可能的实施方式中,所述更新模块340在基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的所述第二相似度时,用于:
针对该当前对象的一个第一邻居对象,基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该第一邻居对象分别与该历史对象的各个邻居对象之间的第三相似度;所述第一邻居对象为所述当前对象的邻居对象中的任一个;
基于所述第三相似度,从该历史对象的邻居对象中筛选与该第一邻居对象相匹配的目标邻居对象;
基于每个第一邻居对象以及与其相匹配的目标邻居对象之间的第三相似度,确定该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的所述第二相似度。
在一种可能的实施方式中,所述更新模块340在基于所述相似度,更新该当前对象对应的所述对象跟踪结果时,用于:
针对该当前对象,基于所述相似度,确定该当前对象与各个所述历史对象的匹配结果;
响应于确定该当前对象匹配到历史对象,利用该当前对象的位置信息更新该历史对象对应的对象跟踪结果。
在一种可能的实施方式中,所述更新模块340在基于所述相似度,更新该当前对象对应的所述对象跟踪结果时,还用于:
响应于确定该当前对象未匹配到历史对象,利用该当前对象的位置信息建立与该当前对象的对应的对象跟踪结果。
在一种可能的实施方式中,所述更新模块340在基于所述相似度,更新该当前对象对应的所述对象跟踪结果时,还用于:
针对所述多个历史对象中的每个历史对象,响应于确定没有与该历史对象相匹 配的当前对象,基于该历史对象对应的历史帧图像的采集时间及当前时间,确定保留或清除该历史对象的对象跟踪结果。
在一种可能的实施方式中,所述装置还包括控制模块,用于:
基于更新后的所述对象跟踪结果,控制装载有采集所述当前帧图像以及所述历史帧图像的图像采集装置的智能行驶装置。
对应于上述对象跟踪方法,本公开实施例还提供了一种电子设备400,如图4所示,为本公开实施例提供的电子设备400结构示意图,包括:
处理器41、存储器42、和总线43;存储器42用于存储执行指令,包括内存421和外部存储器422;这里的内存421也称内存储器,用于暂时存放处理器41中的运算数据,以及与硬盘等外部存储器422交换的数据,处理器41通过内存421与外部存储器422进行数据交换,当电子设备400运行时,处理器41与存储器42之间通过总线43通信,使得处理器41执行以下指令:
获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值;
针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息;
针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;以及针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图,其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边;
基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的对象跟踪方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例所提供的对象跟踪方法的计算机程序产品,包括存储了程序代码 的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的对象跟踪方法的步骤,具体可参见上述方法实施例,在此不再赘述。
本公开实施例还提供一种计算机程序,该计算机程序被处理器执行时实现前述实施例的任意一种方法。该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的具体实施方式,用以说明本 公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应所述以权利要求的保护范围为准。

Claims (15)

  1. 一种对象跟踪方法,其特征在于,包括:
    获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值;
    针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息;
    针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;
    针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图,其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边;
    基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
  2. 根据权利要求1所述的方法,其特征在于,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的所述预测位置信息,包括:
    基于图像采集装置采集当前帧图像时的位置信息、以及所述图像采集装置采集历史帧图像时的位置信息,确定所述图像采集装置从采集所述历史帧图像的位置到采集所述当前帧图像的位置的位置偏移向量;
    基于所述位置偏移向量,对该历史对象在所述历史帧图像中的位置进行偏移,得到该历史对象在所述当前帧图像中的所述预测位置信息。
  3. 根据权利要求1或2所述的方法,其特征在于,针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息,确定该当前对象的邻居拓扑图,包括:
    基于所述多个当前对象的位置信息,确定所述多个当前对象中各个当前对象的位置特征;
    基于所述多个当前对象的位置信息,确定该当前对象的邻居对象;
    基于该当前对象的位置特征,生成该当前对象的第一节点;
    基于该当前对象的邻居对象的位置特征,生成该当前对象的邻居对象的第二节点;
    生成连接所述第一节点与所述第二节点的连接边,得到该当前对象的邻居拓扑图。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新所述对象跟踪结果,包括:
    针对每个当前对象,确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度;
    基于所述相似度,更新该当前对象对应的所述对象跟踪结果。
  5. 根据权利要求4所述的方法,其特征在于,确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度,包括:
    针对每个所述历史对象,在该当前对象与该历史对象之间的欧式距离小于或等于预设距离阈值的情况下,基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的节点与该历史对象的节点的第一相似度、以及该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的第二相似度;
    基于所述第一相似度及所述第二相似度,确定该当前对象的邻居拓扑图与该历史对象的邻居拓扑图之间的相似度。
  6. 根据权利要求4或5所述的方法,其特征在于,确定该当前对象的邻居拓扑图分别与各个所述历史对象的邻居拓扑图之间的相似度,包括:
    针对每个所述历史对象,在该当前对象与该历史对象之间的欧式距离大于预设距离阈值的情况下,确定该当前对象的邻居拓扑图与该历史对象的邻居拓扑图之间的相似度为0。
  7. 根据权利要求5所述的方法,其特征在于,基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的节点与该历史对象的节点的所述第一相似度,包括:
    基于该当前对象的位置特征对应的特征向量与该历史对象的位置特征对应的特征向量之间的差值,确定所述第一相似度。
  8. 根据权利要求5或7所述的方法,其特征在于,所述基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的所述第二相似度,包括:
    针对该当前对象的一个第一邻居对象,基于该当前对象的邻居拓扑图,以及该历史对象的邻居拓扑图,确定该第一邻居对象分别与该历史对象的各个邻居对象之间的第三相似度;所述第一邻居对象为所述当前对象的邻居对象中的任一个;
    基于所述第三相似度,从该历史对象的邻居对象中筛选与该第一邻居对象相匹配的 目标邻居对象;
    基于每个第一邻居对象以及与其相匹配的目标邻居对象之间的第三相似度,确定该当前对象的邻居对象的节点与该历史对象的邻居对象的节点的所述第二相似度。
  9. 根据权利要求4至8任一所述的方法,其特征在于,基于所述相似度,更新该当前对象对应的所述对象跟踪结果,包括:
    针对该当前对象,基于所述相似度,确定该当前对象与各个所述历史对象的匹配结果;
    响应于确定该当前对象匹配到历史对象,利用该当前对象的位置信息更新该历史对象对应的对象跟踪结果。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    响应于确定该当前对象未匹配到历史对象,利用该当前对象的位置信息建立与该当前对象的对应的对象跟踪结果。
  11. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    针对所述多个历史对象中的每个历史对象,响应于确定没有与该历史对象相匹配的当前对象,基于该历史对象对应的历史帧图像的采集时间及当前时间,确定保留或清除该历史对象的对象跟踪结果。
  12. 根据权利要求1至11任一所述的方法,其特征在于,所述方法还包括:
    基于更新后的所述对象跟踪结果,控制装载有采集所述当前帧图像以及所述历史帧图像的图像采集装置的智能行驶装置。
  13. 一种对象跟踪装置,其特征在于,包括:
    获取模块,用于获取在当前帧图像中检测到的多个当前对象的位置信息,以及在当前帧图像之前的历史帧图像中检测到的多个历史对象的位置信息;其中,所述历史帧图像与所述当前帧图像的采集的时间间隔小于或等于预设时间阈值;
    生成模块,用于针对所述多个历史对象中的每个历史对象,基于该历史对象在所述历史帧图像中的位置信息,生成该历史对象在所述当前帧图像中的预测位置信息;
    确定模块,用于针对所述多个当前对象中的每个当前对象,基于所述多个当前对象的位置信息确定该当前对象的邻居拓扑图,其中,该当前对象的邻居拓扑图包括表示该当前对象的位置特征的第一节点、表示该当前对象的邻居对象的位置特征的第二节点以及所述第一节点与所述第二节点之间的连接边;以及针对所述多个历史对象中的每个历史对象,基于所述多个历史对象的预测位置信息确定该历史对象的邻居拓扑图,其中,该历史对象的邻居拓扑图包含表示该历史对象的预测位置特征的第三节点、表示该历史 对象的邻居对象的预测位置特征的第四节点以及所述第三节点与所述第四节点之间的连接边;
    更新模块,用于基于所述多个当前对象的邻居拓扑图与所述多个历史对象的邻居拓扑图,更新对象跟踪结果。
  14. 一种电子设备,其特征在于,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至12任一项所述的对象跟踪方法的步骤。
  15. 一种计算机可读存储介质,其特征在于,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1至12任一项所述的对象跟踪方法的步骤。
PCT/CN2022/128396 2021-10-29 2022-10-28 对象跟踪 WO2023072269A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111271923.7A CN113971687A (zh) 2021-10-29 2021-10-29 对象跟踪方法、装置电子设备及存储介质
CN202111271923.7 2021-10-29

Publications (1)

Publication Number Publication Date
WO2023072269A1 true WO2023072269A1 (zh) 2023-05-04

Family

ID=79589205

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128396 WO2023072269A1 (zh) 2021-10-29 2022-10-28 对象跟踪

Country Status (2)

Country Link
CN (1) CN113971687A (zh)
WO (1) WO2023072269A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113971687A (zh) * 2021-10-29 2022-01-25 上海商汤临港智能科技有限公司 对象跟踪方法、装置电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090201149A1 (en) * 2007-12-26 2009-08-13 Kaji Mitsuru Mobility tracking method and user location tracking device
CN104200488A (zh) * 2014-08-04 2014-12-10 合肥工业大学 一种基于图表示和匹配的多目标跟踪方法
CN110895819A (zh) * 2018-09-12 2020-03-20 长沙智能驾驶研究院有限公司 目标跟踪方法、装置、计算机可读存储介质和计算机设备
WO2021072709A1 (zh) * 2019-10-17 2021-04-22 深圳市大疆创新科技有限公司 目标检测与跟踪方法、***、设备及存储介质
CN113971687A (zh) * 2021-10-29 2022-01-25 上海商汤临港智能科技有限公司 对象跟踪方法、装置电子设备及存储介质


Also Published As

Publication number Publication date
CN113971687A (zh) 2022-01-25

Similar Documents

Publication Publication Date Title
Park et al. Elastic lidar fusion: Dense map-centric continuous-time slam
CN107990899B (zh) 一种基于slam的定位方法和***
KR101725060B1 (ko) 그래디언트 기반 특징점을 이용한 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
KR101708659B1 (ko) 이동 로봇의 맵을 업데이트하기 위한 장치 및 그 방법
KR101776622B1 (ko) 다이렉트 트래킹을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
US8913055B2 (en) Online environment mapping
CN110702111A (zh) 使用双事件相机的同时定位与地图创建(slam)
KR101776621B1 (ko) 에지 기반 재조정을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
CN112304307A (zh) 一种基于多传感器融合的定位方法、装置和存储介质
CN112219087A (zh) 位姿预测方法、地图构建方法、可移动平台及存储介质
Voigt et al. Robust embedded egomotion estimation
KR20150144726A (ko) 검색 기반 상관 매칭을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
Li et al. Review of vision-based Simultaneous Localization and Mapping
Peng et al. Globally-optimal contrast maximisation for event cameras
CN110648363A (zh) 相机姿态确定方法、装置、存储介质及电子设备
CN112802096A (zh) 实时定位和建图的实现装置和方法
CN112734837B (zh) 图像匹配的方法及装置、电子设备及车辆
WO2023072269A1 (zh) 对象跟踪
KR101916573B1 (ko) 다중 객체 추적 방법
CN112233148A (zh) 目标运动的估计方法、设备及计算机存储介质
Qian et al. Pocd: Probabilistic object-level change detection and volumetric mapping in semi-static scenes
Wei et al. Novel robust simultaneous localization and mapping for long-term autonomous robots
Eade Monocular simultaneous localisation and mapping
Yang et al. Visual SLAM using multiple RGB-D cameras
CN115239899B (zh) 位姿图生成方法、高精地图生成方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22886144

Country of ref document: EP

Kind code of ref document: A1