CN115965944A - Target information detection method, device, driving device, and medium - Google Patents


Publication number
CN115965944A
CN115965944A (application CN202310222986.6A; granted as CN115965944B)
Authority
CN
China
Prior art keywords
feature map, historical, time, moment, target information
Prior art date
Legal status
Granted
Application number
CN202310222986.6A
Other languages
Chinese (zh)
Other versions
CN115965944B (en)
Inventor
李传康
***
秦海波
彭琦翔
吴冰
姚卯青
Current Assignee
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202310222986.6A
Publication of CN115965944A
Application granted
Publication of CN115965944B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a target information detection method, a target information detection device, a driving device, and a medium. The method comprises: inputting the sensing data at the current moment into a perception detection model for feature extraction to obtain a feature map at the current moment; obtaining feature maps of historical moments from a buffer queue and aligning them with the feature map at the current moment to obtain a multi-time-frame feature map; and inputting the multi-time-frame feature map into the perception detection model for detection to obtain a detection result of the target information. Because feature extraction is performed only on the sensing data at the current moment, a large amount of redundant computation is avoided and the time spent on feature extraction is shortened, thereby improving the detection efficiency of the target information.

Description

Target information detection method, target information detection device, driving device, and medium
Technical Field
The invention relates to the technical field of data processing, and in particular provides a target information detection method, a target information detection device, a driving device, and a medium.
Background
Introducing a temporal capability into a perception model, whether to solve a time-series task (e.g., trajectory prediction) or to improve single-frame perception (e.g., using consecutive frames to refine target detection), is a current trend in algorithm development.
In general, the temporal module in a perception model uses a sliding-window approach: in a single inference, features must be extracted from the current frame and from every historical frame in the window. In the window at the next moment, feature extraction must be repeated for the historical frames. Each slide of the window thus produces a large amount of redundant computation, resulting in excessive time overhead and reduced efficiency in the detection of target information.
Disclosure of Invention
In order to overcome the above drawbacks, the present invention provides a target information detection method, device, driving device, and medium that solve, or at least partially solve, the technical problem that redundant computation at every slide of the window causes excessive time overhead and reduces the efficiency of target information detection.
In a first aspect, the present invention provides a method for detecting target information, the method comprising:
inputting the sensing data of the current moment into a sensing detection model for feature extraction to obtain a feature map of the current moment;
obtaining a feature map of a historical moment from a buffer queue, and aligning the feature map of the historical moment with the feature map of the current moment to obtain a multi-time frame feature map;
and inputting the multi-time-frame feature map into the perception detection model for detection to obtain a detection result of the target information.
Further, the method for detecting target information further comprises:
storing the feature map at the current moment into the buffer queue.
Further, in the method for detecting target information, storing the feature map at the current moment into the buffer queue comprises:
detecting whether the number of non-idle-state elements in the buffer queue has reached the maximum number of elements in the buffer queue, wherein a non-idle-state element is an element in which a feature map of a historical moment is stored;
if the number of non-idle-state elements in the buffer queue has reached the maximum number of elements, deleting the feature map of the historical moment in the first element, moving the feature maps of the historical moments in the remaining elements toward the first element in sequence to fill the vacancy, and storing the feature map at the current moment into the last element of the buffer queue;
if the number of non-idle-state elements in the buffer queue has not reached the maximum number of elements, storing the feature map at the current moment into the first idle-state element, wherein an idle-state element is an element in which no feature map of a historical moment is stored.
Further, in the method for detecting target information, storing the feature map at the current moment into the first idle-state element comprises:
inserting the feature map at the current moment from the last element of the buffer queue, and moving it toward the first element until it reaches the first idle-state element.
Further, in the method for detecting target information, aligning the feature map at the historical moment with the feature map at the current moment to obtain a multi-time-frame feature map comprises:
acquiring current pose information of the driving device based on the perception detection model, and acquiring historical pose information of the driving device from the buffer queue;
determining a pose transformation relation of the driving device according to the historical pose information and the current pose information;
and aligning the feature map at the historical moment with the feature map at the current moment based on the pose transformation relation to obtain the multi-time-frame feature map.
Further, in the method for detecting target information, aligning the feature map at the historical moment with the feature map at the current moment based on the pose transformation relation to obtain the multi-time-frame feature map comprises:
determining a mapping relation between the feature map at the current moment and the feature map at the historical moment based on the pose transformation relation;
and mapping the feature map at the historical moment to complete the alignment based on the mapping relation, thereby obtaining the multi-time-frame feature map.
Further, in the above method for detecting target information, determining the mapping relation between the feature map at the current moment and the feature map at the historical moment based on the pose transformation relation comprises:
performing grid division on the feature map at the current moment to obtain a gridded feature map at the current moment;
and determining the mapping relation between the feature map at the current moment and the feature map at the historical moment based on the gridded feature map at the current moment and the pose transformation relation.
In a second aspect, the present invention provides a target information detection apparatus, comprising a processor and a storage device, the storage device being adapted to store a plurality of program codes, wherein the program codes are adapted to be loaded and run by the processor to perform any one of the above target information detection methods.
In a third aspect, the present invention provides a driving device including the target information detection device described above.
In a fourth aspect, a computer-readable storage medium is provided, storing a plurality of program codes, wherein the program codes are adapted to be loaded and run by a processor to execute any one of the above target information detection methods.
One or more technical solutions of the invention have at least one or more of the following beneficial effects:
In the technical solution of the invention, feature maps of historical moments are stored in a buffer queue. The sensing data at the current moment is input into the perception detection model for feature extraction to obtain the feature map at the current moment; the feature maps of historical moments are then obtained directly from the buffer queue and aligned with the feature map at the current moment to obtain a multi-time-frame feature map, which serves as the input of the perception detection model for outputting the detection result of the target information. Because feature extraction is performed only on the sensing data at the current moment, a large amount of redundant computation is avoided, the time spent on feature extraction is shortened, and the detection efficiency of the target information is improved.
Drawings
The disclosure of the present invention will become more readily understood with reference to the accompanying drawings. As is readily understood by those skilled in the art: these drawings are for illustrative purposes only and are not intended to be a limitation on the scope of the present disclosure. Moreover, in the drawings, like numerals are used to indicate like parts, and in which:
fig. 1 is a flow chart illustrating the main steps of a method for detecting target information according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating alignment of feature maps at different times;
FIG. 3 is a graph comparing trace prediction results for different timing lengths;
FIG. 4 is a flow chart illustrating the main steps of a method for detecting target information according to an embodiment of the present invention;
fig. 5 is a main configuration block diagram of a detection apparatus of target information according to an embodiment of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module" or "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuits, various suitable sensors, communication ports, and memory; it may comprise software components such as program code; or it may be a combination of software and hardware. The processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functionality, and may be implemented in software, hardware, or a combination of both. Non-transitory computer-readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random-access memory, and the like. The term "A and/or B" denotes all possible combinations of A and B, such as A alone, B alone, or A and B together. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone, or both A and B. The singular forms "a", "an", and "the" may include the plural forms as well.
When detection is performed using a temporal module in a perception model, the temporal module usually uses a sliding-window approach: in a single inference, features must be extracted from the current frame and from every historical frame in the window. In the window at the next moment, feature extraction must be repeated for the historical frames. Each slide of the window thus produces a large amount of redundant computation, resulting in excessive time overhead and reduced efficiency in the detection of target information.
Therefore, in order to solve the above technical problems, the present invention provides the following technical solutions:
referring to fig. 1, fig. 1 is a flow chart illustrating main steps of a method for detecting target information according to an embodiment of the invention. As shown in fig. 1, the method for detecting target information in the embodiment of the present invention mainly includes the following steps 101 to 103.
Step 101, inputting the sensing data of the current moment into a sensing detection model for feature extraction to obtain a feature map of the current moment;
in a specific implementation process, sensing, a camera and the like can be used for acquiring data of the current environment in real time to serve as sensing data. For example, the perception data may include images, lidar point clouds, and the like. After the sensing data at the current moment is collected, the sensing data at the current moment can be input into a time sequence module in a pre-trained sensing detection model for feature extraction, and a feature map at the current moment is obtained.
Step 102, obtaining feature maps of historical moments from a buffer queue, and aligning them with the feature map at the current moment to obtain a multi-time-frame feature map;
in a specific implementation process, a buffer queue may be preset, and after extracting the feature map at each time of the timing sequence module in the sensing detection model, the feature map may be used as the feature map at the historical time and stored in the buffer queue. Therefore, after the sensing data at the current moment is obtained, only the feature map at the current moment corresponding to the sensing data at the current moment needs to be extracted, and after the feature map at the historical moment is directly obtained from the cache queue, the feature map at the historical moment is aligned with the feature map at the current moment to obtain the multi-time-frame feature map. The feature maps of N-1 historical moments can be extracted according to the time sequence length N specified by a time sequence module in the perception detection model.
In a specific implementation process, the feature map at the historical time and the feature map at the current time may be aligned with reference to the flowchart shown in fig. 2, so as to obtain a multi-time-frame feature map. Fig. 2 is a schematic flow chart illustrating alignment of feature maps at different times. As shown in fig. 2, the process may include the following steps 201-203:
Step 201, obtaining current pose information of the driving device based on the perception detection model, and obtaining historical pose information of the driving device from the buffer queue;
In a specific implementation process, when the feature map of a historical moment is stored into the buffer queue, the historical pose information of the driving device may be stored together with it. Thus, after the sensing data at the current moment is obtained, the current pose information of the driving device can be obtained using the perception detection model, and the historical pose information of the driving device can be obtained from the buffer queue.
Step 202, determining a pose transformation relation of the driving device according to the historical pose information and the current pose information;
In a specific implementation process, the pose change of the driving device from the historical moment to the current moment can be calculated from the historical pose information and the current pose information, and the pose transformation relation of the driving device can then be obtained from this pose change.
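As a minimal sketch of this step, assuming 2-D poses represented as SE(2) homogeneous matrices (the function names `se2` and `pose_delta` are invented for illustration; a real system would typically use full 6-DoF transforms):

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 2-D pose matrix from ego position (x, y) and heading yaw."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def pose_delta(pose_hist, pose_cur):
    """Transform taking coordinates expressed in the historical ego frame
    into the current ego frame: T_rel = inv(T_cur) @ T_hist."""
    return np.linalg.inv(pose_cur) @ pose_hist
```

The resulting `T_rel` is the pose transformation relation used in step 203 to align the two feature maps.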
Step 203, aligning the feature map of the historical moment with the feature map of the current moment based on the pose transformation relation to obtain the multi-time-frame feature map.
In a specific implementation process, after the pose transformation relation of the driving device is obtained, the mapping relation between the feature map at the current moment and the feature map at the historical moment may be determined based on it: that is, for each pixel position of the feature map at the current moment, the corresponding pixel position in the feature map at the historical moment. The feature map at the historical moment is then mapped onto the feature map at the current moment to complete the alignment and obtain the multi-time-frame feature map. Specifically, based on this mapping relation, the coordinates of each pixel of the feature map at the historical moment are projected onto the feature map at the current moment, so that all the feature maps are aggregated together.
In a specific implementation process, the feature map at the current moment may be divided into a grid to obtain a gridded feature map at the current moment, and the mapping relation between the feature map at the current moment and the feature map at the historical moment is determined based on the gridded feature map and the pose transformation relation. Specifically, grid cells containing point-cloud data may be selected as valid cells, and the mapping relation between the features in each valid cell and the feature map at the historical moment is determined in turn until all valid cells have been traversed, yielding the mapping relation between the feature map at the current moment and the feature map at the historical moment.
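The grid-based mapping can be illustrated with a deliberately simple nearest-neighbour warp over a bird's-eye-view grid. This is an assumption for the sketch: the patent does not specify the sampling scheme, and `align_history_to_current` is an invented name. Each cell centre of the current-frame grid is projected into the historical frame via the pose transformation, and the feature of the nearest historical cell is copied over.

```python
import numpy as np

def align_history_to_current(hist_feat, T_rel, cell_size):
    """Warp a historical BEV feature map into the current ego frame.

    hist_feat : (H, W) feature map in the historical ego frame
    T_rel     : 3x3 homogeneous transform from current ego coordinates
                to historical ego coordinates
    cell_size : metres per grid cell
    """
    H, W = hist_feat.shape
    aligned = np.zeros_like(hist_feat)
    for i in range(H):
        for j in range(W):
            # centre of cell (i, j) in current-frame metric coordinates
            x = (j - W / 2 + 0.5) * cell_size
            y = (i - H / 2 + 0.5) * cell_size
            xh, yh, _ = T_rel @ np.array([x, y, 1.0])
            # nearest source cell in the historical map
            jh = int(np.floor(xh / cell_size + W / 2))
            ih = int(np.floor(yh / cell_size + H / 2))
            if 0 <= ih < H and 0 <= jh < W:
                aligned[i, j] = hist_feat[ih, jh]
    return aligned
```

A production implementation would vectorise this loop and likely use bilinear rather than nearest-neighbour sampling, but the cell-by-cell projection is the same idea as the valid-grid traversal described above.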
Step 103, inputting the multi-time-frame feature map into the perception detection model for detection to obtain the detection result of the target information.
In a specific implementation process, the obtained multi-time-frame feature map may be input back into the perception detection model for detection to obtain the detection result of the target information. Different detection results of the target information can be obtained through different detection-head networks in the perception detection model. For example, the detection result may include target objects such as obstacles or signboards; semantic/instance segmentation results; drivable-area detection results; trajectory prediction results; intent prediction results; and so on.
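The idea of shared features feeding several task heads can be sketched as follows. The `detect` function and the channel-axis concatenation used as fusion are assumptions for illustration, not the patent's prescribed fusion method:

```python
import numpy as np

def detect(multi_frame_maps, heads):
    """Fuse the multi-time-frame feature maps (simple concatenation
    along the channel axis here) and run every task-specific head
    on the shared fused tensor."""
    fused = np.concatenate(multi_frame_maps, axis=0)   # (N*C, H, W)
    return {task: head(fused) for task, head in heads.items()}
```

Each entry of `heads` would be a trained head network (object detection, segmentation, trajectory prediction, ...); all of them consume the same fused multi-frame features.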
In a specific implementation process, when temporal features are needed, feature extraction is performed only on the sensing data at the current moment, which relieves the burden on the temporal module in the perception detection model and makes longer time-series lengths practical. For example, fig. 3 compares trajectory prediction results under different time-series lengths. As shown in fig. 3, in a vehicle-turning scene, with a time-series length of 3 frames the predicted vehicle travel track is a straight line that does not coincide with the lane line, whereas with a time-series length of 5 frames the predicted vehicle travel track is a curve that coincides with the lane line.
According to the target information detection method described above, feature maps of historical moments are stored in a buffer queue. The sensing data at the current moment is input into the perception detection model for feature extraction; after the feature map at the current moment is obtained, the feature maps of historical moments are obtained directly from the buffer queue and aligned with the feature map at the current moment to obtain a multi-time-frame feature map, which serves as the input of the perception detection model for outputting the detection result of the target information. Because feature extraction is performed only on the sensing data at the current moment, a large amount of redundant computation is avoided, the time spent on feature extraction is shortened, and the efficiency of target information detection is improved.
Referring to fig. 4, fig. 4 is a flow chart illustrating main steps of a method for detecting target information according to an embodiment of the invention. As shown in fig. 4, the method for detecting target information in the embodiment of the present invention mainly includes the following steps 401 to 406.
Step 401, inputting the sensing data of the current moment into a sensing detection model for feature extraction to obtain a feature map of the current moment;
Step 402, obtaining feature maps of historical moments from the buffer queue, and aligning them with the feature map at the current moment to obtain the multi-time-frame feature map;
Step 403, inputting the multi-time-frame feature map into the perception detection model for detection to obtain the detection result of the target information;
Step 404, detecting whether the number of non-idle-state elements in the buffer queue has reached the maximum number of elements in the buffer queue; if yes, go to step 405; if not, go to step 406;
In a specific implementation process, after the feature map at the current moment is obtained, it needs to be stored into the buffer queue. However, the number of elements in the buffer queue is usually limited, so it can be detected whether the number of non-idle-state elements has reached the maximum number of elements in the queue. An element in which a feature map of a historical moment is stored is a non-idle-state element; an element in which no such feature map is stored is an idle-state element. For example, if the buffer queue contains L elements and the number of non-idle-state elements is also L, step 405 is executed; if the number of non-idle-state elements is M, with M smaller than L, step 406 is executed.
Step 405, deleting the feature map of the historical moment in the first element, moving the feature maps of the historical moments in the remaining elements toward the first element in sequence to fill the vacancy, and storing the feature map at the current moment into the last element of the buffer queue;
In a specific implementation process, if the number of non-idle-state elements in the buffer queue has reached the maximum number of elements, the feature map of the historical moment in the first element may be deleted; after the feature maps of the historical moments in the remaining elements are moved toward the first element in sequence to fill the vacancy, the feature map at the current moment is stored into the last element of the buffer queue. That is, the feature map of the earliest historical moment is deleted, the feature maps of the later historical moments are moved forward in sequence so that the last element becomes an idle-state element, and the feature map at the current moment is then stored into that last element.
Step 406, storing the feature map at the current moment into the first idle-state element;
In a specific implementation process, if the number of non-idle-state elements in the buffer queue has not reached the maximum number of elements, the feature map at the current moment may be stored into the first idle-state element. Specifically, the feature map at the current moment is inserted from the last element of the buffer queue and moved toward the first element until it reaches the first idle-state element. That is, if the queue has L elements of which N are non-idle, the feature map at the current moment is inserted from the last element and then moved to the (N+1)-th element.
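Steps 404-406 amount to a fixed-capacity FIFO cache. A minimal sketch follows; the class name `FeatureCache` is invented for illustration, and in practice Python's `collections.deque(maxlen=...)` gives the same observable behaviour with less code.

```python
class FeatureCache:
    """Fixed-capacity cache mirroring steps 404-406: when full, the
    oldest feature map is dropped, the rest shift toward the head,
    and the new map is stored in the last element."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.elements = [None] * capacity   # None marks an idle-state element

    def put(self, feature_map):
        occupied = sum(e is not None for e in self.elements)
        if occupied == self.capacity:
            # Step 405: drop the first element, shift the rest forward,
            # and store the new map in the freed last slot.
            self.elements = self.elements[1:] + [feature_map]
        else:
            # Step 406: store the new map in the first idle-state element.
            self.elements[occupied] = feature_map

    def history(self):
        return [e for e in self.elements if e is not None]
```

In a real system each element would hold the feature map together with the ego pose of its moment, as described in step 201 above.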
It should be noted that, although the foregoing embodiments describe each step in a specific sequence, those skilled in the art will understand that, in order to achieve the effect of the present invention, different steps do not necessarily need to be executed in such a sequence, and they may be executed simultaneously (in parallel) or in other sequences, and these changes are all within the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the flow of the method according to the above-described embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and used to implement the steps of the above-described embodiments of the method when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable storage medium may include: any entity or device capable of carrying said computer program code, media, usb disk, removable hard disk, magnetic diskette, optical disk, computer memory, read-only memory, random access memory, electrical carrier wave signals, telecommunication signals, software distribution media, etc. It should be noted that the computer-readable storage medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer-readable storage media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
Furthermore, the invention also provides a target information detection device.
Referring to fig. 5, fig. 5 is a main configuration block diagram of a target information detection apparatus according to an embodiment of the present invention. As shown in fig. 5, the apparatus for detecting target information in the embodiment of the present invention may include a processor 51 and a storage device 52.
The storage device 52 may be configured to store the program for executing the target information detection method of the above method embodiment, and the processor 51 may be configured to execute the program in the storage device 52, including, but not limited to, the program for executing the target information detection method of the above method embodiment. For convenience of explanation, only the parts related to the embodiments of the present invention are shown; for specific technical details not disclosed here, reference may be made to the method embodiments. The target information detection device may be a control device comprising various electronic devices.
In one embodiment, there may be a plurality of storage devices 52 and a plurality of processors 51. The program for executing the target information detection method of the above method embodiment may be divided into a plurality of subprograms, each of which may be loaded and run by a processor 51 to execute different steps of the method. Specifically, the subprograms may be stored in different storage devices 52, and each processor 51 may be configured to execute the programs in one or more storage devices 52, so that the processors 51, each executing different steps, jointly implement the target information detection method of the above method embodiment.
The processors 51 may be processors disposed on the same device, for example, the device may be a high-performance device composed of multiple processors, and the processors 51 may be processors configured on the high-performance device. Moreover, the processors 51 may also be processors disposed on different devices, for example, the devices may be a server cluster, and the processors 51 may be processors on different servers in the server cluster.
Further, the present invention also provides a driving apparatus, which may include the detection apparatus of the target information of the above-described embodiment.
Further, the invention also provides a computer-readable storage medium. In one computer-readable storage medium embodiment according to the present invention, the computer-readable storage medium may be configured to store a program for executing the target information detection method of the above method embodiment, which may be loaded and run by a processor to implement the method. For convenience of explanation, only the parts related to the embodiments of the present invention are shown; for specific technical details not disclosed here, reference may be made to the method embodiments. The computer-readable storage medium may be a storage device comprising various electronic devices; optionally, in the embodiment of the present invention, the computer-readable storage medium is a non-transitory computer-readable storage medium.
Further, it should be understood that, since the modules are only configured to illustrate the functional units of the apparatus of the present invention, the corresponding physical devices of the modules may be the processor itself, or a part of software, a part of hardware, or a part of a combination of software and hardware in the processor. Thus, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the apparatus may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solutions to deviate from the principle of the present invention, and therefore, the technical solutions after splitting or combining will fall within the protection scope of the present invention.
So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings, but it is readily apparent to those skilled in the art that the scope of the present invention is not limited to these specific embodiments. Those skilled in the art may make equivalent changes or substitutions to the relevant technical features without departing from the principle of the present invention, and the technical solutions after such changes or substitutions fall within the protection scope of the present invention.

Claims (10)

1. A method for detecting target information, comprising:
inputting sensing data of the current moment into a perception detection model for feature extraction to obtain a feature map of the current moment;
obtaining a feature map of a historical moment from a cache queue, and aligning the feature map of the historical moment with the feature map of the current moment to obtain a multi-time-frame feature map;
and inputting the multi-time-frame feature map into the perception detection model for detection to obtain a detection result of the target information.
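The three steps of claim 1 can be sketched as follows. `extract_features` and `detect` are hypothetical stand-ins for the feature-extraction and detection stages of the perception detection model (the patent does not specify their internals), and the fusion here is a plain concatenation:

```python
from collections import deque

import numpy as np

def extract_features(sensor_data):
    # Hypothetical backbone: reduce the raw sensing data to one feature row.
    return sensor_data.mean(axis=0, keepdims=True)

def detect(multi_frame_map):
    # Hypothetical detection head: a single scalar score over the fused map.
    return float(multi_frame_map.sum())

def detect_target_info(sensor_data, cache_queue):
    current = extract_features(sensor_data)        # step 1: feature extraction
    historical = list(cache_queue)                 # step 2: fetch historical maps
    if historical:                                 # align/fuse into one multi-time-frame map
        multi_frame = np.concatenate([current] + historical)
    else:
        multi_frame = current
    return detect(multi_frame)                     # step 3: run detection

cache = deque(maxlen=3)                            # empty cache: first frame ever
result = detect_target_info(np.ones((4, 2)), cache)
```

With an empty cache the fused map is just the current feature map, so `result` equals the sum of a 1x2 map of ones.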
2. The method for detecting target information according to claim 1, further comprising:
storing the feature map of the current moment into the cache queue.
3. The method for detecting target information according to claim 2, wherein storing the feature map of the current moment into the cache queue comprises:
detecting whether the number of non-idle-state elements in the cache queue has reached the maximum number of elements in the cache queue, wherein a non-idle-state element is an element in which a feature map of a historical moment is stored;
if the number of non-idle-state elements in the cache queue has reached the maximum number of elements in the cache queue, deleting the feature map of the historical moment in the first element, sequentially shifting the feature maps of the historical moments in the remaining elements toward the first element to fill the vacancies, and storing the feature map of the current moment into the last element of the cache queue; and
if the number of non-idle-state elements in the cache queue has not reached the maximum number of elements in the cache queue, storing the feature map of the current moment into the first idle-state element, wherein an idle-state element is an element in which no feature map of a historical moment is stored.
4. The method for detecting target information according to claim 3, wherein storing the feature map of the current moment into the first idle-state element comprises:
inserting the feature map of the current moment at the last element of the cache queue and moving it toward the first element until it reaches the first idle-state element.
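A minimal sketch of the cache-queue maintenance described in claims 2 to 4, assuming the queue is a fixed-size array in which `None` marks an idle-state element; the class and method names are mine, not the patent's:

```python
class FeatureCache:
    """Fixed-capacity cache of historical feature maps (illustrative sketch)."""

    def __init__(self, capacity):
        self.slots = [None] * capacity  # None marks an idle-state element

    def store(self, feature_map):
        if None not in self.slots:
            # Full: drop the oldest map in the first element, shift the
            # rest toward the front, place the new map in the last element.
            self.slots = self.slots[1:] + [feature_map]
        else:
            # Not full: the new map, conceptually inserted at the tail and
            # moved frontward, settles in the first idle-state element.
            self.slots[self.slots.index(None)] = feature_map

cache = FeatureCache(3)
for frame in ["f0", "f1", "f2", "f3"]:
    cache.store(frame)
```

After four stores into a three-slot cache, the oldest map `"f0"` has been evicted and the slots hold the three most recent maps in arrival order.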
5. The method for detecting target information according to claim 1, wherein aligning the feature map of the historical moment with the feature map of the current moment to obtain a multi-time-frame feature map comprises:
acquiring current pose information of the driving device based on the perception detection model, and acquiring historical pose information of the driving device from the cache queue;
determining a pose transformation relation of the driving device according to the historical pose information and the current pose information; and
aligning the feature map of the historical moment with the feature map of the current moment based on the pose transformation relation to obtain the multi-time-frame feature map.
6. The method for detecting target information according to claim 5, wherein aligning the feature map of the historical moment with the feature map of the current moment based on the pose transformation relation to obtain a multi-time-frame feature map comprises:
determining a mapping relation between the feature map of the current moment and the feature map of the historical moment based on the pose transformation relation; and
mapping the feature map of the historical moment based on the mapping relation to complete the alignment, so as to obtain the multi-time-frame feature map.
7. The method for detecting target information according to claim 6, wherein determining the mapping relation between the feature map of the current moment and the feature map of the historical moment based on the pose transformation relation comprises:
performing grid division on the feature map of the current moment to obtain a gridded feature map of the current moment; and
determining the mapping relation between the feature map of the current moment and the feature map of the historical moment based on the gridded feature map of the current moment and the pose transformation relation.
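The pose-based alignment of claims 5 to 7 can be illustrated with a toy warping routine. This is a sketch under assumed conventions only: poses are 3x3 homogeneous 2D ego-to-world transforms, feature maps are bird's-eye-view grids with one cell per unit of distance, and nearest-neighbor sampling stands in for the mapping relation; none of these specifics come from the patent:

```python
import numpy as np

def align_historical(hist_map, hist_pose, curr_pose):
    """Warp a historical BEV feature map into the current ego frame."""
    # Pose transformation relation: current frame -> historical frame.
    T = np.linalg.inv(hist_pose) @ curr_pose
    H, W = hist_map.shape
    aligned = np.zeros_like(hist_map)
    # Grid-partition the current map; map each cell center into the
    # historical map and sample it (nearest neighbor).
    for i in range(H):
        for j in range(W):
            x, y, _ = T @ np.array([j, i, 1.0])
            u, v = int(round(x)), int(round(y))
            if 0 <= v < H and 0 <= u < W:
                aligned[i, j] = hist_map[v, u]
    return aligned
```

With identical historical and current poses the transform is the identity and the map is returned unchanged; a pure translation shifts the sampled cells accordingly, with out-of-range cells left at zero.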
8. An apparatus for detecting target information, comprising a processor and a storage device adapted to store a plurality of program codes, wherein the program codes are adapted to be loaded and run by the processor to perform the method for detecting target information according to any one of claims 1 to 7.
9. A driving device, characterized by comprising the apparatus for detecting target information according to claim 8.
10. A computer-readable storage medium storing a plurality of program codes, wherein the program codes are adapted to be loaded and executed by a processor to perform the method for detecting target information according to any one of claims 1 to 7.
CN202310222986.6A 2023-03-09 2023-03-09 Target information detection method, device, driving device and medium Active CN115965944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310222986.6A CN115965944B (en) 2023-03-09 2023-03-09 Target information detection method, device, driving device and medium

Publications (2)

Publication Number Publication Date
CN115965944A true CN115965944A (en) 2023-04-14
CN115965944B CN115965944B (en) 2023-05-09

Family

ID=85888659

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179311A (en) * 2019-12-23 2020-05-19 全球能源互联网研究院有限公司 Multi-target tracking method and device and electronic equipment
CN111797751A (en) * 2020-06-29 2020-10-20 中国第一汽车股份有限公司 Pedestrian trajectory prediction method, device, equipment and medium
CN112016469A (en) * 2020-08-28 2020-12-01 Oppo广东移动通信有限公司 Image processing method and device, terminal and readable storage medium
US20210065381A1 (en) * 2019-08-29 2021-03-04 Boe Technology Group Co., Ltd. Target tracking method, device, system and non-transitory computer readable medium
US20210188263A1 (en) * 2019-12-23 2021-06-24 Baidu International Technology (Shenzhen) Co., Ltd. Collision detection method, and device, as well as electronic device and storage medium
CN113743607A (en) * 2021-09-15 2021-12-03 京东科技信息技术有限公司 Training method of anomaly detection model, anomaly detection method and device
CN114494314A (en) * 2021-12-27 2022-05-13 南京大学 Timing boundary detection method and timing sensor
CN114723779A (en) * 2021-01-06 2022-07-08 广州汽车集团股份有限公司 Vehicle positioning method and device and computer readable storage medium
CN114998433A (en) * 2022-05-31 2022-09-02 Oppo广东移动通信有限公司 Pose calculation method and device, storage medium and electronic equipment
CN115565154A (en) * 2022-09-19 2023-01-03 九识(苏州)智能科技有限公司 Feasible region prediction method, device, system and storage medium
CN115588175A (en) * 2022-10-21 2023-01-10 北京易航远智科技有限公司 Aerial view characteristic generation method based on vehicle-mounted all-around image
CN115597591A (en) * 2022-09-15 2023-01-13 山东新一代信息产业技术研究院有限公司(Cn) Robot repositioning method and system based on multi-line laser radar

Similar Documents

Publication Publication Date Title
AU2019213369B2 (en) Non-local memory network for semi-supervised video object segmentation
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
JP7273129B2 (en) Lane detection method, device, electronic device, storage medium and vehicle
CN111144483A (en) Image feature point filtering method and terminal
CN111861925A (en) Image rain removing method based on attention mechanism and gate control circulation unit
CN112734931B (en) Method and system for assisting point cloud target detection
GB2579262A (en) Space-time memory network for locating target object in video content
CN115965657B (en) Target tracking method, electronic device, storage medium and vehicle
CN111027555A (en) License plate recognition method and device and electronic equipment
CN115249266A (en) Method, system, device and storage medium for predicting position of waypoint
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN114399737A (en) Road detection method and device, storage medium and electronic equipment
CN111914596A (en) Lane line detection method, device, system and storage medium
CN114387197A (en) Binocular image processing method, device, equipment and storage medium
CN116229406B (en) Lane line detection method, system, electronic equipment and storage medium
CN116523970A (en) Dynamic three-dimensional target tracking method and device based on secondary implicit matching
CN115965944B (en) Target information detection method, device, driving device and medium
CN113962332B (en) Salient target identification method based on self-optimizing fusion feedback
CN113763468B (en) Positioning method, device, system and storage medium
CN115631108A (en) RGBD-based image defogging method and related equipment
CN112183359B (en) Method, device and equipment for detecting violent content in video
CN114419018A (en) Image sampling method, system, device and medium
CN111985542A (en) Representative graph structure model, visual understanding model establishing method and application
CN116558540B (en) Model training method and device, and track generating method and device
CN115797412B (en) Dynamic object outlier parallel detection method, device, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant