WO2022179197A1 - Information processing method and related device - Google Patents
Information processing method and related device
- Publication number
- WO2022179197A1 (PCT/CN2021/131058)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- target
- touch
- processing device
- formation
- Prior art date
Classifications
- G01S13/86 — Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/867 — Combination of radar systems with cameras
- G01S13/91 — Radar or analogous systems specially adapted for traffic control
- G01S7/4026 — Means for monitoring or calibrating of parts of a radar system; antenna boresight
- G06F18/25 — Pattern recognition: fusion techniques
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T7/85 — Stereo camera calibration
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
- G06V10/761 — Image or video pattern matching; proximity, similarity or dissimilarity measures in feature spaces
- G06V10/803 — Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
- G06V10/806 — Fusion of extracted features
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V2201/07 — Target detection
- G06T2207/10024 — Color image
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30236 — Traffic on road, railway or crossing
- G08G1/017 — Detecting movement of traffic to be counted or controlled; identifying vehicles
- G08G1/0175 — Identifying vehicles by photographing, e.g. when violating traffic rules
- G08G1/04 — Detecting movement of traffic using optical or ultrasonic detectors
Definitions
- the embodiments of the present application relate to the field of data processing, and in particular, to an information processing method and related equipment.
- different types of sensors can detect different feature information.
- cameras can detect the appearance characteristics of the target
- radar can detect the speed and distance of the target.
- the space alignment process is as follows: obtain the pictures that each sensor can detect, determine a calibration point in the actual space, and associate the position of the calibration point in the actual space with its position displayed on each sensor picture. By performing this operation on a plurality of calibration points, a mapping relationship between the actual space and each sensor picture, and between the sensor pictures themselves, is established. The time axes of the different sensors are then aligned, so that when object information is detected at a certain point on one sensor picture and, at the same moment, object information is also detected at the corresponding point on the other sensor pictures, it can be determined that this information describes the same object. The detection results of the different sensors for that object can therefore be combined as its fusion detection information.
- the embodiment of the present application provides an information processing method, which is used to realize fusion of detection information detected by different sensors, so as to improve the efficiency of detection information fusion.
- a first aspect of the embodiments of the present application provides an information processing method, the method is applied to a processing device in a detection system, and the detection system further includes a plurality of sensors.
- the detection information obtained by each of the multiple sensors includes detection information for the same multiple targets, and the method includes:
- the processing device acquires a plurality of detection information from the above-mentioned multiple sensors, wherein the multiple detection information corresponds to the multiple sensors one-to-one, and each detection information is detected by the sensor corresponding to it.
- the processing device determines a plurality of corresponding formation information according to the plurality of detection information, wherein the plurality of formation information is in one-to-one correspondence with the plurality of detection information, each formation information is used to describe the positional relationship between the objects detected by the sensor corresponding to that formation information, and the objects include the aforementioned targets.
- the processing device determines target formation information according to the plurality of formation information, where the coincidence degree between the target formation information and each of the foregoing plurality of formation information is higher than a preset threshold; the target formation information is used to describe the positional relationship between the foregoing plurality of targets and includes the position information of each target.
- the processing device fuses the detection information corresponding to the same target in the multiple formation information according to the position information of each target.
- in the embodiment of the present application, the formation information between the objects detected by each sensor is determined from the detection information of the different sensors, and the target formation information is determined according to its degree of coincidence with each formation information, so that the target objects are determined.
- since the target formation information is formation information with similar characteristics detected by different sensors, it reflects how the same targets are detected at the different sensors. Therefore, for any object reflected in the target formation information, the correspondence between its detection results at different sensors can be determined, and the detection results of different sensors for the same object can be fused accordingly.
- the method of obtaining fusion detection information through formation information in the embodiment of the present application can greatly improve the efficiency of obtaining fusion detection information.
- the method of the embodiment of the present application only needs to provide detection information of different sensors, and does not need to occupy the site to be observed, which expands the scope of application of detection information fusion.
- the detection information may include a position feature set
- the position feature set may include multiple position features
- the position features are used to represent the positional relationship between an object detected by the corresponding sensor and the objects around it.
- the detection information includes a position feature set
- the position feature set can accurately reflect the positional relationship between the objects detected by the sensor, so an accurate formation can be determined according to that positional relationship, and the detection information from different sensors for the same target can be accurately fused.
- the processing device determines a plurality of corresponding formation information according to a plurality of detection information, which may specifically include:
- the processing device obtains a plurality of corresponding touch line information according to the plurality of position feature sets, wherein each touch line information is used to describe the information of the objects detected by the corresponding sensor touching a reference line.
- the above-mentioned plurality of touch line information is in one-to-one correspondence with the foregoing multiple position feature sets.
- the processing device respectively determines a plurality of corresponding formation information according to the foregoing plurality of touch line information, wherein the plurality of touch line information is in one-to-one correspondence with the plurality of formation information.
- in the embodiment of the present application, the touch line information is obtained through the position feature set. Since the touch line information records objects touching the reference line, specific values or position features such as the touch time, the touch interval and the touch position can be obtained from each touch. From the specific values or position features of multiple targets touching the line, collections of touch line data can therefore be formed, such as a sequence of touch times, a sequence of touch intervals, or a distribution of touch positions, as sketched below. Since these collections of touch line data consist of specific values or position features, they can be operated on directly without further data processing, so target formation information whose coincidence degree meets the preset threshold can be determined quickly.
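As an illustration of how these touch line data collections might be assembled, the following is a minimal Python sketch. The record fields (`object_id`, `touch_time`, `touch_x`) and the 3.5 m partition width are illustrative assumptions, not details from the application:

```python
from dataclasses import dataclass

@dataclass
class TouchRecord:
    object_id: int      # identifier assigned by one sensor
    touch_time: float   # moment the object touches the reference line (s)
    touch_x: float      # position of the touch point along the reference line (m)

def touch_line_sequences(records, partition_width=3.5):
    """Derive the touch line data collections described above from one
    sensor's touch records, ordered by touch time."""
    records = sorted(records, key=lambda r: r.touch_time)
    times = [r.touch_time for r in records]                            # touch time sequence
    intervals = [b - a for a, b in zip(times, times[1:])]              # touch interval sequence
    partitions = [int(r.touch_x // partition_width) for r in records]  # touch partition sequence
    positions = [r.touch_x for r in records]                           # touch position sequence
    return times, intervals, partitions, positions
```

Each sensor would yield its own set of sequences, which the matching steps described below then compare.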
- the target formation information may be determined according to the touch partition sequence, specifically:
- the touch line information includes the timing information and the touch point partition information corresponding to the objects detected by the sensor touching the reference line, where the touch point partition information represents the partition of the reference line in which the touch point of the object falls;
- the formation information includes a touch partition sequence, and the touch partition sequence represents the time-ordered sequence of the partition positions at which the objects detected by the sensor touch the reference line.
- the processing device determines the target formation information according to the multiple formation information, which may specifically include: the processing device acquires a first subsequence of the multiple touch partition sequences and uses the first subsequence as the target formation information, wherein the coincidence degrees of the first subsequence with the multiple touch partition sequences are all higher than a first threshold.
- the processing device fuses the detection information corresponding to the same target in the plurality of formation information according to the position information of each target, which may specifically include: the processing device fuses the detection information corresponding to the same target in the multiple touch partition sequences according to the touch point partition information corresponding to each target in the first subsequence.
- the timing information represents the before-and-after order in which different targets touch the reference line
- the touch point partition information represents the left-right relationship between different targets touching the reference line
- the touch point partition information thus reflects, within the touch partition sequence, the positional relationship of the multiple targets touching the reference line. Since the timing information and the touch point partition information are both specific values, the touch partition sequence is a set of values reflecting the positional relationship between the objects. The touch partition sequences obtained from the detection information of the different sensors are therefore multiple value sets, and determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complicated operations, which improves the efficiency of matching the target formation information.
- in a possible implementation, the longest common subsequence (LCS) algorithm can be used to determine, from the multiple touch partition sequences derived from the detection information of different sensors, a first subsequence whose coincidence degree with each touch partition sequence is higher than the first threshold. In this embodiment of the present application, all common sequences of the multiple touch partition sequences can be obtained through the LCS algorithm, so as to match the same position features of the multiple touch partition sequences.
- the first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the aforementioned multiple touch partition sequences are all higher than the first threshold.
- all common sequences of the multiple touch partition sequences can be determined through the LCS algorithm, so as to match all fragments of the touch partition sequences that share the same position features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified; they reflect positional relationships that differ between sensors. In this case, a non-common sequence mixed into the common sequences can be regarded as the result of false detection or missed detection by a sensor and is tolerated, so that the surrounding correspondence between the targets detected by different sensors is still used to realize the fusion of detection information.
- the first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch partition sequences are all higher than the first threshold. Since the positional relationship between targets may be similar by chance, the longer the determined subsequence, the lower the possibility of an accidentally similar positional relationship; by determining the longest subsequence through the LCS algorithm, the target formation information of the same target set can therefore be determined accurately. For example, the positional relationship of two objects may be similar by chance, but if the standard is raised to a highly coincident positional relationship among ten objects, the probability of ten objects having an accidentally similar positional relationship is much lower. A minimal sketch of the LCS step follows.
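A minimal sketch of the LCS step for two touch partition sequences, using the standard dynamic-programming formulation. The application does not prescribe a particular LCS variant, and extending to more than two sensors (e.g. by folding pairwise) is likewise an assumption of this sketch:

```python
def lcs(a, b):
    """Longest common subsequence of two touch partition sequences,
    via standard dynamic programming; returns one longest subsequence."""
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # backtrack to recover the subsequence itself
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# e.g. partition sequences from a camera and a radar:
# lcs([2, 1, 3, 2, 2], [2, 1, 2, 3, 2]) -> [2, 1, 3, 2]
```

A coincidence degree could then be defined, for example, as the ratio of the matched length to the sequence length and compared with the first threshold; the application itself only requires that the threshold be exceeded.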
- the target formation information may be determined according to the touch position sequence, specifically:
- the touch line information includes the timing information and the touch point position information corresponding to the objects detected by the sensor touching the reference line.
- the touch point position information indicates the position, along the reference line, of the touch point of the object touching the reference line, which reflects the left-right positional relationship between the targets; the formation information includes a touch position sequence, and the touch position sequence represents the time-ordered sequence of the positions at which the objects detected by the sensor touch the reference line.
- the processing device determines the target formation information according to the plurality of formation information, which may specifically include: the processing device acquires a third subsequence of the multiple touch position sequences and uses the third subsequence as the target formation information, wherein the coincidence degrees of the third subsequence with the multiple touch position sequences are all higher than a third threshold.
- the processing device fuses the detection information corresponding to the same target in the plurality of formation information according to the position information of each target, which may specifically include: the processing device fuses the detection information corresponding to the same target in the multiple touch position sequences according to the touch point position information corresponding to each target in the third subsequence.
- the touch point position information represents the left-right relationship between different targets touching the reference line, and may consist of continuous numerical values or data. Based on such continuous values, the formation information of the targets can be distinguished more accurately from the formation information of other non-targets, so that the fusion of detection information for the same target is realized more accurately.
- the movement trend between the targets can be analyzed or calculated through the continuous numerical value or data.
- other information such as the movement trajectory of the target objects, etc., can also be calculated, which is not limited here.
- the target formation information may be determined according to the touch interval sequence, specifically:
- the touch line information includes the timing information and the touch time interval information corresponding to the objects detected by the sensor touching the reference line, wherein the touch time interval information represents the time interval between successive objects touching the reference line; the formation information includes a touch interval sequence, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line.
- the processing device determines the target formation information according to the plurality of formation information, which may specifically include: the processing device acquires a second subsequence of the multiple touch interval sequences and uses the second subsequence as the target formation information, wherein the coincidence degrees of the second subsequence with the multiple touch interval sequences are all higher than a second threshold.
- the processing device fuses the detection information corresponding to the same target in the at least two formation information according to the position information of each target, including: the processing device fuses the detection information corresponding to the same target in the at least two touch interval sequences according to the touch time distribution information corresponding to each target in the second subsequence.
- the timing information represents the order in which different targets touch the reference line
- the touch time interval information represents the time interval between successive targets touching the reference line.
- the touch time interval information thus reflects, within the touch interval sequence, the positional relationship of the multiple targets touching the reference line. Since the timing information and the touch time interval information are both specific values, the touch interval sequence is a set of values reflecting the positional relationship between the objects. The touch interval sequences obtained from the detection information of the different sensors are therefore multiple value sets, and determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complex operations, which improves the efficiency of matching the target formation information.
- in a possible implementation, the LCS algorithm may be used to determine, from the multiple touch interval sequences derived from the detection information of different sensors, a second subsequence whose coincidence degree with each touch interval sequence is higher than the second threshold.
- all common sequences of multiple touch interval sequences can be obtained through the LCS algorithm, so as to realize matching of the same position features of the multiple touch interval sequences.
- the second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the aforementioned multiple touch interval sequences are all higher than the second threshold.
- all common sequences of the multiple touch interval sequences can be determined through the LCS algorithm, so as to match all fragments of the touch interval sequences that share the same position features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified; they reflect positional relationships that differ between sensors. In this case, a non-common sequence mixed into the common sequences can be regarded as the result of false detection or missed detection by a sensor and is tolerated, so that the correspondence between the targets detected by different sensors is still used to realize the fusion of detection information.
- the second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch interval sequences are all higher than the second threshold. Since the time intervals at which targets touch the reference line may be similar by chance, the longer the determined subsequence, the lower the possibility of accidentally similar time intervals; by determining the longest subsequence through the LCS algorithm, the target formation information of the same target set can be determined accurately.
- for example, the time intervals of two targets touching the reference line may be similar by chance, but if the standard is raised to highly coincident time intervals among ten targets, the possibility of ten targets having accidentally similar time intervals is greatly reduced compared with two. If a second subsequence covering ten targets is determined by the LCS algorithm, it is therefore much more likely that these are the detection results of different sensors for the same ten targets, reducing the possibility of matching errors. Since the intervals are continuous values, the equality test in the LCS comparison may be relaxed to a tolerance, as sketched below.
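Because touch time intervals are continuous values, the exact-equality test of a textbook LCS may be relaxed to a tolerance when comparing touch interval sequences. A sketch, where the 0.2 s tolerance is an illustrative assumption:

```python
def lcs_with_tolerance(a, b, tol=0.2):
    """LCS variant for touch interval sequences: two intervals are
    treated as matching when they differ by less than `tol` seconds."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) < tol:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]  # length of the best match; backtrack as above for the elements

# camera_intervals = [1.9, 0.6, 3.1, 0.8]; radar_intervals = [2.0, 0.7, 3.0, 0.9]
# lcs_with_tolerance(camera_intervals, radar_intervals) -> 4
```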
- the target formation information may be determined according to the touch partition sequence and the touch interval sequence, specifically:
- the touch line information includes the timing information, the touch point partition information and the touch time interval information corresponding to the objects detected by the sensor touching the reference line, wherein the touch point partition information indicates the partition of the reference line in which the touch point of the object falls, and the touch time interval information represents the time interval between successive objects touching the reference line; the formation information includes the touch partition sequence and the touch interval sequence, wherein the touch partition sequence represents the time-ordered sequence of the partition positions at which the objects detected by the corresponding sensor touch the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the sensor touch the reference line.
- the processing device determines target formation information according to multiple formation information, which may specifically include:
- the processing device acquires a first subsequence of the at least two touch partition sequences, wherein the coincidence degrees of the first subsequence with the multiple touch partition sequences are all higher than the first threshold; the processing device acquires a second subsequence of the at least two touch interval sequences, wherein the coincidence degrees of the second subsequence with the multiple touch interval sequences are all higher than the second threshold; the processing device determines the intersection of a first object set and a second object set and uses the intersection as the target object set, wherein the first object set is the set of objects corresponding to the first subsequence and the second object set is the set of objects corresponding to the second subsequence; the processing device uses the touch partition sequence and the touch interval sequence of the target object set as the target formation information.
- in the embodiment of the present application, the intersection of the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence is determined, and the intersection is used as the set of target objects.
- the objects in the intersection correspond to the first subsequence, that is, according to the detection information of different sensors, they have similar touch partition information; at the same time, the objects in the intersection correspond to the second subsequence, that is, according to the detection information of different sensors, they also have similar touch time interval information.
- in other embodiments, the intersection between the objects corresponding to other subsequences can also be taken, such as the intersection between the objects corresponding to the first subsequence and the third subsequence, or between those corresponding to the second subsequence and the third subsequence, or between the objects corresponding to other subsequences and those corresponding to any one of the first to third subsequences.
- other subsequences may likewise be used to represent the positional relationship between objects, such as the distance or direction between objects, which is not limited here.
- suitable subsequences can be flexibly selected for operation, which improves the feasibility and flexibility of the scheme.
- the intersection between the objects corresponding to more subsequences can also be taken, for example, the intersection between the objects corresponding to the first subsequence, the second subsequence and the third subsequence.
- the greater the number of subsequences taken, the more types of similar information representing the positional relationship of the objects can be obtained from the detection information of the multiple sensors, and the higher the possibility that the sets of objects corresponding to the detection information are the same set of objects. Therefore, by screening with the intersection of the objects corresponding to multiple subsequences, the formation information of the targets can be distinguished more accurately from the formation information of other non-targets, so that the fusion of detection information for the same target is realized more accurately. A sketch of the intersection step follows.
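A sketch of the intersection step, assuming some bookkeeping that maps each matched subsequence back to the per-sensor object identifiers it was derived from (that bookkeeping is an assumption of this sketch, not a structure named by the application):

```python
def target_object_set(first_match_ids, second_match_ids):
    """Intersect the object sets behind two subsequence matches.
    Each argument maps a sensor name to the set of that sensor's object
    IDs which appear in the matched subsequence."""
    sensors = first_match_ids.keys() & second_match_ids.keys()
    return {s: first_match_ids[s] & second_match_ids[s] for s in sensors}

# ids_partition = {"camera": {1, 2, 3, 5}, "radar": {11, 12, 13, 15}}
# ids_interval  = {"camera": {2, 3, 4, 5}, "radar": {12, 13, 14, 15}}
# target_object_set(ids_partition, ids_interval)
# -> {"camera": {2, 3, 5}, "radar": {12, 13, 15}}
```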
- the target formation information may be determined through the target group distribution map, specifically:
- the formation information includes a target group distribution map, wherein the target group distribution map represents the positional relationship between objects.
- the processing device determines a plurality of corresponding formation information according to the plurality of detection information, which may specifically include: the processing device obtains a plurality of corresponding initial target group distribution maps according to the plurality of position feature sets, wherein each initial target group distribution map represents the positional relationship between the objects detected by the corresponding sensor; the processing device obtains standard-perspective maps of the multiple initial target group distribution maps through a perspective change algorithm and uses the multiple standard-perspective maps as the corresponding multiple target group distribution maps, wherein the position information in the target formation information includes the target object distribution information of each target, and the target object distribution information represents the position of the target among the objects detected by the corresponding sensor.
- the processing device determines the target formation information according to the at least two formation information, which may specifically include: the processing device obtains an image feature set of the multiple target group distribution maps and uses the image feature set as the target formation information, wherein the coincidence degrees of the image feature set with the multiple target group distribution maps are all higher than the third threshold.
- the processing device fuses the detection information corresponding to the same target in the multiple formation information according to the position information of each target, which may specifically include: the processing device fuses the detection information corresponding to the same target in the multiple target group distribution maps according to the target object distribution information corresponding to each target in the image feature set.
- in the embodiment of the present application, a plurality of corresponding initial target group distribution maps are obtained according to the detection information from different sensors, the corresponding target group distribution maps are obtained through a perspective change algorithm, and then the image feature set of the multiple target group distribution maps is obtained and used as the target formation information.
- in this way, an image feature set whose coincidence degree with the multiple target group distribution maps is higher than the preset threshold is determined. Since image features intuitively reflect the positional relationship between the objects displayed in an image, the image feature set determined from the multiple target group distribution maps can intuitively match detection results with similar positional relationships, that is, the detection results of different sensors for the same target group, so as to accurately realize the fusion of detection information. A sketch of the perspective change step follows.
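The perspective change step could be realized, for example, with a planar homography such as OpenCV provides; the four reference-point correspondences below are illustrative values, not taken from the application:

```python
import cv2
import numpy as np

# Four points of one sensor's picture (pixels) and where they should land in
# the standard top-down view (metres) -- illustrative values only.
src = np.float32([[100, 600], [540, 600], [420, 200], [220, 200]])
dst = np.float32([[0, 0], [10.5, 0], [10.5, 80], [0, 80]])

H = cv2.getPerspectiveTransform(src, dst)   # 3x3 perspective transform

def to_standard_view(points_px):
    """Map detected object positions (pixels) into the standard view,
    giving one sensor's target group distribution map."""
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```

Once every sensor's distribution map is expressed in the same standard view, their point patterns or image features can be compared directly.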
- the acquisition of the image feature set may be implemented in combination with the reference line, specifically:
- the processing device may acquire, according to the multiple position feature sets, multiple touch line information of the objects corresponding to the position feature sets, wherein each touch line information is used to describe the information of the objects detected by the corresponding sensor touching the reference line, and the multiple touch line information corresponds to the multiple position feature sets one-to-one.
- the processing device obtains a plurality of corresponding initial target group distribution maps according to the plurality of position feature sets, which may specifically include: the processing device obtains the plurality of corresponding initial target group distribution maps according to the plurality of touch line information, wherein the objects in the multiple initial target group distribution maps have the same touch line information.
- because images captured at nearby times are highly similar, if the maps are not confirmed to belong to the same moment, initial target group distribution maps from nearby times will interfere when matching the initial target group distribution maps from different sensors, leading to matching errors and to a wrongly acquired image feature set, so that detection information from different times is fused and the fusion result is wrong. This error can be avoided by using the touch line information: when multiple initial target group distribution maps are determined by the touch line information and have the same touch line information, the maps are known to be acquired at the same time, which ensures that the fused detection information is acquired at the same time and improves the accuracy of detection information fusion.
- on the basis of any one of the first to tenth implementations of the first aspect, in an eleventh implementation of the first aspect of the embodiments of the present application, the mapping of the coordinate system may also be implemented, specifically:
- the plurality of sensors include a first sensor and a second sensor, wherein the space coordinate system corresponding to the first sensor is a standard coordinate system, and the space coordinate system corresponding to the second sensor is a target coordinate system.
- the method may further include:
- the processing device determines the mapping relationship between multiple standard point information and multiple target point information according to the fusion detection information, wherein the fusion detection information is obtained by fusing the detection information corresponding to the same target in the multiple formation information, the standard point information represents the position information of each object of the target object set in the standard coordinate system, the target point information represents the position information of each object of the target object set in the target coordinate system, and the multiple standard point information and the multiple target point information correspond one-to-one; the processing device then determines the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
- in the embodiment of the present application, the mapping relationship between the multiple standard point information and the multiple target point information is determined through the fusion detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is determined through that point-level mapping. With the method described in the embodiments of the present application, as long as detection information from different sensors can be acquired, the mapping of coordinate systems between different sensors can be realized: the subsequent determination of target formation information, point information mapping and other steps can be performed by the processing device itself, without manual calibration and mapping. Because the processing device matches the target formation information, the accuracy of the device's computation improves the accuracy of the point information mapping. At the same time, as long as detection information from different sensors can be obtained, the fusion of detection information and the mapping of the coordinate system can be realized, which avoids the scene limitations caused by manual calibration and ensures the accuracy and universality of detection information fusion. A sketch of fitting such a mapping follows.
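As a sketch of this step, once the fused detection information yields matched point pairs, a simple model of the mapping can be fitted by least squares; the affine form below is one possible choice, not a model prescribed by the application:

```python
import numpy as np

def fit_affine(target_pts, standard_pts):
    """Fit x_std = A @ x_tgt + t from matched point pairs obtained
    through the fused detection information (least squares)."""
    T = np.asarray(target_pts, dtype=float)       # N x 2, target coordinate system
    S = np.asarray(standard_pts, dtype=float)     # N x 2, standard coordinate system
    X = np.hstack([T, np.ones((len(T), 1))])      # homogeneous rows: [x, y, 1]
    M, *_ = np.linalg.lstsq(X, S, rcond=None)     # 3 x 2 parameter matrix
    A, t = M[:2].T, M[2]
    return A, t

# A, t = fit_affine(radar_points, camera_points)
# mapped = radar_points @ A.T + t   # radar positions in the camera's coordinate system
```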
- the alignment of the time axis may further include:
- the processing device calculates the time difference between the time axes of the multiple sensors according to the fusion result of the detection information corresponding to the same target in the multiple formation information.
- the time axes of different sensors can be aligned according to the time difference.
- the time axis alignment method provided by the embodiments of the present application can be implemented as long as the detection information of different sensors can be obtained, and does not require the multiple sensors to be in the same time synchronization system, which expands the application scenarios of time axis alignment for different sensors and likewise expands the scope of application of detection information fusion. A sketch of estimating the offset follows.
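A sketch of this calculation, assuming the fusion result supplies the paired touch times of the same targets at two sensors; using the median is a robustness choice of this sketch, not something mandated by the application:

```python
import statistics

def time_axis_offset(times_a, times_b):
    """Estimate the offset between two sensors' time axes from the touch
    times of targets already matched through the fusion result; the
    median damps residual mismatches."""
    return statistics.median(tb - ta for ta, tb in zip(times_a, times_b))

# offset = time_axis_offset(camera_touch_times, radar_touch_times)
# aligned_radar_times = [t - offset for t in radar_touch_times]
```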
- the correction of the sensor can also be implemented.
- the plurality of sensors include a standard sensor and a sensor to be tested, and the method may further include:
- the processing device obtains the target formation information corresponding to the standard formation information of the standard sensor; the processing device obtains the target formation information corresponding to the to-be-tested formation information of the sensor to be tested; the processing device determines the difference between the to-be-tested formation information and the standard formation information; the processing device obtains error parameters according to this difference and the standard formation information, wherein the error parameters are used to indicate the error of the to-be-tested formation information, or to indicate the performance parameters of the sensor to be tested.
- the standard sensor is used as the detection standard, and the error parameter is obtained according to the difference between the formation information to be tested and the standard formation information.
- when the error parameters are used to indicate the error of the to-be-tested formation information, the corresponding information in the to-be-tested formation information can be corrected through the error parameters and the standard formation information;
- when the error parameters are used to indicate the performance parameters of the sensor to be tested, performance parameters such as the false detection rate of the sensor to be tested can be determined, enabling data analysis of the sensor to be tested and supporting sensor selection. A sketch of deriving such parameters follows.
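As one illustrative way to turn that difference into error parameters (the concrete definitions below are assumptions of this sketch, not definitions from the application):

```python
def error_parameters(n_standard, n_under_test, n_matched):
    """Derive illustrative error parameters for the sensor under test:
    objects in the standard formation but absent from the match are
    counted as missed detections, extra objects as false detections."""
    miss_rate = (n_standard - n_matched) / n_standard
    false_rate = (n_under_test - n_matched) / n_under_test
    return {"miss_rate": miss_rate, "false_detection_rate": false_rate}

# error_parameters(n_standard=50, n_under_test=48, n_matched=45)
# -> {'miss_rate': 0.1, 'false_detection_rate': 0.0625}
```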
- a second aspect of the present application provides a processing device, the processing device is located in a detection system, and the detection system further includes at least two sensors, wherein the detection information acquired by the at least two sensors includes detection information of the at least two sensors for the same at least two targets; the processing device includes a processor and a transceiver.
- the transceiver is configured to acquire at least two pieces of detection information from the at least two sensors, wherein the at least two sensors correspond one-to-one to the at least two pieces of detection information.
- the processor is configured to: determine at least two corresponding formation information according to the at least two detection information, wherein each formation information is used to describe the positional relationship between the objects detected by the corresponding sensor, and the objects include the aforementioned targets; determine target formation information according to the at least two formation information, where the coincidence degree of the target formation information with each of the at least two formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes the position information of each target; and fuse the detection information corresponding to the same target in the at least two formation information according to the position information of each target.
- the processing device is adapted to perform the method of the aforementioned first aspect.
- a third aspect of the embodiments of the present application provides a processing device, where the device includes: a processor and a memory coupled to the processor.
- the memory is used for storing executable instructions for instructing the processor to perform the method of the aforementioned first aspect.
- a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where a program is stored in the computer-readable storage medium, and when the computer executes the program, the method described in the foregoing first aspect is performed.
- a fifth aspect of the embodiments of the present application provides a computer program product.
- when the computer program product is executed on a computer, the computer executes the method described in the foregoing first aspect.
- FIG. 1a is a schematic diagram of the time axis alignment of multiple sensors
- FIG. 1b is a schematic diagram of the alignment of the spatial coordinate systems of multiple sensors
- FIG. 2 is a schematic diagram of a matching target provided by an embodiment of the present application.
- FIG. 3a is a system schematic diagram of an information processing method provided by an embodiment of the present application.
- FIG. 3b is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 4 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
- FIG. 5 is a characteristic schematic diagram of an information processing method provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of a scribing method provided in an embodiment of the present application.
- FIG. 7 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
- FIG. 8 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 9 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 11 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
- FIG. 12 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 13 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
- FIG. 14 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 16 is another schematic diagram of an information processing method provided by an embodiment of the present application.
- FIG. 17 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
- FIG. 18 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 19 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
- FIG. 20 is a schematic structural diagram of a processing device provided by an embodiment of the present application.
- FIG. 21 is another schematic structural diagram of a processing device provided by an embodiment of the present application.
- the embodiments of the present application provide an information processing method and related equipment, which are used to realize fusion of detection information detected by different sensors, so as to improve the efficiency of detection information fusion.
- the sensor can detect objects, and for the same object, different sensors can detect different detection information.
- cameras can detect appearance features such as shape and texture of objects
- radar can detect motion information such as position and speed of objects.
- FIG. 1a is a schematic diagram of the alignment of the time axes of multiple sensors.
- the time synchronization device in the time synchronization system generates a time stamp, and transmits the time stamp to a plurality of sensors in the time synchronization system.
- the alignment of the time axes can be achieved by having the multiple sensors in the time synchronization system detect based on the same time stamp.
- because the time stamp of the time synchronization device can only be transmitted within the time synchronization system, and sensors outside the system cannot receive it, the alignment of the time axis can only be realized within the same time synchronization system, which limits the application scenarios of detection information fusion.
- FIG. 1b is a schematic diagram of the alignment of the spatial coordinate systems of the multi-sensors.
- spatial calibration needs to determine calibration points in the actual space and manually calibrate the positions of the calibration points in the pictures of the different sensors; for example, calibration point 4 is calibrated in the picture of sensor A and the corresponding calibration point 4' is calibrated in the picture of sensor B, and the mapping relationship of the same calibration point in the pictures of the different sensors is then determined manually.
- multiple calibration points need to be calibrated to achieve a complete mapping of the spatial coordinate system.
- since spatial calibration is performed manually, there may be deviations between a human's subjective judgement and the actual mapping relationship, so the calibration does not necessarily reflect the actual mapping.
- for example, for the calibration point 4 and the calibration point 4' shown in FIG. 1b, no point on the cylinder is obviously distinguishable from the other points, so the calibration points marked in the different pictures may not actually reflect the same point, causing calibration errors.
- any other object without obviously distinguishable points, such as a sphere, is prone to the same calibration errors. Therefore, a manually calibrated mapping relationship is not necessarily accurate.
- when the spatial calibration is inaccurate, in the process of fusing the detection information of multiple sensors, the same real-world target may be judged as different targets, or different targets may be judged as the same target, so that the fused detection information is wrong data.
- in addition to performing spatial calibration on the pictures of two cameras as shown in FIG. 1b, spatial calibration can also be performed on multiple sensors of different types, for example between a camera picture and a radar picture. The above calibration point errors may also occur in the calibration of pictures from different types of sensors, which will not be repeated here.
- moreover, the efficiency of manual calibration is low: multiple calibration points need to be calibrated manually, and the detected area cannot be used during the calibration process, which limits actual operation.
- manual calibration usually requires occupying the traffic lanes for half a day or a whole day, while lane scheduling normally does not allow such a long occupation; in this case, neither spatial calibration nor the fusion of detection information can be achieved.
- the current alignment of the time axes of different sensors is limited by the time synchronization system, and cannot be realized when the sensors are not in the same time synchronization system.
- the current alignment of different sensor space coordinate systems is limited by the inefficiency and low accuracy of manual calibration, which makes the fusion of detection information prone to errors and limits the scenarios where fusion can be achieved.
- an embodiment of the present application provides an information processing method, which derives, from the detection information of multiple sensors, the formation information between the objects that the detection information reflects. By matching formation information with similar characteristics as target formation information, the method determines which detection information from different sensors corresponds to the same object set, so as to fuse the detection information of the different sensors.
- the method provided by the embodiment of the present application is actually a reproduction on the device of the process of manually determining the same target in the pictures of different sensors.
- Each sensor has multiple pictures corresponding to multiple moments, and the number and status of the objects reflected in each picture differ. Faced with so much information, the human eye cannot directly capture every detail in a picture, and can only first distinguish, as a whole, the depiction of the same set of objects across different pictures.
- since this amounts to determining the same target object set across a plurality of different pictures, the process is also referred to as matching the target object set.
- matching the target set by the human eye requires abstraction: other details in the picture are omitted, and only the positional relationship between the objects is extracted, so as to abstract the formation information between the objects.
- FIG. 2 is a schematic diagram of a matching target provided by an embodiment of the present application.
- in the camera picture, that is, the detection information A, there are 5 motor vehicles that form a shape similar to the number "9".
- in the radar picture, that is, the detection information B, there are 5 targets that also form a shape similar to "9".
- the five-target sets in these two pictures have similar positional characteristics, that is, similar formation information, so they can be considered to be embodiments of the same target set in the pictures of different sensors.
- the same single target can be determined in the pictures of different sensors according to the position of the single target in the target set in the pictures of different sensors.
- in the detection information A detected by sensor A, the target at the bottom of formation "9" is target A.
- in the detection information B detected by sensor B, the target at the bottom of formation "9" is target A'.
- it can therefore be determined that the target A' and the target A are the same target.
- sensor A may be a camera
- sensor B may be a radar
- sensor A and sensor B can also be other combinations, for example, sensor A is a radar and sensor B is an ETC sensor, or sensor A and sensor B are the same type of sensor, such as both radars or both cameras, which is not limited here.
- the number of sensors is not limited.
- more detection information can be obtained through more sensors, and the same target in the detection information can be analyzed, which is not limited here.
- the solution of the embodiment of the present application mainly includes the following steps: 1. acquire multiple pieces of detection information from different sensors; 2. determine the corresponding formation information according to each piece of detection information; 3. determine the target formation information according to the multiple pieces of formation information; 4. fuse the detection information of different sensors for the same target according to the position information of each target in the target formation information.
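- as an illustration only, the following minimal Python sketch walks these four steps on toy data; the tuple format, the attribute names, and the use of a contiguous longest-match (instead of a full longest-common-subsequence search) are assumptions made for brevity, not details prescribed by this application.

```python
from difflib import SequenceMatcher

# Toy walk-through of steps 1-4. Each detection is a (timestamp, lane,
# attributes) tuple already sorted by time; all data is illustrative.
cam = [(0.0, 3, {"plate": "A001"}), (2.0, 3, {"plate": "A002"}),
       (2.3, 1, {"plate": "A003"}), (4.2, 3, {"plate": "A004"}),
       (4.6, 1, {"plate": "A005"})]
radar = [(10.1, 3, {"speed": 22.0}), (12.1, 3, {"speed": 21.5}),
         (12.4, 1, {"speed": 23.1}), (14.3, 3, {"speed": 20.9}),
         (14.7, 1, {"speed": 22.7})]

# Step 2: formation information as the ordered lane (partition) sequence.
seq_a = [lane for _, lane, _ in cam]
seq_b = [lane for _, lane, _ in radar]

# Step 3: target formation information, here the longest common contiguous
# fragment of the two formations (a simplification of the LCS matching
# described later in this document).
m = SequenceMatcher(None, seq_a, seq_b).find_longest_match(
    0, len(seq_a), 0, len(seq_b))

# Step 4: fuse the detection info of the targets occupying the same slot.
fused = [{**cam[m.a + k][2], **radar[m.b + k][2]} for k in range(m.size)]
print(fused)  # five merged plate+speed records, one per matched target
```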
- FIG. 3a is a schematic diagram of a system of an information processing method provided by an embodiment of the present application.
- the system is a detection system, and the system includes a processing device and a plurality of sensors.
- sensor A and sensor B as an example, sensor A transmits the detected detection information A to the processing device, and sensor B transmits the detected detection information B to the processing device.
- the processing device obtains the fusion information of the target object according to the detection information A and the detection information B.
- the devices in the detection system described in this application may have a fixed connection state or may not have a fixed connection state, and data transmission may be implemented in the form of data copying or the like.
- as long as the detection information of the sensors can be transmitted to the processing device, the sensors and the processing device can be called a detection system, which is not limited here.
- sensor A and sensor B can acquire detection information respectively, and then copy detection information A and detection information B to a processing device within a certain period of time, and the processing device processes detection information A and detection information B. This mode may also be referred to as offline processing.
- FIG. 3b is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
- the information processing method provided by the embodiment of the present application is mainly used for information fusion in a multi-sensor system.
- a multi-sensor system can receive detection information from multiple sensors and fuse the detection information from multiple sensors.
- the detection information may be the license plate, transaction flow information, and the like from an electronic toll collection (ETC) sensor.
- the multi-sensor system can also obtain other detection information from other sensors, such as the license plate, model information, etc. from the camera, distance and speed information from the radar, etc., which are not limited here.
- the information processing method provided in the embodiment of the present application realizes the fusion of detection information, and the fusion result can be applied to various scenarios, such as toll auditing on expressways, off-site enforcement, safety monitoring, and the like.
- the fusion results can also be applied to other scenarios, such as holographic intersections and vehicle or pedestrian entry warnings at urban intersections, or intrusion detection and automatic parking on closed roads, which are not limited here.
- FIG. 4 is a schematic flowchart of an information processing method provided by an embodiment of the present application. The method includes:
- the detection information A acquired by the sensor A may include a location feature set.
- the position feature set includes a plurality of position features, and the position features are used to represent the positional relationship between the object detected by the sensor A and the objects around the object.
- the detection information is a picture composed of pixels, and the position feature can be embodied as the distance between the pixels.
- the position feature can also be expressed in other forms, for example, a left-right relationship or a front-back relationship between pixels, which is not limited here.
- in addition to a camera, the sensor A may also be another type of sensor, such as a radar or an ETC sensor, which is not limited here.
- for different sensors, there will be corresponding position features.
- the position feature of radar can be expressed as the distance between objects or the direction between objects, etc.
- the position feature of ETC can be expressed as the lane information of the vehicle and the front-rear timing relationship, etc., which is not limited here.
- the detection information B acquired by the sensor B may also include a location feature set.
- for descriptions of the sensor B, the detection information B, and the position feature, refer to the descriptions of the sensor A, the detection information A, and the position feature in step 401, and details are not repeated here.
- the sensor A and the sensor B may be the same type of sensor, or may be different types of sensors.
- sensor A and sensor B may be cameras with different angles or different radars, or sensor A may be a camera or radar, and sensor B may be ETC, etc., which is not limited here.
- the number of sensors in the embodiment of the present application is not limited to two, and the number of sensors may be any integer greater than or equal to 2, which is not limited here.
- Sensor A and sensor B are used as examples of sensors in the detection system. If the detection system includes more sensors, for descriptions of those sensors, refer to the descriptions of sensor A and sensor B in step 401 and step 402, which are not repeated here.
- the types of the plurality of sensors are also not limited, and may be the same type of sensors or different types of sensors, which are not limited here.
- after acquiring the detection information A, the processing device can determine the formation information A according to the detection information A, where the formation information A is used to indicate the positional relationship between the objects detected by the sensor A.
- the processing device may determine the formation information A according to the position feature set.
- the processing device can determine the formation information B according to the detection information B, and the formation information B is used to indicate the positional relationship between the objects detected by the sensor B.
- the positional relationship between the objects may include at least one of a left-right positional relationship between the objects, or a front-to-back positional relationship between the objects.
- the processing device may determine the formation information B according to the position feature set.
- the determination of formation information may be implemented by a method such as a scribing method or an image feature matching method.
- step 401 and step 402 do not necessarily have a sequential relationship, that is, step 401 may be performed before or after step 402, and step 401 and step 402 may also be performed simultaneously, which is not limited here.
- Step 403 and step 404 also have no necessary sequence relationship, that is, step 403 can be executed before or after step 404, and step 403 and step 404 can also be executed at the same time, as long as step 403 is executed after step 401 and step 404 is executed after step 402, which is not limited here.
- if the detection system acquires detection information from more sensors, the corresponding formation information should also be determined according to each piece of acquired detection information.
- for the process of determining the corresponding formation information, refer to the descriptions of step 403 and step 404, which are not repeated here.
- after obtaining the formation information A and the formation information B, the processing device can determine the target formation information according to the formation information A and the formation information B.
- the coincidence degree between the target formation information and each of formation information A and formation information B is higher than a preset threshold; the target formation information is used to reflect the parts of formation information A and formation information B that belong to the same target set.
- the formation information may have various representations, and the criteria for judging the coincidence degree are also different.
- detailed explanations of the process of acquiring and processing the different kinds of formation information will be given below in conjunction with the embodiments of FIG. 7 to FIG. 17, and are not repeated here.
- the target formation information includes the position information of each target, which indicates the specific position of that target in the target set. Therefore, the detections corresponding to the same target in the detection information of different sensors can be determined according to the position information of the target, and the detection information of the multiple corresponding detections can be fused.
- in the embodiment of the present application, the formation information between the objects detected by each sensor is determined from the detection information of the different sensors, and the target formation information is determined according to its degree of coincidence with each piece of formation information, so that the target object is determined.
- because the target formation information is formation information with similar characteristics detected by different sensors, it reflects how the same target set is detected at different sensors. Therefore, for any object reflected in the target formation information, the correspondence between its detection results at different sensors can be determined according to the target formation information, and the detection results of different sensors for the same object can be fused accordingly.
- the method of obtaining fusion detection information through formation information in the embodiment of the present application can greatly improve the efficiency of obtaining fusion detection information.
- the method of the embodiment of the present application only needs to provide detection information of different sensors, and does not need to occupy the site to be observed, which expands the scope of application of detection information fusion.
- the corresponding formation information may be determined according to the location feature set, and in step 405, the target formation information needs to be determined according to a plurality of formation information.
- the position feature set has different forms, and there are many ways to determine the formation information, mainly including the scribing method and the image feature matching method, which are described next by category.
- the formation information may include three types of information: 1. the relative horizontal positional relationship between objects, such as the left-right positional relationship or the left-right spacing; 2. the relative vertical positional relationship between objects, such as the front-back positional relationship or the front-back spacing; 3. the characteristics of the object itself, such as length, width, height, and shape.
- FIG. 5 is a schematic diagram of a feature of an information processing method provided by an embodiment of the present application.
- the formation information may include the front-rear distance and the left-right distance between vehicles, and may also include information of each vehicle, such as vehicle model, license plate number, etc., which are not limited here.
- FIG. 5 only takes a vehicle on the road as an example, and does not limit the objects detected by the sensor.
- the sensor can also be used to detect other objects, such as pedestrians, obstacles, etc., which are not limited here.
- the formation information can be represented as an overall shape, such as the shape "9" in the embodiment shown in FIG. 2 .
- the processing efficiency of shapes or images is not as high as that of digital processing. Expressing the formation information in the form of continuous or discrete numbers can greatly improve the efficiency of data processing.
- Converting the overall shape features into digital features can be achieved by the scribing method.
- a reference line is drawn, and information such as the timing and position at which objects touch the reference line can be obtained; in this way, shape features are converted into digital features that are convenient for the processing device to process.
- in the embodiments of the present application, the various information about an object touching the reference line is also referred to as touch line information.
- the touch line information may include timing information of the object touching the reference line, touch point partition information, touch point position information, touch time interval information, etc., which are not limited here.
- the time sequence information represents the time sequence before and after the object detected by the sensor touches the reference line, which reflects the front and back relationship between the objects.
- the touch point partition information represents the partition information of the touch point where the object touches the reference line in the reference line.
- FIG. 6 is a schematic diagram of a scribing method provided by an embodiment of the present application.
- the baseline can be divided according to different lanes. For example, in the figure, lane 1 is zone 1, lane 2 is zone 2, and lane 3 is zone 3.
- the touch point position information represents the position information of the touch point in the reference line where the object touches the reference line.
- the first vehicle in lane 1 is 1.5 meters away from the left endpoint of the baseline
- the first vehicle in lane 3 is 7.5 meters away from the left endpoint of the baseline.
- the touch time interval information represents the time interval before and after the object touches the reference line.
- the touch point partition information and the touch point position information can be classified as the relative lateral positional relationship between objects, and the timing information and the touch time interval information can be classified as the relative longitudinal positional relationship between objects.
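- for illustration, the following sketch shows one possible way to derive the four kinds of touch line information from time-stamped crossing events; the event format (time, position offset, lane) is an assumption made for this example, not a format prescribed by this application.

```python
# Derive the four kinds of touch line information from crossing events.
# Each event is (time_s, position_m, lane): the moment an object touched
# the reference line, the touch point's offset from the line's left end,
# and the lane (partition) it occupied.
def touch_line_info(events):
    events = sorted(events)            # order by touch time
    info, prev_t = [], None
    for order, (t, x, lane) in enumerate(events, start=1):
        info.append({
            "order": order,            # timing information
            "partition": lane,         # touch point partition information
            "position_m": x,           # touch point position information
            "interval_s": None if prev_t is None else round(t - prev_t, 1),
        })                             # interval_s: touch time interval info
        prev_t = t
    return info

# e.g. a FIG. 6-style scene: two vehicles, lanes 1 and 3
print(touch_line_info([(12.3, 1.5, 1), (10.1, 7.5, 3)]))
```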
- FIG. 7 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
- the method includes:
- the detection information is a picture composed of pixels
- the position feature in the position feature set can be embodied as the distance between pixels.
- the position feature can also be expressed in other forms, for example, a left-right relationship or a front-back relationship between pixels, which is not limited here.
- the sensor A in addition to the camera, the sensor A may also be other types of sensors, such as radar, ETC sensor, etc., which is not limited here.
- for different sensors, such as radar, ETC sensors, etc., there will be corresponding position features.
- the position feature of radar can be expressed as the distance between objects or the direction between objects, etc.
- the position feature of ETC can be expressed as the lane information of the vehicle and the front-rear timing relationship, etc., which is not limited here.
- the detection information is the picture of the object detected by the radar within the detection range, and the position feature in the position feature set can be reflected as the distance between the objects.
- the position feature can also be expressed in other forms, for example, the left-right relationship or the front-back relationship between objects, which is not limited here.
- the sensor B may also be other types of sensors, such as a camera, an ETC sensor, etc., which is not limited here.
- for different sensors, there will be corresponding location features, which are not limited here.
- sensor A and sensor B are only examples of sensors, and do not limit the type and quantity of sensors.
- the touch line information is the information that the pixels of the object touch the reference line.
- the processing device may acquire, according to the detection information A, the timing information A of the object pixel touching the reference line and the touch point partition information A.
- FIG. 8 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the application.
- the sequence number column indicates the sequence before and after each object touches the reference line, that is, the timing information A;
- the column of touch point partition information indicates the partition information of the touch point in the reference line when each object touches the reference line, that is, the touch point partition information A, where 1 represents lane 1 and 3 represents lane 3.
- the touch line information is the information that the object touches the reference line.
- the processing device may acquire, according to the detection information B, the timing information B of the object touching the reference line and the touch point partition information B.
- the column of serial number indicates the sequence before and after each object touches the reference line, namely timing information B;
- the column of touch point partition information indicates the partition information of the touch point in the reference line when each object touches the reference line, that is, the touch point partition information B, where 1 represents lane 1 and 3 represents lane 3.
- step 701 and step 702 do not have a certain sequence, and step 701 may be performed before or after step 702, or step 701 and step 702 may be performed simultaneously, which is not limited here.
- Step 703 and step 704 have no necessary sequence: step 703 can be executed before or after step 704, or step 703 and step 704 can be executed at the same time, as long as step 703 is executed after step 701 and step 704 is executed after step 702, which is not limited here.
- the touch point partition information A can be arranged in sequence according to the time sequence, and the touch partition sequence A can be obtained.
- the touch point partition information B can be arranged in sequence according to the time sequence, and the touch partition sequence B can be obtained.
- step 705 and step 706 do not have a certain sequence, and step 705 may be performed before or after step 706, or step 705 and step 706 may be performed at the same time, as long as step 705 is performed after step 703 and step 706 is performed after step 704, which is not limited here.
- the touch partition sequence A and the touch partition sequence B are essentially two sequences, and the processing device can compare them. When the two sequences are found to contain the same or highly overlapping sequence fragments, those fragments can be considered the common part of both sequences. In the embodiments of the present application, such a sequence fragment is also referred to as the first subsequence. A touch partition sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects; when two sequences include the same or highly overlapping fragments, the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
- the first subsequence is also referred to as target formation information, which represents the same or similar formation information detected by multiple sensors.
- the coincidence degree of the first subsequence with the touch partition sequence A and with the touch partition sequence B may both be higher than the first threshold.
- the degree of coincidence is also referred to as the degree of similarity.
- the first threshold may be 90%, and besides 90%, the first threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
- the touch partition sequence A and the touch partition sequence B shown in FIG. 8 both include sequence fragments of (3, 3, 1, 3, 1).
- the processing device may use the segment as the first subsequence.
- the coincidence degrees of the first subsequence with the touch partition sequence A and the touch partition sequence B are both 100%.
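- as a hedged illustration, one plausible reading of the coincidence degree is the fraction of a fragment's elements that also appear, in order, in the compared sequence; the following helper is illustrative, not a definition taken from this application.

```python
# One plausible reading of "coincidence degree": the fraction of the
# fragment's elements that also appear, in order, in the compared sequence.
def coincidence(fragment, seq):
    it = iter(seq)
    # `x in it` advances the iterator, so matches must occur in order
    hits = sum(1 for x in fragment if x in it)
    return hits / len(fragment)

print(coincidence([3, 3, 1, 3, 1], [3, 3, 1, 3, 1, 2]))  # -> 1.0 (100%)
```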
- the first subsequence may be determined through a longest common subsequence (LCS) algorithm.
- all common sequences of multiple touch partition sequences can be obtained through the LCS algorithm, so as to match the same position features across the multiple touch partition sequences. Since the LCS algorithm calculates the longest common subsequence, the first subsequence calculated by the LCS algorithm may be the longest subsequence among those whose coincidence degrees with the aforementioned multiple touch partition sequences are all higher than the first threshold.
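- this application does not prescribe a particular implementation; the following is a standard dynamic-programming LCS with backtracking, shown only as one way to realize the matching described above. Because an LCS may skip elements, it naturally tolerates the false or missed detections discussed next.

```python
# Standard dynamic-programming longest common subsequence (LCS).
def lcs(a, b):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], n, m      # backtrack to recover one LCS
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# the stray 2 (e.g. a false detection) is skipped by the LCS
print(lcs([3, 3, 2, 1, 3, 1], [3, 3, 1, 3, 1]))  # -> [3, 3, 1, 3, 1]
```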
- all common sequences of multiple touch partition sequences can be determined through the LCS algorithm, so as to match all the fragments of the touch partition sequences that have the same location features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified. The non-common sequences reflect positional relationships that differ between sensors; in this case, a non-common sequence mixed into the common sequences can be considered the result of a false detection or missed detection by a sensor, so the non-common sequence is tolerated, that is, the targets detected by different sensors are still put into correspondence across the non-common sequence, thereby realizing the fusion of detection information.
- the first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch partition sequences are all higher than the first threshold. Since the positional relationship between targets may be similar by chance, the longer the determined subsequence, the lower the possibility of a merely coincidental similarity; by determining the longest subsequence through the LCS algorithm, the target formation information of the same target set can be determined accurately.
- the positional relationship of two objects may be similar by chance, but if the standard is raised to requiring a high degree of coincidence in the positional relationships among ten objects, the probability that ten objects have a similar positional relationship merely by chance is much lower.
- the possibility of a merely coincidental similarity is thus greatly reduced, so if a first subsequence covering ten targets is determined by the LCS algorithm, it is more likely that these ten targets are the detection results of the same ten targets by different sensors, which reduces matching errors caused by chance.
- the first subsequence is composed of multiple touch point partition information, and for each touch point partition information in the first subsequence, corresponding data can be found in the touch partition sequence A and the touch partition sequence B.
- for example, the touch point partition information with sequence number 4 in the touch partition sequence A has its own touch point partition information of 3, and the touch point partition information before and after it is 1.
- the single touch point partition information in the touch partition sequence or the first sub-sequence is also referred to as position information, which indicates the position of a single target in the target set.
- the partition information of the target itself is called the self feature, and the partition information before, after, or nearby is called the peripheral feature.
- the peripheral feature may also include more nearby touch point partition information, which is not limited here.
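- as a sketch of this lookup, the illustrative helper below locates a target in another touch partition sequence by its self feature plus one peripheral feature on each side; the window size and the sample data are assumptions, not values from this application.

```python
# Locate a target in another touch partition sequence by its self feature
# plus one peripheral feature on each side (window size is an assumption).
def find_by_context(seq, before, self_feat, after):
    for i in range(1, len(seq) - 1):
        if (seq[i - 1], seq[i], seq[i + 1]) == (before, self_feat, after):
            return i
    return -1

seq_b = [3, 3, 1, 3, 1, 1]   # hypothetical fragment of sequence B
# like serial number 4 above: self feature 3, neighbors 1 and 1
print(find_by_context(seq_b, before=1, self_feat=3, after=1))  # -> 3
```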
- the processing device can fuse the detection information corresponding to the serial number 4 with the detection information corresponding to the serial number 13 to obtain the fusion information of the target object.
- the camera corresponding to the touch partition sequence A can detect appearance information such as the size and shape of the object corresponding to serial number 4.
- specifically, information such as the model, color, and license plate of the vehicle corresponding to serial number 4 can be detected.
- the radar corresponding to the touch partition sequence B can detect information such as the moving speed of the object corresponding to serial number 13.
- information such as vehicle speed and acceleration corresponding to serial number 13 can be detected.
- the processing device can fuse the aforementioned model, color, license plate and other information with vehicle speed, acceleration and other information to obtain the fusion information of the vehicle.
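- a toy illustration of this fusion step, with assumed field names that are not from this application, might simply merge the per-target attribute records once the correspondence is known:

```python
# Once serial 4 (camera) and serial 13 (radar) are known to be the same
# vehicle, fusion can be as simple as merging the attribute records.
camera_rec = {"model": "sedan", "color": "white", "plate": "B123"}
radar_rec = {"speed_mps": 21.4, "accel_mps2": -0.3}
fused = {**camera_rec, **radar_rec}   # one record describing one vehicle
print(fused)
```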
- the timing information represents the front-and-rear relationship between different targets touching the reference line
- the touch point partition information represents the left-right relationship between different targets touching the reference line
- the touch point partition information reflects the positional relationship of the multiple targets touching the reference line within the touch partition sequence. Since the timing information and the touch point partition information are both specific numerical values, the touch partition sequence is a set of values reflecting the positional relationship between the objects. The corresponding touch partition sequences are obtained from the detection information of different sensors, and the resulting multiple touch partition sequences are multiple value sets; determining whether the coincidence degree of the value sets meets the preset threshold only requires comparing the corresponding values, without complicated operations, which improves the efficiency of matching target formation information.
- in addition to determining the target formation information according to the timing information and the touch point partition information, the target formation information may also be determined according to the timing information and the touch time interval information.
- FIG. 9 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
- the method includes:
- for step 901 and step 902, refer to step 701 and step 702 in the embodiment shown in FIG. 7, and details are not repeated here.
- the touch line information is the information that the pixels of the object touch the reference line.
- the processing device may acquire, according to the detection information A, the timing information A and the touching time interval information A of the object pixels touching the reference line.
- FIG. 10 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
- the serial number column indicates the sequence before and after each object touches the reference line, that is, timing information A;
- the column of touch time interval information indicates the time difference between the moment each object touches the reference line and the moment the previous object touches the reference line, that is, the touch time interval information A, wherein the touch time interval information is in seconds.
- the touch time interval information can also be in milliseconds, which is not limited here.
- the touch line information is the information that the object touches the reference line.
- the processing device may acquire, according to the detection information B, the timing information B and the touching time interval information B of the object touching the reference line.
- the column of serial number indicates the sequence before and after each object touches the reference line, namely timing information B;
- the column of touch time interval information indicates the time difference between the moment each object touches the reference line and the moment the previous object touches the reference line, that is, the touch time interval information B, wherein the touch time interval information is in seconds.
- the touch time interval information can also be in milliseconds, which is not limited here.
- step 901 and step 902 do not have a certain sequence, and step 901 may be performed before or after step 902, or step 901 and step 902 may be performed simultaneously, which is not limited here.
- Step 903 and step 904 have no necessary sequence: step 903 can be executed before or after step 904, or step 903 and step 904 can be executed at the same time, as long as step 903 is executed after step 901 and step 904 is executed after step 902, which is not limited here.
- the touch time interval information A can be arranged in sequence according to the time sequence, and the touch interval sequence A can be obtained.
- the touch time interval information B can be arranged in sequence according to the time sequence, and the touch interval sequence B can be obtained.
- step 905 and step 906 do not have a certain sequence, and step 905 may be performed before or after step 906, or step 905 and step 906 may be performed at the same time, as long as step 905 is performed after step 903 and step 906 is performed after step 904, which is not limited here.
- the touch interval sequence A and the touch interval sequence B are essentially two sequences, and the processing device can compare them; when the two sequences are found to contain the same or highly overlapping sequence fragments, those fragments can be considered the common part of both sequences.
- such a sequence fragment is also referred to as the second subsequence. A touch interval sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects; when two sequences include the same or highly overlapping fragments, the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
- the second subsequence is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
- the coincidence degree of the second subsequence with the touch interval sequence A and with the touch interval sequence B may both be higher than the second threshold.
- the degree of coincidence is also referred to as the degree of similarity.
- the second threshold may be 90%, and besides 90%, the second threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
- the touch interval sequence A and the touch interval sequence B shown in FIG. 10 both contain sequence fragments of (2.0s, 0.3s, 1.9s, 0.4s).
- the processing device may use the segment as the second subsequence.
- the coincidence degrees of the second subsequence with the touch interval sequence A and the touch interval sequence B are both 100%.
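- because interval values are continuous, an exact-equality comparison may be fragile in practice; the following illustrative variant of the common-fragment search treats two intervals as coinciding when they differ by at most a tolerance. The tolerance value is an assumption, not a parameter specified by this application.

```python
# Longest common contiguous run of two interval sequences, where values
# coincide when they differ by at most `tol` seconds (tol is assumed).
def longest_common_run(a, b, tol=0.1):
    best = (0, 0, 0)                  # (length, start in a, start in b)
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while (i + k < len(a) and j + k < len(b)
                   and abs(a[i + k] - b[j + k]) <= tol):
                k += 1
            best = max(best, (k, i, j))   # keep the longest run found
    return best

print(longest_common_run([1.1, 2.0, 0.3, 1.9, 0.4],
                         [2.0, 0.3, 1.8, 0.4]))   # -> (4, 1, 0)
```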
- the second subsequence may be determined through the LCS algorithm.
- all common sequences of multiple touch interval sequences can be obtained through the LCS algorithm, so as to match the same position features across the multiple touch interval sequences. Since the LCS algorithm calculates the longest common subsequence, the second subsequence calculated by the LCS algorithm may be the longest subsequence among those whose coincidence degrees with the aforementioned multiple touch interval sequences are all higher than the second threshold.
- all common sequences of multiple touch interval sequences can be determined through the LCS algorithm, so as to match all the fragments of the touch interval sequences that have the same location features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified. The non-common sequences reflect positional relationships that differ between sensors; in this case, a non-common sequence mixed into the common sequences can be considered the result of a false detection or missed detection by a sensor, so the non-common sequence is tolerated, that is, the targets detected by different sensors are still put into correspondence across the non-common sequence, thereby realizing the fusion of detection information.
- the second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch interval sequences are all higher than the second threshold. Since the positional relationship between targets may be similar by chance, the longer the determined subsequence, the lower the possibility of a merely coincidental similarity; by determining the longest subsequence through the LCS algorithm, the target formation information of the same target set can be determined accurately.
- the positional relationship of two objects may be similar by chance, but if the standard is raised to requiring a high degree of coincidence in the positional relationships among ten objects, the probability that ten objects have a similar positional relationship merely by chance is much lower.
- the possibility of a merely coincidental similarity is thus greatly reduced, so if a second subsequence covering ten targets is determined by the LCS algorithm, it is more likely that these ten targets are the detection results of the same ten targets by different sensors, which reduces matching errors caused by chance.
- the second subsequence is composed of multiple touch time interval information.
- for each touch time interval information in the second subsequence, corresponding data can be found in the touch interval sequence A and the touch interval sequence B.
- the touch time interval information with the sequence number 3 in the touch interval sequence A has its own touch time interval information of 0.3s, and the touch time interval information before and after it is 2.0s and 1.9s, respectively.
- the single touch time interval information in the touch interval sequence or the second subsequence is also referred to as position information, which represents the position of a single target in the target set.
- the touch time interval information of the target itself is called the self feature, and the touch time interval information before, after, or nearby is called the peripheral feature.
- the peripheral feature may also include more nearby touch time interval information, which is not limited here.
- in the touch interval sequence B, the touch time interval information with the same self feature and peripheral features, that is, the touch time interval information with serial number 12, can be found.
- the processing device can fuse the detection information corresponding to the serial number 3 with the detection information corresponding to the serial number 12 to obtain the fusion information of the target object.
- the camera corresponding to the touch interval sequence A can detect appearance information such as the size and shape of the object corresponding to serial number 3.
- specifically, information such as the model, color, and license plate of the vehicle corresponding to serial number 3 can be detected.
- the radar corresponding to the touch interval sequence B can detect information such as the moving speed of the object corresponding to serial number 12.
- information such as vehicle speed and acceleration corresponding to the serial number 12 can be detected.
- the processing device can fuse the aforementioned model, color, license plate and other information with vehicle speed, acceleration and other information to obtain the fusion information of the vehicle.
- the timing information represents the order in which different targets touch the reference line
- the touch time interval information represents the time interval before and after different targets touch the reference line.
- the touch time interval information reflects the positional relationship of the multiple targets touching the reference line within the touch interval sequence. Since the timing information and the touch time interval information are both specific values, the touch interval sequence is a set of values reflecting the positional relationship between the objects. The corresponding touch interval sequences are obtained from the detection information of different sensors, and the resulting multiple touch interval sequences are multiple value sets; determining whether the coincidence degree of the value sets meets the preset threshold only requires comparing the corresponding values, without complex operations, which improves the efficiency of matching target formation information.
- the target formation information may also be determined according to timing information and touch point position information.
- FIG. 11 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
- the method includes:
- for step 1101 and step 1102, refer to step 701 and step 702 of the embodiment shown in FIG. 7, and details are not repeated here.
- the touch line information is the information that the pixels of the object touch the reference line.
- the processing device may obtain, according to the detection information A, the timing information A of the object pixel touching the reference line and the touch point position information A.
- the touch point position information A indicates the position of the touch point on the reference line.
- the touch point position information A may represent the positional relationship between the touch points of different objects, and specifically may represent the left-right relationship between the touch points, so as to reflect the left-right relationship between the objects.
- the touch point position information A may represent the distance between the touch point and a reference point on the reference line, and the distances of different touch points can be used to reflect the positional relationship between the touch points.
- the distance between the touch point and the left end point of the reference line is taken as an example, but it does not limit the position information of the touch point.
- in addition, the touch point position information may represent the positional relationship between the touch point and any point on the reference line, which is not limited here.
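- one practical consequence, sketched below under the assumption (not stated in this application) that two sensors measure positions from reference points offset by a constant: taking successive differences of the touch point positions cancels the offset before comparison.

```python
# Positions measured from different reference points differ by a constant
# offset; successive differences cancel it, making sequences comparable.
def diffs(p):
    return [round(b - a, 3) for a, b in zip(p, p[1:])]

pos_a = [7.5, 7.3, 1.5, 7.6, 1.3]    # metres from sensor A's reference point
pos_b = [8.0, 7.8, 2.0, 8.1, 1.8]    # same targets, origin shifted by 0.5 m
print(diffs(pos_a) == diffs(pos_b))  # True: offset-free comparison
```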
- FIG. 12 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the application.
- the serial number column indicates the sequence before and after each object touches the reference line, that is, the timing information A;
- the column of touch point position information indicates the distance between the touch point at which each object touches the reference line and the left end point of the reference line, that is, the touch point position information A.
- the position information of the touch point may represent the positional relationship between the touch point and any point on the reference line, which is not limited here.
- the touch line information is the information that the object touches the reference line.
- the processing device may acquire, according to the detection information B, the time sequence information B and the touch point position information B of the object touching the reference line.
- the touch point position information B refer to the description of the touch point position information A in step 1103, and details are not repeated here.
- the column of serial numbers indicates the sequence before and after each object touches the reference line, that is, timing information B;
- the location information of the touch point may represent the location relationship between the touch point and any point on the reference line, so as to reflect the location relationship between different touch points, which is not limited here.
- step 1101 and step 1102 do not have a necessary sequence, and step 1101 may be performed before or after step 1102, or step 1101 and step 1102 may be performed simultaneously, which is not limited here.
- Step 1103 and step 1104 are not necessarily in order: step 1103 can be executed before or after step 1104, or step 1103 and step 1104 can be executed at the same time, as long as step 1103 is executed after step 1101 and step 1104 is executed after step 1102, which is not limited here.
- the touch point position information A can be arranged in sequence according to the time sequence, and the touch position sequence A can be obtained.
- the touch point position information B can be arranged in sequence according to the time sequence, and the touch position sequence B can be obtained.
- step 1105 and step 1106 do not have a necessary sequence, and step 1105 may be performed before or after step 1106, or step 1105 and step 1106 may be performed at the same time, as long as step 1105 is performed after step 1103 and step 1106 is performed after step 1104, which is not limited here.
- the touch position sequence A and the touch position sequence B are essentially two sequences, and the processing device can compare them; when the two sequences are found to contain the same or highly overlapping sequence fragments, those fragments can be considered the common part of both sequences.
- such a sequence fragment is also referred to as the third subsequence. A touch position sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects; when two sequences include the same or highly overlapping fragments, the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
- the third subsequence is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
- the coincidence degree of the third subsequence with the touch position sequence A and with the touch position sequence B may both be higher than the third threshold.
- the degree of coincidence is also referred to as the degree of similarity.
- the third threshold may be 90%, and besides 90%, the third threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
- the touch position sequence A and the touch position sequence B shown in FIG. 12 both include sequence fragments of (7.5m, 7.3m, 1.5m, 7.6m, 1.3m).
- the processing device may use the segment as a third subsequence.
- the coincidence degrees of the third subsequence with the touch position sequence A and the touch position sequence B are both 100%.
- the third subsequence may be determined through the LCS algorithm.
- all common sequences of multiple touch position sequences can be obtained through the LCS algorithm, so as to match the same position features across the multiple touch position sequences. Since the LCS algorithm calculates the longest common subsequence, the third subsequence calculated by the LCS algorithm may be the longest subsequence among those whose coincidence degrees with the aforementioned multiple touch position sequences are all higher than the third threshold.
- the third subsequence is composed of multiple touch point position information.
- for each touch point position information in the third subsequence, corresponding data can be found in the touch position sequence A and the touch position sequence B.
- the position information of the touch point whose sequence number is 2 in the touch position sequence A is 7.3m
- the position information of the front and rear touch points is 7.5m and 1.5m, respectively.
- the position information of a single touch point in the touch position sequence or the third sub-sequence is also referred to as position information, which represents the position of a single target in the target set.
- the touch point position information of the target itself is called the self feature, and the touch point position information before, after, or nearby is called the peripheral feature.
- the peripheral feature may also include more location information of nearby touch points, which is not limited here.
- the processing device can fuse the detection information corresponding to the sequence number 2 with the detection information corresponding to the sequence number 11 to obtain the fusion information of the target object.
- the camera corresponding to the touch position sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 2.
- specifically, information such as the model, color, and license plate of the vehicle corresponding to sequence number 2 can be detected.
- the radar corresponding to the touch position sequence B can detect information such as the moving speed of the object corresponding to serial number 11.
- information such as vehicle speed and acceleration corresponding to the serial number 11 can be detected.
- the processing device can fuse the aforementioned model, color, license plate and other information with vehicle speed, acceleration and other information to obtain the fusion information of the vehicle.
- all common sequences of multiple touch position sequences can be determined through the LCS algorithm, so as to match all the fragments of the touch position sequences that have the same location features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified. The non-common sequences reflect positional relationships that differ between sensors; in this case, a non-common sequence mixed into the common sequences can be considered the result of a false detection or missed detection by a sensor, so the non-common sequence is tolerated, that is, the targets detected by different sensors are still put into correspondence across the non-common sequence, thereby realizing the fusion of detection information.
- the third subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch position sequences are all higher than the third threshold. Since the positional relationship between targets may be similar by chance, the longer the determined subsequence, the lower the possibility of a merely coincidental similarity; by determining the longest subsequence through the LCS algorithm, the target formation information of the same target set can be determined accurately.
- the positional relationship of two objects may be similar by chance, but if the standard is raised to requiring a high degree of coincidence in the positional relationships among ten objects, the probability that ten objects have a similar positional relationship merely by chance is much lower.
- the possibility of a merely coincidental similarity is thus greatly reduced, so if a third subsequence covering ten targets is determined by the LCS algorithm, it is more likely that these ten targets are the detection results of the same ten targets by different sensors, which reduces matching errors caused by chance.
- the touch point position information represents the left-right relationship between the different targets touching the reference line, and may be continuous numerical values or data. Based on such continuous values, the formation information of the target set can be distinguished more accurately from the formation information of other, non-target sets, so the fusion of detection information for the same target is realized more accurately.
- in addition, the movement trend between the targets can be analyzed or calculated through the continuous numerical values or data.
- other information, such as the movement trajectories of the target objects, can also be calculated, which is not limited here.
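- as an illustrative example of such analysis (not a method prescribed by this application), a lateral movement trend could be estimated from the continuous touch point positions of one target recorded over several successive detections:

```python
# Estimate a lateral movement trend from one target's touch point
# positions recorded over several successive detections (sample data).
positions_m = [7.5, 7.3, 7.0, 6.6]
steps = [b - a for a, b in zip(positions_m, positions_m[1:])]
trend = sum(steps) / len(steps)   # mean lateral shift per detection
print(round(trend, 2))            # -0.3: drifting toward the left endpoint
```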
- in addition to determining the corresponding subsequences separately, the subsequences can also be combined to improve the accuracy of formation matching, as sketched below.
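- a toy illustration of such a combination, under the assumption that each matcher reports its aligned fragment as (start in A, start in B, length): a pairing is kept only when the partition-based match and the interval-based match agree, so that one cue corrects chance matches in the other.

```python
# Keep a pairing only when the partition-based match and the
# interval-based match propose the same alignment (start_a, start_b, len).
def combine(match_partition, match_interval):
    return match_partition if match_partition == match_interval else None

m_part = (3, 12, 5)   # hypothetical result from touch partition sequences
m_intv = (3, 12, 5)   # hypothetical result from touch interval sequences
print(combine(m_part, m_intv))  # (3, 12, 5): fuse these five aligned targets
```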
- FIG. 13 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
- the method includes:
- for step 1301 and step 1302, refer to step 701 and step 702 in the embodiment shown in FIG. 7, and details are not repeated here.
- the touch line information is the information that the pixels of the object touch the reference line.
- the processing device may acquire, according to the detection information A, the timing information A of the object pixel touching the reference line, the touch point partition information A, and the touch time interval information A.
- FIG. 14 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
- the serial number column indicates the sequence before and after each object touches the reference line, that is, the timing information A;
- the column of touch point partition information indicates the partition information of the touch point in the reference line when each object touches the reference line, that is, the touch point partition information A, where 1 represents lane 1 and 3 represents lane 3.
- the column of touch time interval information indicates the time difference between the moment each object touches the reference line and the moment the previous object touches the reference line, that is, the touch time interval information A, wherein the touch time interval information is in seconds. In addition to seconds, the touch time interval information can also be in milliseconds, which is not limited here.
- the touch line information is the information that the object touches the reference line.
- the processing device may acquire, according to the detection information B, the time sequence information B of the object touching the reference line, the touch point partition information B, and the touch time interval information B.
- the column of serial number indicates the sequence before and after each object touches the reference line, that is, timing information B;
- the column of touch point partition information indicates the partition information of the touch point in the reference line when each object touches the reference line, that is, the touch point partition information B, where 1 represents lane 1 and 3 represents lane 3.
- the column of touch time interval information indicates the time difference between the moment each object touches the reference line and the moment the previous object touches the reference line, that is, the touch time interval information B, wherein the touch time interval information is in seconds. In addition to seconds, the touch time interval information can also be in milliseconds, which is not limited here.
- step 1301 and step 1302 do not have a certain sequence, and step 1301 may be performed before or after step 1302, or step 1301 and step 1302 may be performed simultaneously, which is not limited here.
- Step 1303 and step 1304 are not necessarily in order: step 1303 can be executed before or after step 1304, or step 1303 and step 1304 can be executed at the same time, as long as step 1303 is executed after step 1301 and step 1304 is executed after step 1302, which is not limited here.
- for the step in which the processing device acquires the touch partition sequence A according to the timing information A and the touch point partition information A, refer to step 705 in the embodiment shown in FIG. 7, which is not repeated here.
- for the step in which the processing device acquires the touch interval sequence A according to the timing information A and the touch time interval information A, refer to step 905 in the embodiment shown in FIG. 9, which is not repeated here.
- for the step in which the processing device acquires the touch partition sequence B according to the timing information B and the touch point partition information B, refer to step 706 in the embodiment shown in FIG. 7, which is not repeated here.
- for the step in which the processing device acquires the touch interval sequence B according to the timing information B and the touch time interval information B, refer to step 906 in the embodiment shown in FIG. 9, which is not repeated here.
- step 1305 and step 1306 need not be performed in a fixed order: step 1305 may be performed before or after step 1306, or the two steps may be performed at the same time, as long as step 1305 is performed after step 1303 and step 1306 is performed after step 1304, which is not limited here.
- the touch partition sequence A and the touch partition sequence B are essentially two sequences, and the processing device can compare them. When the two sequences are found to contain identical or highly overlapping sequence fragments, such a fragment can be considered the common part of both sequences. In the embodiments of the present application, this sequence fragment is also referred to as the first subsequence. Because a touch partition sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information of those objects, when the two sequences include identical or highly overlapping fragments, the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
- the first subsequence is also referred to as target formation information, which represents the same or similar formation information detected by multiple sensors.
- since a sensor has a certain missed detection rate, the first subsequence is not required to coincide completely with the touch partition sequences; it is sufficient that its coincidence degree with the touch partition sequence A and with the touch partition sequence B is higher than the first threshold.
- the degree of coincidence is also referred to as the degree of similarity.
- the first threshold may be 90%, and besides 90%, the first threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
- the touch partition sequence A and the touch partition sequence B shown in FIG. 8 both include sequence fragments of (3, 3, 1, 3, 1).
- the processing device may use the segment as the first subsequence.
- the coincidence degrees of the first subsequence with the touch partition sequence A and the touch partition sequence B are both 100%.
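- as a minimal sketch, this comparison step can be implemented as a sliding-window scan over the two sequences; the function, parameter names, and search strategy below are illustrative, not prescribed by the patent:

```python
# Hedged sketch: locating a shared fragment (the "first subsequence") between
# two touch partition sequences by a brute-force sliding-window scan.

def coincidence(a, b):
    """Fraction of positions at which two equal-length fragments agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def find_common_fragment(seq_a, seq_b, min_len=5, threshold=0.9):
    """Return (start_a, start_b, length) of the longest fragment whose
    coincidence with windows of both sequences reaches `threshold`, else None."""
    for length in range(min(len(seq_a), len(seq_b)), min_len - 1, -1):
        for i in range(len(seq_a) - length + 1):
            for j in range(len(seq_b) - length + 1):
                if coincidence(seq_a[i:i + length], seq_b[j:j + length]) >= threshold:
                    return i, j, length
    return None

# FIG. 8 style example: both sensors see the lane pattern (3, 3, 1, 3, 1).
touch_partition_a = [1, 3, 3, 1, 3, 1, 2]
touch_partition_b = [2, 2, 3, 3, 1, 3, 1]
print(find_common_fragment(touch_partition_a, touch_partition_b))  # -> (1, 2, 5)
```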
- the touch interval sequence A and the touch interval sequence B are likewise two sequences, and the processing device can compare them; when the two sequences are found to contain identical or highly overlapping sequence fragments, such a fragment can be considered the common part of both sequences.
- this sequence fragment is also referred to as the second subsequence. Because a touch interval sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information of those objects, when the two sequences include identical or highly overlapping fragments, the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
- the second subsequence is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
- since a sensor has a certain missed detection rate, the second subsequence is not required to coincide completely with the touch interval sequences; it is sufficient that its coincidence degree with the touch interval sequence A and with the touch interval sequence B is higher than the second threshold.
- the degree of coincidence is also referred to as the degree of similarity.
- the second threshold may be 90%, and besides 90%, the second threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
- the touch interval sequence A and the touch interval sequence B shown in FIG. 10 both contain the sequence fragment (2.0s, 0.3s, 1.9s, 0.4s).
- the processing device may use the segment as the second subsequence.
- the coincidence degrees of the second subsequence with the touch interval sequence A and the touch interval sequence B are both 100%.
- the objects indicated by the first subsequence (3, 3, 1, 3, 1) have serial numbers 1 to 5 on the sensor A side and correspond to the objects with serial numbers 10 to 14 on the sensor B side.
- the object set corresponding to the first subsequence is also referred to as the first object set.
- the objects indicated by the second subsequence (2.0s, 0.3s, 1.9s, 0.4s) have serial numbers 2 to 5 on the sensor A side and correspond to the objects with serial numbers 11 to 14 on the sensor B side.
- the object set corresponding to the second subsequence is also referred to as the second object set.
- take the intersection of the two object sets: on the sensor A side, intersect the objects with serial numbers 1 to 5 and the objects with serial numbers 2 to 5, which yields the set of objects with serial numbers 2 to 5.
- on the sensor B side, the intersection is the set of objects with serial numbers 11 to 14.
- the intersection of the first object set and the second object set is also referred to as a target object set.
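- a minimal sketch of this intersection step, using the serial numbers of the example above (all variable names are illustrative):

```python
# Hedged sketch: intersecting the object sets picked out by the two subsequences.
first_object_set_a = set(range(1, 6))     # objects 1..5 from the partition fragment
second_object_set_a = set(range(2, 6))    # objects 2..5 from the interval fragment
target_object_set_a = first_object_set_a & second_object_set_a   # {2, 3, 4, 5}

first_object_set_b = set(range(10, 15))   # objects 10..14 on the sensor B side
second_object_set_b = set(range(11, 15))  # objects 11..14 on the sensor B side
target_object_set_b = first_object_set_b & second_object_set_b   # {11, 12, 13, 14}
```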
- the first subsequence is composed of multiple pieces of touch point partition information, and for each piece of touch point partition information in the first subsequence, corresponding data can be found in the touch partition sequence A and the touch partition sequence B.
- for example, in the touch partition sequence A, the partition information of the touch point with serial number 4 is 3, and the partition information of the preceding and following touch points is 1.
- the single touch point partition information in the touch partition sequence or the first sub-sequence is also referred to as position information, which indicates the position of a single target in the target set.
- the partition information of the touch point itself is called the self-feature;
- the partition information of the preceding or nearby touch points is called the peripheral feature;
- the peripheral feature may also include more nearby touch point partition information, which is not limited here.
- since the object with serial number 4 in the touch partition sequence A and the object with serial number 13 in the touch partition sequence B have the same self-feature and peripheral features, the processing device can fuse the detection information corresponding to serial number 4 with the detection information corresponding to serial number 13 to obtain the fusion information of the target.
- for example, the camera corresponding to the touch partition sequence A can detect appearance information such as the size and shape of the object with serial number 4.
- in a traffic scenario, information such as the model, color, and license plate of the vehicle with serial number 4 can be detected.
- the radar corresponding to the touch partition sequence B can detect information such as the moving speed of the object with serial number 13.
- for example, the speed and acceleration of the vehicle with serial number 13 can be detected.
- the processing device can fuse the aforementioned model, color, and license plate information with the speed and acceleration information to obtain the fusion information of the vehicle.
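- a minimal sketch of this fusion step, assuming the camera and radar records have already been matched by serial number; the field names are illustrative, not a schema defined by the patent:

```python
# Hedged sketch: fusing the camera record and the radar record of one target
# once their serial numbers are matched (4 on sensor A <-> 13 on sensor B).
camera_detection = {"model": "sedan", "color": "white", "license_plate": "A12345"}
radar_detection = {"speed_mps": 22.4, "acceleration_mps2": 0.8}

def fuse(camera_record, radar_record):
    """Union of the two records: appearance fields plus motion fields."""
    fused = dict(camera_record)
    fused.update(radar_record)
    return fused

fusion_info = fuse(camera_detection, radar_detection)
print(fusion_info)
```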
- detection information with the same self-feature and peripheral features in the second subsequence may also be fused.
- for the description of the self-features and peripheral features of the second subsequence, refer to step 908 of the embodiment shown in FIG. 9, and details are not repeated here.
- in this embodiment, the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence are used to determine their intersection, and the intersection is used as the set of target objects.
- the objects in the intersection correspond to the first subsequence, meaning that similar touch partition information is obtained from the detection information of the different sensors; at the same time, they correspond to the second subsequence, meaning that similar touch time interval information is also obtained from the detection information of the different sensors.
- besides the intersection of the object sets corresponding to the first and second subsequences, the intersection of the object sets corresponding to other subsequences may also be taken, such as the objects corresponding to the first subsequence and a third subsequence, or the objects corresponding to the second subsequence and the third subsequence, or the objects corresponding to other subsequences and any one of the first to third subsequences.
- such subsequences may also represent other positional relationships between objects, such as the distance or direction between objects, which is not limited here.
- suitable subsequences can be flexibly selected for operation, which improves the feasibility and flexibility of the scheme.
- the intersection of the objects corresponding to more subsequences can also be taken, for example, the intersection of the objects corresponding to the first subsequence, the second subsequence, and the third subsequence.
- the greater the number of subsequences taken, the more kinds of similar information representing the positional relationship of objects can be obtained from the detection information of multiple sensors, and the higher the likelihood that the object sets corresponding to the detection information are the same set of objects. Therefore, by screening the intersection of the objects corresponding to multiple subsequences, the formation information of the targets can be more accurately distinguished from the formation information of non-targets, so that the fusion of detection information for the same target is realized more accurately.
- in this embodiment, the touch line information is obtained through the position feature set. Since the touch line information records objects touching the reference line, data with specific numerical values or specific position features, such as the touch time, touch interval, and touch position, can be obtained from the touch events. Therefore, from the specific numerical values or position features of the touch lines of multiple targets, a collection of touch line data can be obtained, such as a sequence composed of multiple touch times, a sequence composed of multiple touch intervals, or a distribution composed of multiple touch positions. Since these collections of touch line data all have specific numerical values or position features, they can be computed directly without further data processing, so that target formation information whose coincidence degree meets the preset threshold can be determined quickly.
- in addition to determining the formation information by the scribing method described above, it can also be determined by other methods, such as an image feature matching method.
- formation information can be represented as an overall shape.
- this abstracted overall shape can be represented by image features.
- the method of determining the formation information by using the overall image features is called the image feature matching method.
- FIG. 15 is a schematic flowchart of an information processing method provided by an embodiment of the present application. The method includes:
- for steps 1501 and 1502, refer to steps 701 and 702 in the embodiment shown in FIG. 7; details are not repeated here.
- the processing device can distinguish different objects according to the pixels in the picture, and mark feature points on the objects.
- the shape composed of each feature point is used as the initial target group distribution map A.
- the labeling of the feature points may follow a uniform rule.
- the center point of the front of the vehicle may be used as the feature point.
- it can also be other points, such as the center point of the license plate, etc., which is not limited here.
- FIG. 16 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
- the center point of the license plate is marked, and the marked points are connected to form the initial target group distribution map A, which has a shape similar to the number "9".
- the corresponding shape feature can be extracted by a scale-invariant feature transform (SIFT) algorithm, so as to obtain the initial target group distribution map A.
- the detection information B is a picture of the object detected by the radar within the detection range
- the object detected by the radar has label information in the picture
- the label information represents the corresponding object.
- the processing device may use the shape formed by each label information in the picture as the initial target group distribution map B.
- the locations where the annotation information is located are connected to form an initial target group distribution map B, which also has a shape similar to the number "9".
- the corresponding shape feature can be extracted by the SIFT algorithm, so as to obtain the initial target group distribution map B.
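- a hedged sketch of this extraction step, assuming the annotated distribution maps have been rasterized to images and OpenCV >= 4.4 is available (SIFT lives in the main module there); the file names are illustrative:

```python
# Hedged sketch: extract SIFT features from the two rasterized target-group
# distribution maps and match them with Lowe's ratio test.
import cv2

map_a = cv2.imread("initial_target_group_map_a.png", cv2.IMREAD_GRAYSCALE)
map_b = cv2.imread("initial_target_group_map_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_a, desc_a = sift.detectAndCompute(map_a, None)
kp_b, desc_b = sift.detectAndCompute(map_b, None)

# Ratio test keeps only distinctive matches between the two maps.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(desc_a, desc_b, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} matched features between the two distribution maps")
```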
- the processing device can obtain the standard perspective view of the initial target group distribution map A through a perspective change algorithm, and use this standard perspective view as the target group distribution map A.
- similarly, the processing device can obtain the standard perspective view of the initial target group distribution map B through the perspective change algorithm, and use it as the target group distribution map B.
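- one possible form of the perspective change step, sketched below with a homography estimated from four reference point pairs; the patent does not specify the algorithm, and the coordinates are illustrative:

```python
# Hedged sketch: warp a sensor-view distribution map to a standard (top-down)
# view with a homography from four assumed reference point pairs.
import cv2
import numpy as np

initial_map = cv2.imread("initial_target_group_map_a.png", cv2.IMREAD_GRAYSCALE)

view_pts = np.float32([[120, 400], [520, 400], [640, 80], [10, 80]])   # sensor view
standard_pts = np.float32([[0, 480], [640, 480], [640, 0], [0, 0]])    # standard view

H = cv2.getPerspectiveTransform(view_pts, standard_pts)
target_group_map_a = cv2.warpPerspective(initial_map, H, (640, 480))
```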
- the target group distribution map A and the target group distribution map B are two shapes, and the processing device can compare the image features of these two shapes.
- when the two image features are found to contain the same or highly overlapping feature sets, such a feature set can be considered the common part of the two image features.
- this feature set is also referred to as an image feature set. The image features reflect the positional relationship between the objects detected by the sensors, that is, the formation information of the objects.
- when the two image features include the same feature set, or feature sets with a high degree of coincidence, the object sets corresponding to those feature sets have the same positional relationship, that is, the same formation information.
- when different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
- the image feature set is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
- since a sensor has a certain missed detection rate, the image feature set is not required to coincide completely with the features in the target group distribution map A and the target group distribution map B; it is sufficient that the coincidence degree of the image feature set with both the target group distribution map A and the target group distribution map B is higher than the third threshold.
- the degree of coincidence is also referred to as the degree of similarity.
- the third threshold may be 90%, and besides 90%, the third threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
- the image feature sets of the distribution maps of different target groups may be matched through a face recognition algorithm or a fingerprint recognition algorithm.
- the image feature set is composed of multiple pieces of annotation information or annotation points.
- for each annotation point in the image feature set, corresponding data can be found in the target group distribution map A and the target group distribution map B.
- for example, consider the annotation point at the bottom of the shape "9" in the target group distribution map A.
- a single piece of annotation information or a single annotation point in a target group distribution map or in the image feature set is also referred to as position information, which represents the position of a single target in the target set.
- annotation information with the same position can also be found in the target group distribution map B, that is, the annotation information at the bottom of the shape "9" in the target group distribution map B. Since these two annotation points are both in the image feature set and have the same position features, they can be considered to reflect the same object. Therefore, the processing device can fuse the detection information corresponding to the annotation point at the bottom of the shape "9" in the target group distribution map A with the detection information corresponding to the annotation information at the bottom of the shape "9" in the target group distribution map B, to obtain the fusion information of the target.
- for example, the camera corresponding to the target group distribution map A can detect appearance information such as the size and shape of the object.
- information such as the model, color, and license plate of the corresponding vehicle can be detected.
- the radar corresponding to the target group distribution map B can detect information such as the moving speed of the object.
- information such as the speed and acceleration of the corresponding vehicle can be detected.
- the processing device can fuse the aforementioned model, color, and license plate information with the speed and acceleration information to obtain the fusion information of the vehicle.
- in this embodiment, a plurality of corresponding initial target group distribution maps are obtained from the detection information of different sensors, the corresponding target group distribution maps are obtained through a perspective change algorithm, and the image feature set of the multiple target group distribution maps is then obtained and used as the target formation information.
- that is, an image feature set whose coincidence degree with the multiple target group distribution maps is higher than a preset threshold is determined. Since image features intuitively reflect the positional relationship between the objects shown in an image, the image feature set determined from multiple target group distribution maps intuitively reflects similar detection results, so that the detection results of different sensors for the same target group are matched, thereby accurately realizing the fusion of detection information.
- the image feature matching method and the scribing method can also be combined to obtain more accurate results.
- the image feature matching method is combined with the scribing method.
- FIG. 17 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
- the method includes:
- for steps 1701 and 1702, refer to steps 701 and 702 in the embodiment shown in FIG. 7; details are not described here again.
- the touch line information includes timing information of the object touching the reference line, touch point partition information, touch point position information, touch time interval information, and the like.
- the processing device may acquire any of the foregoing touch line information according to the detection information; for example, the timing information A and the touch point partition information A may be acquired according to the detection information A.
- for the acquisition process of the timing information A and the touch point partition information A, refer to step 703 of the embodiment shown in FIG. 7; details are not repeated here.
- besides the timing information A and the touch point partition information A, the processing device may also acquire other touch line information, such as the timing information A and the touch time interval information A shown in step 903 of the embodiment shown in FIG. 9, or the timing information A and the touch point position information A shown in step 1103 of the embodiment shown in FIG. 11, or the timing information A, the touch point partition information A, and the touch time interval information A shown in step 1303 of the embodiment shown in FIG. 13, which is not limited here.
- whichever type of touch line information the processing device obtains according to the detection information A, the same type of touch line information should correspondingly be obtained according to the detection information B.
- for the process of obtaining the touch line information, refer to the embodiments shown in FIG. 7, FIG. 9, FIG. 11, or FIG. 13, which will not be repeated here.
- an object touches the reference line only for a moment, so the touch line information reflects the detection information at a specific moment.
- the processing device may determine the initial target group distribution map A according to the detection information A at the moment reflected by the touch line information A.
- the initial target group distribution map A obtained here reflects the formation information at the moment indicated by the touch line information A.
- the processing device can determine the touch line information B that has the same formation information as the touch line information A. Since the touch line information mainly reflects the formation information of the object set, the touch line information B can be considered to correspond to the same moment as the touch line information A.
- the processing device may determine the initial target group distribution map B according to the detection information B at the moment reflected by the touch line information B.
- the initial target group distribution map B obtained here reflects the formation information at the moment indicated by the touch line information B.
- for the process of acquiring the initial target group distribution map B, refer to step 1504 in the embodiment shown in FIG. 15; details are not repeated here.
- for steps 1707 to 1709, refer to steps 1505 to 1507 of the embodiment shown in FIG. 15; details are not repeated here.
- because images captured at nearby moments are highly similar, if the exact moment is not determined, initial target group distribution maps from nearby moments will interfere when the initial target group distribution maps from different sensors are matched, leading to matching errors between the distribution maps and an incorrectly acquired image feature set, so that detection information from different moments is fused, resulting in fusion errors.
- this error can be avoided by using the touch line information.
- when multiple initial target group distribution maps are determined through the touch line information and have the same touch line information, it indicates that the multiple initial target group distribution maps were acquired at the same moment, which ensures that the fused detection information was acquired at the same time and improves the accuracy of detection information fusion.
- the methods described in the embodiments of the present application can be used not only to obtain fusion information but also for other purposes, such as mapping the spatial coordinate systems of different sensors, mapping the time axes of different sensors, and sensor error correction or screening.
- the plurality of sensors may include a first sensor and a second sensor, wherein the space coordinate system corresponding to the first sensor is a standard coordinate system, and the space coordinate system corresponding to the second sensor is a target coordinate system.
- the processing device determines the mapping relationship between the multiple pieces of standard point information and the multiple pieces of target point information according to the fused detection information, where the fused detection information is obtained by fusing the detection information corresponding to the same target in the multiple pieces of formation information.
- the standard point information represents the position information, in the standard coordinate system, of each object in the target object set, and the target point information represents the position information, in the target coordinate system, of each object in the target object set, where the multiple pieces of standard point information correspond one-to-one with the multiple pieces of target point information.
- the processing device can determine the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
- in this embodiment, the mapping relationship between the multiple pieces of standard point information and the multiple pieces of target point information is determined from the fused detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is determined through it. In the method described in the embodiments of the present application, as long as detection information from different sensors can be acquired, the mapping of coordinate systems between different sensors can be realized. The subsequent steps of determining the target formation information and mapping the point information can be performed by the processing device itself, without manual calibration and mapping. Matching the target formation information by the processing device improves the accuracy of the point information mapping. At the same time, as long as the detection information from different sensors can be obtained, the fusion of detection information and the mapping of the coordinate system can be realized, which avoids the scene limitations caused by manual calibration and ensures the accuracy and universality of detection information fusion.
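- a minimal sketch of the point-information mapping step, assuming an affine model between the two coordinate systems (the model class is an assumption; the point pairs are invented for the example):

```python
# Hedged sketch: estimate the mapping from the target coordinate system to the
# standard coordinate system from fused point pairs, via least squares.
import numpy as np

standard_pts = np.array([[3.1, 12.0], [3.2, 14.1], [1.0, 15.9], [3.0, 18.2]])
target_pts = np.array([[0.8, 2.1], [0.9, 4.0], [-1.2, 5.8], [0.7, 8.1]])

# Solve [x y 1] @ M = [x' y'] in the least-squares sense.
A = np.hstack([target_pts, np.ones((len(target_pts), 1))])
M, *_ = np.linalg.lstsq(A, standard_pts, rcond=None)

mapped = A @ M  # target-coordinate points expressed in the standard coordinate system
print(np.round(mapped, 2))
```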
- the processing device calculates the time difference between the time axes of the multiple sensors according to the fusion result of the detection information corresponding to the same target in the multiple formation information. Through this time difference, the time axis mapping between different sensors can be realized.
- the time axes of different sensors can be aligned according to the time difference.
- the time axis alignment method provided by the embodiments of the present application can be implemented as long as the detection information of different sensors can be obtained; it does not require multiple sensors to be in the same time synchronization system, which expands the application scenarios of time axis alignment between different sensors and also broadens the scope of application of information fusion.
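- a minimal sketch of the time-axis alignment step, assuming the per-sensor timestamps of the matched touch events are available (all values are illustrative):

```python
# Hedged sketch: once the same touch events are matched across sensors, the
# time-axis offset can be taken as the median of the per-event timestamp gaps.
import statistics

touch_times_a = [10.02, 12.01, 12.33, 14.20, 14.61]  # sensor A clock, seconds
touch_times_b = [31.50, 33.52, 33.81, 35.70, 36.10]  # sensor B clock, same events

offset = statistics.median(b - a for a, b in zip(touch_times_a, touch_times_b))
aligned_b = [round(t - offset, 2) for t in touch_times_b]  # on sensor A's time axis
```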
- the plurality of sensors may include a standard sensor and a sensor to be tested, and the method may further include:
- the processing device obtains the standard formation information corresponding to the target formation information in the standard sensor; the processing device obtains the to-be-tested formation information corresponding to the target formation information in the sensor to be tested; the processing device determines the difference between the to-be-tested formation information and the standard formation information; the processing device obtains error parameters according to the difference and the standard formation information, where the error parameters are used to indicate the error of the to-be-tested formation information, or to indicate the performance parameters of the sensor to be tested.
- FIG. 18 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
- if sensor B falsely detects a piece of data, such as the v6a6 data in the figure, it can be confirmed from the difference between the touch partition sequence A and the touch partition sequence B that the data with serial number 15 is a false detection by sensor B.
- the false detection information of the sensor can be obtained to calculate the false detection rate of the sensor to evaluate the performance of the sensor.
- FIG. 19 is a schematic diagram of an application scenario of the information processing method provided by an embodiment of the present application. As shown in FIG. 19, if sensor B misses a piece of data, such as the lane-3 data corresponding to serial number 2 in the figure, it can be determined from the difference between the touch partition sequence A and the touch partition sequence B that a target is missed between serial number 10 and serial number 11.
- the missed detection information of the sensor can be obtained, so as to calculate the missed detection rate of the sensor to evaluate the performance of the sensor.
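- a minimal sketch of deriving the two rates by diffing the sequence of the sensor under test against the standard sequence; difflib is an illustrative choice here, not prescribed by the patent:

```python
# Hedged sketch: count missed detections (deletions) and false detections
# (insertions) via sequence diffing. 'replace' spans would need domain-specific
# handling and are ignored in this sketch.
import difflib

standard_seq = [3, 3, 1, 3, 1]  # standard sensor, taken as the detection standard
tested_seq = [3, 1, 3, 1, 2]    # sensor under test: one miss and one false detection

ops = difflib.SequenceMatcher(a=standard_seq, b=tested_seq).get_opcodes()
missed = sum(i2 - i1 for op, i1, i2, j1, j2 in ops if op == "delete")
false = sum(j2 - j1 for op, i1, i2, j1, j2 in ops if op == "insert")

print(f"missed-detection rate: {missed / len(standard_seq):.0%}")  # 20%
print(f"false-detection rate:  {false / len(tested_seq):.0%}")     # 20%
```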
- the standard sensor is used as the detection standard, and the error parameter is obtained according to the difference between the formation information to be tested and the standard formation information.
- the error parameter is used to indicate the error of the formation information to be measured
- the information corresponding to the error parameter in the formation information to be measured can be corrected through the error parameter and the standard formation information;
- the error parameter is used to indicate the performance parameter of the sensor to be measured
- performance parameters such as the false detection rate of the sensor to be tested can be determined, enabling data analysis of the sensor to be tested and supporting sensor selection.
- the following describes a processing device corresponding to the information processing method in the embodiments of the present application.
- FIG. 20 is a schematic structural diagram of a processing device provided by an embodiment of the present application.
- the processing device 2000 is located in a detection system, and the detection system further includes at least two sensors, wherein the detection information acquired by the at least two sensors includes detection information of the at least two sensors on the same at least two targets respectively; the processing device 2000 may include a processor 2001 and a transceiver 2002.
- the transceiver 2002 is configured to acquire at least two pieces of detection information from at least two sensors, wherein the at least two sensors are in one-to-one correspondence with the at least two pieces of detection information.
- the processor 2001 is configured to: determine at least two corresponding pieces of formation information according to the at least two pieces of detection information, wherein each piece of formation information is used to describe the positional relationship between objects detected by the corresponding sensor, and the objects include the aforementioned targets; determine target formation information according to the at least two pieces of formation information, where the coincidence degree of the target formation information with each of the at least two pieces of formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes the position information of each target; and fuse the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of any one of the targets.
- the detection information includes a position feature set
- the position feature set includes at least two position features
- the position features represent the positional relationship between the object detected by the corresponding sensor and the objects around the object.
- the processor 2001 is specifically configured to: acquire corresponding at least two pieces of touch line information according to the at least two position feature sets, wherein each of the at least two pieces of touch line information is used to describe the information of an object detected by the corresponding sensor touching the reference line, and the at least two pieces of touch line information correspond one-to-one with the at least two position feature sets; and determine the corresponding at least two pieces of formation information according to the at least two pieces of touch line information respectively, the at least two pieces of touch line information corresponding one-to-one with the at least two pieces of formation information.
- the touch line information includes the timing information and the touch point partition information corresponding to the objects detected by the sensor touching the reference line, and the touch point partition information represents the partition, within the reference line, of the touch point at which the object touches the reference line.
- the processor 2001 is specifically configured to: acquire a first subsequence of the at least two touch partition sequences, and use the first subsequence as the target formation information, wherein the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than the first threshold; and fuse the detection information corresponding to the same target in the at least two touch partition sequences according to the touch point partition information corresponding to each target in the first subsequence.
- the touch line information includes time sequence information and touch time interval information corresponding to the object detected by the sensor touching the reference line, and the touch time interval information represents the time interval before and after the object touches the reference line ;
- the formation information includes the touch interval sequence, and the touch interval sequence represents the distribution of the time interval when the object detected by the corresponding sensor touches the reference line.
- the processor 2001 is specifically configured to: acquire a second subsequence of the at least two touch interval sequences, and use the second subsequence as the target formation information, wherein the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than the second threshold; and fuse the detection information corresponding to the same target in the at least two touch interval sequences according to the touch time distribution information corresponding to each target in the second subsequence.
- the touch line information includes the timing information corresponding to the object detected by the sensor touching the reference line, the touch point partition information and the touch time interval information, and the touch point partition information Represents the partition information of the touch point in the baseline where the object touches the baseline, and the touch time interval information represents the time interval before and after the object touches the baseline;
- the formation information includes the touch partition sequence and the touch interval sequence.
- the touch partition sequence represents the time order of the partition positions at which the objects detected by the sensor touch the reference line;
- the touch interval sequence represents the distribution of the time intervals at which the objects detected by the sensor touch the reference line.
- the processor 2001 is specifically configured to: acquire a first subsequence of the at least two touch partition sequences, where the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than the first threshold; acquire a second subsequence of the at least two touch interval sequences, where the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than the second threshold; determine the intersection of the first object set and the second object set, and use the intersection as the target object set, where the first object set is the set of objects corresponding to the first subsequence and the second object set is the set of objects corresponding to the second subsequence; and use the touch partition sequence and the touch interval sequence of the target object set as the target formation information.
- the formation information includes a target group distribution map
- the target group distribution map represents the positional relationship between objects.
- the processor 2001 is specifically configured to: obtain corresponding at least two initial target group distribution maps according to the at least two position feature sets, where the initial target group distribution maps represent the positional relationship between objects detected by the corresponding sensors; acquire, through a perspective change algorithm, standard perspective views of the at least two initial target group distribution maps, and use the at least two standard perspective views as the corresponding at least two target group distribution maps, where the position information of a target group distribution map includes the target distribution information of a target, and the target distribution information represents the position of the target among the objects detected by the corresponding sensor; acquire the image feature set of the at least two target group distribution maps, and use the image feature set as the target formation information, where the coincidence degrees of the image feature set with the at least two target group distribution maps are all higher than the third threshold; and fuse the detection information corresponding to the same target in the at least two target group distribution maps according to the target distribution information corresponding to each target in the image feature set.
- the processor 2001 is further configured to: acquire, according to the at least two position feature sets, at least two pieces of touch line information of the corresponding targets in the image feature set, wherein each of the at least two pieces of touch line information is used to describe the information of an object detected by the corresponding sensor touching the reference line, and the at least two pieces of touch line information correspond one-to-one with the at least two position feature sets.
- the processor 2001 is specifically configured to acquire the corresponding at least two initial target group distribution maps according to the at least two pieces of touch line information, wherein the objects in the at least two initial target group distribution maps have the same touch line information.
- the at least two sensors include a first sensor and a second sensor
- the space coordinate system corresponding to the first sensor is a standard coordinate system
- the space coordinate system corresponding to the second sensor is a target coordinate system
- the processor 2001 is further configured to: determine, according to the fused detection information obtained by fusing the detection information corresponding to the same target in the at least two pieces of formation information, the mapping relationship between at least two pieces of standard point information and at least two pieces of target point information;
- the standard point information represents the position information, in the standard coordinate system, of each object in the target object set, and the target point information represents the position information, in the target coordinate system, of each object, where the at least two pieces of standard point information correspond one-to-one with the at least two pieces of target point information; and determine the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
- the processor 2001 is further configured to calculate the time difference between the time axes of the at least two sensors according to the fusion result of the detection information corresponding to the same target in the at least two formation information.
- the at least two sensors include a standard sensor and a sensor to be tested.
- the processor 2001 is further configured to: obtain the standard formation information corresponding to the target formation information in the standard sensor; obtain the to-be-tested formation information corresponding to the target formation information in the sensor to be tested; determine the difference between the to-be-tested formation information and the standard formation information; and obtain error parameters according to the difference and the standard formation information, the error parameters being used to indicate the error of the to-be-tested formation information, or to indicate the performance parameters of the sensor to be tested.
- the processing device 2000 can perform the operations performed by the processing device in the foregoing embodiments shown in FIG. 4 to FIG. 17 , and details are not repeated here.
- FIG. 21 is a schematic structural diagram of a processing device provided by an embodiment of the present application.
- the processing device 2100 may include one or more central processing units (CPUs) 2101 and memory 2105 .
- the memory 2105 stores one or more application programs or data.
- the memory 2105 may be volatile storage or persistent storage.
- a program stored in memory 2105 may include one or more modules, each of which may include a series of instructions to operate on a processing device.
- the central processing unit 2101 may be arranged to communicate with the memory 2105 to execute a series of instruction operations in the memory 2105 on the processing device 2100.
- the processing device 2100 may also include one or more power supplies 2102, one or more wired or wireless network interfaces 2103, one or more transceiver interfaces 2104, and/or one or more operating systems, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
- the processing device 2100 can perform the operations performed by the processing device in the foregoing embodiments shown in FIG. 4 to FIG. 17 , and details are not repeated here.
- the disclosed system, apparatus and method may be implemented in other manners.
- the apparatus embodiments described above are only illustrative.
- the division of the units is only a logical function division; in actual implementation, there may be other division methods.
- multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
- the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
- the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Electromagnetism (AREA)
- Image Analysis (AREA)
- Radar Systems Or Details Thereof (AREA)
- Position Input By Displaying (AREA)
Abstract
Description
Claims (25)
- An information processing method, wherein the method is applied to a processing device in a detection system, the detection system further comprises at least two sensors, and the detection information acquired by the at least two sensors includes detection information of the at least two sensors on the same at least two targets respectively, the method comprising: acquiring, by the processing device, at least two pieces of detection information from the at least two sensors, wherein the at least two sensors correspond one-to-one with the at least two pieces of detection information; determining, by the processing device, at least two corresponding pieces of formation information according to the at least two pieces of detection information, wherein each piece of formation information is used to describe the positional relationship between objects detected by the corresponding sensor, and the objects include the targets; determining, by the processing device, target formation information according to the at least two pieces of formation information, wherein the coincidence degree of the target formation information with each of the at least two pieces of formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes position information of each target; and fusing, by the processing device, the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of any one of the targets.
- The method according to claim 1, wherein the detection information comprises a position feature set, the position feature set comprises at least two position features, and each position feature represents the positional relationship between an object detected by the corresponding sensor and the objects around that object.
- The method according to claim 2, wherein determining, by the processing device, the corresponding at least two pieces of formation information according to the at least two pieces of detection information comprises: acquiring, by the processing device, corresponding at least two pieces of touch line information according to the at least two position feature sets, wherein each piece of touch line information is used to describe the information of an object detected by the corresponding sensor touching a reference line, and the at least two pieces of touch line information correspond one-to-one with the at least two position feature sets; and determining, by the processing device, the corresponding at least two pieces of formation information according to the at least two pieces of touch line information respectively, wherein the at least two pieces of touch line information correspond one-to-one with the at least two pieces of formation information.
- The method according to claim 3, wherein the touch line information comprises timing information and touch point partition information of the objects detected by the corresponding sensor touching the reference line, and the touch point partition information represents the partition, within the reference line, of the touch point at which an object touches the reference line; the formation information comprises a touch partition sequence, and the touch partition sequence represents the time order of the partition positions at which the objects detected by the corresponding sensor touch the reference line; determining, by the processing device, the target formation information according to the at least two pieces of formation information comprises: acquiring, by the processing device, a first subsequence of the at least two touch partition sequences, and using the first subsequence as the target formation information, wherein the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than a first threshold; and fusing, by the processing device, the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of each target comprises: fusing, by the processing device, the detection information corresponding to the same target in the at least two touch partition sequences according to the touch point partition information corresponding to each target in the first subsequence.
- The method according to claim 3, wherein the touch line information comprises timing information and touch time interval information of the objects detected by the corresponding sensor touching the reference line, and the touch time interval information represents the time interval between successive touches of the reference line by the objects; the formation information comprises a touch interval sequence, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line; determining, by the processing device, the target formation information according to the at least two pieces of formation information comprises: acquiring, by the processing device, a second subsequence of the at least two touch interval sequences, and using the second subsequence as the target formation information, wherein the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than a second threshold; and fusing, by the processing device, the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of each target comprises: fusing, by the processing device, the detection information corresponding to the same target in the at least two touch interval sequences according to the touch time distribution information corresponding to each target in the second subsequence.
- The method according to claim 3, wherein the touch line information comprises the timing information, the touch point partition information, and the touch time interval information of the objects detected by the corresponding sensor touching the reference line, the touch point partition information represents the partition, within the reference line, of the touch point at which an object touches the reference line, and the touch time interval information represents the time interval between successive touches of the reference line by the objects; the formation information comprises the touch partition sequence and the touch interval sequence, the touch partition sequence represents the time order of the partition positions at which the objects detected by the corresponding sensor touch the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line; and determining, by the processing device, the target formation information according to the at least two pieces of formation information comprises: acquiring, by the processing device, the first subsequence of the at least two touch partition sequences, wherein the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than the first threshold; acquiring, by the processing device, a second subsequence of the at least two touch interval sequences, wherein the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than the second threshold; determining, by the processing device, the intersection of a first object set and a second object set, and using the intersection as a target object set, wherein the first object set is the set of objects corresponding to the first subsequence, and the second object set is the set of objects corresponding to the second subsequence; and using, by the processing device, the touch partition sequence and the touch interval sequence of the target object set as the target formation information.
- The method according to claim 2, wherein the formation information comprises a target group distribution map, and the target group distribution map represents the positional relationship between objects; determining, by the processing device, the corresponding at least two pieces of formation information according to the at least two pieces of detection information comprises: acquiring, by the processing device, corresponding at least two initial target group distribution maps according to the at least two position feature sets, wherein the initial target group distribution maps represent the positional relationship between objects detected by the corresponding sensors; and acquiring, by the processing device through a perspective change algorithm, standard perspective views of the at least two initial target group distribution maps, and using the at least two standard perspective views as the corresponding at least two target group distribution maps, wherein the position information of a target group distribution map includes target distribution information of a target, and the target distribution information represents the position of the target among the objects detected by the corresponding sensor; determining, by the processing device, the target formation information according to the at least two pieces of formation information comprises: acquiring, by the processing device, an image feature set of the at least two target group distribution maps, and using the image feature set as the target formation information, wherein the coincidence degrees of the image feature set with the at least two target group distribution maps are all higher than a third threshold; and fusing, by the processing device, the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of each target comprises: fusing, by the processing device, the detection information corresponding to the same target in the at least two target group distribution maps according to the target distribution information corresponding to each target in the image feature set.
- The method according to claim 7, further comprising: acquiring, by the processing device according to the at least two position feature sets, at least two pieces of touch line information of the targets corresponding to the position feature sets, wherein each piece of touch line information is used to describe the information of an object detected by the corresponding sensor touching a reference line, and the at least two pieces of touch line information correspond one-to-one with the at least two position feature sets; and acquiring, by the processing device, the corresponding at least two initial target group distribution maps according to the at least two position feature sets comprises: acquiring, by the processing device, the corresponding at least two initial target group distribution maps according to the at least two pieces of touch line information, wherein the objects in the at least two initial target group distribution maps have the same touch line information.
- The method according to any one of claims 1 to 8, wherein the at least two sensors comprise a first sensor and a second sensor, the space coordinate system corresponding to the first sensor is a standard coordinate system, and the space coordinate system corresponding to the second sensor is a target coordinate system, the method further comprising: determining, by the processing device according to fused detection information obtained by fusing the detection information corresponding to the same target in the at least two pieces of formation information, the mapping relationship between at least two pieces of standard point information and at least two pieces of target point information, wherein the standard point information represents the position information, in the standard coordinate system, of each object in the target object set, the target point information represents the position information, in the target coordinate system, of each object in the target object set, and the at least two pieces of standard point information correspond one-to-one with the at least two pieces of target point information; and determining, by the processing device, the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
- The method according to any one of claims 1 to 9, further comprising: calculating, by the processing device, the time difference between the time axes of the at least two sensors according to the result of fusing the detection information corresponding to the same target in the at least two pieces of formation information.
- The method according to any one of claims 1 to 10, wherein the at least two sensors comprise a standard sensor and a sensor to be tested, the method further comprising: acquiring, by the processing device, the standard formation information corresponding to the target formation information in the standard sensor; acquiring, by the processing device, the to-be-tested formation information corresponding to the target formation information in the sensor to be tested; determining, by the processing device, the difference between the to-be-tested formation information and the standard formation information; and obtaining, by the processing device, an error parameter according to the difference and the standard formation information, wherein the error parameter is used to indicate the error of the to-be-tested formation information, or to indicate a performance parameter of the sensor to be tested.
- A processing device, wherein the processing device is located in a detection system, the detection system further comprises at least two sensors, and the detection information acquired by the at least two sensors includes detection information of the at least two sensors on the same at least two targets respectively, the processing device comprising a processor and a transceiver, wherein the transceiver is configured to acquire at least two pieces of detection information from the at least two sensors, the at least two sensors corresponding one-to-one with the at least two pieces of detection information; and the processor is configured to: determine at least two corresponding pieces of formation information according to the at least two pieces of detection information, wherein each piece of formation information is used to describe the positional relationship between objects detected by the corresponding sensor, and the objects include the targets; determine target formation information according to the at least two pieces of formation information, wherein the coincidence degree of the target formation information with each of the at least two pieces of formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes position information of each target; and fuse the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of any one of the targets.
- The processing device according to claim 12, wherein the detection information comprises a position feature set, the position feature set comprises at least two position features, and each position feature represents the positional relationship between an object detected by the corresponding sensor and the objects around that object.
- The processing device according to claim 13, wherein the processor is specifically configured to: acquire corresponding at least two pieces of touch line information according to the at least two position feature sets, wherein each piece of touch line information is used to describe the information of an object detected by the corresponding sensor touching a reference line, and the at least two pieces of touch line information correspond one-to-one with the at least two position feature sets; and determine the corresponding at least two pieces of formation information according to the at least two pieces of touch line information respectively, wherein the at least two pieces of touch line information correspond one-to-one with the at least two pieces of formation information.
- The processing device according to claim 14, wherein the touch line information comprises timing information and touch point partition information of the objects detected by the corresponding sensor touching the reference line, and the touch point partition information represents the partition, within the reference line, of the touch point at which an object touches the reference line; the formation information comprises a touch partition sequence, and the touch partition sequence represents the time order of the partition positions at which the objects detected by the corresponding sensor touch the reference line; and the processor is specifically configured to: acquire a first subsequence of the at least two touch partition sequences, and use the first subsequence as the target formation information, wherein the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than a first threshold; and fuse the detection information corresponding to the same target in the at least two touch partition sequences according to the touch point partition information corresponding to each target in the first subsequence.
- The processing device according to claim 14, wherein the touch line information comprises timing information and touch time interval information of the objects detected by the corresponding sensor touching the reference line, and the touch time interval information represents the time interval between successive touches of the reference line by the objects; the formation information comprises a touch interval sequence, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line; and the processor is specifically configured to: acquire a second subsequence of the at least two touch interval sequences, and use the second subsequence as the target formation information, wherein the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than a second threshold; and fuse the detection information corresponding to the same target in the at least two touch interval sequences according to the touch time distribution information corresponding to each target in the second subsequence.
- The processing device according to claim 14, wherein the touch line information comprises the timing information, the touch point partition information, and the touch time interval information of the objects detected by the corresponding sensor touching the reference line, the touch point partition information represents the partition, within the reference line, of the touch point at which an object touches the reference line, and the touch time interval information represents the time interval between successive touches of the reference line by the objects; the formation information comprises the touch partition sequence and the touch interval sequence, the touch partition sequence represents the time order of the partition positions at which the objects detected by the corresponding sensor touch the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line; and the processor is specifically configured to: acquire the first subsequence of the at least two touch partition sequences, wherein the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than the first threshold; acquire a second subsequence of the at least two touch interval sequences, wherein the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than the second threshold; determine the intersection of a first object set and a second object set, and use the intersection as a target object set, wherein the first object set is the set of objects corresponding to the first subsequence, and the second object set is the set of objects corresponding to the second subsequence; and use the touch partition sequence and the touch interval sequence of the target object set as the target formation information.
- The processing device according to claim 13, wherein the formation information comprises a target group distribution map, and the target group distribution map represents the positional relationship between objects; the processor is specifically configured to: acquire corresponding at least two initial target group distribution maps according to the at least two position feature sets, wherein the initial target group distribution maps represent the positional relationship between objects detected by the corresponding sensors; acquire, through a perspective change algorithm, standard perspective views of the at least two initial target group distribution maps, and use the at least two standard perspective views as the corresponding at least two target group distribution maps, wherein the position information of a target group distribution map includes target distribution information of a target, and the target distribution information represents the position of the target among the objects detected by the corresponding sensor; acquire an image feature set of the at least two target group distribution maps, and use the image feature set as the target formation information, wherein the coincidence degrees of the image feature set with the at least two target group distribution maps are all higher than a third threshold; and fuse the detection information corresponding to the same target in the at least two target group distribution maps according to the target distribution information corresponding to each target in the image feature set.
- The processing device according to claim 18, wherein the processor is further configured to: acquire, according to the at least two position feature sets, at least two pieces of touch line information of the corresponding targets in the image feature set, wherein each piece of touch line information is used to describe the information of an object detected by the corresponding sensor touching a reference line, and the at least two pieces of touch line information correspond one-to-one with the at least two position feature sets; and the processor is specifically configured to acquire the corresponding at least two initial target group distribution maps according to the at least two pieces of touch line information, wherein the objects in the at least two initial target group distribution maps have the same touch line information.
- The processing device according to any one of claims 12 to 19, wherein the at least two sensors comprise a first sensor and a second sensor, the space coordinate system corresponding to the first sensor is a standard coordinate system, and the space coordinate system corresponding to the second sensor is a target coordinate system; and the processor is further configured to: determine, according to fused detection information obtained by fusing the detection information corresponding to the same target in the at least two pieces of formation information, the mapping relationship between at least two pieces of standard point information and at least two pieces of target point information, wherein the standard point information represents the position information, in the standard coordinate system, of each object in the target object set, the target point information represents the position information, in the target coordinate system, of each object, and the at least two pieces of standard point information correspond one-to-one with the at least two pieces of target point information; and determine the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
- The processing device according to any one of claims 12 to 20, wherein the processor is further configured to calculate the time difference between the time axes of the at least two sensors according to the result of fusing the detection information corresponding to the same target in the at least two pieces of formation information.
- The processing device according to any one of claims 12 to 21, wherein the at least two sensors comprise a standard sensor and a sensor to be tested, and the processor is further configured to: acquire the standard formation information corresponding to the target formation information in the standard sensor; acquire the to-be-tested formation information corresponding to the target formation information in the sensor to be tested; determine the difference between the to-be-tested formation information and the standard formation information; and obtain an error parameter according to the difference and the standard formation information, wherein the error parameter is used to indicate the error of the to-be-tested formation information, or to indicate a performance parameter of the sensor to be tested.
- A processing device, comprising a processor and a memory coupled to the processor, wherein the memory stores executable instructions for execution by the processor, and the executable instructions instruct the processor to perform the method according to any one of claims 1 to 11.
- A computer-readable storage medium, wherein the computer-readable storage medium stores a program, and when a computer executes the program, the method according to any one of claims 1 to 11 is performed.
- A computer program product, wherein when the computer program product is executed on a computer, the computer performs the method according to any one of claims 1 to 11.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21927621.9A EP4266211A4 (en) | 2021-02-27 | 2021-11-17 | INFORMATION PROCESSING METHOD AND ASSOCIATED DEVICE |
JP2023550693A JP2024507891A (ja) | 2021-02-27 | 2021-11-17 | 情報処理方法および関連デバイス |
US18/456,150 US20230410353A1 (en) | 2021-02-27 | 2023-08-25 | Information processing method and related device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110221913.6A CN114972935A (zh) | 2021-02-27 | 2021-02-27 | 一种信息处理方法及相关设备 |
CN202110221913.6 | 2021-02-27 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/456,150 Continuation US20230410353A1 (en) | 2021-02-27 | 2023-08-25 | Information processing method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022179197A1 true WO2022179197A1 (zh) | 2022-09-01 |
Family
ID=82973145
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/131058 WO2022179197A1 (zh) | 2021-02-27 | 2021-11-17 | 一种信息处理方法及相关设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230410353A1 (zh) |
EP (1) | EP4266211A4 (zh) |
JP (1) | JP2024507891A (zh) |
CN (1) | CN114972935A (zh) |
WO (1) | WO2022179197A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109444911A (zh) * | 2018-10-18 | 2019-03-08 | 哈尔滨工程大学 | 一种单目相机和激光雷达信息融合的无人艇水面目标检测识别与定位方法 |
CN109615870A (zh) * | 2018-12-29 | 2019-04-12 | 南京慧尔视智能科技有限公司 | 一种基于毫米波雷达和视频的交通检测*** |
CN109977895A (zh) * | 2019-04-02 | 2019-07-05 | 重庆理工大学 | 一种基于多特征图融合的野生动物视频目标检测方法 |
CN111257866A (zh) * | 2018-11-30 | 2020-06-09 | 杭州海康威视数字技术股份有限公司 | 车载摄像头和车载雷达联动的目标检测方法、装置及*** |
CN112305576A (zh) * | 2020-10-31 | 2021-02-02 | 中环曼普科技(南京)有限公司 | 一种多传感器融合的slam算法及其*** |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019198076A1 (en) * | 2018-04-11 | 2019-10-17 | Ionterra Transportation And Aviation Technologies Ltd. | Real-time raw data- and sensor fusion |
EP3702802A1 (en) * | 2019-03-01 | 2020-09-02 | Aptiv Technologies Limited | Method of multi-sensor data fusion |
- 2021
- 2021-02-27 CN CN202110221913.6A patent/CN114972935A/zh active Pending
- 2021-11-17 EP EP21927621.9A patent/EP4266211A4/en active Pending
- 2021-11-17 WO PCT/CN2021/131058 patent/WO2022179197A1/zh active Application Filing
- 2021-11-17 JP JP2023550693A patent/JP2024507891A/ja active Pending
- 2023
- 2023-08-25 US US18/456,150 patent/US20230410353A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109444911A (zh) * | 2018-10-18 | 2019-03-08 | 哈尔滨工程大学 | 一种单目相机和激光雷达信息融合的无人艇水面目标检测识别与定位方法 |
CN111257866A (zh) * | 2018-11-30 | 2020-06-09 | 杭州海康威视数字技术股份有限公司 | 车载摄像头和车载雷达联动的目标检测方法、装置及*** |
CN109615870A (zh) * | 2018-12-29 | 2019-04-12 | 南京慧尔视智能科技有限公司 | 一种基于毫米波雷达和视频的交通检测*** |
CN109977895A (zh) * | 2019-04-02 | 2019-07-05 | 重庆理工大学 | 一种基于多特征图融合的野生动物视频目标检测方法 |
CN112305576A (zh) * | 2020-10-31 | 2021-02-02 | 中环曼普科技(南京)有限公司 | 一种多传感器融合的slam算法及其*** |
Non-Patent Citations (1)
Title |
---|
See also references of EP4266211A4 |
Also Published As
Publication number | Publication date |
---|---|
JP2024507891A (ja) | 2024-02-21 |
CN114972935A (zh) | 2022-08-30 |
EP4266211A1 (en) | 2023-10-25 |
US20230410353A1 (en) | 2023-12-21 |
EP4266211A4 (en) | 2024-05-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21927621 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202337049231 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 2021927621 Country of ref document: EP Effective date: 20230719 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023550693 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |