WO2022179197A1 - Information processing method and related device (一种信息处理方法及相关设备) - Google Patents

Information processing method and related device - Download PDF

Info

Publication number
WO2022179197A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target
touch
processing device
formation
Prior art date
Application number
PCT/CN2021/131058
Other languages
English (en)
French (fr)
Inventor
朱启伟
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21927621.9A (published as EP4266211A4)
Priority to JP2023550693A (published as JP2024507891A)
Publication of WO2022179197A1
Priority to US18/456,150 (published as US20230410353A1)

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/91Radar or analogous systems specially adapted for specific applications for traffic control
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • G01S7/4004Means for monitoring or calibrating of parts of a radar system
    • G01S7/4026Antenna boresight
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • the embodiments of the present application relate to the field of data processing, and in particular, to an information processing method and related equipment.
  • different types of sensors can detect different feature information.
  • cameras can detect the appearance characteristics of the target
  • radar can detect the speed and distance of the target.
  • the space alignment process is as follows: obtain the image that each sensor can detect, determine a calibration point in the actual space, and associate the position of the calibration point in the actual space with the position at which the calibration point is displayed on each sensor screen. By performing the above operation on a plurality of calibration points, a mapping relationship between the actual space and each sensor screen is established, and mapping relationships between the sensor screens are also established. The time axes of the different sensors are then aligned; when, at the same moment, object information is detected at a certain point on one sensor screen and object information is also detected at the corresponding point on the other sensor screens, it can be determined that the information belongs to the same object. Therefore, the detection results of the different sensors for that object can be combined as the fused detection information of the object.
  • the embodiment of the present application provides an information processing method, which is used to realize fusion of detection information detected by different sensors, so as to improve the efficiency of detection information fusion.
  • a first aspect of the embodiments of the present application provides an information processing method, the method is applied to a processing device in a detection system, and the detection system further includes a plurality of sensors.
  • the detection information obtained by each of the multiple sensors includes detection information for the same multiple targets, and the method includes:
  • the processing device acquires a plurality of detection information from the above-mentioned multiple sensors, wherein the multiple detection information corresponds to the multiple sensors one-to-one, and each detection information in the multiple detection information is detected by the sensor corresponding to that detection information.
  • the processing device determines a plurality of corresponding formation information according to the plurality of detection information, wherein the plurality of formation information is in one-to-one correspondence with the plurality of detection information, each formation information is used to describe the positional relationship between the objects detected by the sensor corresponding to that formation information, and the objects include the aforementioned targets.
  • the processing device determines target formation information according to the plurality of formation information, and the coincidence degree of the target formation information with each of the foregoing plurality of formation information is higher than a preset threshold, wherein the target formation information is used to describe the positional relationship between the foregoing multiple targets, and the target formation information includes the position information of each target.
  • the processing device fuses the detection information corresponding to the same target in the multiple formation information according to the position information of each target.
  • in the embodiments of the present application, the formation information between the objects detected by each sensor is determined from the detection information of the different sensors, and the target formation information is determined according to its degree of coincidence with each formation information, so that the target object set is determined.
  • since the target formation information is formation information with similar characteristics detected by different sensors, it reflects the same targets as detected by the different sensors. Therefore, for any object reflected in the target formation information, the correspondence between its detection results at different sensors can be determined, and the detection results of the different sensors for that same object can be fused accordingly.
  • the method of obtaining fusion detection information through formation information in the embodiment of the present application can greatly improve the efficiency of obtaining fusion detection information.
  • the method of the embodiment of the present application only needs to provide detection information of different sensors, and does not need to occupy the site to be observed, which expands the scope of application of detection information fusion.
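  • as a toy end-to-end sketch of this flow (every name, value and the lane-index feature below are illustrative assumptions, not part of this disclosure), a minimal Python version might reduce each sensor's detections to an ordering of objects, check the coincidence degree against a threshold, and pair objects that occupy the same rank in the formation:

```python
def formation(detections):
    """Order local ids by x coordinate: a crude stand-in for formation information."""
    return sorted(detections, key=lambda k: detections[k][0])

def coincidence(order_a, order_b, feat_a, feat_b):
    """Fraction of ranks whose per-object features agree between the two sensors."""
    hits = sum(1 for a, b in zip(order_a, order_b) if feat_a[a] == feat_b[b])
    return hits / max(len(order_a), len(order_b))

camera = {"c1": (1.0, 5.0), "c2": (3.5, 5.2), "c3": (7.0, 4.8)}   # local id -> (x, y)
radar  = {"r9": (0.9, 5.1), "r7": (3.6, 5.0), "r2": (7.2, 5.1)}   # same scene, other sensor

lane = lambda dets: {k: int(v[0] // 3) for k, v in dets.items()}  # coarse lane index as shared feature

fa, fb = formation(camera), formation(radar)
if coincidence(fa, fb, lane(camera), lane(radar)) > 0.8:          # preset threshold (assumed)
    fused = list(zip(fa, fb))       # same rank in the formation -> same physical target
    print(fused)                    # [('c1', 'r9'), ('c2', 'r7'), ('c3', 'r2')]
```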
  • the detection information may include a position feature set
  • the position feature set may include multiple position features
  • the position features are used to represent an object detected by the corresponding sensor and the positional relationship between that object and the objects around it.
  • the detection information includes a position feature set
  • the position feature set can accurately reflect the positional relationship between the objects detected by the sensor, that is, an accurate formation can be determined according to the positional relationship between the objects, so that detection information from different sensors for the same target can be fused accurately.
  • the processing device determines a plurality of corresponding formation information according to a plurality of detection information, which may specifically include:
  • the processing device obtains a plurality of corresponding touch line information according to the plurality of position feature sets, wherein each touch line information in the plurality of touch line information is used to describe the information that the objects detected by the corresponding sensor touch a reference line.
  • the above-mentioned plurality of touch line information is in one-to-one correspondence with the foregoing multiple position feature sets.
  • the processing device respectively determines a plurality of corresponding formation information according to the foregoing plurality of touch line information, wherein the foregoing plurality of touch line information is in one-to-one correspondence with the foregoing plurality of formation information.
  • the touch line information is obtained through the position feature sets. Since the touch line information describes objects touching the reference line, specific values or position features such as the touch time, the touch interval and the touch position can be obtained from the touch of the reference line. Therefore, from the specific numerical values or position features of multiple targets touching the line, collections of touch line data can be obtained, such as a sequence composed of multiple touch times, a sequence composed of multiple touch intervals, or a distribution composed of multiple touch positions. Since these collections of touch line data all consist of specific numerical values or position features, they can be operated on directly without further data processing, so that target formation information whose coincidence degree meets the preset threshold can be determined quickly.
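  • a minimal sketch, under assumed names and values, of how such touch line data could be derived from per-frame track positions: a crossing of a chosen reference line gives the touch time and touch position, and differencing the touch times gives the touch intervals:

```python
LINE_Y = 100.0   # assumed position of the reference line in the sensor's own coordinates

def touch_events(tracks):
    """tracks: {track_id: [(t, x, y), ...]} -> time-sorted list of (t_touch, x_touch, track_id)."""
    events = []
    for tid, samples in tracks.items():
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
            if (y0 - LINE_Y) * (y1 - LINE_Y) < 0:                  # the track crossed the line
                f = (LINE_Y - y0) / (y1 - y0)                      # linear interpolation factor
                events.append((t0 + f * (t1 - t0), x0 + f * (x1 - x0), tid))
    return sorted(events)                                          # the timing information

def touch_sequences(events, zone_width=50.0):
    times = [t for t, _, _ in events]
    intervals = [b - a for a, b in zip(times, times[1:])]          # touch interval sequence
    zones = [int(x // zone_width) for _, x, _ in events]           # touch partition sequence
    positions = [x for _, x, _ in events]                          # touch position sequence
    return intervals, zones, positions

tracks = {"t1": [(0.0, 12.0, 95.0), (0.5, 12.5, 101.0)],
          "t2": [(0.2, 60.0, 90.0), (0.7, 61.0, 104.0)]}
print(touch_sequences(touch_events(tracks)))   # (intervals, zones, positions) for the two crossings
```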
  • the target formation information may be determined according to the touch partition sequence, specifically:
  • the touch line information includes the timing information and the touch point partition information corresponding to the objects detected by the sensor touching the reference line, and the touch point partition information represents the partition of the reference line in which the touch point of the object lies;
  • the formation information includes a touch partition sequence, and the touch partition sequence represents the time-ordered sequence of the partition positions at which the objects detected by the sensor touch the reference line.
  • the processing device determines the target formation information according to the multiple formation information, which may specifically include: the processing device acquires a first subsequence of the multiple touch partition sequences and uses the first subsequence as the target formation information, wherein the coincidence degrees of the first subsequence with the multiple touch partition sequences are all higher than the first threshold.
  • the processing device fuses the detection information corresponding to the same target in the plurality of formation information according to the position information of each target, which may specifically include: the processing device, according to the touch point partition information corresponding to each target in the first subsequence, fuses the detection information corresponding to the same target in the multiple touch partition sequences.
  • the timing information represents the front-and-rear relationship between different targets touching the reference line
  • the touch point partition information represents the left-right relationship between different targets touching the reference line
  • the touch point partition information reflects the positional relationship of the multiple targets touching the reference line within the touch partition sequence. Since the timing information and the touch point partition information are both specific numerical values, the touch partition sequence is a set of numerical values reflecting the positional relationship between the objects. The corresponding touch partition sequence is obtained from the detection information of each sensor, so the multiple touch partition sequences obtained are multiple value sets; determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complicated operations, which improves the efficiency of matching the target formation information.
  • the longest common subsequence (LCS) algorithm can be used to determine, from the multiple touch partition sequences derived from the detection information of different sensors, a first subsequence whose coincidence degree with each touch partition sequence is higher than the first threshold. In this embodiment of the present application, the common subsequences of the multiple touch partition sequences can be obtained through the LCS algorithm, so as to match the same position features across the multiple touch partition sequences.
  • the first subsequence determined by the LCS algorithm may be, among the subsequences whose coincidence degrees with the aforementioned multiple touch partition sequences are all higher than the first threshold, the subsequence with the longest length.
  • all common subsequences of the multiple touch partition sequences can be determined through the LCS algorithm, so as to match all the fragments of the touch partition sequences that have the same position features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified. The non-common sequences reflect positional relationships that differ between sensors. In this case, the non-common sequences mixed into the common sequences can be regarded as the result of false detections or missed detections by a sensor, so that the non-common sequences are tolerated; that is, despite the non-common sequences, the targets detected by different sensors are still put into correspondence, and the fusion of detection information is realized.
  • the first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch partition sequences are all higher than the first threshold. Since the positional relationship between targets may be similar by chance, the longer the determined subsequence, the lower the possibility that the similarity is coincidental, and the more such chance matches are avoided; by determining the longest subsequence with the LCS algorithm, the target formation information of the same target set can be determined accurately. For example, the positional relationship of two objects may be similar by chance, but if the standard is raised to a high degree of coincidence between the positional relationships of ten objects, the probability that ten objects have a coincidentally similar positional relationship is much lower.
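  • a minimal sketch of this matching step (the sequences, identifiers and threshold are illustrative, and lcs_pairs is a plain textbook LCS with backtracking, not code from this disclosure): the matched positions of the two touch partition sequences pair up the per-sensor track identifiers while tolerating one missed detection:

```python
def lcs_pairs(a, b):
    """Return index pairs (i, j) of a longest common subsequence of a and b."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            dp[i][j] = 1 + dp[i + 1][j + 1] if a[i] == b[j] else max(dp[i + 1][j], dp[i][j + 1])
    pairs, i, j = [], 0, 0
    while i < n and j < m:                       # backtrack to recover the matched positions
        if a[i] == b[j]:
            pairs.append((i, j)); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return pairs

zones_cam   = [0, 1, 1, 2, 0, 2]                 # camera: touch partition sequence
zones_radar = [0, 1, 2, 0, 2]                    # radar: one touch was missed
ids_cam     = ["c1", "c2", "c3", "c4", "c5", "c6"]
ids_radar   = ["r1", "r2", "r3", "r4", "r5"]

pairs = lcs_pairs(zones_cam, zones_radar)
if len(pairs) / max(len(zones_cam), len(zones_radar)) > 0.7:       # first threshold (assumed)
    fused = [(ids_cam[i], ids_radar[j]) for i, j in pairs]
    print(fused)   # [('c1', 'r1'), ('c2', 'r2'), ('c4', 'r3'), ('c5', 'r4'), ('c6', 'r5')]
```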
  • the target formation information may be determined according to the touch position sequence, specifically:
  • the touch line information includes timing information and touch point position information corresponding to the object detected by the sensor touching the reference line.
  • the touch point position information indicates the position, along the reference line, of the touch point at which the object touches the reference line, and this position information reflects the left-right positional relationship between the targets; the formation information includes a touch position sequence, and the touch position sequence represents the time-ordered sequence of the positions at which the objects detected by the sensor touch the reference line.
  • the processing device determines the target formation information according to the plurality of formation information, which may specifically include: the processing device acquires a third subsequence of the multiple touch position sequences and uses the third subsequence as the target formation information, wherein the coincidence degrees of the third subsequence with the multiple touch position sequences are all higher than the third threshold.
  • the processing device fuses the detection information corresponding to the same target in the plurality of formation information according to the position information of each target, which may specifically include: the processing device, according to the touch point position information corresponding to each target in the third subsequence, fuses the detection information corresponding to the same target in the multiple touch position sequences.
  • the touch point position information represents the left-right relationship between different targets touching the reference line, and may be continuous numerical values or data. Therefore, based on the continuous value or data, the formation information of the target can be more accurately distinguished from the formation information of other non-targets, so as to more accurately realize the fusion of detection information for the same target.
  • the movement trend between the targets can be analyzed or calculated through the continuous numerical value or data.
  • other information such as the movement trajectory of the target objects, etc., can also be calculated, which is not limited here.
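  • a small standard-library sketch, under assumed values, of comparing two touch position sequences whose entries are continuous: the positions are quantised to a coarse grid and matched with difflib.SequenceMatcher (used here as a convenient stand-in for the subsequence matching described above), and the matched pairs also yield the average lateral offset between the two sensors:

```python
import difflib

pos_cam   = [1.2, 4.8, 8.1, 1.0, 4.9]        # lateral touch positions from the camera (metres, assumed)
pos_radar = [1.1, 4.9, 8.3, 0.9, 5.1]        # the same touches as reported by the radar

grid = 1.0                                    # assumed quantisation step for "same position"
q_cam   = [round(p / grid) for p in pos_cam]
q_radar = [round(p / grid) for p in pos_radar]

sm = difflib.SequenceMatcher(None, q_cam, q_radar)
print(round(sm.ratio(), 2))                   # coincidence degree: 1.0 here
pairs = [(m.a + k, m.b + k) for m in sm.get_matching_blocks() for k in range(m.size)]
offset = sum(pos_radar[j] - pos_cam[i] for i, j in pairs) / len(pairs)
print(pairs, round(offset, 2))                # matched touches and the average lateral offset
```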
  • the target formation information may be determined according to the touch interval sequence, specifically:
  • the touch line information includes timing information and touch time interval information corresponding to the objects detected by the sensor touching the reference line, wherein the touch time interval information represents the time interval between successive objects touching the reference line; the formation information includes a touch interval sequence, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line.
  • the processing device determines the target formation information according to the plurality of formation information, which may specifically include: the processing device acquires a second subsequence of the multiple touch interval sequences and uses the second subsequence as the target formation information, wherein the coincidence degrees of the second subsequence with the multiple touch interval sequences are all higher than the second threshold.
  • the processing device fuses the detection information corresponding to the same target in the at least two formation information according to the position information of each target, which includes: the processing device, according to the touch time interval information corresponding to each target in the second subsequence, fuses the detection information corresponding to the same target in the at least two touch interval sequences.
  • the timing information represents the relationship before and after different targets touch the reference line
  • the touch time interval information represents the time interval before and after different targets touch the reference line.
  • the touch time interval information reflects the positional relationship of the multiple targets touching the reference line within the touch interval sequence. Since the timing information and the touch time interval information are both specific values, the touch interval sequence is a set of values reflecting the positional relationship between the objects. The corresponding touch interval sequence is obtained from the detection information of each sensor, so the multiple touch interval sequences obtained are multiple value sets; determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complex operations, which improves the efficiency of matching the target formation information.
  • the LCS algorithm may be used to determine, from the multiple touch interval sequences derived from the detection information of different sensors, a second subsequence whose coincidence degree with each touch interval sequence is higher than the second threshold.
  • all common sequences of multiple touch interval sequences can be obtained through the LCS algorithm, so as to realize matching of the same position features of the multiple touch interval sequences.
  • the second subsequence determined by the LCS algorithm may be, among the subsequences whose coincidence degrees with the aforementioned multiple touch interval sequences are all higher than the second threshold, the subsequence with the longest length.
  • all common subsequences of the multiple touch interval sequences can be determined by the LCS algorithm, so as to match all the fragments of the touch interval sequences that have the same position features. If a plurality of fragments are common sequences and some non-common sequences are interspersed among them, these interspersed non-common sequences can be identified. The non-common sequences reflect positional relationships that differ between sensors. In this case, the non-common sequences mixed into the common sequences can be regarded as the result of false detections or missed detections by a sensor, so that the non-common sequences are tolerated; that is, despite the non-common sequences, the targets detected by different sensors are still put into correspondence, and the fusion of detection information is realized.
  • the second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch interval sequences are all higher than the second threshold. Since the time intervals at which targets touch the reference line may be similar by chance, the longer the determined subsequence, the lower the possibility that the similarity is coincidental, and the more such chance matches are avoided; by determining the longest subsequence with the LCS algorithm, the target formation information of the same target set can be determined accurately.
  • for example, the time intervals at which two targets touch the reference line may be similar by chance, but if the standard is raised to a high degree of coincidence between the time intervals at which ten targets touch the reference line, the possibility that ten targets have coincidentally similar time intervals is much lower than the possibility for two targets; therefore, if a second subsequence covering ten targets is determined by the LCS algorithm, it is far more likely that these ten targets are the detection results of different sensors for the same ten targets, reducing the possibility of matching errors.
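  • a small sketch, with illustrative timestamps, of one convenient property of touch interval sequences: because they are differences of touch times, a constant clock offset between the sensors cancels out, so the sequences can be compared even before the time axes are aligned:

```python
cam_touch_times   = [10.0, 10.8, 12.1, 12.4, 14.0]      # camera clock (s)
radar_touch_times = [ 3.2,  4.0,  5.3,  5.6,  7.2]      # radar clock, offset by about 6.8 s

def interval_sequence(times, step=0.1):
    """Quantised gaps between successive touches: the touch interval sequence."""
    return [round((b - a) / step) for a, b in zip(times, times[1:])]

seq_a = interval_sequence(cam_touch_times)               # [8, 13, 3, 16]
seq_b = interval_sequence(radar_touch_times)             # [8, 13, 3, 16] despite the clock offset

matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
if matches / max(len(seq_a), len(seq_b)) > 0.8:          # second threshold (assumed)
    # touch k on the camera and touch k on the radar belong to the same target,
    # so their detection records can be fused
    print("same target group")
```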
  • the target formation information may be determined according to the touch partition sequence and the touch interval sequence, specifically:
  • the touch line information includes timing information, touch point partition information and touch time interval information corresponding to the objects detected by the sensor touching the reference line, wherein the touch point partition information indicates the partition of the reference line in which the touch point of the object lies, and the touch time interval information represents the time interval between successive objects touching the reference line; the formation information includes the touch partition sequence and the touch interval sequence, wherein the touch partition sequence represents the time-ordered sequence of the partition positions at which the objects detected by the corresponding sensor touch the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the sensor touch the reference line.
  • the processing device determines target formation information according to multiple formation information, which may specifically include:
  • the processing device acquires a first subsequence of the at least two touch partition sequences, wherein the coincidence degrees of the first subsequence with the multiple touch partition sequences are all higher than a first threshold; the processing device acquires a second subsequence of the at least two touch interval sequences, wherein the coincidence degrees of the second subsequence with the multiple touch interval sequences are all higher than a second threshold; the processing device determines the intersection of a first object set and a second object set and uses the intersection as a target object set, wherein the first object set is the set of objects corresponding to the first subsequence and the second object set is the set of objects corresponding to the second subsequence; and the processing device uses the touch partition sequence and the touch interval sequence of the target object set as the target formation information.
  • the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence are used to determine the intersection of the two sets, and the intersection is used as the set of target objects.
  • the objects in the intersection correspond to the first subsequence, that is, similar touch point partition information is obtained from the detection information of different sensors; at the same time, the objects in the intersection correspond to the second subsequence, that is, they also have similar touch time interval information across the detection information of different sensors.
  • the intersection between the objects corresponding to other subsequences can also be taken, for example the intersection between the objects corresponding to the first subsequence and the objects corresponding to the third subsequence, or the intersection between the objects corresponding to the second subsequence and the objects corresponding to the third subsequence, or the intersection between the objects corresponding to some other subsequence and the objects corresponding to any one of the first to third subsequences.
  • such other subsequences are likewise used to represent the positional relationship between objects, such as the distance or direction between objects, which is not limited here.
  • suitable subsequences can be flexibly selected for operation, which improves the feasibility and flexibility of the scheme.
  • the intersection between the objects corresponding to more subsequences can also be taken, for example, the intersection between the objects corresponding to the first subsequence, the second subsequence and the third subsequence.
  • the greater the number of subsequences taken, the more kinds of similar information representing the positional relationship of the objects are obtained from the detection information of the multiple sensors, and the higher the possibility that the sets of objects corresponding to the detection information are the same set of objects. Therefore, by screening with the intersection of the objects corresponding to multiple subsequences, the formation information of the targets can be more accurately distinguished from the formation information of other non-targets, so as to more accurately realize the fusion of detection information for the same target.
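  • a minimal sketch of the intersection step, with hypothetical pairings: each matching criterion yields a pairing of camera track ids to radar track ids, and only objects paired consistently under both criteria are kept as the target object set:

```python
# pairing of camera track ids to radar track ids produced by each criterion (hypothetical values)
first_subseq  = {"c1": "r1", "c2": "r2", "c4": "r3", "c5": "r4"}   # from the partition-sequence match
second_subseq = {"c1": "r1", "c2": "r2", "c3": "r9", "c5": "r4"}   # from the interval-sequence match

target_objects = {
    cam_id: first_subseq[cam_id]
    for cam_id in first_subseq.keys() & second_subseq.keys()       # objects present in both matches
    if first_subseq[cam_id] == second_subseq[cam_id]               # and paired to the same radar id
}
print(target_objects)   # keeps the pairs for c1, c2 and c5; c3 and c4 are dropped
```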
  • the target formation information may be determined through the target group distribution map, specifically:
  • the formation information includes a target group distribution map, wherein the target group distribution map represents the positional relationship between objects.
  • the processing device determines a plurality of corresponding formation information according to the plurality of detection information, which may specifically include: the processing device obtains a plurality of corresponding initial target group distribution maps according to the plurality of position feature sets, wherein each initial target group distribution map represents the positional relationship between the objects detected by the corresponding sensor; the processing device obtains standard-perspective maps of the multiple initial target group distribution maps through a perspective change algorithm and uses the multiple standard-perspective maps as the corresponding multiple target group distribution maps, wherein the formation information of a target group distribution map includes the target object distribution information of the targets, and the target object distribution information represents the positions of the targets among the objects detected by the corresponding sensor.
  • the processing device determines the target formation information according to the at least two formation information, which may specifically include: the processing device obtains an image feature set of the multiple target group distribution maps and uses the image feature set as the target formation information, wherein the coincidence degrees of the image feature set with the multiple target group distribution maps are all higher than the third threshold.
  • the processing device fuses the detection information corresponding to the same target in the multiple formation information according to the position information of each target, which may specifically include: the processing device, according to the target object distribution information corresponding to each target in the image feature set, fuses the detection information corresponding to the same target in the multiple target group distribution maps.
  • in this implementation, a plurality of corresponding initial target group distribution maps are obtained according to the detection information from different sensors, a plurality of corresponding target group distribution maps are obtained through the perspective change algorithm, and then an image feature set of the multiple target group distribution maps is obtained and used as the target formation information.
  • in this way, an image feature set whose coincidence degree with the multiple target group distribution maps is higher than a preset threshold is determined. Since image features can intuitively reflect the positional relationship between the objects shown in an image, the image feature set determined from the multiple target group distribution maps intuitively matches detection results with similar positional relationships, that is, the detection results of different sensors for the same target group, so that the fusion of detection information is realized accurately.
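  • a numpy-only sketch of the distribution-map idea, in which the homography, grid size and noise level are all assumed values: detected positions are mapped to a common plane (standing in for the perspective change algorithm), rasterised into coarse target group distribution maps, and compared by their overlap, which plays the role of the image-feature coincidence here:

```python
import numpy as np

def warp(points, H):
    """Apply a 3x3 homography to Nx2 points (the perspective-change step)."""
    pts = np.hstack([points, np.ones((len(points), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]

def distribution_map(points, cell=1.0, size=12):
    """Rasterise target positions onto a coarse occupancy grid."""
    grid = np.zeros((size, size), dtype=bool)
    for x, y in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = True
    return grid

H = np.array([[1.0, 0.1, 0.5],                   # assumed camera-to-ground homography
              [0.0, 1.2, 0.3],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
cam_pixels  = np.array([[1.0, 1.0], [4.0, 1.2], [7.0, 0.9], [2.5, 4.0]])
radar_world = warp(cam_pixels, H) + rng.normal(0, 0.02, (4, 2))    # same scene, small measurement noise

a = distribution_map(warp(cam_pixels, H))        # camera's target group distribution map
b = distribution_map(radar_world)                # radar's target group distribution map
iou = np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
print(iou)                                       # close to 1.0 when both sensors see the same group
```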
  • the acquisition of the image feature set may be implemented in combination with the reference line, specifically:
  • the processing device may acquire, according to the multiple position feature sets, multiple touch line information of the objects corresponding to the position feature sets, wherein each touch line information in the multiple touch line information is used to describe the information that the objects detected by the corresponding sensor touch the reference line, and the multiple touch line information corresponds to the multiple position feature sets one-to-one.
  • the processing device obtains a plurality of corresponding initial target group distribution maps according to the plurality of position feature sets, which may specifically include: the processing device obtains a plurality of corresponding initial target group distribution maps according to the plurality of touch line information, wherein the objects in the multiple initial target group distribution maps have the same touch line information.
  • because images captured at nearby moments are highly similar, if the same moment is not first determined, the initial target group distribution maps of nearby moments will interfere when matching the initial target group distribution maps from different sensors, leading to matching errors between distribution maps and to a wrongly acquired image feature set, so that detection information from different moments is fused and the fusion of detection information is erroneous.
  • this error can be avoided by using the touch line information.
  • when the multiple initial target group distribution maps are determined by the touch line information and have the same touch line information, it indicates that the multiple initial target group distribution maps are acquired at the same moment, which ensures that the fused detection information is acquired at the same moment and improves the accuracy of detection information fusion.
  • on the basis of any one of the first to tenth implementations of the first aspect, in an eleventh implementation of the first aspect of the embodiments of the present application, mapping between coordinate systems may also be implemented, specifically:
  • the plurality of sensors include a first sensor and a second sensor, wherein the space coordinate system corresponding to the first sensor is a standard coordinate system, and the space coordinate system corresponding to the second sensor is a target coordinate system.
  • the method may further include:
  • the processing device determines the mapping relationship between the multiple standard point information and the multiple target point information according to the fused detection information, wherein the fused detection information is obtained by fusing the detection information corresponding to the same target in the multiple formation information, the standard point information represents the position information of each object in the target object set in the standard coordinate system, the target point information represents the position information of each object in the target object set in the target coordinate system, and the multiple standard point information and the multiple target point information are in one-to-one correspondence; the processing device then determines the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
  • the mapping relationship between the multiple standard point information and the multiple target point information is determined from the fused detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is determined through that mapping relationship.
  • in the method described in the embodiments of the present application, as long as detection information from different sensors can be acquired, the mapping of coordinate systems between different sensors can be realized. The subsequent determination of target formation information, the mapping of point information and other steps can be performed by the processing device itself, without manual calibration or mapping. Because the processing device matches the target formation information, the precision of device computation improves the accuracy of the point information mapping. At the same time, as long as detection information from different sensors can be obtained, the fusion of detection information and the mapping of the coordinate systems can be realized, which avoids the scene limitations caused by manual calibration and ensures the accuracy and universality of detection information fusion.
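  • a least-squares sketch of fitting the coordinate-system mapping from fused point pairs, assuming for simplicity that the mapping is affine (a homography could be fitted in the same spirit); the points and the transform used to generate them are synthetic:

```python
import numpy as np

# synthetic ground truth: the target coordinate system is an affine transform of the standard one
standard_pts = np.array([[1.0, 2.0], [4.0, 2.5], [7.0, 1.8], [3.0, 6.0], [6.5, 5.5]])
A_true = np.array([[0.9, 0.1], [-0.1, 1.1]])
t_true = np.array([2.0, -1.0])
target_pts = standard_pts @ A_true.T + t_true          # how the second sensor saw the same targets

# fit the target -> standard mapping from the fused point pairs by least squares
X = np.hstack([target_pts, np.ones((len(target_pts), 1))])    # design matrix [x, y, 1]
coef, *_ = np.linalg.lstsq(X, standard_pts, rcond=None)       # solves X @ coef ~= standard_pts
A_est, t_est = coef[:2].T, coef[2]
print(np.allclose(target_pts @ A_est.T + t_est, standard_pts))   # True: mapping recovered
```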
  • the alignment of the time axis may further include:
  • the processing device calculates the time difference between the time axes of the multiple sensors according to the fusion result of the detection information corresponding to the same target in the multiple formation information.
  • the time axes of different sensors can be aligned according to the time difference.
  • the time axis alignment method provided by the embodiments of the present application can be implemented as long as the detection information of different sensors can be obtained, and it does not require the multiple sensors to be in the same time synchronization system, which expands the application scenarios of aligning the time axes of different sensors and also expands the scope of application of detection information fusion.
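  • a tiny sketch of this step with illustrative timestamps: once touches of the same targets are paired across sensors, the offset between the two time axes is a robust average of the paired touch-time differences:

```python
cam_times   = [10.0, 10.8, 12.1, 12.4, 14.0]     # touch times of fused targets on the camera clock
radar_times = [ 3.2,  4.0,  5.3,  5.6,  7.2]     # the same touches on the radar clock

diffs = sorted(c - r for c, r in zip(cam_times, radar_times))
offset = diffs[len(diffs) // 2]                  # median difference: robust to a few bad pairs
print(offset)                                    # 6.8 s: add this to radar timestamps to align the axes
```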
  • the correction of the sensor can also be implemented.
  • the plurality of sensors include standard sensors and to-be-tested sensors, and the method may further include:
  • the processing device obtains the standard formation information of the standard sensor that corresponds to the target formation information; the processing device obtains the to-be-tested formation information of the sensor to be tested that corresponds to the target formation information; the processing device determines the difference between the to-be-tested formation information and the standard formation information; and the processing device obtains an error parameter according to the difference and the standard formation information, wherein the error parameter is used to indicate the error of the to-be-tested formation information, or to indicate a performance parameter of the sensor to be tested.
  • the standard sensor is used as the detection standard, and the error parameter is obtained according to the difference between the formation information to be tested and the standard formation information.
  • if the error parameter is used to indicate the error of the to-be-tested formation information, the information corresponding to the error parameter in the to-be-tested formation information can be corrected through the error parameter and the standard formation information;
  • if the error parameter is used to indicate a performance parameter of the sensor to be tested, performance parameters such as the false detection rate of the sensor to be tested can be determined, realizing data analysis of the sensor to be tested and supporting the selection of sensors.
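  • a toy sketch of deriving such error parameters: the sensor under test is compared with the standard sensor through their touch partition sequences, and the matched positions (taken here as if produced by the LCS sketch above) give crude missed-detection and false-detection rates; the sequences and the rate definitions are illustrative assumptions:

```python
standard_zones = [0, 1, 1, 2, 0, 2]        # standard sensor's touch partition sequence
tested_zones   = [0, 1, 2, 0, 3, 2]        # sensor under test: one miss and one spurious touch

# matched positions between the two sequences, e.g. from the LCS sketch above
matched = [(0, 0), (1, 1), (3, 2), (4, 3), (5, 5)]

missed_rate = 1 - len(matched) / len(standard_zones)   # share of real touches not reproduced
false_rate  = 1 - len(matched) / len(tested_zones)     # share of reported touches that are spurious
print(round(missed_rate, 2), round(false_rate, 2))     # 0.17 0.17
```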
  • a second aspect of the present application provides a processing device, the processing device is located in a detection system, and the detection system further includes at least two sensors, wherein the detection information acquired by each of the at least two sensors includes detection information for the same at least two targets; the processing device includes a processor and a transceiver.
  • the transceiver is configured to acquire at least two pieces of detection information from at least two sensors, wherein the at least two sensors correspond to the at least two pieces of detection information in one-to-one correspondence.
  • the processor is configured to: determine at least two corresponding formation information according to the at least two detection information, wherein each formation information is used to describe the positional relationship between objects detected by the corresponding sensor, wherein the objects include the aforementioned targets
  • target formation information is determined according to the at least two formation information, the coincidence degree of the target formation information with each of the at least two formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes the position information of each target; and according to the position information of each target, the detection information corresponding to the same target in the at least two formation information is fused.
  • the processing device is adapted to perform the method of the aforementioned first aspect.
  • a third aspect of the embodiments of the present application provides a processing device, where the device includes: a processor and a memory coupled to the processor.
  • the memory is used for storing executable instructions for instructing the processor to perform the method of the aforementioned first aspect.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where a program is stored in the computer-readable storage medium, and when the computer executes the program, the method described in the foregoing first aspect is performed.
  • a fifth aspect of the embodiments of the present application provides a computer program product.
  • when the computer program product is executed on a computer, the computer executes the method described in the foregoing first aspect.
  • FIG. 1a is a schematic diagram of the time axis alignment of multiple sensors
  • FIG. 1b is a schematic diagram of the alignment of the spatial coordinate systems of multiple sensors
  • FIG. 2 is a schematic diagram of a matching target provided by an embodiment of the present application.
  • FIG. 3a is a system schematic diagram of an information processing method provided by an embodiment of the present application.
  • FIG. 3b is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 4 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
  • FIG. 5 is a characteristic schematic diagram of an information processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a scribing method provided in an embodiment of the present application.
  • FIG. 7 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 9 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 11 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 13 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 16 is another schematic diagram of an information processing method provided by an embodiment of the present application.
  • FIG. 17 is another schematic flowchart of an information processing method provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 19 is a schematic diagram of another application scenario of the information processing method provided by the embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a processing device provided by an embodiment of the present application.
  • FIG. 21 is another schematic structural diagram of a processing device provided by an embodiment of the present application.
  • the embodiments of the present application provide an information processing method and related equipment, which are used to realize fusion of detection information detected by different sensors, so as to improve the efficiency of detection information fusion.
  • the sensor can detect objects, and for the same object, different sensors can detect different detection information.
  • cameras can detect appearance features such as shape and texture of objects
  • radar can detect motion information such as position and speed of objects.
  • FIG. 1a is a schematic diagram of the alignment of the time axes of multiple sensors.
  • the time synchronization device in the time synchronization system generates a time stamp, and transmits the time stamp to a plurality of sensors in the time synchronization system.
  • the alignment of the time axis can be achieved by detecting multiple sensors in the time synchronization system based on the same time stamp.
  • since the time stamp of the time synchronization device can only be transmitted within the time synchronization system and sensors outside the time synchronization system cannot receive the time stamp, the alignment of the time axes can only be realized within the same time synchronization system, which limits the application scenarios of detection information fusion.
  • FIG. 1b is a schematic diagram of the alignment of the spatial coordinate systems of the multi-sensors.
  • spatial calibration requires determining calibration points in the actual space and manually calibrating the position of each calibration point in the different sensor screens, for example calibrating the calibration point 4 in the screen of sensor A and the corresponding calibration point 4' in the screen of sensor B, and then manually determining the mapping relationship of the same calibration point across the different sensor images.
  • multiple calibration points need to be calibrated to achieve a complete mapping of the spatial coordinate system.
  • since spatial calibration is performed manually, there may be deviations between a person's subjective perception and the actual mapping relationship, so the calibration does not necessarily reflect the actual mapping relationship.
  • for example, for the calibration point 4 and the calibration point 4' shown in FIG. 1b, no point on the cylinder can be found that is obviously distinct from other points, so the calibration points marked in the different pictures may not actually correspond to the same physical point, causing calibration errors.
  • any other object without obviously distinguishable points, such as a sphere, is prone to the above calibration errors. Therefore, the manually calibrated mapping relationship is not necessarily accurate.
  • if the spatial calibration is inaccurate, then in the process of fusing the detection information of multiple sensors, the same real-world target may be judged as different targets, or different targets may be judged as the same target, so the fused detection information is erroneous.
  • in addition to performing spatial calibration on the images of two cameras as shown in FIG. 1b, spatial calibration can also be performed on multiple sensors of different types, for example between a camera image and a radar image. For the calibration of images from different types of sensors, the above-mentioned calibration point errors may also occur, which will not be repeated here.
  • moreover, the efficiency of manual calibration is low: multiple calibration points need to be calibrated manually, and the area under detection cannot be used during the calibration process, which limits practical operation.
  • for example, manual calibration usually requires occupying the lanes to be monitored for half a day or a whole day, and lane scheduling normally does not allow such a long occupation; in this case, spatial calibration, and hence the fusion of detection information, cannot be achieved.
  • the current alignment of the time axes of different sensors is limited by the time synchronization system, and cannot be realized when the sensors are not in the same time synchronization system.
  • the current alignment of different sensor space coordinate systems is limited by the inefficiency and low accuracy of manual calibration, which makes the fusion of detection information prone to errors and limits the scenarios where fusion can be achieved.
  • To address this, an embodiment of the present application provides an information processing method, which obtains, from the detection information of multiple sensors, the formation information between the objects reflected in that detection information. Formation information with similar characteristics is matched and determined as target formation information, that is, as detection information of the same object set observed by different sensors, so that the detection information of different sensors can be fused.
  • The method provided by this embodiment essentially reproduces, on a device, the process by which a person manually determines the same target in the pictures of different sensors.
  • Each sensor produces multiple pictures corresponding to multiple times, and the number and status of the objects reflected in each picture differ. Faced with so much information, the human eye cannot directly capture every detail and can only first identify, as a whole, the same set of objects appearing in different pictures.
  • Since this process determines the same target object set across the pictures of different sensors, it is also referred to as matching the target object set.
  • When the human eye matches the target set, an abstraction step is required: other details in the picture are ignored, and only the positional relationship between the objects is extracted, thereby abstracting the formation information between the objects.
  • FIG. 2 is a schematic diagram of a matching target provided by an embodiment of the present application.
  • In the camera picture, that is, detection information A, there are 5 motor vehicles that form a shape similar to the number "9".
  • In the radar picture, that is, detection information B, there are 5 targets that also form a shape similar to "9".
  • The two sets of five targets in these pictures have similar positional characteristics, that is, similar formation information, so they can be considered to be the same target set as reflected in the pictures of different sensors.
  • On this basis, the same single target can be identified in the pictures of different sensors according to its position within the target set.
  • For example, in detection information A detected by sensor A, the target at the bottom of formation "9" is target A; in detection information B detected by sensor B, the target at the bottom of formation "9" is target A'. It can therefore be determined that target A' and target A are the same target.
  • sensor A may be a camera
  • sensor B may be a radar
  • Sensor A and sensor B can also be other combinations, for example, sensor A is a radar and sensor B is an ETC sensor, or sensor A and sensor B are the same type of sensor, such as both radars or both cameras, which is not limited here.
  • the number of sensors is not limited.
  • more detection information can be obtained through more sensors, and the same target in the detection information can be analyzed, which is not limited here.
  • The solution of this embodiment mainly includes the following steps: 1. Acquire multiple pieces of detection information from different sensors; 2. Determine the corresponding formation information according to each piece of detection information; 3. Determine the target formation information according to the multiple pieces of formation information; 4. Fuse the detection information of different sensors for the same target according to the position information of each target in the target formation information. A minimal sketch of this pipeline is given below.
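  • The following is a minimal, hypothetical sketch of this four-step pipeline. The function names (extract_formation, match_formations, fuse_targets) and record fields are illustrative assumptions, not part of the embodiment; the matching step here simply looks for the longest identical run of position features.

```python
# Hypothetical sketch of the four-step pipeline; names and fields are illustrative only.

def extract_formation(detection_info):
    """Step 2: reduce raw detection information to formation information,
    i.e. the ordered positional relationship between detected objects."""
    return [obj["position_feature"] for obj in detection_info]

def match_formations(formation_a, formation_b):
    """Step 3: find the longest identical run shared by both formations
    (the target formation information) and return the index correspondence."""
    best = []
    for i in range(len(formation_a)):
        for j in range(len(formation_b)):
            k = 0
            while (i + k < len(formation_a) and j + k < len(formation_b)
                   and formation_a[i + k] == formation_b[j + k]):
                k += 1
            if k > len(best):
                best = [(i + n, j + n) for n in range(k)]
    return best

def fuse_targets(detection_a, detection_b, matches):
    """Step 4: merge the per-target attributes of matched objects."""
    return [{**detection_a[i], **detection_b[j]} for i, j in matches]
```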
  • FIG. 3a is a schematic diagram of a system of an information processing method provided by an embodiment of the present application.
  • the system is a detection system, and the system includes a processing device and a plurality of sensors.
  • Taking sensor A and sensor B as an example, sensor A transmits the detected detection information A to the processing device, and sensor B transmits the detected detection information B to the processing device.
  • the processing device obtains the fusion information of the target object according to the detection information A and the detection information B.
  • the devices in the detection system described in this application may have a fixed connection state or may not have a fixed connection state, and data transmission may be implemented in the form of data copying or the like.
  • As long as the detection information of a sensor can be transmitted to the processing device, the sensor and the processing device can be called a detection system, which is not limited here.
  • sensor A and sensor B can acquire detection information respectively, and then copy detection information A and detection information B to a processing device within a certain period of time, and the processing device processes detection information A and detection information B. This mode may also be referred to as offline processing.
  • FIG. 3b is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
  • the information processing method provided by the embodiment of the present application is mainly used for information fusion in a multi-sensor system.
  • a multi-sensor system can receive detection information from multiple sensors and fuse the detection information from multiple sensors.
  • The detection information may be, for example, the license plate, transaction flow information, and the like from an electronic toll collection (ETC) sensor.
  • the multi-sensor system can also obtain other detection information from other sensors, such as the license plate, model information, etc. from the camera, distance and speed information from the radar, etc., which are not limited here.
  • the information processing method provided in the embodiment of the present application realizes the fusion of detection information, and the fusion result can be applied to various scenarios, such as toll auditing on expressways, off-site overtaking, safety monitoring, and the like.
  • The fusion results can also be applied to other scenarios, such as holographic intersections at urban crossings, vehicle entry warning, and pedestrian warning, or intrusion detection on closed roads, automatic parking, and the like, which are not limited here.
  • FIG. 4 is a schematic flowchart of an information processing method provided by an embodiment of the present application. The method includes:
  • the detection information A acquired by the sensor A may include a location feature set.
  • the position feature set includes a plurality of position features, and the position features are used to represent the positional relationship between the object detected by the sensor A and the objects around the object.
  • For example, when the detection information is a picture composed of pixels, the position feature can be embodied as the distance between pixels.
  • the position feature can also be expressed in other forms, for example, a left-right relationship or a front-back relationship between pixels, which is not limited here.
  • In addition to a camera, sensor A may also be another type of sensor, such as a radar or an electronic toll collection (ETC) sensor, which is not limited here.
  • For different types of sensors, there will be corresponding position features.
  • For example, the position feature of a radar can be expressed as the distance between objects or the direction between objects.
  • The position feature of an ETC sensor can be expressed as the lane information of the vehicle and the front-rear timing relationship between vehicles, which is not limited here.
  • the detection information B acquired by the sensor B may also include a location feature set.
  • For sensor B, detection information B, and the corresponding position features, refer to the description of sensor A, detection information A, and the position feature in step 401; details are not repeated here.
  • Sensor A and sensor B may be the same type of sensor or different types of sensors.
  • sensor A and sensor B may be cameras with different angles or different radars, or sensor A may be a camera or radar, and sensor B may be ETC, etc., which is not limited here.
  • the number of sensors in the embodiment of the present application is not limited to two, and the number of sensors may be any integer greater than or equal to 2, which is not limited here.
  • Sensor A and sensor B are used as examples of sensors in the detection system. If the detection system includes more sensors, refer to the description of sensor A and sensor B in step 401 and step 402 for those sensors, which is not repeated here.
  • the types of the plurality of sensors are also not limited, and may be the same type of sensors or different types of sensors, which are not limited here.
  • After acquiring detection information A, the processing device can determine formation information A according to detection information A, where formation information A is used to indicate the positional relationship between the objects detected by sensor A.
  • the processing device may determine the formation information A according to the position feature set.
  • the processing device can determine the formation information B according to the detection information B, and the formation information B is used to indicate the positional relationship between the objects detected by the sensor B.
  • the positional relationship between the objects may include at least one of a left-right positional relationship between the objects, or a front-to-back positional relationship between the objects.
  • the processing device may determine the formation information B according to the position feature set.
  • the determination of formation information may be implemented by a method such as a scribing method or an image feature matching method.
  • step 401 and step 402 do not necessarily have a sequential relationship, that is, step 401 may be performed before or after step 402, and step 401 and step 402 may also be performed simultaneously, which is not limited here.
  • Step 403 and step 404 also have no necessary sequence relationship, that is, step 403 can be executed before or after step 404, and step 403 and step 404 can also be executed at the same time, as long as step 403 is executed after step 401 and step 404 is executed after step 402; this is not limited here.
  • If the detection system includes more sensors, the corresponding formation information should also be determined according to the detection information acquired by each of them.
  • For the process of determining the corresponding formation information, refer to the descriptions of steps 403 and 404, which are not repeated here.
  • After obtaining formation information A and formation information B, the target formation information can be determined according to formation information A and formation information B.
  • The coincidence degree between the target formation information and each of formation information A and formation information B is higher than a preset threshold, so the target formation information reflects the parts of formation information A and formation information B that belong to the same target set; a simple sketch of such a coincidence check is given below.
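  • As an illustration only (the embodiment does not prescribe a specific formula for the coincidence degree), one possible way to score how well a candidate target formation coincides with a sensor's formation sequence is the best fraction of element-wise matches over same-length windows:

```python
def coincidence_degree(candidate, formation):
    """Best fraction of element-wise matches between the candidate and any
    same-length window of the formation sequence (illustrative only)."""
    if not candidate or len(candidate) > len(formation):
        return 0.0
    best = 0
    for start in range(len(formation) - len(candidate) + 1):
        window = formation[start:start + len(candidate)]
        best = max(best, sum(1 for c, w in zip(candidate, window) if c == w))
    return best / len(candidate)

# A candidate is kept as target formation information only if its coincidence
# degree with every sensor's formation exceeds the preset threshold.
print(coincidence_degree([3, 3, 1], [1, 3, 3, 1, 3]))  # 1.0
```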
  • the formation information may have various representations, and the criteria for judging the coincidence degree are also different.
  • The processes of acquiring and processing the different kinds of formation information are explained in detail below in conjunction with the embodiments of FIG. 7 to FIG. 17 and are not repeated here.
  • The target formation information includes the position information of each target, which indicates the specific position of that target in the target set. Therefore, the entries corresponding to the same target in the detection information of different sensors can be determined according to this position information, and the corresponding pieces of detection information can be fused.
  • In the embodiment of the present application, the formation information between the objects detected by each sensor is determined from that sensor's detection information, and the target formation information is determined according to its degree of coincidence with each piece of formation information, so that the target objects are determined.
  • Because the target formation information is formation information with similar characteristics detected by different sensors, it reflects how the same target set is detected at different sensors. Therefore, for any object reflected in the target formation information, the correspondence between its detection results at different sensors can be determined, and on that basis the detection results of different sensors for the same object can be fused.
  • Obtaining fused detection information through formation information in this way can greatly improve the efficiency of obtaining fused detection information.
  • the method of the embodiment of the present application only needs to provide detection information of different sensors, and does not need to occupy the site to be observed, which expands the scope of application of detection information fusion.
  • In the foregoing steps, the corresponding formation information may be determined according to the position feature set, and in step 405 the target formation information needs to be determined according to multiple pieces of formation information.
  • The position feature set can take different forms, and there are many ways to determine the formation information, mainly the scribing method and the image feature matching method, which are described separately below.
  • The formation information may include three types of information: 1. the relative lateral positional relationship between objects, such as the left-right positional relationship or the left-right spacing between objects; 2. the relative longitudinal positional relationship between objects, such as the front-back positional relationship or the front-back distance between objects; 3. the characteristics of the object itself, such as length, width, height, and shape.
  • FIG. 5 is a schematic diagram of a feature of an information processing method provided by an embodiment of the present application.
  • the formation information may include the front-rear distance and the left-right distance between vehicles, and may also include information of each vehicle, such as vehicle model, license plate number, etc., which are not limited here.
  • FIG. 5 only takes a vehicle on the road as an example, and does not limit the objects detected by the sensor.
  • the sensor can also be used to detect other objects, such as pedestrians, obstacles, etc., which are not limited here.
  • the formation information can be represented as an overall shape, such as the shape "9" in the embodiment shown in FIG. 2 .
  • the processing efficiency of shapes or images is not as high as that of digital processing. Expressing the formation information in the form of continuous or discrete numbers can greatly improve the efficiency of data processing.
  • Converting the overall shape features into digital features can be achieved by the scribing method.
  • In the scribing method, a reference line is drawn, and information such as the timing and position at which objects touch the reference line is recorded, so that shape features are converted into numerical features that are convenient for the processing device to handle.
  • The various information about an object touching the reference line is also referred to as touch line information.
  • the touch line information may include timing information of the object touching the reference line, touch point partition information, touch point position information, touch time interval information, etc., which are not limited here.
  • The timing information represents the order in which the objects detected by the sensor touch the reference line, which reflects the front-back relationship between the objects.
  • The touch point partition information indicates which partition of the reference line contains the touch point where the object touches it.
  • FIG. 6 is a schematic diagram of a scribing method provided by an embodiment of the present application.
  • The reference line can be divided according to lanes; for example, in the figure, lane 1 corresponds to partition 1, lane 2 to partition 2, and lane 3 to partition 3.
  • The touch point position information represents the position on the reference line of the touch point where the object touches it.
  • For example, the touch point of the first vehicle in lane 1 is 1.5 meters from the left endpoint of the reference line, and the touch point of the first vehicle in lane 3 is 7.5 meters from the left endpoint of the reference line.
  • The touch time interval information represents the time interval between successive objects touching the reference line.
  • Among these, the touch point partition information and the touch point position information can be classified as the relative lateral positional relationship between objects, while the timing information and the touch time interval information can be classified as the relative longitudinal positional relationship between objects; a sketch of extracting such touch line information is given below.
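  • As a hedged illustration of the scribing method (not part of the embodiment itself), the following sketch derives the four kinds of touch line information from a list of hypothetical touch events; the event fields (t, x, lane) and the record structure are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TouchRecord:
    order: int         # timing information: touch order on the reference line
    partition: int     # touch point partition information (e.g. lane number)
    position_m: float  # touch point position (distance from the left endpoint)
    interval_s: float  # touch time interval to the previous touch event

def extract_touch_line_info(events):
    """events: (t, x, lane) tuples, one per object, recorded at the moment the
    object touches the reference line; t in seconds, x in meters."""
    events = sorted(events, key=lambda e: e[0])   # order by touch time
    records, prev_t = [], None
    for order, (t, x, lane) in enumerate(events, start=1):
        interval = 0.0 if prev_t is None else round(t - prev_t, 1)
        records.append(TouchRecord(order, lane, x, interval))
        prev_t = t
    return records

# Two touch events loosely following FIG. 6.
print(extract_touch_line_info([(10.0, 7.5, 3), (12.0, 1.5, 1)]))
```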
  • FIG. 7 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
  • the method includes:
  • For example, when the detection information is a picture composed of pixels, the position feature in the position feature set can be embodied as the distance between pixels.
  • The position feature can also be expressed in other forms, for example, a left-right relationship or a front-back relationship between pixels, which is not limited here.
  • In addition to a camera, sensor A may also be another type of sensor, such as a radar or an ETC sensor, which is not limited here.
  • For different types of sensors, there will be corresponding position features.
  • the position feature of radar can be expressed as the distance between objects or the direction between objects, etc.
  • The position feature of an ETC sensor can be expressed as the lane information of the vehicle and the front-rear timing relationship between vehicles, which is not limited here.
  • For example, when the detection information is the picture of the objects detected by the radar within its detection range, the position feature in the position feature set can be embodied as the distance between the objects.
  • the position feature can also be expressed in other forms, for example, the left-right relationship or the front-back relationship between objects, which is not limited here.
  • Sensor B may also be another type of sensor, such as a camera or an ETC sensor, which is not limited here.
  • For different types of sensors, there will be corresponding position features, which are not limited here.
  • sensor A and sensor B are only examples of sensors, and do not limit the type and quantity of sensors.
  • the touch line information is the information that the pixels of the object touch the reference line.
  • the processing device may acquire, according to the detection information A, the timing information A of the object pixel touching the reference line and the touch point partition information A.
  • FIG. 8 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the application.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information A;
  • the touch point partition information column indicates the partition of the reference line containing the touch point when each object touches the reference line, that is, touch point partition information A, where 1 represents lane 1 and 3 represents lane 3.
  • the touch line information is the information that the object touches the reference line.
  • the processing device may acquire, according to the detection information B, the timing information B of the object touching the reference line and the touch point partition information B.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information B;
  • the touch point partition information column indicates the partition of the reference line containing the touch point when each object touches the reference line, that is, touch point partition information B, where 1 represents lane 1 and 3 represents lane 3.
  • step 701 and step 702 do not have a certain sequence, and step 701 may be performed before or after step 702, or step 701 and step 702 may be performed simultaneously, which is not limited here.
  • Step 703 and step 704 have no necessary sequence: step 703 can be executed before or after step 704, or step 703 and step 704 can be executed at the same time, as long as step 703 is executed after step 701 and step 704 is executed after step 702; this is not limited here.
  • According to timing information A, the touch point partition information A can be arranged in time order to obtain touch partition sequence A.
  • Similarly, according to timing information B, the touch point partition information B can be arranged in time order to obtain touch partition sequence B; a sketch of this arrangement step follows below.
  • step 705 and step 706 do not have a certain sequence, and step 705 may be performed before or after step 706, or step 705 and step 706 may be performed at the same time, as long as step 705 is performed after step 703, step 706 It can be executed after step 704, which is not limited here.
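  • A minimal sketch of this arrangement step, assuming each touch record carries a sequence number and a partition value as in FIG. 8 (the field names are illustrative assumptions):

```python
def build_touch_partition_sequence(touch_records):
    """Arrange touch point partition information in touch order to form a
    touch partition sequence (illustrative field names)."""
    ordered = sorted(touch_records, key=lambda r: r["order"])
    return [r["partition"] for r in ordered]

# Lane values of successive touch events, loosely following FIG. 8.
seq_a = build_touch_partition_sequence(
    [{"order": 1, "partition": 3}, {"order": 2, "partition": 3},
     {"order": 3, "partition": 1}, {"order": 4, "partition": 3},
     {"order": 5, "partition": 1}])
print(seq_a)  # [3, 3, 1, 3, 1]
```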
  • Touch partition sequence A and touch partition sequence B are essentially two sequences, and the processing device can compare them. When the two sequences are found to contain identical or highly overlapping fragments, those fragments can be considered the common part of both sequences; in the embodiments of the present application, such a fragment is also referred to as the first subsequence. Because a touch partition sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects, the presence of identical or highly overlapping fragments in two sequences means that the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
  • the first subsequence is also referred to as target formation information, which represents the same or similar formation information detected by multiple sensors.
  • The coincidence degrees of the first subsequence with touch partition sequence A and with touch partition sequence B may both be higher than the first threshold.
  • the degree of coincidence is also referred to as the degree of similarity.
  • the first threshold may be 90%, and besides 90%, the first threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
  • the touch partition sequence A and the touch partition sequence B shown in FIG. 8 both include sequence fragments of (3, 3, 1, 3, 1).
  • the processing device may use the segment as the first subsequence.
  • In this case, the coincidence degrees of the first subsequence with touch partition sequence A and touch partition sequence B are both 100%.
  • The first subsequence may be determined through a longest common subsequence (LCS) algorithm.
  • All common sequences of multiple touch partition sequences can be obtained through the LCS algorithm, so as to match the identical position features of the multiple touch partition sequences. Since the LCS algorithm calculates the longest common subsequence, the first subsequence calculated by the LCS algorithm may include the longest of the subsequences whose coincidence degrees with the aforementioned touch partition sequences are all higher than the first threshold.
  • In addition, all common sequences of multiple touch partition sequences can be determined through the LCS algorithm, so as to match all fragments of the touch partition sequences that share the same position features. If several fragments are common sequences and some non-common fragments are interspersed among them, these interspersed non-common fragments can be identified. The non-common fragments reflect positional relationships that differ between sensors; in this case, they can be regarded as the result of false detections or missed detections by a sensor, and fault tolerance is applied to them, that is, the non-common fragments are still used when establishing the correspondence between the targets detected by different sensors, so that the fusion of detection information can be realized.
  • The first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch partition sequences are all higher than the first threshold. Since the positional relationships between targets may be similar by chance, the longer the determined subsequence is, the lower the possibility that the similarity is coincidental; by determining the longest such subsequence with the LCS algorithm, the target formation information of the same target set can be determined accurately.
  • For example, the positional relationship of two objects may be similar by chance, but if the standard is raised to requiring a high degree of coincidence among the positional relationships of ten objects, the possibility that ten objects coincidentally have similar positional relationships is greatly reduced. Therefore, if a first subsequence covering ten targets is determined by the LCS algorithm, it is much more likely that these ten targets are the same ten targets detected by different sensors, which reduces the chance of matching errors. A sketch of such an LCS routine is given below.
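  • For illustration, a standard dynamic-programming longest-common-subsequence routine (one possible realization of the LCS step described above, not the only one) applied to two touch partition sequences might look as follows:

```python
def longest_common_subsequence(seq_a, seq_b):
    """Classic dynamic-programming LCS; returns one longest common subsequence
    together with the matched index pairs in both inputs."""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if seq_a[i] == seq_b[j]:
                dp[i][j] = dp[i + 1][j + 1] + 1
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    # Backtrack to recover the subsequence and the index correspondence.
    i = j = 0
    pairs = []
    while i < m and j < n:
        if seq_a[i] == seq_b[j]:
            pairs.append((i, j))
            i, j = i + 1, j + 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return [seq_a[i] for i, _ in pairs], pairs

# Touch partition sequences loosely following FIG. 8.
subseq, index_pairs = longest_common_subsequence([1, 3, 3, 1, 3, 1],
                                                 [3, 3, 1, 3, 1, 1])
print(subseq)       # [3, 3, 1, 3, 1]
print(index_pairs)  # index correspondence later used to fuse per-target data
```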
  • The first subsequence is composed of multiple pieces of touch point partition information, and for each piece of touch point partition information in the first subsequence, the corresponding data can be found in touch partition sequence A and touch partition sequence B.
  • For example, the entry with sequence number 4 in touch partition sequence A has its own touch point partition information of 3, and the touch point partition information immediately before and after it is 1.
  • the single touch point partition information in the touch partition sequence or the first sub-sequence is also referred to as position information, which indicates the position of a single target in the target set.
  • An entry's own partition information is called its self-feature, and the partition information of the preceding, following, or nearby entries is called its peripheral feature.
  • the peripheral feature may also include more nearby touch point partition information, which is not limited here.
  • By finding in touch partition sequence B the entry with the same self-feature and peripheral features, namely the entry with sequence number 13, the processing device can fuse the detection information corresponding to sequence number 4 with the detection information corresponding to sequence number 13 to obtain the fusion information of the target object.
  • For example, the camera corresponding to touch partition sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 4.
  • If the object is a vehicle, information such as the model, color, and license plate of the vehicle corresponding to sequence number 4 can be detected.
  • The radar corresponding to touch partition sequence B can detect information such as the moving speed of the object corresponding to sequence number 13.
  • If the object is a vehicle, information such as the speed and acceleration of the vehicle corresponding to sequence number 13 can be detected.
  • The processing device can then fuse the aforementioned model, color, license plate, and other information with the speed, acceleration, and other information to obtain the fusion information of the vehicle, as sketched below.
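  • A minimal sketch of this per-target fusion step, assuming hypothetical dictionaries of per-sequence-number attributes (the attribute names and example values are illustrative only):

```python
def fuse_matched_targets(camera_records, radar_records, matched_pairs):
    """Merge the attributes of targets matched across two sensors.
    camera_records / radar_records: dicts keyed by sequence number;
    matched_pairs: (camera sequence number, radar sequence number) pairs."""
    fused = []
    for cam_id, radar_id in matched_pairs:
        record = {"camera_seq": cam_id, "radar_seq": radar_id}
        record.update(camera_records[cam_id])   # e.g. model, color, plate
        record.update(radar_records[radar_id])  # e.g. speed, acceleration
        fused.append(record)
    return fused

# Example following the text: camera entry 4 matches radar entry 13.
camera = {4: {"model": "sedan", "color": "white", "plate": "<plate>"}}
radar = {13: {"speed_mps": 22.4, "acceleration_mps2": 0.3}}
print(fuse_matched_targets(camera, radar, [(4, 13)]))
```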
  • the timing information represents the front-and-rear relationship between different targets touching the reference line
  • the touch point partition information represents the left-right relationship between different targets touching the reference line
  • The touch partition sequence thus reflects the positional relationship of the multiple targets touching the reference line. Since the timing information and the touch point partition information are both specific numerical values, a touch partition sequence is a set of numerical values reflecting the positional relationship between objects. The corresponding touch partition sequence is obtained from the detection information of each sensor, so the multiple touch partition sequences obtained are multiple value sets; determining whether the coincidence degree of these value sets meets the preset threshold only requires comparing the corresponding values, without complicated operations, which improves the efficiency of matching the target formation information.
  • In addition to being determined according to the timing information and the touch point partition information, the target formation information may also be determined according to the timing information and the touch time interval information.
  • FIG. 9 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
  • the method includes:
  • step 901 and step 902 refer to step 701 and step 702 in the embodiment shown in FIG. 7, and details are not repeated here.
  • the touch line information is the information that the pixels of the object touch the reference line.
  • the processing device may acquire, according to the detection information A, the timing information A and the touching time interval information A of the object pixels touching the reference line.
  • FIG. 10 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information A;
  • the touch time interval information column indicates the time difference between the moment each object touches the reference line and the moment the previous object touched it, that is, touch time interval information A, where the touch time interval information is in seconds.
  • the touch time interval information can also be in milliseconds, which is not limited here.
  • the touch line information is the information that the object touches the reference line.
  • the processing device may acquire, according to the detection information B, the timing information B and the touching time interval information B of the object touching the reference line.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information B;
  • the touch time interval information column indicates the time difference between the moment each object touches the reference line and the moment the previous object touched it, that is, touch time interval information B, where the touch time interval information is in seconds.
  • the touch time interval information can also be in milliseconds, which is not limited here.
  • step 901 and step 902 do not have a certain sequence, and step 901 may be performed before or after step 902, or step 901 and step 902 may be performed simultaneously, which is not limited here.
  • Step 903 and step 904 have no necessary sequence: step 903 can be executed before or after step 904, or step 903 and step 904 can be executed at the same time, as long as step 903 is executed after step 901 and step 904 is executed after step 902; this is not limited here.
  • the touch time interval information A can be arranged in sequence according to the time sequence, and the touch interval sequence A can be obtained.
  • the touch time interval information B can be arranged in sequence according to the time sequence, and the touch interval sequence B can be obtained.
  • step 905 and step 906 do not have a certain sequence, and step 905 may be performed before or after step 906, or step 905 and step 906 may be performed at the same time, as long as step 905 is performed after step 903, step 906 It can be executed after step 904, which is not limited here.
  • Touch interval sequence A and touch interval sequence B are essentially two sequences, and the processing device can compare them; when the two sequences are found to contain identical or highly overlapping fragments, those fragments can be considered the common part of both sequences.
  • In the embodiments of the present application, such a fragment is also referred to as the second subsequence. Because a touch interval sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects, the presence of identical or highly overlapping fragments in two sequences means that the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
  • the second subsequence is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
  • The coincidence degrees of the second subsequence with touch interval sequence A and with touch interval sequence B may both be higher than the second threshold.
  • the degree of coincidence is also referred to as the degree of similarity.
  • the second threshold may be 90%, and besides 90%, the second threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
  • The touch interval sequence A and touch interval sequence B shown in FIG. 10 both contain the sequence fragment (2.0s, 0.3s, 1.9s, 0.4s).
  • the processing device may use the segment as the second subsequence.
  • The coincidence degrees of the second subsequence with touch interval sequence A and touch interval sequence B are both 100%.
  • The second subsequence may be determined through the LCS algorithm.
  • All common sequences of multiple touch interval sequences can be obtained through the LCS algorithm, so as to match the identical position features of the multiple touch interval sequences. Since the LCS algorithm calculates the longest common subsequence, the second subsequence calculated by the LCS algorithm may include the longest of the subsequences whose coincidence degrees with the aforementioned touch interval sequences are all higher than the second threshold.
  • In addition, all common sequences of multiple touch interval sequences can be determined through the LCS algorithm, so as to match all fragments of the touch interval sequences that share the same position features. If several fragments are common sequences and some non-common fragments are interspersed among them, these interspersed non-common fragments can be identified. The non-common fragments reflect positional relationships that differ between sensors; in this case, they can be regarded as the result of false detections or missed detections by a sensor, and fault tolerance is applied to them, that is, the non-common fragments are still used when establishing the correspondence between the targets detected by different sensors, so that the fusion of detection information can be realized.
  • The second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch interval sequences are all higher than the second threshold. Since the positional relationships between targets may be similar by chance, the longer the determined subsequence is, the lower the possibility that the similarity is coincidental; by determining the longest such subsequence with the LCS algorithm, the target formation information of the same target set can be determined accurately.
  • For example, the positional relationship of two objects may be similar by chance, but if the standard is raised to requiring a high degree of coincidence among the positional relationships of ten objects, the possibility that ten objects coincidentally have similar positional relationships is greatly reduced. Therefore, if a second subsequence covering ten targets is determined by the LCS algorithm, it is much more likely that these ten targets are the same ten targets detected by different sensors, which reduces the chance of matching errors. A sketch of interval matching with a tolerance is given below.
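  • Since touch time intervals are continuous values that rarely repeat exactly across sensors, an implementation would likely compare them within a tolerance rather than by strict equality. The following sketch (an assumption for illustration, simplified to contiguous runs rather than a full LCS) adapts the common-subsequence idea to numeric interval sequences:

```python
def longest_common_run_with_tolerance(seq_a, seq_b, tol=0.05):
    """Find the longest run of touch time intervals that both sensors report
    as approximately equal (within tol seconds); returns matched index pairs."""
    best = []
    for i in range(len(seq_a)):
        for j in range(len(seq_b)):
            k = 0
            while (i + k < len(seq_a) and j + k < len(seq_b)
                   and abs(seq_a[i + k] - seq_b[j + k]) <= tol):
                k += 1
            if k > len(best):
                best = [(i + n, j + n) for n in range(k)]
    return best

# Interval values loosely following FIG. 10 (in seconds).
intervals_a = [1.2, 2.0, 0.3, 1.9, 0.4]
intervals_b = [2.0, 0.3, 1.9, 0.4, 0.8]
print(longest_common_run_with_tolerance(intervals_a, intervals_b))
# [(1, 0), (2, 1), (3, 2), (4, 3)]
```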
  • the second subsequence is composed of multiple touch time interval information.
  • For each piece of touch time interval information in the second subsequence, the corresponding data can be found in touch interval sequence A and touch interval sequence B.
  • the touch time interval information with the sequence number 3 in the touch interval sequence A has its own touch time interval information of 0.3s, and the touch time interval information before and after it is 2.0s and 1.9s, respectively.
  • the single touch time interval information in the touch interval sequence or the second subsequence is also referred to as position information, which represents the position of a single target in the target set.
  • the touch time interval information of the self is called the self feature, and the touch time interval information around or nearby is called the peripheral feature.
  • the peripheral feature may also include more nearby touch time interval information, which is not limited here.
  • By finding in touch interval sequence B the touch time interval information with the same self-feature and peripheral features, namely the entry with sequence number 12, the processing device can fuse the detection information corresponding to sequence number 3 with the detection information corresponding to sequence number 12 to obtain the fusion information of the target object.
  • For example, the camera corresponding to touch interval sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 3.
  • If the object is a vehicle, information such as the model, color, and license plate of the vehicle corresponding to sequence number 3 can be detected.
  • The radar corresponding to touch interval sequence B can detect information such as the moving speed of the object corresponding to sequence number 12.
  • If the object is a vehicle, information such as the speed and acceleration of the vehicle corresponding to sequence number 12 can be detected.
  • The processing device can then fuse the aforementioned model, color, license plate, and other information with the speed, acceleration, and other information to obtain the fusion information of the vehicle.
  • the timing information represents the relationship before and after different targets touch the reference line
  • the touch time interval information represents the time interval before and after different targets touch the reference line.
  • The touch interval sequence thus reflects the positional relationship of the multiple targets touching the reference line. Since the timing information and the touch time interval information are both specific numerical values, a touch interval sequence is a set of numerical values reflecting the positional relationship between objects. The corresponding touch interval sequence is obtained from the detection information of each sensor, so the multiple touch interval sequences obtained are multiple value sets; determining whether the coincidence degree of these value sets meets the preset threshold only requires comparing the corresponding values, without complicated operations, which improves the efficiency of matching the target formation information.
  • the target formation information may also be determined according to timing information and touch point position information.
  • FIG. 11 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
  • the method includes:
  • step 1101 and step 1102 refer to step 701 and step 702 of the embodiment shown in FIG. 7, and details are not repeated here.
  • the touch line information is the information that the pixels of the object touch the reference line.
  • the processing device may obtain, according to the detection information A, the timing information A of the object pixel touching the reference line and the touch point position information A.
  • the touch point position information A indicates the position of the touch point on the reference line.
  • the touch point position information A may represent the positional relationship between the touch points of different objects, and specifically may represent the left-right relationship between the touch points, so as to reflect the left-right relationship between the objects.
  • Specifically, touch point position information A may represent the distance between the touch point and a reference point on the reference line, and the distances of different touch points can be used to reflect the positional relationship between the touch points.
  • the distance between the touch point and the left end point of the reference line is taken as an example, but it does not limit the position information of the touch point.
  • The touch point position information may also represent the positional relationship between the touch point and any other point on the reference line, which is not limited here.
  • FIG. 12 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the application.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information A;
  • the touch point position information column indicates the distance between the touch point of each object on the reference line and the left endpoint of the reference line, that is, touch point position information A.
  • the position information of the touch point may represent the positional relationship between the touch point and any point on the reference line, which is not limited here.
  • the touch line information is the information that the object touches the reference line.
  • the processing device may acquire, according to the detection information B, the time sequence information B and the touch point position information B of the object touching the reference line.
  • the touch point position information B refer to the description of the touch point position information A in step 1103, and details are not repeated here.
  • the column of serial numbers indicates the sequence before and after each object touches the reference line, that is, timing information B;
  • the location information of the touch point may represent the location relationship between the touch point and any point on the reference line, so as to reflect the location relationship between different touch points, which is not limited here.
  • step 1101 and step 1102 do not have a necessary sequence, and step 1101 may be performed before or after step 1102, or step 1101 and step 1102 may be performed simultaneously, which is not limited here.
  • Step 1103 and step 1104 have no necessary sequence: step 1103 can be executed before or after step 1104, or step 1103 and step 1104 can be executed at the same time, as long as step 1103 is executed after step 1101 and step 1104 is executed after step 1102; this is not limited here.
  • the touch point position information A can be arranged in sequence according to the time sequence, and the touch position sequence A can be obtained.
  • the touch point position information B can be arranged in sequence according to the time sequence, and the touch position sequence B can be obtained.
  • step 1105 and step 1106 do not have a necessary sequence, and step 1105 may be performed before or after step 1106, or step 1105 and step 1106 may be performed at the same time, as long as step 1105 is performed after step 1103, step 1106 It may be executed after step 1104, which is not limited here.
  • Touch position sequence A and touch position sequence B are essentially two sequences, and the processing device can compare them; when the two sequences are found to contain identical or highly overlapping fragments, those fragments can be considered the common part of both sequences.
  • In the embodiments of the present application, such a fragment is also referred to as the third subsequence. Because a touch position sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects, the presence of identical or highly overlapping fragments in two sequences means that the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
  • the third subsequence is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
  • The coincidence degrees of the third subsequence with touch position sequence A and with touch position sequence B may both be higher than the third threshold.
  • the degree of coincidence is also referred to as the degree of similarity.
  • the third threshold may be 90%, and besides 90%, the third threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
  • the touch position sequence A and the touch position sequence B shown in FIG. 12 both include sequence fragments of (7.5m, 7.3m, 1.5m, 7.6m, 1.3m).
  • the processing device may use the segment as a third subsequence.
  • The coincidence degrees of the third subsequence with touch position sequence A and touch position sequence B are both 100%.
  • The third subsequence may be determined through the LCS algorithm.
  • All common sequences of multiple touch position sequences can be obtained through the LCS algorithm, so as to match the identical position features of the multiple touch position sequences. Since the LCS algorithm calculates the longest common subsequence, the third subsequence calculated by the LCS algorithm may include the longest of the subsequences whose coincidence degrees with the aforementioned touch position sequences are all higher than the third threshold.
  • the third subsequence is composed of multiple touch point position information.
  • For each piece of touch point position information in the third subsequence, the corresponding data can be found in touch position sequence A and touch position sequence B.
  • For example, the touch point position information of the entry with sequence number 2 in touch position sequence A is 7.3 m, and the touch point position information of the entries immediately before and after it is 7.5 m and 1.5 m, respectively.
  • The position information of a single touch point in the touch position sequence or the third subsequence is also referred to as position information, which represents the position of a single target in the target set.
  • An entry's own touch point position information is called its self-feature, and the touch point position information of the preceding, following, or nearby entries is called its peripheral feature.
  • the peripheral feature may also include more location information of nearby touch points, which is not limited here.
  • By finding in touch position sequence B the entry with the same self-feature and peripheral features, namely the entry with sequence number 11, the processing device can fuse the detection information corresponding to sequence number 2 with the detection information corresponding to sequence number 11 to obtain the fusion information of the target object.
  • For example, the camera corresponding to touch position sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 2.
  • If the object is a vehicle, information such as the model, color, and license plate of the vehicle corresponding to sequence number 2 can be detected.
  • The radar corresponding to touch position sequence B can detect information such as the moving speed of the object corresponding to sequence number 11.
  • If the object is a vehicle, information such as the speed and acceleration of the vehicle corresponding to sequence number 11 can be detected.
  • The processing device can then fuse the aforementioned model, color, license plate, and other information with the speed, acceleration, and other information to obtain the fusion information of the vehicle.
  • In addition, all common sequences of multiple touch position sequences can be determined through the LCS algorithm, so as to match all fragments of the touch position sequences that share the same position features. If several fragments are common sequences and some non-common fragments are interspersed among them, these interspersed non-common fragments can be identified. The non-common fragments reflect positional relationships that differ between sensors; in this case, they can be regarded as the result of false detections or missed detections by a sensor, and fault tolerance is applied to them, that is, the non-common fragments are still used when establishing the correspondence between the targets detected by different sensors, so that the fusion of detection information can be realized.
  • The third subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degrees with the multiple touch position sequences are all higher than the third threshold. Since the positional relationships between targets may be similar by chance, the longer the determined subsequence is, the lower the possibility that the similarity is coincidental; by determining the longest such subsequence with the LCS algorithm, the target formation information of the same target set can be determined accurately.
  • For example, the positional relationship of two objects may be similar by chance, but if the standard is raised to requiring a high degree of coincidence among the positional relationships of ten objects, the possibility that ten objects coincidentally have similar positional relationships is greatly reduced. Therefore, if a third subsequence covering ten targets is determined by the LCS algorithm, it is much more likely that these ten targets are the same ten targets detected by different sensors, which reduces the chance of matching errors.
  • Because the touch point position information represents the left-right relationship between the targets touching the reference line as continuous numerical values, the formation information of the target set can be distinguished more accurately from the formation information of other, non-target objects on the basis of these continuous values, so that the fusion of detection information for the same target is realized more accurately; a sketch of such a comparison on continuous values is given below.
  • In addition, the movement trend between the targets can be analyzed or calculated from the continuous values.
  • Other information, such as the movement trajectories of the target objects, can also be calculated, which is not limited here.
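  • As an illustration of why continuous position values sharpen the match (not a prescribed algorithm), the following sketch ranks same-length windows of one touch position sequence by how closely their values track a candidate formation, using the mean absolute difference:

```python
def best_matching_window(positions_a, positions_b):
    """Among all windows of positions_b with the same length as positions_a,
    return the start index and mean absolute error of the closest match.
    Continuous position values make this discrimination sharper than coarse
    partition labels (illustrative sketch only)."""
    n = len(positions_a)
    best_start, best_err = None, float("inf")
    for start in range(len(positions_b) - n + 1):
        window = positions_b[start:start + n]
        err = sum(abs(a - b) for a, b in zip(positions_a, window)) / n
        if err < best_err:
            best_start, best_err = start, err
    return best_start, best_err

# Positions in meters from the left endpoint, loosely following FIG. 12.
target_formation = [7.5, 7.3, 1.5, 7.6, 1.3]
candidate_sequence = [1.4, 7.5, 7.3, 1.5, 7.6, 1.3, 7.1]
print(best_matching_window(target_formation, candidate_sequence))  # (1, 0.0)
```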
  • In addition to determining the corresponding subsequences separately, the subsequences can also be combined to improve the accuracy of formation matching, as sketched further below.
  • FIG. 13 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
  • the method includes:
  • step 1301 and step 1302 refer to step 701 and step 702 in the embodiment shown in FIG. 7, and details are not repeated here.
  • the touch line information is the information that the pixels of the object touch the reference line.
  • the processing device may acquire, according to the detection information A, the timing information A of the object pixel touching the reference line, the touch point partition information A, and the touch time interval information A.
  • FIG. 14 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information A;
  • the touch point partition information column indicates the partition of the reference line containing the touch point when each object touches the reference line, that is, touch point partition information A, where 1 represents lane 1 and 3 represents lane 3.
  • The touch time interval information column indicates the time difference between the moment each object touches the reference line and the moment the previous object touched it, that is, touch time interval information A, where the touch time interval information is in seconds. In addition to seconds, the touch time interval information can also be in milliseconds, which is not limited here.
  • the touch line information is the information that the object touches the reference line.
  • the processing device may acquire, according to the detection information B, the time sequence information B of the object touching the reference line, the touch point partition information B, and the touch time interval information B.
  • The sequence number column indicates the order in which each object touches the reference line, that is, timing information B;
  • the touch point partition information column indicates the partition of the reference line containing the touch point when each object touches the reference line, that is, touch point partition information B, where 1 represents lane 1 and 3 represents lane 3.
  • The touch time interval information column indicates the time difference between the moment each object touches the reference line and the moment the previous object touched it, that is, touch time interval information B, where the touch time interval information is in seconds. In addition to seconds, the touch time interval information can also be in milliseconds, which is not limited here.
  • step 1301 and step 1302 do not have a certain sequence, and step 1301 may be performed before or after step 1302, or step 1301 and step 1302 may be performed simultaneously, which is not limited here.
  • Step 1303 and step 1304 have no necessary sequence: step 1303 can be executed before or after step 1304, or step 1303 and step 1304 can be executed at the same time, as long as step 1303 is executed after step 1301 and step 1304 is executed after step 1302; this is not limited here.
  • For the step in which the processing device acquires touch partition sequence A according to timing information A and the touch point partition information, refer to step 705 in the embodiment shown in FIG. 7, which is not repeated here.
  • For the step in which the processing device acquires touch interval sequence A according to timing information A and touch time interval information A, refer to step 905 in the embodiment shown in FIG. 9, which is not repeated here.
  • For the step in which the processing device acquires touch partition sequence B according to timing information B and the touch point partition information, refer to step 706 in the embodiment shown in FIG. 7, which is not repeated here.
  • For the step in which the processing device acquires touch interval sequence B according to timing information B and touch time interval information B, refer to step 906 in the embodiment shown in FIG. 9, which is not repeated here.
  • step 1305 and step 1306 do not have a fixed order: step 1305 may be performed before or after step 1306, or the two steps may be performed at the same time, as long as step 1305 is performed after step 1303 and step 1306 is performed after step 1304, which is not limited here.
  • the touch partition sequence A and the touch partition sequence B are essentially two sequences, and the processing device can compare them. When the two sequences are found to contain identical or highly overlapping sequence fragments, those fragments can be regarded as the common part of the two sequences. In the embodiments of the present application, such a sequence fragment is also referred to as the first subsequence. Because a touch partition sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects, identical or highly overlapping fragments in the two sequences mean that the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
  • the first subsequence is also referred to as target formation information, which represents the same or similar formation information detected by multiple sensors.
  • since a sensor has a certain missed detection rate, the first subsequence is not required to coincide completely with the touch partition sequence A and the touch partition sequence B; it is sufficient that its coincidence degree with each of them is higher than the first threshold.
  • the degree of coincidence is also referred to as the degree of similarity.
  • the first threshold may be 90%, and besides 90%, the first threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
  • the touch partition sequence A and the touch partition sequence B shown in FIG. 8 both include sequence fragments of (3, 3, 1, 3, 1).
  • the processing device may use the segment as the first subsequence.
  • the coincidence degrees of the first subsequence with the touch partition sequence A and the touch partition sequence B are both 100%, as illustrated by the sketch below.
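  • The matching of the two touch partition sequences can be illustrated with a minimal Python sketch. This is an assumed implementation based on a classic longest-common-subsequence (LCS) dynamic program; the sequence values and the 90% threshold are the illustrative values from the example above, not prescribed by the patent.

```python
def lcs(a, b):
    """Return one longest common subsequence of the lists a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    # backtrack to recover the common fragment
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

touch_partition_a = [3, 3, 1, 3, 1]   # lane of each crossing seen by sensor A
touch_partition_b = [3, 3, 1, 3, 1]   # lane of each crossing seen by sensor B
first_subsequence = lcs(touch_partition_a, touch_partition_b)
coincidence_a = len(first_subsequence) / len(touch_partition_a)
coincidence_b = len(first_subsequence) / len(touch_partition_b)
if min(coincidence_a, coincidence_b) > 0.9:   # first threshold, e.g. 90%
    print("target formation information:", first_subsequence)
```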
  • the touch interval sequence A and the touch interval sequence B are essentially two sequences, and the processing device can compare them; when the two sequences are found to contain identical or highly overlapping sequence fragments, those fragments can be regarded as the common part of the two sequences.
  • such a sequence fragment is also referred to as the second subsequence. Because a touch interval sequence reflects the positional relationship between the objects detected by a sensor, that is, the formation information between the objects, identical or highly overlapping fragments in the two sequences mean that the object sets corresponding to those fragments have the same positional relationship, that is, the same formation information. When different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
  • the second subsequence is also called target formation information, which represents the same or similar formation information detected by multiple sensors.
  • since a sensor has a certain missed detection rate, the second subsequence is not required to coincide completely with the touch interval sequence A and the touch interval sequence B; it is sufficient that its coincidence degree with each of them is higher than the second threshold.
  • the degree of coincidence is also referred to as the degree of similarity.
  • the second threshold may be 90%, and besides 90%, the second threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
  • the touch interval sequence A and the touch interval sequence B shown in FIG. 10 both contain the sequence fragment (2.0 s, 0.3 s, 1.9 s, 0.4 s).
  • the processing device may use the segment as the second subsequence.
  • the coincidence degrees of the second subsequence with the touch interval sequence A and the touch interval sequence B are both 100%; a matching sketch that tolerates small timing differences follows.
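  • A sketch of matching the touch interval sequences, under the assumption that two intervals "match" when they differ by at most a small tolerance (0.1 s here, an invented value); the patent itself only requires the coincidence degree to exceed the second threshold.

```python
import math

def lcs_length_with_tolerance(a, b, tol=0.1):
    """Length of the longest common subsequence of two float sequences,
    treating values within `tol` of each other as equal."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if math.isclose(a[i], b[j], abs_tol=tol):
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

touch_interval_a = [2.0, 0.3, 1.9, 0.4]
touch_interval_b = [2.0, 0.3, 1.9, 0.4]
common = lcs_length_with_tolerance(touch_interval_a, touch_interval_b)
coincidence = common / max(len(touch_interval_a), len(touch_interval_b))
print("coincidence degree:", coincidence)   # 1.0, above the second threshold
```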
  • the objects indicated by the first subsequence (3, 3, 1, 3, 1), on the sensor A side have serial numbers from 1 to 5, and correspond to the objects on the sensor B side with serial numbers from 10 to 14.
  • the object set corresponding to the first subsequence is also referred to as the first object set.
  • the objects indicated by the second subsequence (2.0s, 0.3s, 1.9s, 0.4s), on the sensor A side, have serial numbers 2 to 5, and correspond to the objects on the sensor B side with serial numbers 11 to 14.
  • the object set corresponding to the second subsequence is also referred to as the second object set.
  • take the intersection of the two object sets; that is, on the sensor A side, take the intersection of the objects with serial numbers 1 to 5 and the objects with serial numbers 2 to 5, and determine that the intersection is the set of objects with serial numbers 2 to 5.
  • on the sensor B side, the intersection is the set of objects with serial numbers 11 to 14.
  • the intersection of the first object set and the second object set is also referred to as a target object set.
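  • A minimal sketch of forming the target object set as the intersection of the two object sets; the serial-number ranges are the illustrative values from the example above.

```python
# Sensor A side: objects matched by the first and second subsequences
first_object_set_a = set(range(1, 6))    # serial numbers 1..5
second_object_set_a = set(range(2, 6))   # serial numbers 2..5
target_objects_a = first_object_set_a & second_object_set_a   # {2, 3, 4, 5}

# Sensor B side
first_object_set_b = set(range(10, 15))  # serial numbers 10..14
second_object_set_b = set(range(11, 15)) # serial numbers 11..14
target_objects_b = first_object_set_b & second_object_set_b   # {11, 12, 13, 14}

print(sorted(target_objects_a), sorted(target_objects_b))
```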
  • the first subsequence is composed of multiple touch point partition information, and for each touch point partition information in the first subsequence, corresponding data can be found in the touch partition sequence A and the touch partition sequence B .
  • the partition information of the touch point whose sequence number is 4 in the touch partition sequence A is 3, and the partition information of the front and rear touch points is 1.
  • the single touch point partition information in the touch partition sequence or the first sub-sequence is also referred to as position information, which indicates the position of a single target in the target set.
  • the partition information of the touch point itself is called the self-feature;
  • the partition information of the preceding or nearby touch points is called the peripheral feature;
  • the peripheral feature may also include the partition information of more nearby touch points, which is not limited here.
  • the processing device can fuse the detection information corresponding to the serial number 4 with the detection information corresponding to the serial number 13 to obtain the fusion information of the target object.
  • the appearance information such as the size and shape of the object corresponding to the serial number 4 can be detected.
  • information such as the model, color, license plate and other information of the vehicle corresponding to serial number 4 can be detected.
  • the radar corresponding to the touch partition sequence B can detect information such as the moving speed of the object corresponding to serial number 13.
  • information such as vehicle speed and acceleration corresponding to serial number 13 can be detected.
  • the processing device can fuse the aforementioned model, color, license plate and other information with vehicle speed, acceleration and other information to obtain the fusion information of the vehicle.
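  • A sketch of the fusion step for one matched pair of detections: the camera record (serial number 4) and the radar record (serial number 13) are merged into a single fused record. The field names and values are invented for illustration.

```python
camera_record = {"serial": 4, "model": "sedan", "color": "white", "plate": "A12345"}
radar_record = {"serial": 13, "speed_mps": 22.4, "accel_mps2": 0.8}

# Merge the two attribute dictionaries; keys from the radar record win on conflict.
fused = {**camera_record, **radar_record}
fused["source_serials"] = (camera_record["serial"], radar_record["serial"])
del fused["serial"]          # keep the per-sensor serial numbers only in source_serials
print(fused)                 # fusion information of the target
```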
  • detection information with the same self-feature and surrounding features in the second subsequence may also be fused.
  • for the description of the self-features and peripheral features of the second subsequence, refer to step 908 of the embodiment shown in FIG. 9; details are not repeated here.
  • the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence are used to determine the intersection of the first object set and the second object set, and the intersection is used as the target object set.
  • the objects in the intersection correspond to the first subsequence, that is, similar touch partition information can be obtained from the detection information of different sensors; at the same time, the objects in the intersection correspond to the second subsequence, that is, they also have similar touch time interval information according to the detection information of the different sensors.
  • besides the intersection of the objects corresponding to the first subsequence and the second subsequence, the intersection of the objects corresponding to other subsequences can also be taken, for example the intersection of the objects corresponding to the first subsequence and a third subsequence,
  • or the intersection of the objects corresponding to the second subsequence and the third subsequence, or the intersection of the objects corresponding to another subsequence and the objects corresponding to any one of the first to third subsequences.
  • subsequences are also used to represent the positional relationship between objects, such as the distance or direction between objects, etc., which are not limited here.
  • suitable subsequences can be flexibly selected for operation, which improves the feasibility and flexibility of the scheme.
  • the intersection between the corresponding objects of more subsequences can also be taken, for example, taking the first subsequence, the second subsequence and the third subsequence Subsequences correspond to intersections between objects.
  • the greater the number of subsequences taken, the more kinds of similar information representing the positional relationship of objects can be obtained from the detection information of the multiple sensors, and the higher the possibility that the object sets corresponding to the detection information are the same set of objects. Therefore, by screening the intersection of the objects corresponding to multiple subsequences, the formation information of the targets can be distinguished more accurately from the formation information of non-targets, so that the fusion of detection information for the same target is realized more accurately.
  • the touch line information is obtained from the position feature set. Since the touch line information is the information of objects touching the reference line, data with specific values or specific position features, such as the touch time, the touch interval and the touch position, can be obtained from the touching of the reference line. Therefore, from the specific values or position features of the line touches of multiple targets, a collection of touch line data can be obtained, such as a sequence of touch times, a sequence of touch intervals, or a distribution of touch positions. Since these collections of touch line data all have specific values or position features, they can be computed directly without further data processing, so that the target formation information whose coincidence degree meets the preset threshold can be determined quickly; a sketch of building these collections from raw crossing events follows.
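  • A sketch of deriving the timing information, the touch partition sequence and the touch interval sequence from a list of line-crossing events. The event format (crossing time, lane) is an assumption; the values are chosen to reproduce the example sequences used above.

```python
# (crossing time in seconds, lane of the touch point) for one sensor - assumed input
events = [(10.0, 3), (12.0, 3), (12.3, 1), (14.2, 3), (14.6, 1)]

events.sort(key=lambda e: e[0])                      # timing information: order of touches
touch_partition_seq = [lane for _, lane in events]   # touch partition sequence
touch_interval_seq = [round(t1 - t0, 1)              # touch interval sequence
                      for (t0, _), (t1, _) in zip(events, events[1:])]

print(touch_partition_seq)   # [3, 3, 1, 3, 1]
print(touch_interval_seq)    # [2.0, 0.3, 1.9, 0.4]
```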
  • in addition to determining the formation information by the line-drawing (scribing) method, it can also be determined by other methods, such as an image feature matching method.
  • formation information can be represented as an overall shape.
  • this abstracted overall shape can be represented by image features.
  • the method of determining the formation information by using the overall image features is called the image feature matching method.
  • FIG. 15 is a schematic flowchart of an information processing method provided by an embodiment of the present application. The method includes:
  • steps 1501 and 1502 refer to steps 701 and 702 in the embodiment shown in FIG. 7, and details are not repeated here.
  • the processing device can distinguish different objects according to the pixels in the picture, and mark feature points on the objects.
  • the shape composed of each feature point is used as the initial target group distribution map A.
  • the labeling of the feature points may follow a uniform rule.
  • the center point of the front of the vehicle may be used as the feature point.
  • it can also be other points, such as the center point of the license plate, etc., which is not limited here.
  • FIG. 16 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
  • the center point of the license plate is marked, and the marked points are connected to form the initial target group distribution map A, which has a shape similar to the number "9".
  • the corresponding shape feature can be extracted by a scale-invariant feature transform (SIFT) algorithm, so as to obtain the initial target group distribution map A.
  • the detection information B is a picture of the objects detected by the radar within its detection range;
  • each object detected by the radar carries label information in the picture;
  • the label information represents the corresponding object.
  • the processing device may use the shape formed by each label information in the picture as the initial target group distribution map B.
  • the locations where the annotation information is located are connected to form an initial target group distribution map B, which also has a shape similar to the number "9".
  • the corresponding shape feature can be extracted by the SIFT algorithm, so as to obtain the initial target group distribution map B.
  • the processing device can obtain a standard-perspective view of the initial target group distribution map A through a perspective transformation algorithm, and use that standard-perspective view as the target group distribution map A.
  • the processing device can obtain a standard-perspective view of the initial target group distribution map B through the perspective transformation algorithm, and use that standard-perspective view as the target group distribution map B; a homography-based sketch of this step follows.
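  • A sketch of the standard-perspective step using an OpenCV homography. The four reference-point correspondences and the feature-point coordinates are assumptions for illustration; in practice they could come from known road geometry.

```python
import numpy as np
import cv2

# Four corners of a reference region in the sensor A picture and in the
# standard (bird's-eye) view - assumed correspondences.
src_quad = np.float32([[100, 400], [540, 400], [620, 80], [20, 80]])
dst_quad = np.float32([[0, 0], [300, 0], [300, 600], [0, 600]])
H = cv2.getPerspectiveTransform(src_quad, dst_quad)

# Feature points marked on the detected vehicles (e.g. licence-plate centres).
pts = np.float32([[120, 380], [300, 350], [310, 200]]).reshape(-1, 1, 2)
standard_view_pts = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
print(standard_view_pts)     # target group distribution map A in the standard view
```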
  • the target group distribution map A and the target group distribution map B are two shapes, and the processing device can compare the image features of these two shapes.
  • the feature set is the common part of the two image features.
  • the feature set is also referred to as an image feature set, because the image features reflect the positional relationship between the objects detected by the sensors, that is, the formation information between the objects.
  • when the two image features include the same feature set, or a feature set with a high degree of coincidence, the object sets corresponding to that feature set in the two image features have the same positional relationship, that is, the same formation information.
  • when different sensors detect the same or similar formation information, the two sensors can be considered to have detected the same set of objects.
  • the image feature set is also called the target formation information, which represents the same or similar formation information detected by multiple sensors.
  • since a sensor has a certain missed detection rate, the image feature set is not required to coincide completely with the features in the target group distribution map A and the target group distribution map B;
  • it is sufficient that the coincidence degrees of the image feature set with the target group distribution map A and the target group distribution map B are both higher than the third threshold.
  • the degree of coincidence is also referred to as the degree of similarity.
  • the third threshold may be 90%, and besides 90%, the third threshold may also be other values, such as 95%, 99%, etc., which are not limited here.
  • the image feature sets of the distribution maps of different target groups may be matched through a face recognition algorithm or a fingerprint recognition algorithm.
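  • The patent mentions face-recognition or fingerprint-recognition algorithms for this matching; the sketch below substitutes one possible realization, SIFT descriptor matching with a ratio test, assuming the two distribution maps have been rendered as grayscale images of the marked points.

```python
import cv2

def image_feature_set(map_a_img, map_b_img, ratio=0.75):
    """Match SIFT descriptors of two target group distribution map images and
    return the surviving point pairs as the image feature set."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(map_a_img, None)
    kp_b, des_b = sift.detectAndCompute(map_b_img, None)
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:   # Lowe's ratio test
            good.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return good   # matched positions in map A and map B
```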
  • the image feature set is composed of multiple annotation information or annotation points.
  • for each piece of annotation information or each annotation point in the image feature set, corresponding data can be found in the target group distribution map A and the target group distribution map B.
  • for example, consider the marked point at the bottom of the shape "9" in the target group distribution map A.
  • a single piece of label information or a single label point in a target group distribution map or in the image feature set is also referred to as position information, which represents the position of a single target in the target set.
  • label information with the same position can also be found in the target group distribution map B, namely the label information at the bottom of the shape "9" in the target group distribution map B. Since the two annotations are both in the image feature set and have the same position feature, they can be considered to reflect the same object. Therefore, the processing device can fuse the detection information corresponding to the label point at the bottom of the shape "9" in map A with the detection information corresponding to the label information at the bottom of the shape "9" in map B, to obtain the fusion information of the target.
  • the camera corresponding to the target group distribution map A can detect the appearance information such as the size and shape of the object.
  • information such as the model, color, license plate and other information of the corresponding vehicle can be detected.
  • the radar corresponding to the target group distribution map B can detect the moving speed of the object and other information.
  • information such as the speed and acceleration of the corresponding vehicle can be detected.
  • the processing device can fuse the aforementioned model, color, license plate and other information with vehicle speed, acceleration and other information to obtain the fusion information of the vehicle.
  • a plurality of corresponding initial target group distribution maps are obtained according to the detection information from different sensors, a plurality of corresponding target group distribution maps are obtained through a perspective transformation algorithm, and then the image feature set of the multiple target group distribution maps is obtained and used as the target formation information.
  • an image feature set whose coincidence degree with each of the multiple target group distribution maps is higher than a preset threshold is determined. Since image features can intuitively reflect the positional relationship between the objects displayed in an image, the image feature set determined from multiple target group distribution maps can intuitively reflect detection results with similar positional relationships, so the detection results of different sensors for the same target group can be matched and the fusion of detection information realized accurately.
  • the image feature matching method and the scribing method can also be combined to obtain more accurate results.
  • the image feature matching method is combined with the scribing method.
  • FIG. 17 is a schematic flowchart of an information processing method provided by an embodiment of the present application.
  • the method includes:
  • steps 1701 and 1702 refer to steps 701 and 702 in the embodiment shown in FIG. 7, and details are not described herein again.
  • the touch line information includes timing information of the object touching the reference line, touch point partition information, touch point position information, touch time interval information, and the like.
  • the processing device may acquire any of the foregoing contact line information according to the detection information, for example, the timing information A and the touch point partition information A may be acquired according to the detection information A.
  • for the acquisition process of the timing information A and the touch point partition information A, refer to step 703 of the embodiment shown in FIG. 7; details are not repeated here.
  • besides the timing information A and the touch point partition information A, the processing device may also acquire other touch line information, such as the timing information A and the touch time interval information A shown in step 903 of the embodiment shown in FIG. 9, or the timing information A and the touch point position information A shown in step 1103 of the embodiment shown in FIG. 11, or the timing information A, the touch point partition information A and the touch time interval information A shown in step 1303 of the embodiment shown in FIG. 13, which is not limited here.
  • whichever type of touch line information the processing device obtains according to the detection information A in step 1703, the same type of touch line information should correspondingly be obtained according to the detection information B.
  • for the process of obtaining the touch line information, refer to the embodiments shown in FIG. 7, FIG. 9, FIG. 11 or FIG. 13, which is not repeated here.
  • the object touches the reference line only for an instant, so the touch line information can identify the moment to which the detection information corresponds.
  • the processing device may determine the initial target group distribution map A according to the detection information A at the moment reflected by the touch line information A.
  • the initial target group distribution map A obtained here reflects the formation information at the moment of the touch line information A.
  • the processing device can determine the touch line information B that has the same formation information as the touch line information A; since the touch line information mainly reflects the formation information of the object set, the touch line information B can be considered to correspond to the same object set and moment as the touch line information A.
  • the processing device may determine the initial target group distribution map B according to the detection information B at the moment reflected by the touch line information B.
  • the initial target group distribution map B obtained here reflects the formation information at the moment of the touch line information B.
  • step 1504 For the process of acquiring the initial target group distribution map B, refer to step 1504 in the embodiment shown in FIG. 15 , and details are not repeated here.
  • Steps 1707 to 1709 refer to steps 1505 to 1507 of the embodiment shown in FIG. 15 , and details are not repeated here.
  • because images taken at close moments are highly similar, if the same moment is not determined, initial target group distribution maps of nearby moments will interfere when the initial target distribution maps from different sensors are matched,
  • leading to distribution map matching errors and an incorrectly acquired image feature set, so that detection information of different moments is fused and the fusion of the detection information is wrong.
  • This error can be avoided by using the contact line information.
  • multiple initial target group distribution maps are determined from the touch line information, and these initial target group distribution maps have the same touch line information, indicating that the multiple initial target group distribution maps
  • were acquired at the same moment, which ensures that the fused detection information was acquired at the same moment and improves the accuracy of detection information fusion.
  • the methods described in the embodiments of the present application can not only be used to obtain fusion information, but also can be used for other purposes, such as realizing the mapping of the spatial coordinate system of different sensors, realizing the mapping of the time axis of different sensors, error correction of sensors or filtering, etc.
  • the plurality of sensors may include a first sensor and a second sensor, wherein the space coordinate system corresponding to the first sensor is a standard coordinate system, and the space coordinate system corresponding to the second sensor is a target coordinate system.
  • the space coordinate system corresponding to the first sensor is a standard coordinate system
  • the space coordinate system corresponding to the second sensor is a target coordinate system.
  • the processing device determines the mapping relationship between the multiple standard point information and the multiple target point information according to the fusion detection information, wherein the fusion detection information is obtained by fusing the detection information corresponding to the same target in the multiple formation information.
  • the standard point information represents the position information of each object in the target object set in the standard coordinate system
  • the target point information represents the position information of each object in the target object set in the target coordinate system, where the multiple pieces of standard point information are in one-to-one correspondence with the multiple pieces of target point information.
  • the processing device can determine the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
  • the mapping relationship between the multiple pieces of standard point information and the multiple pieces of target point information is determined from the fused detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is determined through the mapping relationship between the standard point information and the target point information. With the method described in the embodiments of the present application, the mapping of coordinate systems between different sensors can be realized as long as detection information from the different sensors can be acquired. The subsequent steps, such as determining the target formation information and mapping the point information, can be performed by the processing device itself, without manual calibration and mapping. Because the processing device matches the target formation information, the accuracy of the device's computation improves the accuracy of the point information mapping. At the same time, as long as detection information from different sensors can be obtained, the fusion of detection information and the mapping of the coordinate systems can be realized, which avoids the scenario restrictions caused by manual calibration and ensures the accuracy and universality of detection information fusion.
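  • A sketch of recovering the mapping between the standard coordinate system and the target coordinate system from the matched point pairs with a least-squares affine fit; the point coordinates are placeholders, and an affine model is only one possible assumption for the mapping.

```python
import numpy as np

# Matched positions of the same targets: standard point information (sensor 1)
# and target point information (sensor 2) - placeholder values.
standard_pts = np.array([[1.0, 2.0], [3.5, 2.2], [5.1, 4.8], [7.0, 6.1]])
target_pts = np.array([[11.2, -3.1], [13.8, -2.8], [15.3, -0.2], [17.1, 1.0]])

# Solve target_i ≈ A @ standard_i + t (6 unknowns) in the least-squares sense.
n = len(standard_pts)
X = np.hstack([standard_pts, np.ones((n, 1))])           # n x 3 design matrix
params, *_ = np.linalg.lstsq(X, target_pts, rcond=None)  # 3 x 2 solution
A, t = params[:2].T, params[2]
print("linear part:\n", A, "\ntranslation:", t)
```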
  • the processing device calculates the time difference between the time axes of the multiple sensors according to the fusion result of the detection information corresponding to the same target in the multiple formation information. Through this time difference, the time axis mapping between different sensors can be realized.
  • the time axes of different sensors can be aligned according to the time difference.
  • the time axis alignment method provided by the embodiments of the present application can be implemented as long as the detection information of the different sensors can be obtained; it does not require the multiple sensors to be in the same time synchronization system, which expands the application scenarios of time axis alignment between different sensors and also expands the applicable scope of information fusion. A sketch of estimating this time difference from the fused detections follows.
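  • A sketch of estimating the time-axis offset between two sensors from the fused detections: for every matched pair, the difference of the two crossing timestamps is taken, and the median is used as the offset. The timestamps are placeholders.

```python
import statistics

# (timestamp on sensor A's clock, timestamp on sensor B's clock) for the
# same fused target crossing the reference line - placeholder values.
matched_crossings = [(10.00, 13.52), (12.01, 15.50), (12.30, 15.82), (14.21, 17.70)]

offset = statistics.median(b - a for a, b in matched_crossings)
print(f"sensor B's clock is ahead of sensor A's by about {offset:.2f} s")
```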
  • the plurality of sensors may include a standard sensor and a sensor to be tested, and the method may further include:
  • the processing device obtains the standard formation information corresponding to the target formation information at the standard sensor; the processing device obtains the formation information to be tested corresponding to the target formation information at the sensor to be tested; the processing device determines the difference between the formation information to be tested and the standard formation information; the processing device obtains error parameters according to the difference and the standard formation information, where the error parameters are used to indicate the error of the formation information to be tested, or to indicate the performance parameters of the sensor to be tested.
  • FIG. 18 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application.
  • if sensor B detects a piece of data by mistake, such as the v6a6 data in the figure, it can be confirmed from the difference between the touch partition sequence A and the touch partition sequence B that the data with serial number 15 was falsely detected by sensor B.
  • the false detection information of the sensor can be obtained to calculate the false detection rate of the sensor to evaluate the performance of the sensor.
  • FIG. 19 is a schematic diagram of an application scenario of the information processing method provided by the embodiment of the present application. As shown in FIG. 19, if sensor B misses a piece of data, namely the data in lane 3 corresponding to serial number 2 in the figure, it can be determined from the difference between the touch partition sequence A and the touch partition sequence B that a target object was missed between serial number 10 and serial number 11.
  • the missed detection information of the sensor can be obtained, so as to calculate the missed detection rate of the sensor to evaluate the performance of the sensor.
  • the standard sensor is used as the detection standard, and the error parameter is obtained according to the difference between the formation information to be tested and the standard formation information.
  • the error parameter is used to indicate the error of the formation information to be measured
  • the information corresponding to the error parameter in the formation information to be measured can be corrected through the error parameter and the standard formation information;
  • the error parameter is used to indicate the performance parameter of the sensor to be measured
  • the performance parameters such as the false detection rate of the sensor to be tested can be determined, and the data analysis of the sensor to be tested can be realized to realize the selection of the sensor.
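  • A sketch of deriving error parameters for a sensor under test by aligning its touch partition sequence with the standard sensor's sequence. The sequences are placeholders, and Python's difflib is used here only as a stand-in for the alignment step.

```python
from difflib import SequenceMatcher

standard_seq = [3, 3, 1, 3, 1, 3]        # standard sensor (placeholder values)
under_test_seq = [3, 3, 1, 1, 3, 1, 3]   # sensor under test, one extra crossing

sm = SequenceMatcher(a=standard_seq, b=under_test_seq)
false_det = missed = 0
for op, i1, i2, j1, j2 in sm.get_opcodes():
    if op in ("insert", "replace"):
        false_det += j2 - j1    # present only in the sensor under test: false detections
    if op in ("delete", "replace"):
        missed += i2 - i1       # present only in the standard sensor: missed detections

false_detection_rate = false_det / len(under_test_seq)
missed_detection_rate = missed / len(standard_seq)
print(false_detection_rate, missed_detection_rate)
```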
  • the following describes a processing device corresponding to the information processing method in the embodiments of the present application.
  • FIG. 20 is a schematic structural diagram of a processing device provided by an embodiment of the present application.
  • the processing device 2000 is located in a detection system, the detection system further includes at least two sensors, wherein the detection information acquired by the at least two sensors includes detection information of the at least two sensors on the same at least two targets respectively, the Processing device 2000 may include processor 2001 and transceiver 2002 .
  • the transceiver 2002 is configured to acquire at least two pieces of detection information from at least two sensors, wherein the at least two sensors are in one-to-one correspondence with the at least two pieces of detection information.
  • the processor 2001 is configured to: determine corresponding at least two pieces of formation information according to the at least two pieces of detection information, where each piece of formation information is used to describe the positional relationship between the objects detected by the corresponding sensor, and the objects include the aforementioned targets; determine target formation information according to the at least two pieces of formation information, where the coincidence degree between the target formation information and each of the at least two pieces of formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes the position information of each target; and fuse, according to the position information of any one of the targets, the detection information corresponding to the same target in the at least two pieces of formation information.
  • the detection information includes a position feature set
  • the position feature set includes at least two position features
  • the position features represent the positional relationship between the object detected by the corresponding sensor and the objects around the object.
  • the processor 2001 is specifically configured to: acquire corresponding at least two pieces of touch line information according to the at least two position feature sets, where each of the at least two pieces of touch line information is used to describe the information of the objects detected by the corresponding sensor touching the reference line, and the at least two pieces of touch line information are in one-to-one correspondence with the at least two position feature sets; and determine the corresponding at least two pieces of formation information respectively according to the at least two pieces of touch line information, where the at least two pieces of touch line information are in one-to-one correspondence with the at least two pieces of formation information.
  • the touch line information includes the timing information and the touch point partition information corresponding to the objects detected by the sensor touching the reference line, and the touch point partition information represents the partition, in the reference line, of the touch point at which the object touches the reference line.
  • the processor 2001 is specifically configured to: acquire a first subsequence of the at least two touch partition sequences and use the first subsequence as the target formation information, where the coincidence degrees of the first subsequence with the at least two touch partition sequences are all higher than the first threshold; and fuse, according to the touch point partition information corresponding to each target in the first subsequence, the detection information corresponding to the same target in the at least two touch partition sequences.
  • the touch line information includes time sequence information and touch time interval information corresponding to the object detected by the sensor touching the reference line, and the touch time interval information represents the time interval before and after the object touches the reference line ;
  • the formation information includes the touch interval sequence, and the touch interval sequence represents the distribution of the time interval when the object detected by the corresponding sensor touches the reference line.
  • the processor 2001 is specifically configured to: acquire a second subsequence of the at least two touch interval sequences and use the second subsequence as the target formation information, where the coincidence degrees of the second subsequence with the at least two touch interval sequences are all higher than the second threshold; and fuse, according to the touch time distribution information corresponding to each target in the second subsequence, the detection information corresponding to the same target in the at least two touch interval sequences.
  • the touch line information includes the timing information corresponding to the object detected by the sensor touching the reference line, the touch point partition information and the touch time interval information, and the touch point partition information Represents the partition information of the touch point in the baseline where the object touches the baseline, and the touch time interval information represents the time interval before and after the object touches the baseline;
  • the formation information includes the touch partition sequence and the touch interval sequence.
  • the touch partition sequence represents the temporal relationship before and after the location of the partition corresponding to the object detected by the sensor touching the reference line
  • the touch interval sequence represents the distribution of the time interval corresponding to the object detected by the sensor touching the reference line.
  • the processor 2001 is specifically configured to: acquire a first subsequence of at least two touch partition sequences, and the coincidence degrees of the first subsequence and the at least two touch partition sequences are all higher than a first threshold; acquire at least two touch intervals In the second subsequence of the sequence, the coincidence degrees of the second subsequence and at least two touch interval sequences are higher than the second threshold; determine the intersection of the first object set and the second object set, and use the intersection as the target object set, where , the first set of objects is the set of objects corresponding to the first subsequence, and the second set of objects is the set of objects corresponding to the second subsequence; the touch partition sequence and touch interval sequence of the target object set are used as the Target formation information.
  • the formation information includes a target group distribution map
  • the target group distribution map represents the positional relationship between objects.
  • the processor 2001 is specifically configured to: obtain at least two corresponding initial target group distribution maps according to at least two position feature sets, and the initial target distribution maps represent the positional relationship between objects detected by the corresponding sensors; Acquiring at least two standard perspective maps of the initial target group distribution maps, and using the at least two standard perspective maps as the corresponding at least two target group distribution maps, wherein the position information of the target group distribution map includes the target object distribution information of the target object , the target object distribution information represents the position of the target object in the object detected by the corresponding sensor; the image feature sets of at least two target group distribution maps are obtained, and the image feature sets are used as the target formation information, wherein the image feature sets are the same as The coincidence degree of the at least two target group distribution maps is higher than the third threshold; according to the target object distribution information corresponding to each target in the image feature set, the detection information corresponding to the same target in the at least two target group distribution maps is fused .
  • the processor 2001 is further configured to: acquire, according to the at least two position feature sets, at least two pieces of touch line information of the corresponding targets in the image feature set, where each of the at least two pieces of touch line information is used to describe the information of the objects detected by the corresponding sensor touching the reference line, and the at least two pieces of touch line information are in one-to-one correspondence with the at least two position feature sets.
  • the processor 2001 is specifically configured to acquire the corresponding at least two initial target group distribution maps according to the at least two pieces of touch line information, where the objects in the at least two initial target group distribution maps have the same touch line information.
  • the at least two sensors include a first sensor and a second sensor
  • the space coordinate system corresponding to the first sensor is a standard coordinate system
  • the space coordinate system corresponding to the second sensor is a target coordinate system
  • the processor 2001 is further configured to: determine, according to fused detection information obtained by fusing the detection information corresponding to the same target in the at least two pieces of formation information, the mapping relationship between at least two pieces of standard point information and at least two pieces of target point information,
  • where the standard point information represents the position information of each object in the target object set in the standard coordinate system,
  • and the target point information represents the position information of each object in the target coordinate system, the at least two pieces of standard point information being in one-to-one correspondence with the at least two pieces of target point information; and determine the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
  • the processor 2001 is further configured to calculate the time difference between the time axes of the at least two sensors according to the fusion result of the detection information corresponding to the same target in the at least two formation information.
  • the at least two sensors include a standard sensor and a sensor to be tested.
  • the processor 2001 is further configured to: obtain the standard formation information corresponding to the target formation information in the standard sensor; obtain the formation information to be measured corresponding to the target formation information in the sensor to be measured; determine the difference between the formation information to be measured and the standard formation information; Difference and standard formation information, obtain error parameters, the error parameters are used to indicate the error of the formation information to be tested, or to indicate the performance parameters of the sensor to be tested.
  • the processing device 2000 can perform the operations performed by the processing device in the foregoing embodiments shown in FIG. 4 to FIG. 17 , and details are not repeated here.
  • FIG. 21 is a schematic structural diagram of a processing device provided by an embodiment of the present application.
  • the processing device 2100 may include one or more central processing units (CPUs) 2101 and memory 2105 .
  • the memory 2105 stores one or more application programs or data.
  • the memory 2105 may be volatile storage or persistent storage.
  • a program stored in memory 2105 may include one or more modules, each of which may include a series of instructions to operate on a processing device.
  • the central processing unit 2101 may be arranged to communicate with the memory 2105 to execute a series of instruction operations in the memory 2105 on the processing device 2100.
  • the processing device 2100 may also include one or more power supplies 2102, one or more wired or wireless network interfaces 2103, one or more transceiver interfaces 2104, and/or, one or more operating systems, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
  • the processing device 2100 can perform the operations performed by the processing device in the foregoing embodiments shown in FIG. 4 to FIG. 17 , and details are not repeated here.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Abstract

An information processing method and a related device, used to improve the efficiency of fusing detection information. The method includes: acquiring a plurality of pieces of detection information from a plurality of sensors, the detection information including the detection information of different sensors for the same target; acquiring corresponding formation information according to the pieces of detection information, and determining target formation information according to the pieces of formation information, the target formation information representing the detection information detected by different sensors for the same set of targets; and fusing, according to the position information of each target in the target set, the detection information corresponding to the same target in the pieces of formation information.

Description

Information processing method and related device
This application claims priority to Chinese Patent Application No. 202110221913.6, filed on February 27, 2021 and entitled "Information processing method and related device", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of data processing, and in particular, to an information processing method and a related device.
Background
For the same target, different types of sensors can detect different feature information. For example, a camera can detect the appearance features of a target, while a radar can detect the moving speed and distance of the target. To obtain more feature information about the same target, the detection results of different sensors need to be combined to obtain fused detection information of the target.
To fuse the detection results of different types of sensors, the space and time of the different sensors need to be aligned. The spatial alignment process is as follows: the picture that each sensor can detect is acquired, calibration points are determined in the real space, and the position of each calibration point in the real space is associated with the position at which that calibration point is displayed in the picture. By performing these operations on a plurality of calibration points, a mapping relationship between the real space and the picture of each sensor is established, which also establishes the mapping relationships between the pictures of the sensors. The times of the different sensors are then aligned: when, at the same moment, object information is detected at a point in the picture of one sensor and object information is also detected at the corresponding point in the picture of another sensor, the two pieces of information can be determined to belong to the same object, so the detection results of the different sensors for that object can be combined as the fused detection information of the object.
Because this method relies on manual calibration, the efficiency of information fusion is low.
Summary
Embodiments of this application provide an information processing method for fusing the detection information detected by different sensors, so as to improve the efficiency of detection information fusion.
A first aspect of the embodiments of this application provides an information processing method. The method is applied to a processing device in a detection system, and the detection system further includes a plurality of sensors, where the detection information acquired by each of the plurality of sensors includes detection information of the same plurality of targets. The method includes:
The processing device acquires a plurality of pieces of detection information from the plurality of sensors, where the pieces of detection information are in one-to-one correspondence with the sensors and each piece of detection information is detected by its corresponding sensor. The processing device determines a corresponding plurality of pieces of formation information according to the pieces of detection information, where the pieces of formation information are in one-to-one correspondence with the pieces of detection information, each piece of formation information is used to describe the positional relationship between the objects detected by the corresponding sensor, and the objects include the aforementioned targets. The processing device determines target formation information according to the pieces of formation information, where the coincidence degree of the target formation information with each piece of formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the targets, and the target formation information includes the position information of each target. The processing device fuses, according to the position information of any one of the targets, the detection information corresponding to the same target in the pieces of formation information.
In the embodiments of this application, the formation information of the objects detected by each sensor is determined from the detection information of the different sensors, and the target formation information is determined according to its coincidence degree with each piece of formation information, so that the targets are determined. Because the target formation information is formation information with similar features detected by different sensors, it reflects the information detected for the same targets at different sensors. Therefore, for any object reflected in the target formation information, the correspondence between its detection results at different sensors can be determined, and the detection results of the different sensors for the same object can be fused according to that correspondence. Compared with manual calibration, obtaining fused detection information through formation information greatly improves the efficiency of obtaining fused detection information.
Moreover, in terms of information collection, the method in the embodiments of this application only needs the detection information of the different sensors and does not need to occupy the observed site, which extends the applicable scope of detection information fusion.
With reference to the first aspect, in a first implementation of the first aspect of the embodiments of this application, the detection information may include a position feature set, the position feature set may include a plurality of position features, and a position feature is used to represent the positional relationship between an object detected by the corresponding sensor and the objects around that object.
In the embodiments of this application, the detection information includes a position feature set. The position feature set can accurately reflect the positional relationship between the objects detected by the sensor, so accurate formation information can be determined from that positional relationship, and the detection information from different sensors for the same target can be fused accurately.
With reference to the first implementation of the first aspect, in a second implementation of the first aspect of the embodiments of this application, the determining, by the processing device, of the corresponding pieces of formation information according to the pieces of detection information may specifically include: the processing device acquires a corresponding plurality of pieces of touch line information according to the plurality of position feature sets, where each piece of touch line information is used to describe the information of the objects detected by the corresponding sensor touching a reference line, and the pieces of touch line information are in one-to-one correspondence with the position feature sets; the processing device determines the corresponding pieces of formation information respectively according to the pieces of touch line information, where the pieces of touch line information are in one-to-one correspondence with the pieces of formation information.
In the embodiments of this application, the touch line information is obtained from the position feature set. Because the touch line information is the information of objects touching the reference line, data with specific values or specific position features, such as the touch time, the touch interval and the touch position, can be obtained from the touching of the reference line. Therefore, from the specific values or position features of the line touches of a plurality of targets, a collection of touch line data can be obtained, for example a sequence of touch times, a sequence of touch intervals, or a distribution of touch positions. Because these collections all have specific values or position features, they can be computed directly without further data processing, so the target formation information whose coincidence degree meets the preset threshold can be determined quickly.
With reference to the second implementation of the first aspect, in a third implementation of the first aspect of the embodiments of this application, the target formation information may be determined from touch partition sequences. Specifically:
The touch line information includes the timing information of the objects detected by the corresponding sensor touching the reference line and the touch point partition information, where the touch point partition information represents the partition, in the reference line, of the touch point at which an object touches the reference line; the formation information includes a touch partition sequence, which represents the temporal order of the partition positions at which the objects detected by the corresponding sensor touch the reference line.
The determining, by the processing device, of the target formation information according to the pieces of formation information may specifically include: the processing device acquires a first subsequence of the plurality of touch partition sequences and uses the first subsequence as the target formation information, where the coincidence degrees of the first subsequence with the touch partition sequences are all higher than a first threshold.
The fusing, by the processing device, of the detection information corresponding to the same target in the pieces of formation information according to the position information of each target may specifically include: the processing device fuses, according to the touch point partition information corresponding to each target in the first subsequence, the detection information corresponding to the same target in the plurality of touch partition sequences.
In the embodiments of this application, the timing information represents the front-rear order in which different targets touch the reference line, and the touch point partition information represents the left-right relationship of the touches; through the two, the positional relationship of the targets touching the reference line is embodied in the touch partition sequence. Because both are specific values, a touch partition sequence is a collection of values reflecting the positional relationship between targets. Corresponding touch partition sequences are obtained from the detection information of the different sensors, and determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complex computation, which improves the efficiency of matching the target formation information.
With reference to the third implementation of the first aspect, in a fourth implementation of the first aspect of the embodiments of this application, a longest common subsequence (LCS) algorithm may be used to determine, from the touch partition sequences derived from the detection information of different sensors, a first subsequence whose coincidence degree with each touch partition sequence is higher than the first threshold. All common sequences of the touch partition sequences can be obtained by the LCS algorithm, so the identical position features of the touch partition sequences are matched. Because the LCS algorithm computes the longest common subsequence, the first subsequence determined by the LCS algorithm may include the longest subsequence among those whose coincidence degrees with the touch partition sequences are all higher than the first threshold.
In the embodiments of this application, the LCS algorithm can determine all common sequences of the touch partition sequences, thereby matching all fragments with identical position features. If several fragments form common sequences and some non-common sequences are interspersed among them, the interspersed non-common sequences can be identified. A non-common sequence reflects different positional relationships at different sensors; in this case, the interspersed non-common sequence can be attributed to false detection or missed detection of a sensor, so it is tolerated, that is, the targets detected by the different sensors for the non-common sequence are associated, and the detection information is fused.
In the embodiments of this application, because the positional relationships between targets may be similar by coincidence, the longer the determined subsequence, the lower the probability of such coincidental similarity; determining the longest subsequence by the LCS algorithm therefore allows the target formation information of the same set of targets to be determined accurately. For example, the positional relationship of two targets may be coincidentally similar, but the probability that ten targets have a highly coincident positional relationship is much lower, so if a first subsequence of ten targets is determined by the LCS algorithm, it is much more likely that these are the detection results of different sensors for the same ten targets, which reduces the possibility of matching errors.
With reference to the second implementation of the first aspect, in a fifth implementation of the first aspect of the embodiments of this application, the target formation information may be determined from touch position sequences. Specifically:
The touch line information includes the timing information of the objects detected by the corresponding sensor touching the reference line and the touch point position information, where the touch point position information represents the position, in the reference line, of the touch point at which an object touches the reference line and embodies the left-right positional relationship between targets; the formation information includes a touch position sequence, which represents the temporal order of the positions at which the objects detected by the corresponding sensor touch the reference line.
The determining of the target formation information may specifically include: the processing device acquires a third subsequence of the plurality of touch position sequences and uses the third subsequence as the target formation information, where the coincidence degrees of the third subsequence with the touch position sequences are all higher than a third threshold.
The fusing may specifically include: the processing device fuses, according to the touch point position information corresponding to each target in the third subsequence, the detection information corresponding to the same target in the plurality of touch position sequences.
In the embodiments of this application, the touch point position information represents the left-right relationship of different targets touching the reference line and may consist of continuous values or data. Based on such continuous values or data, the formation information of the targets can be distinguished more accurately from the formation information of non-targets, so the fusion of the detection information for the same target is realized more accurately.
Furthermore, the movement trend between targets can be analyzed or calculated from the continuous values or data; besides the movement trend, other information such as the movement trajectory of a target can also be calculated, which is not limited here.
With reference to the second implementation of the first aspect, in a sixth implementation of the first aspect of the embodiments of this application, the target formation information may be determined from touch interval sequences. Specifically:
The touch line information includes the timing information of the objects detected by the corresponding sensor touching the reference line and the touch time interval information, where the touch time interval information represents the time interval between successive touches of the reference line; the formation information includes a touch interval sequence, which represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line.
The determining of the target formation information may specifically include: the processing device acquires a second subsequence of the plurality of touch interval sequences and uses the second subsequence as the target formation information, where the coincidence degrees of the second subsequence with the touch interval sequences are all higher than a second threshold.
The fusing, by the processing device, of the detection information corresponding to the same target in the at least two pieces of formation information according to the position information of each target includes: the processing device fuses, according to the touch time distribution information corresponding to each target in the second subsequence, the detection information corresponding to the same target in the at least two touch interval sequences.
In the embodiments of this application, the timing information represents the order in which different targets touch the reference line, and the touch time interval information represents the time intervals between successive touches; through the two, the positional relationship of the targets touching the reference line is embodied in the touch interval sequence. Because both are specific values, a touch interval sequence is a collection of values reflecting the positional relationship between targets, and determining whether the coincidence degree of such collections meets the preset threshold only requires comparing the corresponding values, without complex computation, which improves the efficiency of matching the target formation information.
With reference to the sixth implementation of the first aspect, in a seventh implementation of the first aspect of the embodiments of this application, the LCS algorithm may be used to determine, from the touch interval sequences derived from the detection information of different sensors, a second subsequence whose coincidence degree with each touch interval sequence is higher than the second threshold. All common sequences of the touch interval sequences can be obtained by the LCS algorithm, so the identical position features of the touch interval sequences are matched. Because the LCS algorithm computes the longest common subsequence, the second subsequence determined by the LCS algorithm may include the longest subsequence among those whose coincidence degrees with the touch interval sequences are all higher than the second threshold.
As with the touch partition sequences, non-common sequences interspersed among the common sequences can be identified and attributed to false or missed detections of a sensor; they are therefore tolerated, that is, the targets detected by the different sensors for the non-common sequence are associated and the detection information is fused.
Because the time intervals at which targets touch the reference line may be similar by coincidence, the longer the determined subsequence, the lower the probability of such coincidental similarity; determining the longest subsequence by the LCS algorithm therefore allows the target formation information of the same set of targets to be determined accurately. For example, two targets may coincidentally have similar touch intervals, but the probability that ten targets have highly coincident touch intervals is much lower, so a first or second subsequence covering ten targets is much more likely to be the detection results of different sensors for the same ten targets, which reduces the possibility of matching errors.
With reference to the second implementation of the first aspect, in an eighth implementation of the first aspect of the embodiments of this application, the target formation information may be determined from both the touch partition sequences and the touch interval sequences. Specifically:
The touch line information includes the timing information of the objects detected by the corresponding sensor touching the reference line, the touch point partition information and the touch time interval information, where the touch point partition information represents the partition, in the reference line, of the touch point at which an object touches the reference line and the touch time interval information represents the time interval between successive touches; the formation information includes a touch partition sequence and a touch interval sequence, where the touch partition sequence represents the temporal order of the partition positions of the touches and the touch interval sequence represents the distribution of the time intervals of the touches.
The determining, by the processing device, of the target formation information according to the pieces of formation information may specifically include:
The processing device acquires a first subsequence of the at least two touch partition sequences, where the coincidence degrees of the first subsequence with the touch partition sequences are all higher than the first threshold; the processing device acquires a second subsequence of the at least two touch interval sequences, where the coincidence degrees of the second subsequence with the touch interval sequences are all higher than the second threshold; the processing device determines the intersection of a first object set and a second object set and uses the intersection as a target object set, where the first object set is the set of objects corresponding to the first subsequence and the second object set is the set of objects corresponding to the second subsequence; the processing device uses the touch partition sequence and the touch interval sequence of the target object set as the target formation information.
In the embodiments of this application, the intersection of the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence is used as the target object set. The objects in the intersection correspond to the first subsequence, that is, similar touch partition information is obtained from the detection information of different sensors; at the same time they correspond to the second subsequence, that is, they also have similar touch time interval information. If several similar kinds of information representing the positional relationship of objects can be obtained from the detection information of the sensors, the probability that the corresponding object sets are the same object set is higher than when only one such kind of information is available. Therefore, by screening the intersection of the objects corresponding to the subsequences, the formation information of the targets can be distinguished more accurately from that of non-targets, so the fusion of the detection information for the same target is realized more accurately.
In the embodiments of this application, besides the intersection of the objects corresponding to the first and second subsequences, the intersection of the objects corresponding to other subsequences may also be taken, for example the objects corresponding to the first subsequence and a third subsequence, or to the second and third subsequences, or to another subsequence and any one of the first to third subsequences. The other subsequences are also used to represent positional relationships between objects, such as the distance or direction between objects, which is not limited here. By taking intersections between the objects corresponding to different subsequences, suitable subsequences can be selected flexibly for the computation, which improves the feasibility and flexibility of the solution.
In the embodiments of this application, besides the intersection of the objects corresponding to two subsequences, the intersection of the objects corresponding to more subsequences may also be taken, for example the first, second and third subsequences. The more subsequences are taken, the more kinds of similar information representing the positional relationship of objects can be obtained from the detection information of the sensors, and the higher the probability that the corresponding object sets are the same object set, so the formation information of the targets can be distinguished more accurately from that of non-targets and the detection information for the same target fused more accurately.
With reference to the first implementation of the first aspect, in a ninth implementation of the first aspect of the embodiments of this application, the target formation information may be determined from target group distribution maps. Specifically:
The formation information includes a target group distribution map, which represents the positional relationship between objects.
The determining, by the processing device, of the corresponding pieces of formation information according to the pieces of detection information may specifically include: the processing device acquires a corresponding plurality of initial target group distribution maps according to the plurality of position feature sets, where an initial target group distribution map represents the positional relationship between the objects detected by the corresponding sensor; the processing device acquires standard perspective views of the initial target group distribution maps through a perspective transformation algorithm and uses the standard perspective views as the corresponding target group distribution maps, where the position information of a target group distribution map includes the target distribution information of the targets, and the target distribution information represents the position of a target among the objects detected by the corresponding sensor.
The determining, by the processing device, of the target formation information according to the at least two pieces of formation information may specifically include: the processing device acquires an image feature set of the plurality of target group distribution maps and uses the image feature set as the target formation information, where the coincidence degrees of the image feature set with the target group distribution maps are all higher than a third threshold.
The fusing may specifically include: the processing device fuses, according to the target distribution information corresponding to each target in the image feature set, the detection information corresponding to the same target in the target group distribution maps.
In the embodiments of this application, the corresponding initial target group distribution maps are obtained from the detection information of different sensors, the corresponding target group distribution maps are obtained through the perspective transformation algorithm, and then the image feature set of the target group distribution maps is obtained and used as the target formation information. From the target group distribution maps originating from the sensors, an image feature set whose coincidence degree with each map is higher than the preset threshold is determined. Because image features intuitively reflect the positional relationships between the objects shown in an image, the image feature set determined from the maps intuitively reflects detection results with similar positional relationships, so the detection results of different sensors for the same target group can be matched intuitively and the detection information fused accurately.
With reference to the ninth implementation of the first aspect, in a tenth implementation of the first aspect of the embodiments of this application, the acquisition of the image feature set may be combined with the reference line. Specifically:
The processing device may acquire, according to the plurality of position feature sets, a plurality of pieces of touch line information of the targets corresponding to the position feature sets, where each piece of touch line information is used to describe the information of the objects detected by the corresponding sensor touching the reference line, and the pieces of touch line information are in one-to-one correspondence with the position feature sets.
The acquiring of the corresponding initial target group distribution maps according to the position feature sets may specifically include: the processing device acquires the corresponding initial target group distribution maps according to the pieces of touch line information, where the objects in the initial target group distribution maps have the same touch line information.
In the embodiments of this application, because images taken at close moments are highly similar, if the same moment is not determined, initial target group distribution maps of nearby moments will interfere when the initial target distribution maps from different sensors are matched, leading to map matching errors and an incorrect image feature set, so that detection information of different moments is fused and the fusion is wrong. This error can be avoided through the touch line information: the initial target group distribution maps determined from the touch line information have the same touch line information, indicating that they were acquired at the same moment, so the fused detection information is guaranteed to have been acquired at the same moment and the accuracy of detection information fusion is improved.
With reference to the first aspect or any one of the first to tenth implementations of the first aspect, in an eleventh implementation of the first aspect of the embodiments of this application, the mapping of the spatial coordinate systems between different sensors can also be realized. Specifically:
The plurality of sensors include a first sensor and a second sensor, where the spatial coordinate system corresponding to the first sensor is a standard coordinate system and the spatial coordinate system corresponding to the second sensor is a target coordinate system. The method may further include:
The processing device determines, according to fused detection information, the mapping relationship between a plurality of pieces of standard point information and a plurality of pieces of target point information, where the fused detection information is obtained by fusing the detection information corresponding to the same target in the pieces of formation information, the standard point information represents the position information of each object of the target object set in the standard coordinate system, the target point information represents the position information of each object of the target object set in the target coordinate system, and the pieces of standard point information are in one-to-one correspondence with the pieces of target point information; the processing device determines the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
In the embodiments of this application, the mapping relationship between the standard point information and the target point information is determined from the fused detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is determined from it. With the method described in the embodiments of this application, the mapping of coordinate systems between different sensors can be realized as long as detection information from the different sensors can be acquired. The subsequent steps of determining the target formation information and mapping the point information can be performed by the processing device itself, without manual calibration and mapping; because the processing device matches the target formation information, the accuracy of the device's computation improves the accuracy of the point information mapping. At the same time, as long as detection information from different sensors can be acquired, the fusion of detection information and the mapping of the coordinate systems can be realized, which avoids the scenario restrictions caused by manual calibration and ensures the accuracy and universality of detection information fusion.
With reference to the first aspect or any one of the first to eleventh implementations of the first aspect, in a twelfth implementation of the first aspect of the embodiments of this application, the alignment of the time axes of different sensors can also be realized. Specifically, the method may further include:
The processing device calculates the time difference between the time axes of the plurality of sensors according to the fusion result of the detection information corresponding to the same target in the pieces of formation information.
In the embodiments of this application, the time difference between the time axes of the sensors is calculated from the fusion result of the detection information of the same target, and the time axes of the different sensors can be aligned according to the time difference. The time axis alignment method provided in the embodiments of this application can be implemented as long as the detection information of the different sensors can be acquired and does not require the sensors to be in the same time synchronization system, which extends the application scenarios of time axis alignment between different sensors and also extends the applicable scope of information fusion.
With reference to the first aspect or any one of the first to twelfth implementations of the first aspect, in a thirteenth implementation of the first aspect of the embodiments of this application, error correction or screening of sensors can also be realized. Specifically, the plurality of sensors include a standard sensor and a sensor to be tested, and the method may further include:
The processing device acquires the standard formation information corresponding to the target formation information at the standard sensor; the processing device acquires the formation information to be tested corresponding to the target formation information at the sensor to be tested; the processing device determines the difference between the formation information to be tested and the standard formation information; the processing device obtains an error parameter according to the difference and the standard formation information, where the error parameter is used to indicate the error of the formation information to be tested or to indicate a performance parameter of the sensor to be tested.
In the embodiments of this application, the standard sensor is used as the detection standard, and the error parameter is obtained from the difference between the formation information to be tested and the standard formation information. When the error parameter indicates the error of the formation information to be tested, the information corresponding to the error parameter in the formation information to be tested can be corrected using the error parameter and the standard formation information; when the error parameter indicates a performance parameter of the sensor to be tested, performance parameters such as the false detection rate of the sensor to be tested can be determined, realizing data-based analysis of the sensor to be tested so as to select sensors.
A second aspect of this application provides a processing device. The processing device is located in a detection system, and the detection system further includes at least two sensors, where the detection information acquired by the at least two sensors includes the detection information of the at least two sensors for the same at least two targets. The processing device includes a processor and a transceiver.
The transceiver is configured to acquire at least two pieces of detection information from the at least two sensors, where the at least two sensors are in one-to-one correspondence with the at least two pieces of detection information.
The processor is configured to: determine corresponding at least two pieces of formation information according to the at least two pieces of detection information, where each piece of formation information is used to describe the positional relationship between the objects detected by the corresponding sensor, and the objects include the aforementioned targets; determine target formation information according to the at least two pieces of formation information, where the coincidence degree between the target formation information and each of the at least two pieces of formation information is higher than a preset threshold, the target formation information is used to describe the positional relationship between the at least two targets, and the target formation information includes the position information of each target; and fuse, according to the position information of any one of the targets, the detection information corresponding to the same target in the at least two pieces of formation information.
The processing device is configured to perform the method of the first aspect.
For the beneficial effects of the second aspect, refer to the first aspect; details are not repeated here.
A third aspect of the embodiments of this application provides a processing device, including a processor and a memory coupled to the processor. The memory is configured to store executable instructions, and the executable instructions are used to instruct the processor to perform the method of the first aspect.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium storing a program. When a computer executes the program, the method of the first aspect is performed.
A fifth aspect of the embodiments of this application provides a computer program product. When the computer program product is executed on a computer, the computer performs the method of the first aspect.
附图说明
图1a为多传感器的时间轴对齐的示意图;
图1b为多传感器的空间坐标系对齐的示意图;
图2为本申请实施例提供的匹配目标物的示意图;
图3a为本申请实施例提供的信息处理方法的一个系统示意图；
图3b为本申请实施例提供的信息处理方法的一个应用场景示意图;
图4为本申请实施例提供的信息处理方法的一个流程示意图;
图5为本申请实施例提供的信息处理方法的一个特征示意图;
图6为本申请实施例提供的划线法的一个示意图;
图7为本申请实施例提供的信息处理方法的另一流程示意图;
图8为本申请实施例提供的信息处理方法的另一应用场景示意图;
图9为本申请实施例提供的信息处理方法的另一流程示意图;
图10为本申请实施例提供的信息处理方法的另一应用场景示意图;
图11为本申请实施例提供的信息处理方法的另一流程示意图;
图12为本申请实施例提供的信息处理方法的另一应用场景示意图;
图13为本申请实施例提供的信息处理方法的另一流程示意图;
图14为本申请实施例提供的信息处理方法的另一应用场景示意图;
图15为本申请实施例提供的信息处理方法的另一流程示意图;
图16为本申请实施例提供的信息处理方法的另一示意图;
图17为本申请实施例提供的信息处理方法的另一流程示意图;
图18为本申请实施例提供的信息处理方法的另一应用场景示意图;
图19为本申请实施例提供的信息处理方法的另一应用场景示意图;
图20为本申请实施例提供的处理设备的一个结构示意图;
图21为本申请实施例提供的处理设备的另一结构示意图。
具体实施方式
本申请实施例提供了一种信息处理方法以及相关设备,用于实现不同传感器所检测到的检测信息的融合,以提高检测信息融合的效率。
传感器可以对物体进行检测，针对同一物体，不同的传感器可以检测到不同的检测信息。例如，摄像头可以检测物体的形状、纹理等外观特征，雷达可以检测物体的位置与速度等运动信息。针对同一物体，若要获取多种信息，需要将来自不同传感器的检测信息融合。
为了实现检测信息的融合，需要将不同传感器的时间轴与空间坐标系对齐。其中，时间轴的对齐需要传感器在同一对时系统中，请参阅图1a，图1a为多传感器的时间轴对齐的示意图。对时系统中的对时设备生成时间标识，并将时间标识传输给该对时系统内的多个传感器。对时系统内的多个传感器基于同一时间标识进行检测，就可以实现时间轴的对齐。
由于对时设备的时间标识只能在对时系统内传输，对时系统外的传感器无法接收时间标识，因此时间轴的对齐只能在同一对时系统内实现，这一因素限制了检测信息融合的应用场景。
另一方面,空间坐标系的对齐需要空间标定实现。请参阅图1b,图1b为多传感器的空间坐标系对齐的示意图。空间标定需要确定实际空间中的标定点,通过人工标定该标定点在不同传感器画面中的位置,例如在传感器A的画面中标定标定点4,在传感器B的画面中标定对应的标定点4’,再人工确定同一标定点在不同传感器画面中位置的映射关系。为了保证映射关系的准确性,需要标定多个标定点,以实现对空间坐标系的完整映射。
由于空间标定需要人工实现,人的主观认知与实际的映射关系可能有偏差,并不一定能真实反映实际的映射关系。例如,图1b中所示的标定点4与标定点4’,在圆柱体中,无法找到与其他点有明显区别的标定点,对于不同画面标定的标定点实际上并不能反映同一个点,造成标定错误。除了圆柱形,其他任何不具备明显区别点的物体,例如球体等,都容易出现上述标定错误的情况。因此,人工标定的映射关系并不一定准确。空间标定不准确,在对多个传感器进行检测信息融合的过程中,可能会将现实中的同一目标物判定为不同的目标物,或将不同的目标物判定为同一目标物,这样融合出来的信息就是错误的数据。
在本申请实施例中,除了如图1b所示的,对两个摄像头的画面进行空间标定,也可以对不属于同一类型的多种传感器进行空间标定。例如对摄像头的画面和雷达的画面进行标定等。对于不同类型的传感器画面的标定,也会出现上述标定点标定错误的情况,此处不再赘述。
并且,人工标定的效率低下,空间标定需要针对多个标定点进行人工标定,在标定的过程中被检测的区域不能被使用,这就给实际操作带来限制。例如,若要对列车车道进行空间标定,人工标定通常需要在半天或一天的时间内占用列车车道。通常情况下,列车车道的调度并不允许出现如此长时间的占用。在这种情况下,就无法实现空间标定及检测信息的融合。
综上所述，当前对不同传感器时间轴的对齐，受限于对时系统，当传感器不在同一对时系统中就无法实现。当前对不同传感器空间坐标系的对齐，受限于人工标定的低效率以及低准确性，导致检测信息的融合容易出现错误，并且限制了可以实现融合的场景。
基于上述缺陷，本申请实施例提供了一种信息处理方法，通过来自多个传感器的检测信息，获取检测信息所显示的物体之间的阵型信息。通过匹配具有相似特征的目标阵型信息，确定目标阵型信息为不同传感器对同一物体集合的检测信息，从而将不同传感器的检测信息融合。
本申请实施例所提供的方法,实际上是在不同传感器的画面中人工确定同一目标物的过程在设备上的重现。每个传感器都具有多个时间对应的多个画面,每个画面中所体现的目标物的数量,状态等信息不尽相同。面对如此多的信息,人眼无法直接捕捉到画面中的所有细节,只能先从整体上,在不同画面中分辨出同一目标物集合的画面。由于是在不同画面中确定同一目标物集合的多个画面,因此该过程也称为匹配目标物集合。
人眼匹配目标物集合的过程,需要有个抽象的过程。将画面中的其他细节都略去,只提取画面中目标物之间的位置关系,从而抽象出目标物之间的阵型信息。
为了更加清楚地描述抽象的过程,接下来将结合图2进行描述。请参阅图2,图2为本申请实施例提供的匹配目标物的示意图。如图2所示,在摄像头的画面,即检测信息A中,有5辆机动车形成了一个类似于数字“9”的形状。而在雷达的画面,即检测信息B中,有5个目标物也形成了类似于“9”的形状。那么就可以认为这两个画面中各自的5个目标物集合,具有相似的位置特征,即具有相似的阵型信息,可以认为两者是同一目标物集合在不同传感器画面中的体现。
匹配出了目标物集合,就可以在不同传感器的画面中,根据单个目标物在目标物集合中的位置,在不同传感器的画面中确定相同的单个目标物。
如图2所示,通过传感器A检测到的检测信息A中,阵型“9”底部的目标物为目标物A,则可以认为,通过传感器B检测到的检测信息B中,阵型“9”底部的目标物A’与目标物A为同一目标物。
示例地，传感器A可以是摄像头，传感器B可以是雷达。除了前述组合，传感器A与传感器B也可以是其他组合，例如传感器A是雷达，传感器B是ETC等，或传感器A和传感器B为同一种传感器，例如雷达或摄像头等，此处不做限定。
在本申请实施例中,不限定传感器的数量,除了传感器A和传感器B,还可以通过更多的传感器获取更多的检测信息,分析这些检测信息中的同一目标物,此处不做限定。
前面描述了人是如何在不同传感器的画面中确定同一目标物的画面,将上述思路应用于设备中,就是本发明实施例的方案。具体的,本发明实施例的方案主要包括以下几个步骤:1.获取来自不同传感器的多个检测信息;2.根据多个检测信息确定对应的阵型信息;3.根据多个阵型信息确定具有相似特征的目标阵型信息;4.根据目标阵型信息中每个目标物的阵位信息将不同传感器针对同一目标物的检测信息融合。
请参阅图3a，图3a为本申请实施例提供的信息处理方法的一个系统示意图。如图3a所示，该系统为检测系统，系统中包括处理设备和多个传感器。以传感器A和传感器B为例，传感器A将检测到的检测信息A传输给处理设备，传感器B将检测到的检测信息B传输给处理设备。处理设备根据检测信息A和检测信息B获取目标物的融合信息。
值得注意的是，本申请所述的检测系统中的设备之间，可以具有固定的连接状态，也可以不具有固定的连接状态，通过数据拷贝等形式实现数据传输。只要传感器的检测信息能传输给处理设备，则该传感器及处理设备就能称之为检测系统，此处不做限定。例如，传感器A和传感器B可以分别获取检测信息，然后在一定时间内将检测信息A和检测信息B分别拷贝至处理设备处，由处理设备对检测信息A和检测信息B进行处理。这种模式也可称为离线处理。
值得注意的是，图中仅以两个传感器为例，并不造成对本申请实施例及检测系统中传感器数量的限定。
请参阅图3b，图3b为本申请实施例提供的信息处理方法的一个应用场景示意图。如图3b所示，本申请实施例提供的信息处理方法主要用于多传感器系统中的信息融合。多传感器系统可以接收来自多个传感器的检测信息，将来自多个传感器的检测信息融合。检测信息可以是来自电子不停车收费系统(electronic toll collection，ETC)传感器的车牌，交易流水信息等。除了来自ETC传感器的上述信息，多传感器系统还可以获取来自其他传感器的其他检测信息，例如来自摄像头的车牌，车型信息等，来自雷达的距离，速度信息等，此处不做限定。
通过本申请实施例提供的信息处理方法实现了检测信息的融合,融合结果可以应用于多种场景中,例如高速公路上的收费稽核,非现场治超,安全监测等。除了高速公路上的上述场景,融合结果还可以应用于其他场景中,例如城市路口上的全息路口,车辆汇入预警,行人预警等,或封闭道路上的入侵检测,自动泊车等,此处不做限定。
一.本申请实施例中的信息处理方法。
基于图3a所示的检测系统，接下来将结合图4对本申请实施例所示的信息处理方法的步骤进行详细描述。请参阅图4，图4为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括：
401.从传感器A处获取检测信息A。
可选的,传感器A所获取的检测信息A中,可以包括位置特征集。位置特征集包括多个位置特征,位置特征用于表示传感器A所检测到的物体,与该物体四周的物体之间的位置关系。例如,在传感器A为摄像头的情况下,检测信息为像素组成的画面,位置特征可以体现为像素之间的距离。除了像素之间的距离,位置特征还可以表现为其他形式,例如,像素之间的左右关系或前后关系等,此处不做限定。
在本申请实施例中，除了摄像头，传感器A还可以是其他类型的传感器，例如雷达，电子不停车收费系统(electronic toll collection，ETC)传感器等，此处不做限定。针对不同类型的传感器，会有对应的位置特征，例如雷达的位置特征可以表现为物体之间的距离或物体之间的方向等，ETC的位置特征可以表现为车辆的车道信息与前后时序关系等，此处不做限定。
402.从传感器B处获取检测信息B。
可选的,传感器B所获取的检测信息B中,也可以包括位置特征集。对于传感器B、检测信息B以及位置特征的描述,参见步骤401中对传感器A、检测信息A以及位置特征的描述,此处不再赘述。
值得注意的是,在本申请实施例中,传感器A与传感器B既可以是同种类的传感器,也可以是不同种类的传感器。例如,传感器A与传感器B可以为角度不同的摄像头或不同的雷达,也可以传感器A为摄像头或雷达,传感器B为ETC等,此处不做限定。
值得注意的是，本申请实施例中传感器的数量并不限定为两个，传感器的数量可以为大于或等于2的任意整数，此处不做限定。传感器A与传感器B作为对检测系统中传感器的举例，若检测系统中包括更多的传感器，对于这些传感器的描述参见步骤401和步骤402中对传感器A与传感器B的描述，此处不再赘述。多个传感器的种类也不限定，可以是同种类的传感器，也可以是不同种类的传感器，此处不做限定。
403.根据检测信息A确定阵型信息A。
获取了检测信息A,处理设备就可以根据检测信息A确定阵型信息A,阵型信息A用于表示传感器A所检测到的物体之间的位置关系。
可选的,若检测信息A中包括位置特征集,则处理设备可以根据该位置特征集确定阵型信息A。具体的,根据位置特征集获取阵型信息A的过程,有多种方法,获取的阵型信息A也不同,为了更清晰地描述获取阵型信息A的不同方法,在后面的实施例中将会分类进行描述。具体过程参见图7至图17所示实施例,此处不再赘述。
404.根据检测信息B确定阵型信息B。
获取了检测信息B,处理设备就可以根据检测信息B确定阵型信息B,阵型信息B用于表示传感器B所检测到的物体之间的位置关系。具体的,物体之间的位置关系可以包括物体之间的左右位置关系,或物体之间的前后位置关系中的至少一项。
可选的,若检测信息B中包括位置特征集,则处理设备可以根据该位置特征集确定阵型信息B。具体的,确定阵型信息可以通过划线法或图像特征匹配法等方法实现,获取阵型信息B的过程,参见步骤403中获取阵型信息A的过程,此处不再赘述。
在本申请实施例中,步骤401和步骤402没有必然的先后关系,即步骤401可以在步骤402之前或之后执行,步骤401和步骤402也可以同时进行,此处不做限定。步骤403和步骤404也没有必然的先后关系,即步骤403可以在步骤404之前或之后执行,步骤403和步骤404也可以同时进行,只要步骤403在步骤401之后执行,步骤404在步骤402之后执行即可,此处不做限定。
在本申请实施例中,若获取了来自更多传感器的检测信息,则也要根据获取的检测信息确定对应的阵型信息,确定对应阵型信息的过程参见步骤403和步骤404的描述,此处不再赘述。
405.根据阵型信息A和阵型信息B确定目标阵型信息。
获取了阵型信息A和阵型信息B,就可以根据阵型信息A和阵型信息B确定目标阵型信息。其中,目标阵型信息与阵型信息A和阵型信息B的重合度都高于预设阈值,用于体现阵型信息A与阵型信息B中属于同一目标物集合的阵型信息。
在本申请实施例中,阵型信息可以有多种表现形式,判断重合度的标准也不尽相同。为了更加清楚地描述不同的阵型信息的获取过程与处理方式,后续将会结合图7至图17的实施例进行详细解释,此处不再赘述。
406.根据目标阵型信息中目标物的阵位信息将来自传感器A和传感器B的针对同一目标物的检测信息融合。
阵型信息中包括每个目标物的阵位信息，用于表示该目标物在目标物集合中的具体位置。因此，可以根据目标物的阵位信息，确定同一目标物在不同传感器的检测信息中对应的目标，并将多个对应目标的检测信息融合。
在本申请实施例中,通过来自不同传感器的检测信息,分别确定传感器所检测到的物体之间的阵型信息,根据与每个阵型信息的重合度,确定目标阵型信息,从而就确定了目标物。由于目标阵型信息是不同传感器检测出的具有相似特征的阵型信息,反映了相同的目标物在不同传感器处检测到的信息,因此就可以根据目标阵型信息,确定目标阵型信息中所反映的任意物体在不同传感器处的检测结果之间的对应关系,根据该对应关系就能实现将不同传感器对同一物的检测结果融合。相较于人工标定的方法,本申请实施例通过阵型信息获取融合检测信息的方法,得到融合检测信息的效率可以大幅提升。
并且,在信息采集方面,本申请实施例的方法只需要提供不同传感器的检测信息即可,并不需要占用被观测的场地,扩展了检测信息融合的适用范围。
可选的,在步骤403和步骤404中,可以根据位置特征集确定对应的阵型信息,在步骤405中,就需要根据多个阵型信息确定目标阵型信息。在本申请实施例中,位置特征集有不同的形式,确定阵型信息的方式也有许多种,主要包括划线法和图像特征匹配法,接下来将分类进行描述。
在本申请实施例中,阵型信息可以包括三大类信息:1.物体之间的横向位置相对关系,例如物体之间的左右位置关系或左右间距等;2.物体之间纵向位置相对关系,例如物体的前后位置关系或前后间距等;3.物体自身的特征,例如长度,宽度,高度,形状等。
以图5为例,图5为本申请实施例提供的信息处理方法的一个特征示意图。如图5所示,阵型信息可以包括车辆之间的前后间距和左右间距,还可以包括各车辆的信息,例如车辆的型号,车牌号等信息,此处不做限定。
值得注意的是,图5仅以道路上的车辆为例,并不造成对传感器所检测物体的限定,传感器还可用于检测其他物体,例如行人、障碍物等,此处不做限定。
1.划线法。
对于人来说，阵型信息可以表现为一个整体的形状，例如图2所示实施例中的形状“9”。而对于设备来说，处理形状或者说图像的效率并没有处理数字的效率高。将阵型信息表现为连续或离散的数字的形式，可以大大提高数据处理的效率。
将整体的形状特征转化为数字特征,可以通过划线法实现。在不同传感器的画面中,都画一条基准线,获取物体触碰基准线的时序,位置等信息,就可以将形状特征转化为数字特征,便于处理设备的运算处理。
在本申请实施例中,物体触碰基准线的各种信息也称为触线信息。触线信息可以包括物体触碰基准线的时序信息,触碰点分区信息,触碰点位置信息,触碰时间间隔信息等,此处不做限定。
其中,时序信息表示传感器所检测到的物体,触碰基准线的前后时序,体现了物体之间的前后关系。
其中,触碰点分区信息表示物体触碰基准线的触碰点在该基准线中的分区信息。请参阅图6,图6为本申请实施例提供的划线法的一个示意图。在行车道路中,可以根据不同 的车道对基准线进行分区,例如图中的1车道为1区,2车道为2区,3车道为3区。
其中,触碰点位置信息表示物体触碰基准线的触碰点在该基准线中的位置信息。例如图6中1车道的第一辆车距离基准线左端点1.5米,3车道的第一辆车距离基准线左端点7.5米。
其中,触碰时间间隔信息表示物体触碰基准线的前后时间间隔。
其中,在阵型信息的三大类中,触碰点分区信息和触碰点位置信息可以归类于物体之间的横向位置相对关系,时序信息和触碰时间间隔信息可以归类于物体之间纵向位置相对关系。
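为便于理解上述各类触线信息的构造方式，下面给出一个简化的示意性代码片段（Python），其中的数据结构、字段名与数值均为便于说明而假设的示例数据（数值量级与图6、图8、图10、图12相当），并非对本申请实施例的限定：

```python
# 触线信息构造示意：由每个物体的触线事件整理出触碰分区序列、触碰位置序列与触碰间隔序列
from dataclasses import dataclass
from typing import List

@dataclass
class TouchEvent:       # 单个物体的触线事件（字段名为示例性假设）
    obj_id: str         # 传感器内部的目标标识
    t: float            # 触线时刻，单位：秒
    lane: int           # 触碰点分区信息（车道号）
    x: float            # 触碰点位置信息（距基准线左端点的距离，单位：米）

def build_touch_line_info(events: List[TouchEvent]):
    """按触线时刻排序，返回触碰分区序列、触碰位置序列与触碰间隔序列。"""
    ordered = sorted(events, key=lambda e: e.t)              # 时序信息：按触线先后排序
    lane_seq = [e.lane for e in ordered]                     # 触碰分区序列
    pos_seq = [e.x for e in ordered]                         # 触碰位置序列
    gap_seq = [round(b.t - a.t, 1)                           # 触碰间隔序列：相邻物体触线的时间差
               for a, b in zip(ordered, ordered[1:])]
    return lane_seq, pos_seq, gap_seq

# 用法示例（数据为虚构）：
events_a = [TouchEvent("a1", 0.0, 3, 7.5), TouchEvent("a2", 2.0, 3, 7.3),
            TouchEvent("a3", 2.3, 1, 1.5), TouchEvent("a4", 4.2, 3, 7.6),
            TouchEvent("a5", 4.6, 1, 1.3)]
lanes, xs, gaps = build_touch_line_info(events_a)
print(lanes)   # [3, 3, 1, 3, 1]
print(gaps)    # [2.0, 0.3, 1.9, 0.4]
```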
通过上述各信息确定目标阵型信息,有多种的方法,接下来将分类进行描述:
1)根据时序信息和触碰点分区信息确定第一子序列。
请参阅图7,图7为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括:
701.从传感器A(摄像头)处获取检测信息A。
以摄像头为例,在传感器A为摄像头的情况下,检测信息为像素组成的画面,位置特征集中的位置特征可以体现为像素之间的距离。除了像素之间的距离,位置特征还可以表现为其他形式,例如,像素之间的左右关系或前后关系等,此处不做限定。
在本申请实施例中,除了摄像头,传感器A还可以是其他类型的传感器,例如雷达,ETC传感器等,此处不做限定。针对不同类型的传感器,会有对应的位置特征,例如雷达的位置特征可以表现为物体之间的距离或物体之间的方向等,ETC的位置特征可以表现为车辆的车道信息与前后时序关系等,此处不做限定。
702.从传感器B(雷达)处获取检测信息B。
以雷达为例,在传感器B为雷达的情况下,检测信息为雷达检测到的物体在检测范围内的画面,位置特征集中的位置特征可以体现为物体之间的距离。除了物体之间的距离,位置特征还可以表现为其他形式,例如,物体之间的左右关系或前后关系等,此处不做限定。
在本申请实施例中,除了雷达,传感器B还可以是其他类型的传感器,例如摄像头,ETC传感器等,此处不做限定。针对不同类型的传感器,会有对应的位置特征,此处不做限定。
在本申请实施例中,传感器A和传感器B仅是对传感器的举例,并不造成对传感器的种类和数量的限定。
703.根据检测信息A获取物体像素触碰基准线的时序信息A与触碰点分区信息A。
由于检测信息A是由像素组成的画面,触线信息即为物体的像素触碰基准线的信息。处理设备可以根据检测信息A获取物体像素触碰基准线的时序信息A与触碰点分区信息A。
请参阅图8,图8为本申请实施例提供的信息处理方法的一个应用场景示意图,如图8所示,序号一栏表示各物体触碰基准线的前后顺序,即时序信息A;触碰点分区信息一栏表示各物体触碰基准线时,触碰点在基准线中的分区信息,即触碰点分区信息A,其中,1表示1车道,3表示3车道。
704.根据检测信息B获取物体触碰基准线的时序信息B与触碰点分区信息B。
由于检测信息B为雷达检测到的物体在检测范围内的画面,触线信息即为物体触碰基准线的信息。处理设备可以根据检测信息B获取物体触碰基准线的时序信息B与触碰点分区信息B。如图8所示,序号一栏表示各物体触碰基准线的前后顺序,即时序信息B;触碰点分区信息一栏表示各物体触碰基准线时,触碰点在基准线中的分区信息,即触碰点分区信息B,其中,1表示1车道,3表示3车道。
在本申请实施例中,步骤701与步骤702没有必然的先后顺序,步骤701可以在步骤702之前或之后执行,也可以步骤701与步骤702同时执行,此处不做限定。步骤703与步骤704也没有必然的先后顺序,步骤703可以在步骤704之前或之后执行,也可以步骤703与步骤704同时执行,只要步骤703在步骤701之后执行,步骤704在步骤702之后执行即可,此处不做限定。
705.根据时序信息A和触碰点分区信息A获取触碰分区序列A。
如图8所示,根据时序信息A,可以将触碰点分区信息A按照时序先后排列,获取触碰分区序列A。
706.根据时序信息B和触碰点分区信息B获取触碰分区序列B。
如图8所示,根据时序信息B,可以将触碰点分区信息B按照时序先后排列,获取触碰分区序列B。
在本申请实施例中,步骤705与步骤706没有必然的先后顺序,步骤705可以在步骤706之前或之后执行,也可以步骤705与步骤706同时执行,只要步骤705在步骤703之后执行,步骤706在步骤704之后执行即可,此处不做限定。
707.根据触碰分区序列A和触碰分区序列B获取第一子序列。
触碰分区序列A和触碰分区序列B本质上是两个数列,处理设备可以比对这两个数列,当发现两个数列中包括相同或重合度较高的序列片段时,可以认为该序列片段为两个序列的公共部分。在本申请实施例中,该序列片段也称为第一子序列。由于触碰分区序列体现了传感器所检测到的物体之间的位置关系,即物体之间的阵型信息。当两个数列中包括相同或重合度较高的序列片段,表示这两个数列中的该片段所对应的物体集合,具有相同的位置关系,即具有相同的阵型信息。不同的传感器检测到相同或相似的阵型信息,即可认为这两个传感器检测到的是同一个物体集合。
在本申请实施例中,第一子序列也称为目标阵型信息,表示多个传感器检测到的相同或相似的阵型信息。
具体的,由于传感器存在一定的漏检率,因此并不要求第一子序列与触碰分区序列A和触碰分区序列B中的片段完全重合,只要保证第一子序列与触碰分区序列A和触碰分区序列B的重合度均高于第一阈值即可。在本申请实施例中,重合度也称为相似度。具体的,第一阈值可以是90%,除了90%,第一阈值还可以是其他数值,例如95%,99%等,此处不做限定。
例如,图8中所示的触碰分区序列A和触碰分区序列B,均包含(3,3,1,3,1)的序列片段。处理设备可以将该片段作为第一子序列。此时,第一子序列与触碰分区序列A和触碰分区序列B的重合度均为100%。
可选的，可以通过最长公共子序列(longest common subsequence，LCS)算法确定第一子序列。在本申请实施例中，可以通过LCS算法获取多个触碰分区序列的所有公共序列，从而实现对多个触碰分区序列的相同位置特征的匹配。由于LCS算法计算的是最长的公共子序列，因此，通过LCS算法计算出的第一子序列，可以包括与前述多个触碰分区序列的重合度均高于第一阈值的子序列中，长度最长的子序列。
在本申请实施例中,可以通过LCS算法确定出多个触碰分区序列的所有公共序列,从而匹配出所有具有相同位置特征的触碰分区序列的片段。若有多个片段为公共序列,在这些公共序列之中夹杂着一些非公共序列,就可以将这些夹杂在公共序列中的非公共序列标识出来。其中,非公共序列在不同传感器中体现了不同的位置关系。在这种情况下,可以认为公共序列中夹杂的非公共序列,其出现的原因为传感器的误检或漏检,从而对非公共序列容错,即,将非公共序列在不同传感器检测到的目标物对应,实现检测信息的融合。
在本申请实施例中,通过LCS算法确定出的第一子序列,可以包括与多个触碰分区序列的重合度均高于第一阈值的子序列中,长度最长的子序列。由于目标物之间的位置关系可能存在偶然性的相似,确定出的子序列长度越长,具有相似位置关系的可能性越低,就越能规避这种偶然性,通过LCS算法确定出最长的子序列,就能准确地确定出相同目标物集合的目标阵型信息。
例如,两个目标物的位置关系有可能存在偶然性的相似,但若将标准提升为十个目标物之间的位置关系具有高重合度,具有相似位置关系的十个目标物的可能性相较于具有相似位置关系的两个目标物的可能性将大大降低,因此若通过LCS算法确定出十个目标物的第一子序列,这十个目标物为不同传感器针对相同的十个目标物的检测结果的可能性更大,降低了匹配错误的可能性。
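下面给出通过LCS算法从两个触碰分区序列中提取第一子序列的一个简化示意（Python）。其中的输入序列为虚构的示意数据，实际实现中还需要结合第一阈值对重合度进行判断，并对误检、漏检等情况做容错处理：

```python
# 基于动态规划的LCS（最长公共子序列）示意实现，用于提取第一子序列
def longest_common_subsequence(seq_a, seq_b):
    """返回 seq_a 与 seq_b 的最长公共子序列。"""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]      # dp[i][j]：seq_a前i个与seq_b前j个元素的LCS长度
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if seq_a[i - 1] == seq_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs, i, j = [], m, n                             # 回溯得到子序列本身
    while i > 0 and j > 0:
        if seq_a[i - 1] == seq_b[j - 1]:
            lcs.append(seq_a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return lcs[::-1]

touch_zone_a = [3, 3, 1, 3, 1]              # 摄像头侧触碰分区序列片段（示意数据）
touch_zone_b = [1, 3, 3, 1, 3, 1, 3]        # 雷达侧触碰分区序列片段（示意数据，含其他目标）
print(longest_common_subsequence(touch_zone_a, touch_zone_b))   # [3, 3, 1, 3, 1]
```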
708.根据第一子序列中目标物的阵位信息,将来自传感器A和传感器B的针对同一目标物的检测信息融合。
第一子序列是由多个触碰点分区信息组成的,对于第一子序列中的每个触碰点分区信息,均可在触碰分区序列A和触碰分区序列B中找到对应的数据。例如触碰分区序列A中序号为4的触碰点分区信息,自身的触碰点分区信息为3,前后的触碰点分区信息均为1。
在本申请实施例中,触碰分区序列或第一子序列中的单个触碰点分区信息也称为阵位信息,表示单个目标物在目标物集合中的位置。
在本申请实施例中,自身的分区信息称为自身特征,前后或附近的分区信息称为周边特征。除了前后一个触碰点分区信息,周边特征也可包括附近更多的触碰点分区信息,此处不做限定。
在触碰分区序列B中也可以找到具有相同自身特征和周边特征的触碰点分区信息,即序号为13的那个触碰点分区信息。由于两个触碰点分区信息都在第一子序列中,且具有相同的自身特征与周边特征,可以认为两个触碰点分区信息反映了同一个物体。因此处理设备可以将序号4对应的检测信息,与序号13对应的检测信息融合,得到该目标物的融合信息。
例如，触碰分区序列A对应的摄像头，可以检测到序号4对应物体的大小，形状等外观信息。具体到车辆上，可以检测到序号4对应车辆的型号，颜色，车牌等信息。触碰分区序列B对应的雷达，可以检测到序号13对应物体的移动速度等信息。具体到车辆上，可以检测到序号13对应车辆的车速，加速度等信息。处理设备可以将前述型号，颜色，车牌等信息与车速，加速度等信息融合，得到该车辆的融合信息。
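以上述序号4与序号13的对应关系为例，下面给出将两路检测信息合并的一个极简示意（Python），其中的字段名与数值均为虚构示例：

```python
# 按匹配关系合并来自摄像头与雷达的检测信息（数据为虚构）
camera_info = {4: {"型号": "轿车", "颜色": "白色", "车牌": "A12345"}}   # 传感器A（摄像头）按序号索引的检测信息
radar_info = {13: {"车速": 21.5, "加速度": 0.8}}                        # 传感器B（雷达）按序号索引的检测信息
matched_pairs = [(4, 13)]      # 由第一子序列中相同的自身特征与周边特征得到的序号对应关系

def fuse(camera_info, radar_info, matched_pairs):
    """按匹配关系合并两路检测信息，返回每个目标物的融合信息。"""
    fused = []
    for idx_a, idx_b in matched_pairs:
        record = dict(camera_info.get(idx_a, {}))   # 外观类信息
        record.update(radar_info.get(idx_b, {}))    # 运动类信息
        fused.append(record)
    return fused

print(fuse(camera_info, radar_info, matched_pairs))
# [{'型号': '轿车', '颜色': '白色', '车牌': 'A12345', '车速': 21.5, '加速度': 0.8}]
```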
在本申请实施例中,时序信息表示不同目标物触碰基准线的前后关系,触碰点分区信息表示不同目标物触碰基准线的左右关系,通过表示前后关系的时序信息与表示左右关系的触碰点分区信息,将多个目标物触碰基准线的位置关系,体现在触碰分区序列中。由于时序信息与触碰点分区信息均为具体的数值,触碰分区序列即为反映目标物之间位置关系的数值的集合。根据来自不同传感器的检测信息,获取对应的触碰分区序列。得到的多个触碰分区序列即为多个数值集合,确定数值集合的重合度满足预设阈值,只需要比对对应的数值即可,不需要进行复杂的运算,提升了匹配目标阵型信息的效率。
在本申请实施例中,除了根据时序信息和触碰点分区信息确定目标阵型信息,还可以根据时序信息和触碰时间间隔信息确定目标阵型信息。
2)根据时序信息和触碰时间间隔信息确定第二子序列。
请参阅图9,图9为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括:
901.从传感器A(摄像头)处获取检测信息A。
902.从传感器B(雷达)处获取检测信息B。
对于步骤901和步骤902的描述参见图7所示实施例的步骤701和步骤702,此处不再赘述。
903.根据检测信息A获取物体像素触碰基准线的时序信息A与触碰时间间隔信息A。
由于检测信息A是由像素组成的画面,触线信息即为物体的像素触碰基准线的信息。处理设备可以根据检测信息A获取物体像素触碰基准线的时序信息A与触碰时间间隔信息A。
请参阅图10,图10为本申请实施例提供的信息处理方法的一个应用场景示意图,如图10所示,序号一栏表示各物体触碰基准线的前后顺序,即时序信息A;触碰时间间隔信息一栏表示各物体触碰基准线,与前一物体触碰基准线的时间差,即触碰时间间隔信息A,其中,触碰时间间隔信息以秒为单位。除了秒,触碰时间间隔信息也可以以毫秒作为单位,此处不作限定。
904.根据检测信息B获取物体触碰基准线的时序信息B与触碰时间间隔信息B。
由于检测信息B为雷达检测到的物体在检测范围内的画面,触线信息即为物体触碰基准线的信息。处理设备可以根据检测信息B获取物体触碰基准线的时序信息B与触碰时间间隔信息B。
如图10所示,序号一栏表示各物体触碰基准线的前后顺序,即时序信息B;触碰时间间隔信息一栏表示各物体触碰基准线,与前一物体触碰基准线的时间差,即触碰时间间隔信息B,其中,触碰时间间隔信息以秒为单位。除了秒,触碰时间间隔信息也可以以毫秒作为单位,此处不作限定。
在本申请实施例中,步骤901与步骤902没有必然的先后顺序,步骤901可以在步骤902之前或之后执行,也可以步骤901与步骤902同时执行,此处不做限定。步骤903与步骤904也没有必然的先后顺序,步骤903可以在步骤904之前或之后执行,也可以步骤903与步骤904同时执行,只要步骤903在步骤901之后执行,步骤904在步骤902之后执行即可,此处不做限定。
905.根据时序信息A和触碰时间间隔信息A获取触碰间隔序列A。
如图10所示,根据时序信息A,可以将触碰时间间隔信息A按照时序先后排列,获取触碰间隔序列A。
906.根据时序信息B和触碰时间间隔信息B获取触碰间隔序列B。
如图10所示,根据时序信息B,可以将触碰时间间隔信息B按照时序先后排列,获取触碰间隔序列B。
在本申请实施例中,步骤905与步骤906没有必然的先后顺序,步骤905可以在步骤906之前或之后执行,也可以步骤905与步骤906同时执行,只要步骤905在步骤903之后执行,步骤906在步骤904之后执行即可,此处不做限定。
907.根据触碰间隔序列A和触碰间隔序列B获取第二子序列。
触碰间隔序列A和触碰间隔序列B本质上是两个数列,处理设备可以比对这两个数列,当发现两个数列中包括相同或重合度较高的序列片段时,可以认为该序列片段为两个序列的公共部分。在本申请实施例中,该序列片段也称为第二子序列。由于触碰间隔序列体现了传感器所检测到的物体之间的位置关系,即物体之间的阵型信息。当两个数列中包括相同或重合度较高的序列片段,表示这两个数列中的该片段所对应的物体集合,具有相同的位置关系,即具有相同的阵型信息。不同的传感器检测到相同或相似的阵型信息,即可认为这两个传感器检测到的是同一个物体集合。
在本申请实施例中,第二子序列也称为目标阵型信息,表示多个传感器检测到的相同或相似的阵型信息。
具体的,由于传感器存在一定的漏检率,因此并不要求第二子序列与触碰间隔序列A和触碰间隔序列B中的片段完全重合,只要保证第二子序列与触碰间隔序列A和触碰间隔序列B的重合度均高于第二阈值即可。在本申请实施例中,重合度也称为相似度。具体的,第二阈值可以是90%,除了90%,第二阈值还可以是其他数值,例如95%,99%等,此处不做限定。
例如,图10中所示的触碰间隔序列A和触碰间隔序列B,均包含(2.0s,0.3s,1.9s,0.4s)的序列片段。处理设备可以将该片段作为第二子序列。此时,第二子序列与触碰间隔序列A和触碰间隔序列B的重合度均为100%。
可选的,可以通过LCS算法确定第二子序列。在本申请实施例中,可以通过LCS算法获取多个触碰间隔序列的所有公共序列,从而实现对多个触碰间隔序列的相同位置特征的匹配。由于LCS算法计算的是最长的公共子序列,因此,通过LCS算法计算出的第二子序列,可以包括与前述多个触碰间隔序列的重合度均高于第二阈值的子序列中,长度最长的子序列。
在本申请实施例中,可以通过LCS算法确定出多个触碰间隔序列的所有公共序列,从而匹配出所有具有相同位置特征的触碰间隔序列的片段。若有多个片段为公共序列,在这些公共序列之中夹杂着一些非公共序列,就可以将这些夹杂在公共序列中的非公共序列标识出来。其中,非公共序列在不同传感器中体现了不同的位置关系。在这种情况下,可以认为公共序列中夹杂的非公共序列,其出现的原因为传感器的误检或漏检,从而对非公共序列容错,即,将非公共序列在不同传感器检测到的目标物对应,实现检测信息的融合。
在本申请实施例中,通过LCS算法确定出的第二子序列,可以包括与多个触碰间隔序列的重合度均高于第二阈值的子序列中,长度最长的子序列。由于目标物之间的位置关系可能存在偶然性的相似,确定出的子序列长度越长,具有相似位置关系的可能性越低,就越能规避这种偶然性,通过LCS算法确定出最长的子序列,就能准确地确定出相同目标物集合的目标阵型信息。
例如，两个目标物的位置关系有可能存在偶然性的相似，但若将标准提升为十个目标物之间的位置关系具有高重合度，具有相似位置关系的十个目标物的可能性相较于具有相似位置关系的两个目标物的可能性将大大降低，因此若通过LCS算法确定出十个目标物的第二子序列，这十个目标物为不同传感器针对相同的十个目标物的检测结果的可能性更大，降低了匹配错误的可能性。
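由于触碰时间间隔为浮点数值，不同传感器的测量可能存在微小偏差，工程实现中可以在序列比较时引入容差。下面给出一个带容差的LCS示意实现（Python），容差取值与输入序列均为假设的示例：

```python
# 带容差的LCS：|a-b| <= eps 时视为相等，用于比较触碰间隔序列
def lcs_with_tolerance(seq_a, seq_b, eps=0.05):
    """返回按容差eps视为相等时的最长公共子序列（取seq_a侧的取值）。"""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(seq_a[i - 1] - seq_b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs, i, j = [], m, n
    while i > 0 and j > 0:
        if abs(seq_a[i - 1] - seq_b[j - 1]) <= eps:
            lcs.append(seq_a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return lcs[::-1]

gap_a = [2.0, 0.3, 1.9, 0.4]            # 摄像头侧触碰间隔序列片段（示意数据）
gap_b = [0.7, 2.02, 0.31, 1.88, 0.41]   # 雷达侧触碰间隔序列片段（示意数据，含测量偏差）
print(lcs_with_tolerance(gap_a, gap_b))  # [2.0, 0.3, 1.9, 0.4]
```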
908.根据第二子序列中目标物的阵位信息,将来自传感器A和传感器B的针对同一目标物的检测信息融合。
第二子序列是由多个触碰时间间隔信息组成的,对于第二子序列中的每个触碰时间间隔信息,均可在触碰间隔序列A和触碰间隔序列B中找到对应的数据。例如触碰间隔序列A中序号为3的触碰时间间隔信息,自身的触碰时间间隔信息为0.3s,前后的触碰时间间隔信息分别为2.0s和1.9s。
在本申请实施例中,触碰间隔序列或第二子序列中的单个触碰时间间隔信息也称为阵位信息,表示单个目标物在目标物集合中的位置。
在本申请实施例中,自身的触碰时间间隔信息称为自身特征,前后或附近的触碰时间间隔信息称为周边特征。除了前后一个触碰时间间隔信息,周边特征也可包括附近更多的触碰时间间隔信息,此处不做限定。
在触碰间隔序列B中也可以找到具有相同自身特征和周边特征的触碰时间间隔信息,即序号为12的那个触碰时间间隔信息。由于两个触碰时间间隔信息都在第二子序列中,且具有相同的自身特征与周边特征,可以认为两个触碰时间间隔信息反映了同一个物体。因此处理设备可以将序号3对应的检测信息,与序号12对应的检测信息融合,得到该目标物的融合信息。
例如，触碰间隔序列A对应的摄像头，可以检测到序号3对应物体的大小，形状等外观信息。具体到车辆上，可以检测到序号3对应车辆的型号，颜色，车牌等信息。触碰间隔序列B对应的雷达，可以检测到序号12对应物体的移动速度等信息。具体到车辆上，可以检测到序号12对应车辆的车速，加速度等信息。处理设备可以将前述型号，颜色，车牌等信息与车速，加速度等信息融合，得到该车辆的融合信息。
在本申请实施例中,时序信息表示不同目标物触碰基准线的前后关系,触碰时间间隔信息表示不同目标物触碰基准线的前后时间间隔,通过表示前后关系的时序信息与表示前后时间间隔的触碰时间间隔信息,将多个目标物触碰基准线的位置关系,体现在触碰间隔序列中。由于时序信息与触碰时间间隔信息均为具体的数值,触碰间隔序列即为反映目标物之间位置关系的数值的集合。根据来自不同传感器的检测信息,获取对应的触碰间隔序列。得到的多个触碰间隔序列即为多个数值集合,确定数值集合的重合度满足预设阈值,只需要比对对应的数值即可,不需要进行复杂的运算,提升了匹配目标阵型信息的效率。
在本申请实施例中,除了根据前述两种方法确定目标阵型信息,还可以根据时序信息和触碰点位置信息确定目标阵型信息。
3)根据时序信息和触碰点位置信息确定第三子序列。
请参阅图11,图11为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括:
1101.从传感器A(摄像头)处获取检测信息A。
1102.从传感器B(雷达)处获取检测信息B。
对于步骤1101和步骤1102的描述参见图7所示实施例的步骤701和步骤702,此处不再赘述。
1103.根据检测信息A获取物体像素触碰基准线的时序信息A与触碰点位置信息A。
由于检测信息A是由像素组成的画面,触线信息即为物体的像素触碰基准线的信息。处理设备可以根据检测信息A获取物体像素触碰基准线的时序信息A与触碰点位置信息A。触碰点位置信息A表示触碰点在基准线上的位置。具体的,触碰点位置信息A可以表示不同物体的触碰点之间的位置关系,具体可以表示触碰点之间的左右关系,从而就能体现物体之间的左右关系。
可选的,为了体现物体之间的左右位置关系,触碰点位置信息A可以表示触碰点与基准线上的基准点之间的距离,通过不同触碰点的距离体现触碰点之间的位置关系。本申请实施例将以触碰点与基准线左端点之间的距离为例,但并不造成对触碰点位置信息的限定,触碰点位置信息可以表示触碰点与基准线上任一点之间的位置关系,此处不作限定。
请参阅图12,图12为本申请实施例提供的信息处理方法的一个应用场景示意图,如图12所示,序号一栏表示各物体触碰基准线的前后顺序,即时序信息A;触碰点位置信息一栏表示各物体触碰基准线的触碰点,与基准线左端点之间的距离,即触碰点位置信息A。在本申请实施例中,触碰点位置信息可以表示触碰点与基准线上任一点之间的位置关系,此处不作限定。
1104.根据检测信息B获取物体触碰基准线的时序信息B与触碰点位置信息B。
由于检测信息B为雷达检测到的物体在检测范围内的画面,触线信息即为物体触碰基准线的信息。处理设备可以根据检测信息B获取物体触碰基准线的时序信息B与触碰点位置信息B。对于触碰点位置信息B的描述,参见步骤1103中对触碰点位置信息A的描述,此处不再赘述。
可选的，如图12所示，序号一栏表示各物体触碰基准线的前后顺序，即时序信息B；触碰点位置信息一栏表示各物体触碰基准线的触碰点，与基准线左端点之间的距离，即触碰点位置信息B。在本申请实施例中，触碰点位置信息可以表示触碰点与基准线上任一点之间的位置关系，以体现不同触碰点之间的位置关系，此处不作限定。
在本申请实施例中,步骤1101与步骤1102没有必然的先后顺序,步骤1101可以在步骤1102之前或之后执行,也可以步骤1101与步骤1102同时执行,此处不做限定。步骤1103与步骤1104也没有必然的先后顺序,步骤1103可以在步骤1104之前或之后执行,也可以步骤1103与步骤1104同时执行,只要步骤1103在步骤1101之后执行,步骤1104在步骤1102之后执行即可,此处不做限定。
1105.根据时序信息A和触碰点位置信息A获取触碰位置序列A。
如图12所示,根据时序信息A,可以将触碰点位置信息A按照时序先后排列,获取触碰位置序列A。
1106.根据时序信息B和触碰点位置信息B获取触碰位置序列B。
如图12所示,根据时序信息B,可以将触碰点位置信息B按照时序先后排列,获取触碰位置序列B。
在本申请实施例中,步骤1105与步骤1106没有必然的先后顺序,步骤1105可以在步骤1106之前或之后执行,也可以步骤1105与步骤1106同时执行,只要步骤1105在步骤1103之后执行,步骤1106在步骤1104之后执行即可,此处不做限定。
1107.根据触碰位置序列A和触碰位置序列B获取第三子序列。
触碰位置序列A和触碰位置序列B本质上是两个数列,处理设备可以比对这两个数列,当发现两个数列中包括相同或重合度较高的序列片段时,可以认为该序列片段为两个序列的公共部分。在本申请实施例中,该序列片段也称为第三子序列。由于触碰位置序列体现了传感器所检测到的物体之间的位置关系,即物体之间的阵型信息。当两个数列中包括相同或重合度较高的序列片段,表示这两个数列中的该片段所对应的物体集合,具有相同的位置关系,即具有相同的阵型信息。不同的传感器检测到相同或相似的阵型信息,即可认为这两个传感器检测到的是同一个物体集合。
在本申请实施例中,第三子序列也称为目标阵型信息,表示多个传感器检测到的相同或相似的阵型信息。
具体的,由于传感器存在一定的漏检率,因此并不要求第三子序列与触碰位置序列A和触碰位置序列B中的片段完全重合,只要保证第三子序列与触碰位置序列A和触碰位置序列B的重合度均高于第三阈值即可。在本申请实施例中,重合度也称为相似度。具体的,第三阈值可以是90%,除了90%,第三阈值还可以是其他数值,例如95%,99%等,此处不做限定。
例如,图12中所示的触碰位置序列A和触碰位置序列B,均包含(7.5m,7.3m,1.5m,7.6m,1.3m)的序列片段。处理设备可以将该片段作为第三子序列。此时,第三子序列与触碰位置序列A和触碰位置序列B的重合度均为100%。
可选的，可以通过LCS算法确定第三子序列。在本申请实施例中，可以通过LCS算法获取多个触碰位置序列的所有公共序列，从而实现对多个触碰位置序列的相同位置特征的匹配。由于LCS算法计算的是最长的公共子序列，因此，通过LCS算法计算出的第三子序列，可以包括与前述多个触碰位置序列的重合度均高于第三阈值的子序列中，长度最长的子序列。
1108.根据第三子序列中目标物的阵位信息,将来自传感器A和传感器B的针对同一目标物的检测信息融合。
第三子序列是由多个触碰点位置信息组成的,对于第三子序列中的每个触碰点位置信息,均可在触碰位置序列A和触碰位置序列B中找到对应的数据。例如触碰位置序列A中序号为2的触碰点位置信息,自身的触碰点位置信息为7.3m,前后的触碰点位置信息分别为7.5m和1.5m。
在本申请实施例中,触碰位置序列或第三子序列中的单个触碰点位置信息也称为阵位信息,表示单个目标物在目标物集合中的位置。
在本申请实施例中,自身的触碰点位置信息称为自身特征,前后或附近的触碰点位置信息称为周边特征。除了前后一个触碰点位置信息,周边特征也可包括附近更多的触碰点位置信息,此处不做限定。
在触碰位置序列B中也可以找到具有相同自身特征和周边特征的触碰点位置信息,即序号为11的那个触碰点位置信息。由于两个触碰点位置信息都在第三子序列中,且具有相同的自身特征与周边特征,可以认为两个触碰点位置信息反映了同一个物体。因此处理设备可以将序号2对应的检测信息,与序号11对应的检测信息融合,得到该目标物的融合信息。
例如,触碰位置序列A对应的摄像头,可以检测到序号2对应物体的大小,形状等外观信息。具体到车辆上,可以检测到序号2对应车辆的型号,颜色,车牌等信息。触碰位置序列B对应的雷达,可以检测到序号11对应物体的移动速度等信息。具体到车辆上,可以检测到序号11对应车辆的车速,加速度等信息。处理设备可以将前述型号,颜色,车牌等信息与车速,加速度等信息融合,得到该车辆的融合信息。
在本申请实施例中，可以通过LCS算法确定出多个触碰位置序列的所有公共序列，从而匹配出所有具有相同位置特征的触碰位置序列的片段。若有多个片段为公共序列，在这些公共序列之中夹杂着一些非公共序列，就可以将这些夹杂在公共序列中的非公共序列标识出来。其中，非公共序列在不同传感器中体现了不同的位置关系。在这种情况下，可以认为公共序列中夹杂的非公共序列，其出现的原因为传感器的误检或漏检，从而对非公共序列容错，即，将非公共序列在不同传感器检测到的目标物对应，实现检测信息的融合。
在本申请实施例中,通过LCS算法确定出的第三子序列,可以包括与多个触碰位置序列的重合度均高于第三阈值的子序列中,长度最长的子序列。由于目标物之间的位置关系可能存在偶然性的相似,确定出的子序列长度越长,具有相似位置关系的可能性越低,就越能规避这种偶然性,通过LCS算法确定出最长的子序列,就能准确地确定出相同目标物集合的目标阵型信息。
例如，两个目标物的位置关系有可能存在偶然性的相似，但若将标准提升为十个目标物之间的位置关系具有高重合度，具有相似位置关系的十个目标物的可能性相较于具有相似位置关系的两个目标物的可能性将大大降低，因此若通过LCS算法确定出十个目标物的第三子序列，这十个目标物为不同传感器针对相同的十个目标物的检测结果的可能性更大，降低了匹配错误的可能性。
在本申请实施例中,触碰点位置信息表示不同目标物触碰基准线的左右关系,并且可以是连续的数值或数据。因此,基于该连续的数值或数据,就可以更准确的将目标物的阵型信息区别于其他非目标物的阵型信息,从而更加准确地实现针对同一目标物的检测信息的融合。
并且,可以通过该连续的数值或数据,分析或计算出目标物之间的运动趋势,除了运动趋势,还可以计算出其他信息,例如目标物的运动轨迹等,此处不做限定。
在本申请实施例中,除了确定对应的子序列,还可以将子序列结合起来提升阵型匹配的准确性。
4)根据第一子序列和第二子序列确定交集。
请参阅图13,图13为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括:
1301.从传感器A(摄像头)处获取检测信息A。
1302.从传感器B(雷达)处获取检测信息B。
对于步骤1301和步骤1302的描述参见图7所示实施例的步骤701和步骤702,此处不再赘述。
1303.根据检测信息A获取物体像素触碰基准线的时序信息A,触碰点分区信息A和触碰时间间隔信息A。
由于检测信息A是由像素组成的画面,触线信息即为物体的像素触碰基准线的信息。处理设备可以根据检测信息A获取物体像素触碰基准线的时序信息A,触碰点分区信息A和触碰时间间隔信息A。
请参阅图14,图14为本申请实施例提供的信息处理方法的一个应用场景示意图,如图14所示,序号一栏表示各物体触碰基准线的前后顺序,即时序信息A;触碰点分区信息一栏表示各物体触碰基准线时,触碰点在基准线中的分区信息,即触碰点分区信息A,其中,1表示1车道,3表示3车道。触碰时间间隔信息一栏表示各物体触碰基准线,与前一物体触碰基准线的时间差,即触碰时间间隔信息A,其中,触碰时间间隔信息以秒为单位。除了秒,触碰时间间隔信息也可以以毫秒作为单位,此处不作限定。
1304.根据检测信息B获取物体触碰基准线的时序信息B,触碰点分区信息B和触碰时间间隔信息B。
由于检测信息B为雷达检测到的物体在检测范围内的画面,触线信息即为物体触碰基准线的信息。处理设备可以根据检测信息B获取物体触碰基准线的时序信息B,触碰点分区信息B和触碰时间间隔信息B。
如图14所示，序号一栏表示各物体触碰基准线的前后顺序，即时序信息B；触碰点分区信息一栏表示各物体触碰基准线时，触碰点在基准线中的分区信息，即触碰点分区信息B，其中，1表示1车道，3表示3车道。触碰时间间隔信息一栏表示各物体触碰基准线，与前一物体触碰基准线的时间差，即触碰时间间隔信息B，其中，触碰时间间隔信息以秒为单位。除了秒，触碰时间间隔信息也可以以毫秒作为单位，此处不作限定。
在本申请实施例中,步骤1301与步骤1302没有必然的先后顺序,步骤1301可以在步骤1302之前或之后执行,也可以步骤1301与步骤1302同时执行,此处不做限定。步骤1303与步骤1304也没有必然的先后顺序,步骤1303可以在步骤1304之前或之后执行,也可以步骤1303与步骤1304同时执行,只要步骤1303在步骤1301之后执行,步骤1304在步骤1302之后执行即可,此处不做限定。
1305.根据时序信息A和触碰点分区信息A获取触碰分区序列A，根据时序信息A和触碰时间间隔信息A获取触碰间隔序列A。
处理设备根据时序信息A和触碰点分区信息A获取触碰分区序列A的步骤，参见图7所示实施例的步骤705，此处不再赘述。
处理设备根据时序信息A和触碰时间间隔信息A获取触碰间隔序列A的步骤,参见图9所示实施例的步骤905,此处不再赘述。
1306.根据时序信息B和触碰点分区信息B获取触碰分区序列B，根据时序信息B和触碰时间间隔信息B获取触碰间隔序列B。
处理设备根据时序信息B和触碰点分区信息B获取触碰分区序列B的步骤，参见图7所示实施例的步骤706，此处不再赘述。
处理设备根据时序信息B和触碰时间间隔信息B获取触碰间隔序列B的步骤,参见图9所示实施例的步骤906,此处不再赘述。
在本申请实施例中,步骤1305与步骤1306没有必然的先后顺序,步骤1305可以在步骤1306之前或之后执行,也可以步骤1305与步骤1306同时执行,只要步骤1305在步骤1303之后执行,步骤1306在步骤1304之后执行即可,此处不做限定。
1307.根据触碰分区序列A和触碰分区序列B获取第一子序列。
触碰分区序列A和触碰分区序列B本质上是两个数列,处理设备可以比对这两个数列,当发现两个数列中包括相同或重合度较高的序列片段时,可以认为该序列片段为两个序列的公共部分。在本申请实施例中,该序列片段也称为第一子序列。由于触碰分区序列体现了传感器所检测到的物体之间的位置关系,即物体之间的阵型信息。当两个数列中包括相同或重合度较高的序列片段,表示这两个数列中的该片段所对应的物体集合,具有相同的位置关系,即具有相同的阵型信息。不同的传感器检测到相同或相似的阵型信息,即可认为这两个传感器检测到的是同一个物体集合。
在本申请实施例中,第一子序列也称为目标阵型信息,表示多个传感器检测到的相同或相似的阵型信息。
具体的,由于传感器存在一定的漏检率,因此并不要求第一子序列与触碰分区序列A和触碰分区序列B中的片段完全重合,只要保证第一子序列与触碰分区序列A和触碰分区序列B的重合度均高于第一阈值即可。在本申请实施例中,重合度也称为相似度。具体的,第一阈值可以是90%,除了90%,第一阈值还可以是其他数值,例如95%,99%等,此处不做限定。
例如,图8中所示的触碰分区序列A和触碰分区序列B,均包含(3,3,1,3,1)的序列片段。处理设备可以将该片段作为第一子序列。此时,第一子序列与触碰分区序列A和触碰分区序列B的重合度均为100%。
1308.根据触碰间隔序列A和触碰间隔序列B获取第二子序列。
触碰间隔序列A和触碰间隔序列B本质上是两个数列,处理设备可以比对这两个数列,当发现两个数列中包括相同或重合度较高的序列片段时,可以认为该序列片段为两个序列的公共部分。在本申请实施例中,该序列片段也称为第二子序列。由于触碰间隔序列体现了传感器所检测到的物体之间的位置关系,即物体之间的阵型信息。当两个数列中包括相同或重合度较高的序列片段,表示这两个数列中的该片段所对应的物体集合,具有相同的位置关系,即具有相同的阵型信息。不同的传感器检测到相同或相似的阵型信息,即可认为这两个传感器检测到的是同一个物体集合。
在本申请实施例中,第二子序列也称为目标阵型信息,表示多个传感器检测到的相同或相似的阵型信息。
具体的,由于传感器存在一定的漏检率,因此并不要求第二子序列与触碰间隔序列A和触碰间隔序列B中的片段完全重合,只要保证第二子序列与触碰间隔序列A和触碰间隔序列B的重合度均高于第二阈值即可。在本申请实施例中,重合度也称为相似度。具体的,第二阈值可以是90%,除了90%,第二阈值还可以是其他数值,例如95%,99%等,此处不做限定。
例如,图10中所示的触碰间隔序列A和触碰间隔序列B,均包含(2.0s,0.3s,1.9s,0.4s)的序列片段。处理设备可以将该片段作为第二子序列。此时,第二子序列与触碰间隔序列A和触碰间隔序列B的重合度均为100%。
1309.确定第一子序列所对应的第一物体集合,与第二子序列所对应的第二物体集合的交集。
第一子序列(3,3,1,3,1)所指示的物体,在传感器A侧,序号为1至5,对应于传感器B侧的序号为10至14的物体。在本申请实施例中,与第一子序列对应的物体集合也称为第一物体集合。
第二子序列(2.0s,0.3s,1.9s,0.4s)所指示的物体,在传感器A侧,序号为2至5,对应于传感器B侧的序号为11至14的物体。在本申请实施例中,与第二子序列对应的物体集合也称为第二物体集合。
取两个物体集合的交集,即在传感器A侧,取序号1至5与序号2至5的物体的交集,即确定交集为序号2至5的目标物的集合。对应的,在传感器B侧,该交集即为序号11至14的目标物的集合。在本申请实施例中,第一物体集合与第二物体集合的交集也称为目标物体集合。
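取交集这一步可以直接通过集合运算实现，下面给出与上述例子对应的一个极简示意（Python，序号为示意数据）：

```python
# 取第一子序列与第二子序列各自对应物体集合的交集，得到目标物体集合
objs_sub1_a = {1, 2, 3, 4, 5}        # 第一子序列在传感器A侧对应的物体序号（第一物体集合，A侧）
objs_sub2_a = {2, 3, 4, 5}           # 第二子序列在传感器A侧对应的物体序号（第二物体集合，A侧）
objs_sub1_b = {10, 11, 12, 13, 14}   # 第一子序列在传感器B侧对应的物体序号
objs_sub2_b = {11, 12, 13, 14}       # 第二子序列在传感器B侧对应的物体序号

target_set_a = objs_sub1_a & objs_sub2_a   # 传感器A侧的目标物体集合：{2, 3, 4, 5}
target_set_b = objs_sub1_b & objs_sub2_b   # 传感器B侧的目标物体集合：{11, 12, 13, 14}
print(sorted(target_set_a), sorted(target_set_b))
```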
1310.根据交集中物体的阵位信息,将来自传感器A和传感器B的针对同一目标物的检测信息融合。
第一子序列是由多个触碰点分区信息组成的，对于第一子序列中的每个触碰点分区信息，均可在触碰分区序列A和触碰分区序列B中找到对应的数据。例如触碰分区序列A中序号为4的触碰点分区信息，自身的触碰点分区信息为3，前后的触碰点分区信息均为1。
在本申请实施例中,触碰分区序列或第一子序列中的单个触碰点分区信息也称为阵位信息,表示单个目标物在目标物集合中的位置。
在本申请实施例中,自身的分区信息称为自身特征,前后或附近的分区信息称为周边特征。除了前后一个触碰点分区信息,周边特征也可包括附近更多的触碰点分区信息,此处不做限定。
在触碰分区序列B中也可以找到具有相同自身特征和周边特征的触碰点分区信息,即序号为13的那个触碰点分区信息。由于两个触碰点分区信息都在第一子序列中,且具有相同的自身特征与周边特征,可以认为两个触碰点分区信息反映了同一个物体。因此处理设备可以将序号4对应的检测信息,与序号13对应的检测信息融合,得到该目标物的融合信息。
例如,触碰分区序列A对应的摄像头,可以检测到序号4对应物体的大小,形状等外观信息。具体到车辆上,可以检测到序号4对应车辆的型号,颜色,车牌等信息。触碰分区序列B对应的雷达,可以检测到序号13对应物体的移动速度等信息。具体到车辆上,可以检测到序号13对应车辆的车速,加速度等信息。处理设备可以将前述型号,颜色,车牌等信息与车速,加速度等信息融合,得到该车辆的融合信息。
类似的,也可以将第二子序列中具有相同自身特征与周边特征的检测信息融合,具体参见前述根据第一子序列融合的过程,此处不再赘述。
对于第二子序列的自身特征与周边特征的描述,参见图9所示实施例的步骤908,此处不再赘述。
在本申请实施例中，通过第一子序列所对应的第一物体集合，与第二子序列所对应的第二物体集合，确定第一物体集合与第二物体集合的交集，并将该交集作为目标物体集合。该交集中的物体对应于第一子序列，即根据不同传感器的检测信息，都能获取相似的触碰分区信息；同时该交集中的物体对应于第二子序列，也就是说根据不同传感器的检测信息，同时具有相似的触碰时间间隔信息。若根据多个传感器的检测信息，可以获取相似的多种表示物体位置关系的信息，则比只能获取相似的一种表示物体位置关系的信息，检测信息所对应的物体集合为同一物体集合的可能性更高。因此，通过筛选多个子序列对应物体的交集，就可以更准确地将目标物的阵型信息区别于其他非目标物的阵型信息，从而更加准确地实现针对同一目标物的检测信息的融合。
在本申请实施例中,除了取第一子序列对应的物体与第二子序列对应的物体的交集,也可以取其他子序列对应的物体之间的交集,例如第一子序列与第三子序列各自对应的物体之间的交集,或第二子序列与第三子序列各自对应的物体之间的交集,或其他子序列所对应的物体,与第一至第三子序列中任一子序列所对应的物体之间的交集。其中,其他子序列也用于表示物体之间的位置关系,例如物体之间的距离或方向等,此处不做限定。通过取不同子序列所对应物体之间的交集,可以灵活的选取合适的子序列进行运算,提升了方案的可行性与灵活性。
在本申请实施例中，除了取两个子序列各自对应物体之间的交集，也可以取更多子序列各自对应物体之间的交集，例如取第一子序列、第二子序列和第三子序列各自对应物体之间的交集。取的子序列数量越多，说明根据多个传感器的检测信息，可以获取相似的表示物体位置关系的信息的种类越多，检测信息所对应的物体集合为同一物体集合的可能性越高。因此，通过筛选多个子序列对应物体的交集，就可以更准确地将目标物的阵型信息区别于其他非目标物的阵型信息，从而更加准确地实现针对同一目标物的检测信息的融合。
在本申请实施例中,通过位置特征集获取触线信息,由于触线信息是物体触碰基准线的信息,触碰基准线可以获取触碰时间、触碰间隔、触碰位置等包括具体数值或具***置特征的数据。因此,通过多个目标物触线的具体数值或具***置特征,就可以获取触线数据的集合,例如多个触碰时间组成的数列、多个触碰间隔组成的数列或多个触碰位置组成的分布关系等。由于上述触线数据的集合均具有具体的数值或位置特征,不需要再进行其他的数据处理就可以直接运算,从而可以快速地确定出重合度符合预设阈值的目标阵型信息。
在本申请实施例中,除了通过划线法确定阵型信息,也可以通过其他方法确定,例如图像特征匹配法。
2.图像特征匹配法。
对于人来说,阵型信息可以表现为一个整体的形状。对于设备来说,这种抽象出来的整体的形状,可以通过图像特征表示。在本申请实施例中,通过整体的图像特征确定阵型信息的方法,称为图像特征匹配法。
基于图3a所示的检测系统，接下来将结合图15对本申请实施例所示的信息处理方法的步骤进行详细描述。请参阅图15，图15为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括：
1501.从传感器A(摄像头)处获取检测信息A。
1502.从传感器B(雷达)处获取检测信息B。
步骤1501和1502参见图7所示实施例的步骤701和702,此处不再赘述。
1503.根据检测信息A确定初始目标群分布图A。
由于检测信息A是由像素组成的画面，处理设备可以根据画面中的像素分辨不同的物体，并对物体标注特征点，将各特征点组成的形状作为初始目标群分布图A。
具体的,特征点的标注可以遵循统一的规律,例如,对于车辆的标注,可以将车头的中心点作为特征点。除了车头中心点,还可以是其他点,例如车牌中心点等,此处不做限定。
示例地,请参阅图16,图16为本申请实施例提供的信息处理方法的一个应用场景示意图。如图16所示,对车牌中心点进行标注,并连接标注点,形成初始目标群分布图A,该分布图具有一个类似于数字“9”的形状。
可选的,可以通过尺度不变特征转换(scale-invariant feature transform,SIFT)算法提取对应的形状特征,从而获取初始目标群分布图A。
1504.根据检测信息B确定初始目标群分布图B。
由于检测信息B为雷达检测到的物体在检测范围内的画面,雷达检测到的物体在画面中具有标注信息,标注信息即代表对应的物体。处理设备可以将各标注信息在画面中形成的形状作为初始目标群分布图B。
示例地,如图16所示,将标注信息所在的位置连接,形成初始目标群分布图B,该分布图也具有一个类似于数字“9”的形状。
可选的,可以通过SIFT算法提取对应的形状特征,从而获取初始目标群分布图B。
1505.获取初始目标群分布图A的目标群分布图A和初始目标群分布图B的目标群分布图B。
处理设备可以通过视角变化算法,获取初始目标群分布图A的标准视角图,并将初始目标群分布图A的标准视角图作为目标群分布图A。同理,处理设备可以通过视角变化算法,获取初始目标群分布图B的标准视角图,并将初始目标群分布图B的标准视角图作为目标群分布图B。
示例地,如图16所示,将初始目标群分布图B的视角作为标准视角,对初始目标群分布图A进行视角变化,得到与目标群分布图B相同视角的目标群分布图A。
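视角变化算法的具体形式在此不做限定，一种可能的工程实现是利用单应（透视）变换将初始目标群分布图变换到标准视角。下面给出基于OpenCV的一个示意性片段（Python，依赖numpy与opencv-python，点坐标均为虚构示例，仅用于说明思路）：

```python
# 利用若干已匹配的特征点对估计单应矩阵，将初始目标群分布图A中的标注点变换到标准视角
import numpy as np
import cv2

pts_a = np.float32([[100, 400], [300, 420], [520, 410], [110, 150]])    # 初始目标群分布图A中的参考点（虚构）
pts_std = np.float32([[80, 500], [300, 500], [520, 500], [80, 100]])    # 标准视角下的对应点（虚构）

H, _ = cv2.findHomography(pts_a, pts_std)        # 估计A视角到标准视角的单应矩阵

points_a = np.float32([[100, 400], [210, 405], [300, 420], [415, 415], [520, 410]]).reshape(-1, 1, 2)
points_a_std = cv2.perspectiveTransform(points_a, H).reshape(-1, 2)      # 变换后即为目标群分布图A中的标注点
print(points_a_std)
```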
1506.根据目标群分布图A和目标群分布图B确定图像特征集。
目标群分布图A和目标群分布图B为两个形状,处理设备可以对比这两个形状的图像特征,当发现两个图像特征中包括相同或重合度较高的特征集合时,可以认为该特征集合为两个图像特征的公共部分。在本申请实施例中,该特征集合也称为图像特征集。由于图像特征体现了传感器所检测到的物体之间的位置关系,即物体之间的阵型信息。当两个图像特征中包括相同或重合度较高的特征集合,表示这两个图像特征中的该特征集合所对应的物体集合,具有相同的位置关系,即具有相同的阵型信息。不同的传感器检测到相同或相似的阵型信息,即可认为这两个传感器检测到的是同一个物体集合。
在本申请实施例中,图像特征集也称为目标阵型信息,表示多个传感器检测到的相同或相似的阵型信息。
具体的,由于传感器存在一定的漏检率,因此并不要求图像特征集与目标群分布图A和目标群分布图B中的特征完全重合,只要保证图像特征集与目标群分布图A和目标群分布图B的重合度均高于第三阈值即可。在本申请实施例中,重合度也称为相似度。具体的,第三阈值可以是90%,除了90%,第三阈值还可以是其他数值,例如95%,99%等,此处不做限定。
可选的,可以通过人脸识别算法或指纹识别算法匹配不同目标群分布图的图像特征集。
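作为一种可选的工程实现，也可以直接采用通用的图像特征提取与匹配流程（例如SIFT特征加比值检验）来获取图像特征集。下面给出一个示意性片段（Python，基于OpenCV，文件名、参数以及“重合度”的度量方式均为假设）：

```python
# 对两张已渲染的目标群分布图做SIFT特征匹配，并以匹配比例近似表示重合度
import cv2

img_a = cv2.imread("target_group_a.png", cv2.IMREAD_GRAYSCALE)   # 目标群分布图A（假设已渲染为图像）
img_b = cv2.imread("target_group_b.png", cv2.IMREAD_GRAYSCALE)   # 目标群分布图B

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_a, des_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # 比值检验筛选可靠匹配

overlap = len(good) / max(min(len(kp_a), len(kp_b)), 1)            # 一种简化的“重合度”度量（示例）
print(f"匹配点数: {len(good)}, 重合度近似值: {overlap:.2f}")          # 可与第三阈值比较，判断是否为同一目标群
```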
1507.根据图像特征集中目标物的阵位信息,将来自传感器A和传感器B的针对同一目标物的检测信息融合。
图像特征集是由多个标注信息或标注点组成的,对于图像特征集中的每个标注信息或标注点,均可在目标群分布图A和目标群分布图B中找到对应的数据。例如图16中,目标群分布图A中形状“9”底部的那个标注点。
在本申请实施例中,目标群分布图或图像特征集中的单个标注信息或标注点也称为阵位信息,表示单个目标物在目标物集合中的位置。
在目标群分布图B中也可以找到具有相同位置的标注信息,即目标群分布图B中形状“9”底部的那个标注信息。由于这两个标注信息和标注点都在图像特征集中,且具有相同的位置特征,可以认为这两个标注信息和标注点反映了同一个物体。因此处理设备可以将形状“9”底部的那个标注点所对应的检测信息,与形状“9”底部的那个标注信息所对应的检测信息融合,得到该目标物的融合信息。
例如,目标群分布图A对应的摄像头,可以检测到该物体的大小,形状等外观信息。具体到车辆上,可以检测到对应车辆的型号,颜色,车牌等信息。目标群分布图B对应的雷达,可以检测到该物体的移动速度等信息。具体到车辆上,可以检测到对应车辆的车速,加速度等信息。处理设备可以将前述型号,颜色,车牌等信息与车速,加速度等信息融合,得到该车辆的融合信息。
在本申请实施例中,根据来自不同传感器的检测信息获取对应的多个初始目标群分布图,并通过视角变化算法获取对应的多个目标群分布图,再获取多个目标群分布图的图像特征集,并将该图像特征集作为目标阵型信息。通过来源于多个传感器的多个目标群分布图,确定与多个目标群分布图的重合度均高于预设阈值的图像特征集。由于图像特征可以直观地反映图像中所显示物体之间的位置关系,因此通过多个目标群分布图确定图像特征集,可以直观地反映具有相似位置关系的检测结果,也就可以直观地将不同传感器对同一目标群的检测结果匹配出来,从而准确地实现检测信息的融合。
在本申请实施例中,也可以将图像特征匹配法与划线法结合,得到更准确的结果。
3.图像特征匹配法与划线法结合。
请参阅图17,图17为本申请实施例提供的信息处理方法的一个流程示意图。该方法包括:
1701.从传感器A(摄像头)处获取检测信息A。
1702.从传感器B(雷达)处获取检测信息B。
步骤1701和1702参见图7所示实施例的步骤701和702,此处不再赘述。
1703.根据检测信息A获取触线信息A。
在图6所示实施例中已说明,触线信息包括物体触碰基准线的时序信息,触碰点分区信息,触碰点位置信息,触碰时间间隔信息等。处理设备可以根据检测信息获取前述触线信息中的任意多种,例如,可以根据检测信息A获取时序信息A和触碰点分区信息A。对于时序信息A和触碰点分区信息A的获取过程,参见图7所示实施例的步骤703,此处不再赘述。
除了获取时序信息A和触碰点分区信息A,处理设备还可以获取其他的触线信息,例如如图9所示实施例中步骤903所示的时序信息A和触碰时间间隔信息A,或如图11所示实施例中步骤1103所示的时序信息A和触碰点位置信息A,或如图13所示实施例中步骤1303所示的时序信息A,触碰点分区信息A和触碰时间间隔信息A等,此处不做限定。
1704.根据检测信息B获取触线信息B。
与步骤1703相对应，处理设备根据检测信息A获取了哪些类型的触线信息，就对应地要根据检测信息B获取相同类型的触线信息，获取触线信息的过程，参见前述图7，图9，图11或图13所示的实施例，此处不再赘述。
1705.根据触线信息A确定初始目标群分布图A。
物体触碰基准线只发生在一瞬间,因此触线信息可以反映检测信息的时刻。处理设备可以根据触线信息A所反映时刻的检测信息A,确定初始目标群分布图A。此处获取的初始目标群分布图A,反映了触线信息A所在时刻的阵型信息。获取初始目标群分布图A的过程,参见图15所示实施例的步骤1503,此处不再赘述。
1706.根据触线信息B确定初始目标群分布图B。
处理设备可以确定与触线信息A具有相同阵型信息的触线信息B,由于触线信息主要反映了物体集合的阵型信息,因此可以认为触线信息B与触线信息A相同。
处理设备可以根据触线信息B所反映时刻的检测信息B,确定初始目标群分布图B。此处获取的初始目标群分布图B,反映了触线信息B所在时刻的阵型信息。获取初始目标群分布图B的过程,参见图15所示实施例的步骤1504,此处不再赘述。
1707.获取初始目标群分布图A的目标群分布图A和初始目标群分布图B的目标群分布图B。
1708.根据目标群分布图A和目标群分布图B确定图像特征集。
1709.根据图像特征集中目标物的阵位信息,将来自传感器A和传感器B的针对同一目标物的检测信息融合。
步骤1707至1709参见图15所示实施例的步骤1505至1507,此处不再赘述。
在本申请实施例中，由于近似时间的图像之间相似度高，若不确定相同的时间，则在匹配来源于不同传感器的初始目标群分布图的时候，将会引入近似时间的初始目标群分布图的干扰，导致分布图匹配错误，图像特征集获取错误，从而将不同时刻的检测信息融合，造成检测信息融合错误。通过触线信息即可避免这种错误，具体的，通过触线信息确定多个初始目标群分布图，该多个初始目标群分布图具有相同的触线信息，表示该多个初始目标群分布图是在相同时间获取的，就能保证融合的检测信息是在同一时刻获取的，提升检测信息融合的准确性。
二、本申请实施例中的信息处理方法的应用。
本申请实施例所述的方法,不仅可以用于获取融合信息,还可以有其他的用途,例如实现不同传感器的空间坐标系的映射,实现不同传感器的时间轴的映射,对传感器的纠错或筛选等功能。
1.实现不同传感器的空间坐标系的映射。
具体的,多个传感器可以包括第一传感器和第二传感器,其中,第一传感器对应的空间坐标系为标准坐标系,第二传感器对应的空间坐标系为目标坐标系。为了实现不同传感器的空间坐标系的映射,在图7至图17所示实施例之后,还可以包括:
处理设备根据融合检测信息确定多个标准点信息与多个目标点信息之间的映射关系，其中，融合检测信息为将多个阵型信息中同一目标物对应的检测信息融合得到的。在本申请实施例中，也称为融合信息。其中，标准点信息表示目标物体集合中各物体在标准坐标系中的位置信息，目标点信息表示目标物体集合中各物体在所述目标坐标系中的位置信息，其中，多个标准点信息与多个目标点信息一一对应。
确定映射关系之后,处理设备就可以根据标准点信息与目标点信息之间的映射关系,确定标准坐标系与目标坐标系之间的映射关系。
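以最简单的仿射模型为例，下面给出根据已融合的标准点/目标点对应关系，用最小二乘估计坐标系映射关系的一个示意性片段（Python，坐标数据为虚构，实际实现中也可以采用单应等其他映射模型）：

```python
# 由点对应关系最小二乘拟合“目标坐标系 -> 标准坐标系”的仿射映射
import numpy as np

target_pts = np.array([[12.0, 3.5], [15.2, 7.1], [20.4, 3.6], [25.1, 7.0]])                # 目标坐标系中的位置（虚构）
standard_pts = np.array([[293.0, 110.0], [349.8, 178.0], [460.8, 120.2], [548.0, 186.1]])  # 标准坐标系中的位置（虚构）

n = len(target_pts)
X = np.hstack([target_pts, np.ones((n, 1))])              # 每行为 [x, y, 1]
params, *_ = np.linalg.lstsq(X, standard_pts, rcond=None) # 求解 standard ≈ X @ params
A, b = params[:2].T, params[2]                            # standard ≈ A @ target + b

def to_standard(p):
    """将目标坐标系中的点p映射到标准坐标系。"""
    return A @ np.asarray(p) + b

print(to_standard([15.2, 7.1]))    # 输出应接近 [349.8, 178.0]
```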
在本申请实施例中,通过融合检测信息确定多个标准点信息与多个目标点信息之间的映射关系,并通过多个标准点信息与多个目标点信息之间的映射关系确定标准坐标系与目标坐标系之间的映射关系。本申请实施例所述的方法,只要能获取来自不同传感器的检测信息,即可实现不同传感器之间坐标系的映射。后续的目标阵型信息的确定,点信息映射等步骤都可以由处理设备自行实现,不需要人工标定与映射。通过处理设备匹配目标阵型信息,设备运算的准确性提升了点信息映射的准确性。同时,只要能获取来自不同传感器的检测信息,即可实现检测信息的融合以及坐标系的映射,避免了人工标定带来的场景限制,保证了检测信息融合的准确性与普适性。
2.实现不同传感器的时间轴的映射。
处理设备根据对多个阵型信息中同一目标物对应的检测信息的融合结果,计算多个传感器的时间轴之间的时间差。通过该时间差可以实现不同传感器之间时间轴的映射。
在本申请实施例中，通过对同一目标物的检测信息的融合结果，计算多个传感器的时间轴之间的时间差，就可以根据该时间差对齐不同传感器的时间轴。本申请实施例提供的时间轴对齐方法，只要能获取不同传感器的检测信息即可实现，不需要多个传感器在同一对时系统中，扩展了不同传感器的时间轴对齐的应用场景，同时也扩大了信息融合的适用范围。
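时间差的计算可以基于同一目标物在两个传感器中的触线时刻之差，下面给出一个极简示意（Python，时间戳为虚构数据）：

```python
# 由已匹配目标物的触线时刻估计两条时间轴之间的时间差
timestamps_a = [100.0, 102.0, 102.3, 104.2, 104.6]   # 传感器A时间轴上各匹配目标物的触线时刻（秒）
timestamps_b = [1.8, 3.8, 4.1, 6.0, 6.4]             # 传感器B时间轴上同一批目标物的触线时刻（秒）

diffs = [a - b for a, b in zip(timestamps_a, timestamps_b)]
offset = sum(diffs) / len(diffs)      # 时间差的一种简单估计：取平均值（也可取中位数以抑制异常值）
print(f"估计时间差: {offset:.2f} s")    # 后续可用 t_b + offset 将传感器B的时间戳对齐到传感器A的时间轴
```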
3.对传感器的纠错或筛选。
具体的,多个传感器可以包括标准传感器和待测传感器,该方法还可以包括:
处理设备获取目标阵型信息在标准传感器对应的标准阵型信息;处理设备获取目标阵型信息在待测传感器对应的待测阵型信息;处理设备确定待测阵型信息与标准阵型信息的差异;处理设备根据前述差异和标准阵型信息,获取错误参数,其中,错误参数用于指示待测阵型信息的误差,或用于指示待测传感器的性能参数。
请参阅图18,图18为本申请实施例提供的信息处理方法的一个应用场景示意图,如图18所示,若传感器B误检出一个数据,如图中的v6a6数据,可以根据触碰分区序列A与触碰分区序列B的差异,确定序号15的数据是传感器B误检出来的。
如上所述,可以获取传感器的误检信息,从而计算传感器的误检率,以评估传感器的性能。
请参阅图19,图19为本申请实施例提供的信息处理方法的一个应用场景示意图,如图19所示,若传感器B漏检了一个数据,如图中的序号2所对应的3车道上的那个目标物,可以根据触碰分区序列A与触碰分区序列B的差异,确定序号10与序号11中间漏检了一个目标物。
如上所述,可以获取传感器的漏检信息,从而计算传感器的漏检率,以评估传感器的性能。
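下面给出根据标准阵型信息与待测阵型信息的差异，估算误检率与漏检率的一个简化示意（Python，序列为示意数据，实际实现中可以结合更完整的对齐与容错策略）：

```python
# 以标准传感器的触碰分区序列为基准，统计待测传感器的误检与漏检
def lcs_length(a, b):
    """动态规划求最长公共子序列长度。"""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

standard_seq = [3, 3, 1, 3, 1]        # 标准传感器得到的标准阵型信息（触碰分区序列，示意）
tested_seq   = [3, 3, 1, 3, 1, 1]     # 待测传感器得到的待测阵型信息（多出一个误检目标，示意）

common = lcs_length(standard_seq, tested_seq)
false_cnt = len(tested_seq) - common      # 误检数：待测序列中无法与标准序列对应的元素
miss_cnt = len(standard_seq) - common     # 漏检数：标准序列中未被待测传感器检出的元素
print(f"误检率约{false_cnt / len(tested_seq):.2%}，漏检率约{miss_cnt / len(standard_seq):.2%}")
```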
在本申请实施例中，将标准传感器作为检测的标准，根据待测阵型信息与标准阵型信息的差异获取错误参数。当该错误参数用于指示待测阵型信息的误差时，可以通过错误参数与标准阵型信息，将待测阵型信息中错误参数所对应的信息改正；当错误参数用于指示待测传感器的性能参数时，可以确定待测传感器的误检率等性能参数，实现对待测传感器的数据化分析，以实现对传感器的选择。
三、本申请实施例中的信息处理方法对应的处理设备。
下面对本申请实施例中的处理设备进行描述。请参阅图20，图20是本申请实施例提供的一种处理设备的结构示意图。该处理设备2000位于检测系统中，该检测系统还包括至少两个传感器，其中，至少两个传感器所获取的检测信息中包括至少两个传感器分别对相同的至少两个目标物的检测信息，该处理设备2000可以包括处理器2001和收发器2002。
其中,收发器2002用于,从至少两个传感器获取至少两个检测信息,其中,至少两个传感器与至少两个检测信息一一对应。
其中,处理器2001用于:根据至少两个检测信息确定对应的至少两个阵型信息,其中,每个阵型信息用于描述对应传感器所检测到的物体之间的位置关系,其中,物体中包括前述目标物;根据至少两个阵型信息确定目标阵型信息,目标阵型信息与至少两个阵型信息中的每个阵型信息的重合度均高于预设阈值,目标阵型信息用于描述至少两个目标物之间的位置关系,目标阵型信息中包括每个目标物的阵位信息;根据每个目标物中任一目标物的阵位信息,将至少两个阵型信息中同一目标物对应的检测信息融合。
在一种可选的实施方式中,检测信息包括位置特征集,位置特征集包括至少两个位置特征,位置特征表示对应传感器检测到的物体,与物体四周的物体之间的位置关系。
在一种可选的实施方式中,处理器2001具体用于:根据至少两个位置特征集获取对应的至少两个触线信息,其中,至少两个触线信息中的每个触线信息用于描述对应传感器所检测到的物体触碰基准线的信息,至少两个触线信息与至少两个位置特征集一一对应;根据至少两个触线信息分别确定对应的至少两个阵型信息,至少两个触线信息与至少两个阵型信息一一对应。
在一种可选的实施方式中,触线信息包括对应传感器所检测到的物体触碰所述基准线的时序信息和触碰点分区信息,触碰点分区信息表示物体触碰基准线的触碰点在基准线中的分区信息;阵型信息包括触碰分区序列,触碰分区序列表示对应传感器所检测到的物体触碰基准线的分区位置的前后时序关系。
处理器2001具体用于:获取至少两个触碰分区序列的第一子序列,将第一子序列作为目标阵型信息,其中,第一子序列与至少两个触碰分区序列的重合度均高于第一阈值;根据每个目标物在所述第一子序列中对应的触碰点分区信息,将至少两个触碰分区序列中同一目标物对应的检测信息融合。
在一种可选的实施方式中,触线信息包括对应传感器所检测到的物体触碰基准线的时序信息和触碰时间间隔信息,触碰时间间隔信息表示物体触碰基准线的前后时间间隔;阵型信息包括触碰间隔序列,触碰间隔序列表示对应传感器所检测到的物体触碰基准线的时间间隔的分布。
处理器2001具体用于：获取至少两个触碰间隔序列的第二子序列，将第二子序列作为目标阵型信息，其中，第二子序列与至少两个触碰间隔序列的重合度均高于第二阈值；根据每个目标物在第二子序列中对应的触碰时间分布信息，将至少两个触碰间隔序列中同一目标物对应的检测信息融合。
在一种可选的实施方式中,触线信息包括对应传感器所检测到的物体触碰基准线的所述时序信息,触碰点分区信息和所述触碰时间间隔信息,触碰点分区信息表示物体触碰基准线的触碰点在基准线中的分区信息,触碰时间间隔信息表示物体触碰所述基准线的前后时间间隔;阵型信息包括触碰分区序列和触碰间隔序列,触碰分区序列表示对应传感器所检测到的物体触碰基准线的分区位置的前后时序关系,触碰间隔序列表示对应传感器所检测到的物体触碰基准线的时间间隔的分布。
处理器2001具体用于:获取至少两个触碰分区序列的第一子序列,第一子序列与至少两个触碰分区序列的重合度均高于第一阈值;获取至少两个触碰间隔序列的第二子序列,第二子序列与至少两个触碰间隔序列的重合度均高于第二阈值;确定第一物体集合与第二物体集合的交集,将交集作为目标物体集合,其中,第一物体集合为第一子序列所对应的物体的集合,第二物体集合为第二子序列所对应的物体的集合;将目标物体集合的触碰分区序列和触碰间隔序列作为所述目标阵型信息。
在一种可选的实施方式中,阵型信息包括目标群分布图,目标群分布图表示物体之间的位置关系。
处理器2001具体用于：根据至少两个位置特征集，获取对应的至少两个初始目标群分布图，初始目标群分布图表示对应传感器所检测到的物体之间的位置关系；通过视角变化算法，获取至少两个初始目标群分布图的标准视角图，将至少两个标准视角图作为对应的至少两个目标群分布图，其中，目标群分布图的阵位信息包括目标物的目标物分布信息，目标物分布信息表示目标物在对应传感器所检测到的物体中的位置；获取至少两个目标群分布图的图像特征集，将图像特征集作为所述目标阵型信息，其中，图像特征集与至少两个目标群分布图的重合度均高于第三阈值；根据每个目标物在图像特征集中对应的目标物分布信息，将至少两个目标群分布图中同一目标物对应的检测信息融合。
在一种可选的实施方式中,处理器2001还用于:根据至少两个位置特征集,获取图像特征集中的对应目标物的至少两个触线信息,其中,至少两个触线信息中的每个触线信息用于描述对应传感器所检测到的物体触碰基准线的信息,至少两个触线信息与至少两个位置特征集一一对应。
处理器2001具体用于,根据至少两个触线信息,获取对应的至少两个初始目标群分布图,其中,至少两个初始目标群分布图中的物体,具有相同的触线信息。
在一种可选的实施方式中,至少两个传感器包括第一传感器和第二传感器,第一传感器对应的空间坐标系为标准坐标系,第二传感器对应的空间坐标系为目标坐标系。
处理器2001还用于：根据将至少两个阵型信息中同一目标物对应的检测信息融合得到的融合检测信息，确定至少两个标准点信息与至少两个目标点信息之间的映射关系，标准点信息表示目标物体集合中各物体在标准坐标系中的位置信息，目标点信息表示各物体在目标坐标系中的位置信息，其中，至少两个标准点信息与至少两个目标点信息一一对应；根据标准点信息与目标点信息之间的映射关系，确定标准坐标系与目标坐标系之间的映射关系。
在一种可选的实施方式中,处理器2001还用于,根据对至少两个阵型信息中同一目标物对应的检测信息的融合结果,计算至少两个传感器的时间轴之间的时间差。
在一种可选的实施方式中,至少两个传感器包括标准传感器和待测传感器。
处理器2001还用于:获取目标阵型信息在所述标准传感器对应的标准阵型信息;获取目标阵型信息在待测传感器对应的待测阵型信息;确定待测阵型信息与标准阵型信息的差异;根据差异和标准阵型信息,获取错误参数,错误参数用于指示待测阵型信息的误差,或用于指示待测传感器的性能参数。
该处理设备2000可以执行前述图4至图17所示实施例中处理设备所执行的操作,具体此处不再赘述。
请参阅图21，图21是本申请实施例提供的一种处理设备的结构示意图。该处理设备2100可以包括一个或一个以上中央处理器(central processing units,CPU)2101和存储器2105。该存储器2105中存储有一个或一个以上的应用程序或数据。
其中,存储器2105可以是易失性存储或持久存储。存储在存储器2105的程序可以包括一个或一个以上模块,每个模块可以包括对处理设备中的一系列指令操作。更进一步地,中央处理器2101可以设置为与存储器2105通信,在处理设备2100上执行存储器2105中的一系列指令操作。
处理设备2100还可以包括一个或一个以上电源2102，一个或一个以上有线或无线网络接口2103，一个或一个以上收发器接口2104，和/或，一个或一个以上操作系统，例如Windows Server™，Mac OS X™，Unix™，Linux™，FreeBSD™等。
该处理设备2100可以执行前述图4至图17所示实施例中处理设备所执行的操作,具体此处不再赘述。
所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，上述描述的系统，装置和单元的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。
在本申请所提供的几个实施例中，应该理解到，所揭露的系统，装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。

Claims (25)

  1. 一种信息处理方法，其特征在于，所述方法应用于检测系统中的处理设备，所述检测系统还包括至少两个传感器，其中，所述至少两个传感器所获取的检测信息中包括所述至少两个传感器分别对相同的至少两个目标物的检测信息，所述方法包括：
    所述处理设备从所述至少两个传感器获取至少两个检测信息,其中,所述至少两个传感器与所述至少两个检测信息一一对应;
    所述处理设备根据所述至少两个检测信息确定对应的至少两个阵型信息,其中,每个阵型信息用于描述对应传感器所检测到的物体之间的位置关系,其中,所述物体包括所述目标物;
    所述处理设备根据所述至少两个阵型信息确定目标阵型信息,所述目标阵型信息与所述至少两个阵型信息中的每个阵型信息的重合度均高于预设阈值,所述目标阵型信息用于描述所述至少两个目标物之间的位置关系,所述目标阵型信息中包括每个目标物的阵位信息;
    所述处理设备根据所述每个目标物中任一目标物的阵位信息,将所述至少两个阵型信息中同一目标物对应的检测信息融合。
  2. 根据权利要求1所述的方法,其特征在于,所述检测信息包括位置特征集,所述位置特征集包括至少两个位置特征,所述位置特征表示对应传感器检测到的物体,与所述物体四周的物体之间的位置关系。
  3. 根据权利要求2所述的方法,其特征在于,所述处理设备根据所述至少两个检测信息确定对应的至少两个阵型信息,包括:
    所述处理设备根据至少两个位置特征集获取对应的至少两个触线信息,其中,所述至少两个触线信息中的每个触线信息用于描述对应传感器所检测到的物体触碰基准线的信息,所述至少两个触线信息与所述至少两个位置特征集一一对应;
    所述处理设备根据所述至少两个触线信息分别确定对应的所述至少两个阵型信息,所述至少两个触线信息与所述至少两个阵型信息一一对应。
  4. 根据权利要求3所述的方法,其特征在于,
    所述触线信息包括对应传感器所检测到的物体触碰所述基准线的时序信息和触碰点分区信息,所述触碰点分区信息表示所述物体触碰所述基准线的触碰点在所述基准线中的分区信息;
    所述阵型信息包括触碰分区序列,所述触碰分区序列表示对应传感器所检测到的物体触碰所述基准线的分区位置的前后时序关系;
    所述处理设备根据所述至少两个阵型信息确定目标阵型信息,包括:
    所述处理设备获取所述至少两个触碰分区序列的第一子序列,将所述第一子序列作为所述目标阵型信息,其中,所述第一子序列与所述至少两个触碰分区序列的重合度均高于第一阈值;
    所述处理设备根据所述每个目标物的阵位信息,将所述至少两个阵型信息中同一目标物对应的检测信息融合,包括:
    所述处理设备根据所述每个目标物在所述第一子序列中对应的触碰点分区信息,将所述至少两个触碰分区序列中同一目标物对应的检测信息融合。
  5. 根据权利要求3所述的方法,其特征在于,
    所述触线信息包括对应传感器所检测到的物体触碰所述基准线的时序信息和触碰时间间隔信息,所述触碰时间间隔信息表示所述物体触碰所述基准线的前后时间间隔;
    所述阵型信息包括触碰间隔序列,所述触碰间隔序列表示对应传感器所检测到的物体触碰所述基准线的时间间隔的分布;
    所述处理设备根据所述至少两个阵型信息确定目标阵型信息,包括:
    所述处理设备获取所述至少两个触碰间隔序列的第二子序列,将所述第二子序列作为所述目标阵型信息,其中,所述第二子序列与所述至少两个触碰间隔序列的重合度均高于第二阈值;
    所述处理设备根据所述每个目标物的阵位信息,将所述至少两个阵型信息中同一目标物对应的检测信息融合,包括:
    所述处理设备根据所述每个目标物在所述第二子序列中对应的触碰时间分布信息,将所述至少两个触碰间隔序列中同一目标物对应的检测信息融合。
  6. 根据权利要求3所述的方法,其特征在于,
    所述触线信息包括对应传感器所检测到的物体触碰所述基准线的所述时序信息,所述触碰点分区信息和所述触碰时间间隔信息,所述触碰点分区信息表示所述物体触碰所述基准线的触碰点在所述基准线中的分区信息,所述触碰时间间隔信息表示所述物体触碰所述基准线的前后时间间隔;
    所述阵型信息包括所述触碰分区序列和所述触碰间隔序列,所述触碰分区序列表示对应传感器所检测到的物体触碰所述基准线的分区位置的前后时序关系,所述触碰间隔序列表示对应传感器所检测到的物体触碰所述基准线的时间间隔的分布;
    所述处理设备根据所述至少两个阵型信息确定目标阵型信息,包括:
    所述处理设备获取至少两个触碰分区序列的所述第一子序列,所述第一子序列与所述至少两个触碰分区序列的重合度均高于所述第一阈值;
    所述处理设备获取至少两个触碰间隔序列的第二子序列,所述第二子序列与所述至少两个触碰间隔序列的重合度均高于所述第二阈值;
    所述处理设备确定第一物体集合与第二物体集合的交集,将所述交集作为目标物体集合,其中,所述第一物体集合为所述第一子序列所对应的物体的集合,所述第二物体集合为所述第二子序列所对应的物体的集合;
    所述处理设备将所述目标物体集合的触碰分区序列和触碰间隔序列作为所述目标阵型信息。
  7. 根据权利要求2所述的方法,其特征在于,所述阵型信息包括目标群分布图,所述目标群分布图表示物体之间的位置关系;
    所述处理设备根据所述至少两个检测信息确定对应的至少两个阵型信息,包括:
    所述处理设备根据至少两个位置特征集，获取对应的至少两个初始目标群分布图，所述初始目标群分布图表示对应传感器所检测到的物体之间的位置关系；
    所述处理设备通过视角变化算法,获取所述至少两个初始目标群分布图的标准视角图,将至少两个标准视角图作为对应的至少两个目标群分布图,其中,所述目标群分布图的阵位信息包括目标物的目标物分布信息,所述目标物分布信息表示所述目标物在对应传感器所检测到的物体中的位置;
    所述处理设备根据所述至少两个阵型信息确定目标阵型信息,包括:
    所述处理设备获取所述至少两个目标群分布图的图像特征集,将所述图像特征集作为所述目标阵型信息,其中,所述图像特征集与所述至少两个目标群分布图的重合度均高于第三阈值;
    所述处理设备根据所述每个目标物的阵位信息,将所述至少两个阵型信息中同一目标物对应的检测信息融合,包括:
    所述处理设备根据所述每个目标物在所述图像特征集中对应的目标物分布信息,将所述至少两个目标群分布图中同一目标物对应的检测信息融合。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    所述处理设备根据至少两个位置特征集,获取所述位置特征集的对应目标物的至少两个触线信息,其中,所述至少两个触线信息中的每个触线信息用于描述对应传感器所检测到的物体触碰基准线的信息,所述至少两个触线信息与所述至少两个位置特征集一一对应;
    所述处理设备根据至少两个位置特征集,获取对应的至少两个初始目标群分布图,包括:
    所述处理设备根据所述至少两个触线信息,获取对应的至少两个初始目标群分布图,其中,所述至少两个初始目标群分布图中的物体,具有相同的触线信息。
  9. 根据权利要求1至8中任一项所述的方法,其特征在于,所述至少两个传感器包括第一传感器和第二传感器,所述第一传感器对应的空间坐标系为标准坐标系,所述第二传感器对应的空间坐标系为目标坐标系,所述方法还包括:
    所述处理设备根据将所述至少两个阵型信息中同一目标物对应的检测信息融合得到的融合检测信息,确定至少两个标准点信息与至少两个目标点信息之间的映射关系,所述标准点信息表示所述目标物体集合中各物体在所述标准坐标系中的位置信息,所述目标点信息表示所述目标物体集合中各物体在所述目标坐标系中的位置信息,其中,所述至少两个标准点信息与所述至少两个目标点信息一一对应;
    所述处理设备根据所述标准点信息与所述目标点信息之间的映射关系,确定所述标准坐标系与所述目标坐标系之间的映射关系。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述方法还包括:
    所述处理设备根据对所述至少两个阵型信息中同一目标物对应的检测信息的融合结果,计算所述至少两个传感器的时间轴之间的时间差。
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述至少两个传感器包括标准传感器和待测传感器,所述方法还包括:
    所述处理设备获取所述目标阵型信息在所述标准传感器对应的标准阵型信息;
    所述处理设备获取所述目标阵型信息在所述待测传感器对应的待测阵型信息;
    所述处理设备确定所述待测阵型信息与所述标准阵型信息的差异;
    所述处理设备根据所述差异和所述标准阵型信息,获取错误参数,所述错误参数用于指示所述待测阵型信息的误差,或用于指示所述待测传感器的性能参数。
  12. 一种处理设备，其特征在于，所述处理设备位于检测系统中，所述检测系统还包括至少两个传感器，其中，所述至少两个传感器所获取的检测信息中包括所述至少两个传感器分别对相同的至少两个目标物的检测信息，所述处理设备包括：处理器和收发器；
    所述收发器用于,从所述至少两个传感器获取至少两个检测信息,其中,所述至少两个传感器与所述至少两个检测信息一一对应;
    所述处理器用于:
    根据所述至少两个检测信息确定对应的至少两个阵型信息,其中,每个阵型信息用于描述对应传感器所检测到的物体之间的位置关系,其中,所述物体包括所述目标物;
    根据所述至少两个阵型信息确定目标阵型信息,所述目标阵型信息与所述至少两个阵型信息中的每个阵型信息的重合度均高于预设阈值,所述目标阵型信息用于描述所述至少两个目标物之间的位置关系,所述目标阵型信息中包括每个目标物的阵位信息;
    根据所述每个目标物中任一目标物的阵位信息,将所述至少两个阵型信息中同一目标物对应的检测信息融合。
  13. 根据权利要求12所述的处理设备,其特征在于,所述检测信息包括位置特征集,所述位置特征集包括至少两个位置特征,所述位置特征表示对应传感器检测到的物体,与所述物体四周的物体之间的位置关系。
  14. 根据权利要求13所述的处理设备,其特征在于,所述处理器具体用于:
    根据至少两个位置特征集获取对应的至少两个触线信息,其中,所述至少两个触线信息中的每个触线信息用于描述对应传感器所检测到的物体触碰基准线的信息,所述至少两个触线信息与所述至少两个位置特征集一一对应;
    根据所述至少两个触线信息分别确定对应的所述至少两个阵型信息,所述至少两个触线信息与所述至少两个阵型信息一一对应。
  15. 根据权利要求14所述的处理设备,其特征在于,
    所述触线信息包括对应传感器所检测到的物体触碰所述基准线的时序信息和触碰点分区信息,所述触碰点分区信息表示所述物体触碰所述基准线的触碰点在所述基准线中的分区信息;
    所述阵型信息包括触碰分区序列,所述触碰分区序列表示对应传感器所检测到的物体触碰所述基准线的分区位置的前后时序关系;
    所述处理器具体用于:
    获取所述至少两个触碰分区序列的第一子序列,将所述第一子序列作为所述目标阵型信息,其中,所述第一子序列与所述至少两个触碰分区序列的重合度均高于第一阈值;
    根据所述每个目标物在所述第一子序列中对应的触碰点分区信息,将所述至少两个触碰分区序列中同一目标物对应的检测信息融合。
  16. 根据权利要求14所述的处理设备,其特征在于,
    所述触线信息包括对应传感器所检测到的物体触碰所述基准线的时序信息和触碰时间间隔信息,所述触碰时间间隔信息表示所述物体触碰所述基准线的前后时间间隔;
    所述阵型信息包括触碰间隔序列,所述触碰间隔序列表示对应传感器所检测到的物体触碰所述基准线的时间间隔的分布;
    所述处理器具体用于:
    获取所述至少两个触碰间隔序列的第二子序列,将所述第二子序列作为所述目标阵型信息,其中,所述第二子序列与所述至少两个触碰间隔序列的重合度均高于第二阈值;
    根据所述每个目标物在所述第二子序列中对应的触碰时间分布信息,将所述至少两个触碰间隔序列中同一目标物对应的检测信息融合。
  17. 根据权利要求14所述的处理设备,其特征在于,
    所述触线信息包括对应传感器所检测到的物体触碰所述基准线的所述时序信息,所述触碰点分区信息和所述触碰时间间隔信息,所述触碰点分区信息表示所述物体触碰所述基准线的触碰点在所述基准线中的分区信息,所述触碰时间间隔信息表示所述物体触碰所述基准线的前后时间间隔;
    所述阵型信息包括所述触碰分区序列和所述触碰间隔序列,所述触碰分区序列表示对应传感器所检测到的物体触碰所述基准线的分区位置的前后时序关系,所述触碰间隔序列表示对应传感器所检测到的物体触碰所述基准线的时间间隔的分布;
    所述处理器具体用于:
    获取至少两个触碰分区序列的所述第一子序列,所述第一子序列与所述至少两个触碰分区序列的重合度均高于所述第一阈值;
    获取至少两个触碰间隔序列的第二子序列,所述第二子序列与所述至少两个触碰间隔序列的重合度均高于所述第二阈值;
    确定第一物体集合与第二物体集合的交集,将所述交集作为目标物体集合,其中,所述第一物体集合为所述第一子序列所对应的物体的集合,所述第二物体集合为所述第二子序列所对应的物体的集合;
    将所述目标物体集合的触碰分区序列和触碰间隔序列作为所述目标阵型信息。
  18. 根据权利要求13所述的处理设备,其特征在于,所述阵型信息包括目标群分布图,所述目标群分布图表示物体之间的位置关系;
    所述处理器具体用于:
    根据至少两个位置特征集，获取对应的至少两个初始目标群分布图，所述初始目标群分布图表示对应传感器所检测到的物体之间的位置关系；
    通过视角变化算法,获取所述至少两个初始目标群分布图的标准视角图,将至少两个标准视角图作为对应的至少两个目标群分布图,其中,所述目标群分布图的阵位信息包括目标物的目标物分布信息,所述目标物分布信息表示所述目标物在对应传感器所检测到的物体中的位置;
    所述处理器具体用于:
    获取所述至少两个目标群分布图的图像特征集,将所述图像特征集作为所述目标阵型信息,其中,所述图像特征集与所述至少两个目标群分布图的重合度均高于第三阈值;
    所述处理器具体用于:
    根据所述每个目标物在所述图像特征集中对应的目标物分布信息,将所述至少两个目标群分布图中同一目标物对应的检测信息融合。
  19. 根据权利要求18所述的处理设备,其特征在于,所述处理器还用于:
    根据至少两个位置特征集,获取所述图像特征集中的对应目标物的至少两个触线信息,其中,所述至少两个触线信息中的每个触线信息用于描述对应传感器所检测到的物体触碰基准线的信息,所述至少两个触线信息与所述至少两个位置特征集一一对应;
    所述处理器具体用于,根据所述至少两个触线信息,获取对应的至少两个初始目标群分布图,其中,所述至少两个初始目标群分布图中的物体,具有相同的触线信息。
  20. 根据权利要求12至19中任一项所述的处理设备,其特征在于,所述至少两个传感器包括第一传感器和第二传感器,所述第一传感器对应的空间坐标系为标准坐标系,所述第二传感器对应的空间坐标系为目标坐标系,所述处理器还用于:
    根据将所述至少两个阵型信息中同一目标物对应的检测信息融合得到的融合检测信息,确定至少两个标准点信息与至少两个目标点信息之间的映射关系,所述标准点信息表示所述目标物体集合中各物体在所述标准坐标系中的位置信息,所述目标点信息表示所述各物体在所述目标坐标系中的位置信息,其中,所述至少两个标准点信息与所述至少两个目标点信息一一对应;
    根据所述标准点信息与所述目标点信息之间的映射关系,确定所述标准坐标系与所述目标坐标系之间的映射关系。
  21. 根据权利要求12至20中任一项所述的处理设备,其特征在于,所述处理器还用于,根据对所述至少两个阵型信息中同一目标物对应的检测信息的融合结果,计算所述至少两个传感器的时间轴之间的时间差。
  22. 根据权利要求12至21中任一项所述的处理设备,其特征在于,所述至少两个传感器包括标准传感器和待测传感器,所述处理器还用于:
    获取所述目标阵型信息在所述标准传感器对应的标准阵型信息;
    获取所述目标阵型信息在所述待测传感器对应的待测阵型信息;
    确定所述待测阵型信息与所述标准阵型信息的差异;
    根据所述差异和所述标准阵型信息,获取错误参数,所述错误参数用于指示所述待测阵型信息的误差,或用于指示所述待测传感器的性能参数。
  23. 一种处理设备,其特征在于,包括:
    处理器和与所述处理器耦合的存储器;
    所述存储器存储所述处理器执行的可执行指令,所述可执行指令指示所述处理器执行权利要求1至11中任一项所述的方法。
  24. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中保存有程序,当所述计算机执行所述程序时,执行如权利要求1至11中任一项所述的方法。
  25. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上执行时,所述计算机执行如权利要求1至11中任一项所述的方法。
PCT/CN2021/131058 2021-02-27 2021-11-17 一种信息处理方法及相关设备 WO2022179197A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21927621.9A EP4266211A4 (en) 2021-02-27 2021-11-17 INFORMATION PROCESSING METHOD AND ASSOCIATED DEVICE
JP2023550693A JP2024507891A (ja) 2021-02-27 2021-11-17 情報処理方法および関連デバイス
US18/456,150 US20230410353A1 (en) 2021-02-27 2023-08-25 Information processing method and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110221913.6A CN114972935A (zh) 2021-02-27 2021-02-27 一种信息处理方法及相关设备
CN202110221913.6 2021-02-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/456,150 Continuation US20230410353A1 (en) 2021-02-27 2023-08-25 Information processing method and related device

Publications (1)

Publication Number Publication Date
WO2022179197A1 true WO2022179197A1 (zh) 2022-09-01

Family

ID=82973145

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131058 WO2022179197A1 (zh) 2021-02-27 2021-11-17 一种信息处理方法及相关设备

Country Status (5)

Country Link
US (1) US20230410353A1 (zh)
EP (1) EP4266211A4 (zh)
JP (1) JP2024507891A (zh)
CN (1) CN114972935A (zh)
WO (1) WO2022179197A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444911A (zh) * 2018-10-18 2019-03-08 哈尔滨工程大学 一种单目相机和激光雷达信息融合的无人艇水面目标检测识别与定位方法
CN109615870A (zh) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 一种基于毫米波雷达和视频的交通检测***
CN109977895A (zh) * 2019-04-02 2019-07-05 重庆理工大学 一种基于多特征图融合的野生动物视频目标检测方法
CN111257866A (zh) * 2018-11-30 2020-06-09 杭州海康威视数字技术股份有限公司 车载摄像头和车载雷达联动的目标检测方法、装置及***
CN112305576A (zh) * 2020-10-31 2021-02-02 中环曼普科技(南京)有限公司 一种多传感器融合的slam算法及其***

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019198076A1 (en) * 2018-04-11 2019-10-17 Ionterra Transportation And Aviation Technologies Ltd. Real-time raw data- and sensor fusion
EP3702802A1 (en) * 2019-03-01 2020-09-02 Aptiv Technologies Limited Method of multi-sensor data fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444911A (zh) * 2018-10-18 2019-03-08 哈尔滨工程大学 一种单目相机和激光雷达信息融合的无人艇水面目标检测识别与定位方法
CN111257866A (zh) * 2018-11-30 2020-06-09 杭州海康威视数字技术股份有限公司 车载摄像头和车载雷达联动的目标检测方法、装置及***
CN109615870A (zh) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 一种基于毫米波雷达和视频的交通检测***
CN109977895A (zh) * 2019-04-02 2019-07-05 重庆理工大学 一种基于多特征图融合的野生动物视频目标检测方法
CN112305576A (zh) * 2020-10-31 2021-02-02 中环曼普科技(南京)有限公司 一种多传感器融合的slam算法及其***

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4266211A4

Also Published As

Publication number Publication date
JP2024507891A (ja) 2024-02-21
CN114972935A (zh) 2022-08-30
EP4266211A1 (en) 2023-10-25
US20230410353A1 (en) 2023-12-21
EP4266211A4 (en) 2024-05-22

Similar Documents

Publication Publication Date Title
US10984261B2 (en) Systems and methods for curb detection and pedestrian hazard assessment
JP7157054B2 (ja) 整合画像及びlidar情報に基づいた車両ナビゲーション
CN106503653B (zh) 区域标注方法、装置和电子设备
US10317231B2 (en) Top-down refinement in lane marking navigation
CN109884618B (zh) 车辆的导航***、包括导航***的车辆和导航车辆的方法
KR101758576B1 (ko) 물체 탐지를 위한 레이더 카메라 복합 검지 장치 및 방법
US9123242B2 (en) Pavement marker recognition device, pavement marker recognition method and pavement marker recognition program
CN110619279B (zh) 一种基于跟踪的路面交通标志实例分割方法
CN111045000A (zh) 监测***和方法
CN111814752B (zh) 室内定位实现方法、服务器、智能移动设备、存储介质
CN110738150B (zh) 相机联动抓拍方法、装置以及计算机存储介质
JP6758160B2 (ja) 車両位置検出装置、車両位置検出方法及び車両位置検出用コンピュータプログラム
CN102542256B (zh) 对陷阱和行人进行前部碰撞警告的先进警告***
RU2635280C2 (ru) Устройство обнаружения трехмерных объектов
JP6552448B2 (ja) 車両位置検出装置、車両位置検出方法及び車両位置検出用コンピュータプログラム
RU2619724C2 (ru) Устройство обнаружения трехмерных объектов
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
CN114724104B (zh) 一种视认距离检测的方法、装置、电子设备、***及介质
Murray et al. Mobile mapping system for the automated detection and analysis of road delineation
WO2022179197A1 (zh) 一种信息处理方法及相关设备
CN110677491B (zh) 用于车辆的旁车位置估计方法
CN117392423A (zh) 基于激光雷达的目标物的真值数据预测方法、装置及设备
KR20190056775A (ko) 차량의 객체 인식 장치 및 방법
US20230091536A1 (en) Camera Placement Guidance
CN104931024B (zh) 障碍物检测装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927621

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202337049231

Country of ref document: IN

ENP Entry into the national phase

Ref document number: 2021927621

Country of ref document: EP

Effective date: 20230719

WWE Wipo information: entry into national phase

Ref document number: 2023550693

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE