CN116561534B - Method and system for improving accuracy of road side sensor based on self-supervision learning - Google Patents

Method and system for improving accuracy of road side sensor based on self-supervision learning

Info

Publication number
CN116561534B
CN116561534B (Application CN202310837967.4A)
Authority
CN
China
Prior art keywords
coordinate point
vector matrix
point vector
data
updated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310837967.4A
Other languages
Chinese (zh)
Other versions
CN116561534A (en)
Inventor
Li Dong (李冬)
Liu Jun (柳俊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Yingsai Intelligent Technology Co ltd
Original Assignee
Suzhou Yingsai Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Yingsai Intelligent Technology Co ltd filed Critical Suzhou Yingsai Intelligent Technology Co ltd
Priority to CN202310837967.4A
Publication of CN116561534A
Application granted
Publication of CN116561534B
Legal status: Active


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method and a system for improving the accuracy of a road side sensor based on self-supervised learning. The method comprises the following steps: installing a plurality of edge computing devices and a plurality of groups of standard components on a road of a target area, wherein each group of standard components comprises a plurality of standard road side sensors; for each group of data acquisition components, adjusting the working parameters of the data acquisition components; for each edge computing device, establishing an accuracy lifting model, generating a plurality of training samples based on historical data acquired by a plurality of groups of data acquisition components corresponding to the edge computing device and historical data acquired by at least one group of standard components corresponding to the edge computing device, and training the accuracy lifting model based on the plurality of training samples; and for each edge computing device, performing data calibration on the real-time data acquired by the plurality of groups of data acquisition components corresponding to the edge computing device based on the trained accuracy lifting model. The method has the advantages of improving the accuracy of existing road side sensors while reducing the cost of upgrading and retrofitting.

Description

Method and system for improving accuracy of road side sensor based on self-supervision learning
Technical Field
The invention relates to the field of data processing, in particular to a method and a system for improving accuracy of a road side sensor based on self-supervision learning.
Background
Road side perception uses sensors such as cameras, millimeter-wave radars and laser radars, combined with road side edge computing, with the ultimate goal of instant intelligent perception of traffic participants, road conditions and the like on a road section. Road side perception can expand the perception range of autonomous vehicles and drivers. Through V2X vehicle-road cooperation technology, integrated operation monitoring of people, vehicles, roads and clouds is realized, road traffic anomalies are discovered at the first moment, and intelligent applications such as vehicle-road cooperation, vehicle-cloud cooperation and road-cloud cooperation are realized, meeting the intelligent travel requirements of autonomous vehicles and social vehicles. At the same time, supervision can become more efficient and flexible, establishing a supervision environment with faster response speed and greater flexibility.
Existing road side sensors (cameras, radars) have uneven performance and relatively low accuracy for actual traffic perception, yet these devices are already deployed on the road side and are difficult to replace at will. In the prior art, recognition and perception accuracy is improved by increasing the computing power of a back-end server; this approach places extremely high demands on computing power and network transmission, is costly, and requires a back-end server with higher-grade maintenance and environment (air conditioning and the like).
Therefore, it is necessary to provide a method and a system for improving the accuracy of a road side sensor based on self-supervised learning, which are used for improving the accuracy of the existing road side sensor and reducing the improvement cost.
Disclosure of Invention
One of the embodiments of the present specification provides a method for improving accuracy of a roadside sensor based on self-supervised learning, the method comprising: installing a plurality of edge computing devices on a road of a target area, wherein each edge computing device corresponds to at least one group of data acquisition components, and the data acquisition components comprise a plurality of existing road side sensors; installing multiple groups of standard components on a road of the target area, wherein each group of standard components comprises a plurality of standard road side sensors, and each edge computing device corresponds to at least one group of standard components; for each group of data acquisition components, adjusting working parameters of the data acquisition components; for each edge computing device, establishing an accuracy lifting model, generating a plurality of training samples based on historical data acquired by a plurality of groups of data acquisition components corresponding to the edge computing device and historical data acquired by at least one group of standard components corresponding to the edge computing device, and training the accuracy lifting model based on the plurality of training samples; and for each edge computing device, carrying out data calibration on real-time data acquired by a plurality of groups of data acquisition components corresponding to the edge computing device based on the trained accuracy lifting model.
In some embodiments, adjusting the operating parameters of the data acquisition component includes: testing stability of a plurality of existing road side sensors included in the data acquisition assembly; testing the measuring ranges of a plurality of existing road side sensors included in the data acquisition component; testing output results of a plurality of existing road side sensors included in the data acquisition component; and adjusting the working parameters of the data acquisition component based on the stability, the output result and/or the measurement range.
In some embodiments, the data acquisition component includes at least an image acquisition device and a radar.
In some embodiments, the adjusting the operating parameter of the data acquisition component based on the stability, the output result, and/or the measurement range includes: and adjusting the angle and/or the position of the image acquisition device and the angle and/or the position of the radar based on the measurement range.
In some embodiments, the adjusting the angle and/or position of the image acquisition device and the angle and/or position of the radar based on the measurement range includes: for the image acquisition device, acquiring a preset acquisition range of the image acquisition device, determining an actual acquisition range of the image acquisition device, and adjusting the angle and/or the position of the image acquisition device based on the deviation between the preset acquisition range of the image acquisition device and the actual acquisition range of the image acquisition device; and for the radar, acquiring a preset acquisition range of the radar, determining an actual acquisition range of the radar, and adjusting the angle and/or the position of the radar based on the deviation between the preset acquisition range of the radar and the actual acquisition range of the radar.
In some embodiments, the adjusting the angle and/or position of the image acquisition device and the angle and/or position of the radar based on the measurement range includes: acquiring a first coordinate point vector matrix of a target object through the image acquisition device; acquiring a second coordinate point vector matrix of the target object through the radar; determining an error distance based on the first coordinate point vector matrix and the second coordinate point vector matrix; judging, based on the error distance, whether the angle and/or the position of the image acquisition device and the angle and/or the position of the radar need to be adjusted; and when judging that the angle and/or the position of the image acquisition device and the angle and/or the position of the radar need to be adjusted, repeatedly executing the adjustment of the angle and/or the position of one of the image acquisition device and the radar, acquiring an updated first coordinate point vector matrix and an updated second coordinate point vector matrix, and determining an updated error distance based on the updated first coordinate point vector matrix and the updated second coordinate point vector matrix until the updated error distance meets a preset error condition.
In some embodiments, the generating a plurality of training samples based on the historical data collected by the plurality of sets of data collection components corresponding to the edge computing device and the historical data collected by the at least one set of standard components corresponding to the edge computing device includes: for each historical time point, generating a training sample based on historical data acquired by the plurality of groups of data acquisition components at the historical time point; and generating a label of the training sample based on the historical data collected by the standard component at the historical time point.
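As a minimal sketch of the sample-generation step described above, the pairing of per-time-point acquisition data with standard-component labels might look like the following (function and data names are illustrative, not taken from the patent text):

```python
def build_training_samples(acquisition_history, standard_history):
    """Pair each historical time point's data-acquisition readings (the
    training sample) with the standard component's reading at the same
    time point (the label).

    Both arguments are dicts mapping a time point to the readings
    collected at that time point; shapes and names are illustrative.
    """
    samples = []
    for t, readings in sorted(acquisition_history.items()):
        if t in standard_history:  # keep only time points that have a label
            samples.append((readings, standard_history[t]))
    return samples
```

Time points for which the standard component has no reading simply yield no sample, which matches the per-time-point pairing the text describes.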
In some embodiments, the training the precision lifting model based on the plurality of training samples comprises: based on the training samples and the labels corresponding to the training samples, training the precision lifting model, and when the trained precision lifting model meets preset training conditions, ending the training.
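The train-until-a-preset-condition loop can be sketched as follows; `model_step` and `eval_error` are hypothetical stand-ins for one training round of the accuracy lifting model and for its evaluation error, since the patent does not specify the model internals:

```python
def train_until_condition(model_step, eval_error, max_rounds=1000, target_error=0.05):
    """Run training rounds until a preset training condition holds: the
    evaluation error falls below target_error, or the round budget is
    exhausted. model_step() performs one training round; eval_error()
    returns the current evaluation error. Both callables and the
    threshold are illustrative placeholders."""
    for round_idx in range(max_rounds):
        model_step()
        if eval_error() < target_error:
            return round_idx + 1  # number of rounds actually run
    return max_rounds
```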
In some embodiments, the installing a plurality of edge computing devices on the road of the target area includes: acquiring the real-time requirement of data calibration; and determining the corresponding relation between the edge computing equipment and the data acquisition component based on the data calibration real-time requirement.
One of the embodiments of the present disclosure provides a system for improving accuracy of a roadside sensor based on self-supervised learning, including: the edge computing devices are installed on roads of the target area, wherein each edge computing device corresponds to a plurality of groups of data acquisition components, and each data acquisition component comprises a plurality of existing road side sensors; a plurality of sets of standard components mounted on roads of the target area, wherein each set of standard components comprises a plurality of standard road side sensors, and each edge computing device corresponds to at least one set of standard components; the edge computing equipment is used for adjusting the working parameters of the corresponding data acquisition component; the edge computing equipment is also used for establishing an accuracy lifting model, generating a plurality of training samples based on the historical data acquired by the plurality of groups of data acquisition components corresponding to the edge computing equipment and the historical data acquired by at least one group of standard components corresponding to the edge computing equipment, and training the accuracy lifting model based on the plurality of training samples; the edge computing equipment is also used for carrying out data calibration on real-time data acquired by the plurality of groups of data acquisition components corresponding to the edge computing equipment based on the trained precision lifting model.
Compared with the prior art, the method and the system for improving the accuracy of the road side sensor based on self-supervision learning have the following advantages:
on the basis of fully utilizing the existing road side sensor, a small number of multiple groups of standard components are introduced, so that the investment of new hardware is reduced as much as possible, the precision of the existing road side sensing system is improved, the cost investment is greatly reduced, and further, a plurality of edge computing devices are introduced, so that the real-time performance of data calibration is improved.
Drawings
The present specification will be further elucidated by way of example embodiments, which will be described in detail by means of the accompanying drawings. The embodiments are not limiting, in which like numerals represent like structures, wherein:
FIG. 1 is a block diagram of a system for improving roadside sensor accuracy based on self-supervised learning, as shown in accordance with some embodiments of the present description;
FIG. 2 is a flow chart of a method of improving roadside sensor accuracy based on self-supervised learning, as shown in accordance with some embodiments of the present description;
FIG. 3 is a flow chart illustrating adjusting an operating parameter of a data acquisition component according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not specific to a singular, but may include a plurality, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the steps and elements are explicitly identified, and they do not constitute an exclusive list, as other steps or elements may be included in a method or apparatus.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
FIG. 1 is a block diagram of a system for improving roadside sensor accuracy based on self-supervised learning, according to some embodiments of the present description. In some embodiments, as shown in fig. 1, a system for improving roadside sensor accuracy based on self-supervised learning may include multiple edge computing devices and multiple sets of standard components.
And the edge computing devices are installed on the road of the target area, wherein each edge computing device corresponds to a plurality of groups of data acquisition components, and each data acquisition component comprises a plurality of existing road side sensors.
And a plurality of sets of standard components mounted on the road of the target area, wherein each set of standard components comprises a plurality of standard roadside sensors, and each edge computing device corresponds to at least one set of standard components.
The edge computing device is used for adjusting the working parameters of the corresponding data acquisition component.
The edge computing equipment is also used for establishing an accuracy lifting model, generating a plurality of training samples based on the historical data acquired by the plurality of groups of data acquisition components corresponding to the edge computing equipment and the historical data acquired by the at least one group of standard components corresponding to the edge computing equipment, and training the accuracy lifting model based on the plurality of training samples.
The edge computing device is also used for carrying out data calibration on real-time data acquired by the plurality of groups of data acquisition components corresponding to the edge computing device based on the trained precision lifting model.
For further description of the plurality of edge computing devices and the plurality of sets of standard components, reference may be made to FIG. 2 and its associated description, which is not repeated here.
FIG. 2 is a flow chart of a method of improving roadside sensor accuracy based on self-supervised learning, according to some embodiments of the present description. In some embodiments, the method of improving roadside sensor accuracy based on self-supervised learning may be performed by a system of improving roadside sensor accuracy based on self-supervised learning. As shown in fig. 2, a method for improving accuracy of a roadside sensor based on self-supervised learning may include the following steps.
At step 210, a plurality of edge computing devices are installed on a roadway in a target area.
Each edge computing device corresponds to at least one set of data acquisition components that includes a plurality of existing roadside sensors.
The existing roadside sensor may be a roadside sensor that is installed on a road of a target area and is required to be improved in accuracy.
The edge computing device may be a variety of forms of digital electronic computer devices, such as a laptop computer, a desktop computer, a workstation, a personal digital assistant, a server, a blade server, a mainframe computer, and other suitable computers. The edge computing device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
The edge computing device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device may also be stored. The computing unit, ROM and RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
A plurality of components in an edge computing device are connected to an I/O interface, comprising: an input unit, an output unit, a storage unit, and a communication unit. The input unit may be any type of device capable of inputting information to the edge computing device, and may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the edge computing device. The output unit may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage units may include, but are not limited to, magnetic disks, optical disks. The communication unit allows the edge computing device to exchange information/data with other devices over computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing units include, but are not limited to, Central Processing Units (CPUs), Graphics Processing Units (GPUs), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like.
In some embodiments, the data acquisition component includes at least an image acquisition device and a radar. It is understood that the data acquisition assembly may also include other road side sensors, such as ultrasonic speed-measurement sensors, geomagnetic vehicle-flow sensors, temperature and humidity sensors, and the like.
The edge computing devices may be arranged, for example, one per kilometer or one per 5 kilometers, according to the real-time requirements of the subsequent application of the perception data.
In some embodiments, the data calibration instantaneity requirements may be obtained first, and based on the data calibration instantaneity requirements, a correspondence between the edge computing device and the data acquisition component may be determined. It will be appreciated that the higher the data calibration real-time requirements, the fewer the number of data acquisition components corresponding to each edge computing device.
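A hypothetical mapping from the data-calibration real-time requirement to the number of data acquisition components served by one edge computing device, consistent with the rule above that tighter requirements mean fewer components per device (the latency thresholds and counts are invented for illustration):

```python
def components_per_device(latency_budget_ms):
    """Map a data-calibration latency budget (milliseconds) to how many
    data acquisition components one edge computing device should serve.
    The tighter the real-time requirement, the fewer components per
    device. All threshold values here are illustrative assumptions."""
    if latency_budget_ms <= 50:    # very strict real-time requirement
        return 2
    if latency_budget_ms <= 200:   # moderate requirement
        return 5
    return 10                      # relaxed requirement
```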
And 220, installing a plurality of groups of standard components on the road of the target area.
Each set of standard components includes a plurality of standard roadside sensors, with each edge computing device corresponding to at least one set of standard components. It will be appreciated that the accuracy of standard roadside sensors is higher than that of existing roadside sensors.
In some embodiments, an edge computing device may correspond to a standard component, and the standard component corresponding to the edge computing device may be used to perform precision improvement on all data acquisition components corresponding to the edge computing device.
Step 230, for each group of data acquisition components, adjusting the operating parameters of the data acquisition components.
In some embodiments, the operating parameters of the data acquisition component may include the angle and position of the data acquisition component, and the like. The operating parameters may also include other parameters, such as data acquisition frequency, etc.
In some embodiments, the edge computing device adjusts an operating parameter of the data acquisition component, which may include:
testing the stability of a plurality of existing road side sensors included in the data acquisition assembly;
the test data acquisition component comprises a plurality of measuring ranges of the existing road side sensors;
testing output results of a plurality of existing road side sensors included in the data acquisition component;
and adjusting the working parameters of the data acquisition component based on the stability, the output result and/or the measurement range.
In some embodiments, the edge computing device may adjust the angle and/or position of the image acquisition device and the angle and/or position of the radar based on the measurement range.
In some embodiments, for an image capture device, the edge computing device may acquire a preset capture range of the image capture device, determine an actual capture range of the image capture device, and adjust an angle and/or position of the image capture device based on a deviation between the preset capture range of the image capture device and the actual capture range of the image capture device.
In some embodiments, for a radar, an edge computing device may acquire a preset acquisition range of the radar, determine an actual acquisition range of the radar, and adjust an angle and/or position of the radar based on a deviation between the preset acquisition range of the radar and the actual acquisition range of the radar.
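One way to sketch the deviation-based adjustment for either sensor, under the simplifying assumptions that an acquisition range is summarized by the center of the covered area and that a proportional correction is applied (both assumptions are ours, not the patent's):

```python
def adjust_orientation(preset_range, actual_range, gain=0.5):
    """Compute a corrective adjustment from the deviation between a
    sensor's preset acquisition range and its measured actual range.
    Ranges are modeled as the (center_x, center_y) of the covered area
    in road coordinates; the proportional gain and the 2-D model are
    illustrative. Returns a (pan, tilt) correction in the same units."""
    dx = preset_range[0] - actual_range[0]
    dy = preset_range[1] - actual_range[1]
    return (gain * dx, gain * dy)
```

Applying the correction and re-measuring, then repeating, would drive the actual range toward the preset range for both the image acquisition device and the radar.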
FIG. 3 is a flow chart illustrating adjusting the operating parameters of a data acquisition assembly according to some embodiments of the present disclosure. As shown in FIG. 3, in some embodiments, the edge computing device adjusting the angle and/or position of the image acquisition device and the angle and/or position of the radar based on the measurement range may include:
acquiring a first coordinate point vector matrix of a target object through an image acquisition device;
acquiring a second coordinate point vector matrix of the target object through a radar;
determining an error distance based on the first coordinate point vector matrix and the second coordinate point vector matrix;
judging whether the angle and/or the position of the image acquisition device and the angle and/or the position of the radar need to be adjusted or not based on the error distance;
when the angle and/or the position of the image acquisition device and the angle and/or the position of the radar are judged to be required to be adjusted, the angle and/or the position of one of the image acquisition device and the radar is repeatedly adjusted, an updated first coordinate point vector matrix and an updated second coordinate point vector matrix are obtained, and an updated error distance is determined based on the updated first coordinate point vector matrix and the updated second coordinate point vector matrix until the updated error distance meets a preset error condition.
For example, the image acquisition device may acquire a first coordinate point vector matrix MA1 of the target object in real time using a Transformer-based BEV approach, while the radar acquires a second coordinate point vector matrix MB1 of the target object in real time. The edge computing device then adjusts the angle and/or position of one of the image acquisition device and the radar, obtains an updated first coordinate point vector matrix MA2 and an updated second coordinate point vector matrix MB2, and determines an updated error distance. If the error distance between MA2 and MB2 is smaller than the error distance between MA1 and MB1, the angle and/or position is being adjusted in the correct direction; otherwise, the adjustment direction needs to be reversed. This is repeated until the updated error distance determined from the latest first and second coordinate point vector matrices satisfies the preset error condition, at which point the two sensors can be considered aligned and their data mapped into a common coordinate system.
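The trial-and-reverse adjustment loop sketched above can be written out as follows; `get_error` and `apply_step` are illustrative stand-ins for re-measuring the two coordinate point vector matrices and physically nudging one sensor's angle/position:

```python
import numpy as np

def error_distance(ma, mb):
    """Mean Euclidean distance between corresponding coordinate points of
    the first (camera) and second (radar) coordinate point vector
    matrices; each row is one point."""
    diff = np.asarray(ma, dtype=float) - np.asarray(mb, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

def align_by_trial(get_error, apply_step, max_iters=100, tol=0.1):
    """Adjust one sensor step by step: if the updated error distance
    grows, the last step went the wrong way, so reverse direction; stop
    once the error distance satisfies the preset error condition
    (<= tol) or the iteration budget runs out. Returns the final error."""
    direction = 1
    prev = get_error()
    for _ in range(max_iters):
        if prev <= tol:
            break
        apply_step(direction)
        cur = get_error()
        if cur > prev:          # moved the wrong way: reverse
            direction = -direction
        prev = cur
    return prev
```

The fixed step size and single scalar "direction" are a deliberate simplification; a real implementation would adjust pan, tilt and position separately.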
The Transformer is a neural network model based on the attention mechanism. Unlike traditional neural networks such as RNNs and CNNs, the Transformer does not process data in serial order; instead, it captures the relations and correlations between different elements of the sequence through the attention mechanism, which allows it to adapt to inputs of different lengths and different structures.
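The attention mechanism described above can be sketched in a few lines. This is the standard scaled dot-product form, shown only to illustrate how every sequence element attends to every other element in a single step, with no serial recurrence.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention: every query row attends to
    every key row at once, so relations between any two sequence
    positions are captured regardless of their distance."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v, weights
```

A full Transformer stacks this (in multi-head form) with feed-forward layers and normalization; only the attention core is shown here.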
BEV (Bird's Eye View) is a method of projecting three-dimensional environmental information onto a two-dimensional plane so as to present the objects and features in the environment from a top view. In the field of autonomous driving, BEV can help the system better understand the surrounding environment and improve the accuracy of perception and decision making. In the environment sensing stage, BEV can fuse multi-modal data such as laser radar and camera data on the same plane, thereby alleviating occlusion and overlap problems between the data sources and improving the accuracy of object detection and tracking.
Obtaining the first coordinate point vector matrix MA1 of the target object in real time with the Transformer-based BEV approach specifically includes the following steps: the multi-modal data, such as radar and camera data, are fused into the BEV format, and necessary preprocessing operations such as data enhancement and normalization are performed. For laser radar point cloud data, the three-dimensional point cloud can be projected onto a two-dimensional plane, and the plane is then rasterized to generate a height map; for radar data, the distance and angle information can be converted into Cartesian coordinates and then rasterized on the BEV plane; for camera data, the image data can be projected onto the BEV plane to generate a color or intensity map;
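The lidar and radar preprocessing steps above can be sketched as follows. The grid extents, cell size, and the choice of maximum height per cell are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def pointcloud_to_bev_height_map(points, x_range=(0.0, 40.0),
                                 y_range=(-20.0, 20.0), cell=0.5):
    """Rasterize a lidar point cloud of (x, y, z) rows into a BEV height
    map: each grid cell keeps the maximum z of the points falling inside
    it; empty cells stay at 0 (assumed ground level)."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((h, w))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    xi = ((x[keep] - x_range[0]) / cell).astype(int)
    yi = ((y[keep] - y_range[0]) / cell).astype(int)
    np.maximum.at(bev, (xi, yi), z[keep])   # unbuffered per-cell maximum
    return bev

def radar_polar_to_cartesian(dist, angle_rad):
    """Convert radar (range, bearing) readings to x/y before rasterizing."""
    return dist * np.cos(angle_rad), dist * np.sin(angle_rad)
```

The same grid geometry would then be reused when projecting camera pixels onto the BEV plane, so that all modalities share one coordinate frame.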
the Transformer model may be used to extract features from the multi-modal data (e.g., images, radar data, etc.) in BEV format. Through end-to-end training on such data, the Transformer is able to automatically learn its inherent structure and interrelationships, thereby effectively identifying and locating target objects (e.g., obstacles) in the environment and generating the first coordinate point vector matrix MA1 of the target object.
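A full Transformer detection head is beyond the scope of this description; the sketch below illustrates only the output stage, turning occupied BEV cells back into a coordinate point vector matrix such as MA1, with a simple height threshold standing in for the learned detector. All grid parameters are assumptions matching the illustrative rasterization above.

```python
import numpy as np

def bev_to_coordinate_matrix(bev, x0=0.0, y0=-20.0, cell=0.5, thresh=1.0):
    """Turn occupied BEV cells back into world (x, y) cell centers,
    yielding an (N, 2) coordinate point vector matrix like MA1. The
    height threshold is a naive stand-in for the learned detector's
    classification output."""
    xi, yi = np.nonzero(bev > thresh)
    xs = x0 + (xi + 0.5) * cell      # cell-center x in world coordinates
    ys = y0 + (yi + 0.5) * cell      # cell-center y in world coordinates
    return np.stack([xs, ys], axis=1)
```

The radar branch would produce its matrix MB1 in the same world frame, which is what makes the error-distance comparison between the two matrices meaningful.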
For example, for the image acquisition device, the preset acquisition range of the image acquisition device is acquired, the actual acquisition range of the image acquisition device is determined, and the angle and/or the position of the image acquisition device are adjusted based on the deviation between the preset acquisition range and the actual acquisition range until the image acquisition device is at the optimal angle and/or position. After the angle and/or the position of the image acquisition device are fixed, the image acquisition device acquires the first coordinate point vector matrix MA1 of the target object in real time using the Transformer-based BEV approach, and the radar simultaneously acquires the second coordinate point vector matrix MB1 of the target object in real time. The edge computing device then adjusts the angle and/or the position of the radar and acquires an updated first coordinate point vector matrix MA2 and an updated second coordinate point vector matrix MB2. If the error distance between MA2 and MB2 is smaller than the error distance between MA1 and MB1, the angle and/or the position is being adjusted in the correct direction; otherwise the direction is wrong and is reversed. The adjustment continues until the updated error distance satisfies the preset error condition, at which point the adjustment is complete.
Step 240, for each edge computing device, establishing an accuracy lifting model, generating a plurality of training samples based on the historical data collected by the plurality of groups of data collection components corresponding to the edge computing device and the historical data collected by the at least one group of standard components corresponding to the edge computing device, and training the accuracy lifting model based on the plurality of training samples.
In some embodiments, for each historical time point, a training sample is generated based on the historical data collected by the plurality of sets of data collection components at that historical time point, and the label of the training sample is generated based on the historical data collected by the standard component at the same historical time point.
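The pairing of samples and labels by historical time point can be sketched as follows; the timestamp-keyed dictionaries are an assumed data layout, not one prescribed by the patent.

```python
def build_training_pairs(collected, standard):
    """Pair readings by historical time point: features come from the
    existing data acquisition components, labels from the co-located
    standard (higher-accuracy) sensors. Both arguments map
    timestamp -> reading; only timestamps present on both sides yield
    a (sample, label) pair."""
    times = sorted(set(collected) & set(standard))
    samples = [collected[t] for t in times]
    labels = [standard[t] for t in times]
    return samples, labels
```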
In some embodiments, for each edge computing device, the edge computing device may train the precision lifting model based on the training samples and the labels corresponding to the training samples, ending the training when the trained precision lifting model meets a preset training condition. The preset training condition may be at least one of convergence of the loss function result, the number of iterations exceeding a preset iteration threshold, and the like.
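As a minimal sketch of the training loop and its two stopping conditions, the accuracy lifting model is stood in for by a linear calibration y = a·x + b fitted by gradient descent; the real model architecture, loss, and thresholds are not specified by the patent.

```python
import numpy as np

def train_calibration(samples, labels, lr=0.05, max_iters=5000, eps=1e-9):
    """Fit a linear calibration y = a*x + b by gradient descent.
    Training ends when the loss has converged (change below `eps`) or
    the number of iterations reaches `max_iters` - the two preset
    training conditions named in the text."""
    x = np.asarray(samples, dtype=float)
    y = np.asarray(labels, dtype=float)
    a, b = 1.0, 0.0
    prev_loss = np.inf
    for _ in range(max_iters):
        pred = a * x + b
        loss = float(np.mean((pred - y) ** 2))
        if abs(prev_loss - loss) < eps:   # loss function converged
            break
        prev_loss = loss
        grad = pred - y                   # residuals driving the update
        a -= lr * float(np.mean(grad * x))
        b -= lr * float(np.mean(grad))
    return a, b

def calibrate(raw, a, b):
    """Apply the trained model to real-time readings (as in step 250)."""
    return a * raw + b
```

Once trained, the same `calibrate` mapping would be applied by the edge computing device to the real-time data of its data acquisition components.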
Step 250, for each edge computing device, performing data calibration on real-time data acquired by multiple groups of data acquisition components corresponding to the edge computing device based on the trained precision lifting model.
It can be understood that the method for improving the accuracy of the road side sensor based on self-supervised learning makes full use of the existing road side sensors and introduces only a small number of groups of standard components, minimizing the investment in new hardware while improving the accuracy of the existing road side sensing system, thereby greatly reducing the cost. Furthermore, the introduction of a plurality of edge computing devices improves the real-time performance of data calibration.
It should be noted that the above description of the method for improving accuracy of the roadside sensor based on self-supervised learning is only for illustration and description, and does not limit the scope of application of the present specification. Various modifications and variations of the method of improving accuracy of the roadside sensor based on self-supervised learning may be made by those skilled in the art under the guidance of the present specification. However, such modifications and variations are still within the scope of the present description.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of this specification may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and are therefore intended to fall within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, the specification uses specific words to describe the embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present description. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present description may be combined as suitable.
Furthermore, the order in which the elements and sequences are processed, the use of numerical letters, or other designations in the description are not intended to limit the order in which the processes and methods of the description are performed unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the present disclosure. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation disclosed in this specification and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in less than all features of a single embodiment disclosed above.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (6)

1. A method for improving accuracy of a roadside sensor based on self-supervised learning, comprising:
installing a plurality of edge computing devices on a road of a target area, wherein each edge computing device corresponds to at least one group of data acquisition components, and the data acquisition components comprise a plurality of existing road side sensors;
installing multiple groups of standard components on a road of the target area, wherein each group of standard components comprises a plurality of standard road side sensors, and each edge computing device corresponds to at least one group of standard components;
for each group of data acquisition components, adjusting working parameters of the data acquisition components;
for each edge computing device, establishing an accuracy lifting model, generating a plurality of training samples based on historical data acquired by a plurality of groups of data acquisition components corresponding to the edge computing devices and historical data acquired by at least one group of standard components corresponding to the edge computing devices, and training the accuracy lifting model based on the plurality of training samples;
for each edge computing device, performing data calibration on real-time data acquired by a plurality of groups of data acquisition components corresponding to the edge computing device based on the trained precision lifting model;
adjusting the working parameters of the data acquisition assembly, comprising:
testing the measuring ranges of a plurality of existing road side sensors included in the data acquisition component;
based on the measurement range, adjusting the angle and/or position of an image acquisition device included in the data acquisition component and the angle and/or position of a radar;
the adjusting the angle and/or the position of the image acquisition device and the angle and/or the position of the radar based on the measurement range comprises the following steps:
for the image acquisition device, acquiring a preset acquisition range of the image acquisition device, determining an actual acquisition range of the image acquisition device, and adjusting the angle and/or the position of the image acquisition device based on the deviation between the preset acquisition range of the image acquisition device and the actual acquisition range of the image acquisition device;
for the radar, acquiring a preset acquisition range of the radar, determining an actual acquisition range of the radar, and adjusting the angle and/or the position of the radar based on the deviation between the preset acquisition range of the radar and the actual acquisition range of the radar;
the adjusting the angle and/or the position of the image acquisition device and the angle and/or the position of the radar based on the measurement range specifically comprises:
acquiring a first coordinate point vector matrix of a target object through the image acquisition device;
acquiring a second coordinate point vector matrix of the target object through the radar;
determining an error distance based on the first coordinate point vector matrix and the second coordinate point vector matrix;
judging whether the angle and/or the position of the image acquisition device needs to be adjusted or not based on the error distance;
when judging that the angle and/or the position of the image acquisition device and the angle and/or the position of the radar need to be adjusted, repeatedly executing adjustment of the angle and/or the position of one of the image acquisition device and the radar, acquiring an updated first coordinate point vector matrix and an updated second coordinate point vector matrix, and determining an updated error distance based on the updated first coordinate point vector matrix and the updated second coordinate point vector matrix until the updated error distance meets a preset error condition;
the method specifically comprises the following steps:
the image acquisition device acquires a first coordinate point vector matrix MA1 of a target object in real time by adopting a Transformer-based BEV approach, and the radar acquires a second coordinate point vector matrix MB1 of the target object in real time; the edge computing device adjusts the angle and/or the position of the radar to acquire an updated first coordinate point vector matrix MA2 and an updated second coordinate point vector matrix MB2; if the error distance between the updated first coordinate point vector matrix MA2 and the updated second coordinate point vector matrix MB2 is smaller than the error distance between the first coordinate point vector matrix MA1 and the second coordinate point vector matrix MB1, the angle and/or the position is being adjusted in the correct direction, otherwise the direction of adjustment is wrong and is reversed; the adjustment is repeated until the updated error distance determined based on the updated first coordinate point vector matrix and the updated second coordinate point vector matrix meets a preset error condition, whereupon the adjustment is stopped and the current sensors are considered to have reached the optimal angle and/or position; subsequently, a self-supervised learning approach is adopted to map the data acquired by the current sensor to the data of the other road side sensors, or to use it directly for acquiring data on the road side.
2. The method for improving accuracy of a roadside sensor based on self-supervised learning of claim 1, wherein adjusting the operating parameters of the data acquisition assembly comprises:
testing stability of a plurality of existing road side sensors included in the data acquisition assembly;
testing output results of a plurality of existing road side sensors included in the data acquisition component;
and adjusting the working parameters of the data acquisition component based on the stability and/or the output result.
3. The method for improving accuracy of a roadside sensor based on self-supervised learning according to claim 1 or 2, wherein the generating a plurality of training samples based on the historical data collected by the plurality of sets of data collection components corresponding to the edge computing device and the historical data collected by the at least one set of standard components corresponding to the edge computing device comprises:
for each of the historical time points,
generating a training sample based on historical data acquired by the plurality of groups of data acquisition components at the historical time points;
and generating a label of the training sample based on the historical data collected by the standard component at the historical time point.
4. A method of improving accuracy of a roadside sensor based on self-supervised learning as set forth in claim 3, wherein said training said accuracy improvement model based on a plurality of said training samples comprises:
based on the training samples and the labels corresponding to the training samples, training the precision lifting model, and when the trained precision lifting model meets preset training conditions, ending the training.
5. A method of improving accuracy of a roadside sensor based on self-supervised learning as claimed in claim 1 or 2, wherein the installing a plurality of edge computation devices on the road of the target area comprises:
acquiring the real-time requirement of data calibration;
and determining the corresponding relation between the edge computing equipment and the data acquisition component based on the data calibration real-time requirement.
6. A system for improving accuracy of a roadside sensor based on self-supervised learning, comprising:
the edge computing devices are installed on roads of the target area, wherein each edge computing device corresponds to a plurality of groups of data acquisition components, and each data acquisition component comprises a plurality of existing road side sensors;
a plurality of sets of standard components mounted on roads of the target area, wherein each set of standard components comprises a plurality of standard road side sensors, and each edge computing device corresponds to at least one set of standard components;
the edge computing equipment is used for adjusting the working parameters of the corresponding data acquisition component;
the edge computing equipment is also used for establishing an accuracy lifting model, generating a plurality of training samples based on the historical data acquired by the plurality of groups of data acquisition components corresponding to the edge computing equipment and the historical data acquired by at least one group of standard components corresponding to the edge computing equipment, and training the accuracy lifting model based on the plurality of training samples;
the edge computing equipment is also used for carrying out data calibration on real-time data acquired by a plurality of groups of data acquisition components corresponding to the edge computing equipment based on the trained precision lifting model;
adjusting the working parameters of the data acquisition assembly, comprising:
testing the measuring ranges of a plurality of existing road side sensors included in the data acquisition component;
based on the measurement range, adjusting the angle and/or position of an image acquisition device included in the data acquisition component and the angle and/or position of a radar;
the adjusting the angle and/or the position of the image acquisition device and the angle and/or the position of the radar based on the measurement range comprises the following steps:
for the image acquisition device, acquiring a preset acquisition range of the image acquisition device, determining an actual acquisition range of the image acquisition device, and adjusting the angle and/or the position of the image acquisition device based on the deviation between the preset acquisition range of the image acquisition device and the actual acquisition range of the image acquisition device;
for the radar, acquiring a preset acquisition range of the radar, determining an actual acquisition range of the radar, and adjusting the angle and/or the position of the radar based on the deviation between the preset acquisition range of the radar and the actual acquisition range of the radar;
the adjusting the angle and/or the position of the image acquisition device and the angle and/or the position of the radar based on the measurement range specifically comprises:
acquiring a first coordinate point vector matrix of a target object through the image acquisition device;
acquiring a second coordinate point vector matrix of the target object through the radar;
determining an error distance based on the first coordinate point vector matrix and the second coordinate point vector matrix;
judging whether the angle and/or the position of the image acquisition device needs to be adjusted or not based on the error distance;
when judging that the angle and/or the position of the image acquisition device and the angle and/or the position of the radar need to be adjusted, repeatedly executing adjustment of the angle and/or the position of one of the image acquisition device and the radar, acquiring an updated first coordinate point vector matrix and an updated second coordinate point vector matrix, and determining an updated error distance based on the updated first coordinate point vector matrix and the updated second coordinate point vector matrix until the updated error distance meets a preset error condition;
the method specifically comprises the following steps:
the image acquisition device acquires a first coordinate point vector matrix MA1 of a target object in real time by adopting a Transformer-based BEV approach, and the radar acquires a second coordinate point vector matrix MB1 of the target object in real time; the edge computing device adjusts the angle and/or the position of the radar to acquire an updated first coordinate point vector matrix MA2 and an updated second coordinate point vector matrix MB2; if the error distance between the updated first coordinate point vector matrix MA2 and the updated second coordinate point vector matrix MB2 is smaller than the error distance between the first coordinate point vector matrix MA1 and the second coordinate point vector matrix MB1, the angle and/or the position is being adjusted in the correct direction, otherwise the direction of adjustment is wrong and is reversed; the adjustment is repeated until the updated error distance determined based on the updated first coordinate point vector matrix and the updated second coordinate point vector matrix meets a preset error condition, whereupon the adjustment is stopped and the current sensors are considered to have reached the optimal angle and/or position; subsequently, a self-supervised learning approach is adopted to map the data acquired by the current sensor to the data of the other road side sensors, or to use it directly for acquiring data on the road side.
CN202310837967.4A 2023-07-10 2023-07-10 Method and system for improving accuracy of road side sensor based on self-supervision learning Active CN116561534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310837967.4A CN116561534B (en) 2023-07-10 2023-07-10 Method and system for improving accuracy of road side sensor based on self-supervision learning


Publications (2)

Publication Number Publication Date
CN116561534A (en) 2023-08-08
CN116561534B (en) 2023-10-13

Family

ID=87493237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310837967.4A Active CN116561534B (en) 2023-07-10 2023-07-10 Method and system for improving accuracy of road side sensor based on self-supervision learning

Country Status (1)

Country Link
CN (1) CN116561534B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779174A (en) * 2021-11-05 2021-12-10 华砺智行(武汉)科技有限公司 Method, system, equipment and medium for improving perception precision of roadside sensor
CN114167442A (en) * 2020-08-19 2022-03-11 北京万集科技股份有限公司 Information acquisition method and device, computer equipment and storage medium
CN114821507A (en) * 2022-05-18 2022-07-29 中国地质大学(北京) Multi-sensor fusion vehicle-road cooperative sensing method for automatic driving
CN114821505A (en) * 2022-05-09 2022-07-29 合众新能源汽车有限公司 Multi-view 3D target detection method, memory and system based on aerial view
CN116368355A (en) * 2021-09-05 2023-06-30 汉熵通信有限公司 Internet of things system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN112835037B (en) * 2020-12-29 2021-12-07 清华大学 All-weather target detection method based on fusion of vision and millimeter waves


Also Published As

Publication number Publication date
CN116561534A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN109508580B (en) Traffic signal lamp identification method and device
CN110363058B (en) Three-dimensional object localization for obstacle avoidance using one-shot convolutional neural networks
CN113642633B (en) Method, device, equipment and medium for classifying driving scene data
US10943148B2 (en) Inspection neural network for assessing neural network reliability
CN107481292A (en) The attitude error method of estimation and device of vehicle-mounted camera
CN111458721B (en) Exposed garbage identification and positioning method, device and system
CN110135302B (en) Method, device, equipment and storage medium for training lane line recognition model
CN108711172B (en) Unmanned aerial vehicle identification and positioning method based on fine-grained classification
CN114092920B (en) Model training method, image classification method, device and storage medium
CN111222438A (en) Pedestrian trajectory prediction method and system based on deep learning
CN111491093A (en) Method and device for adjusting field angle of camera
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN111563450A (en) Data processing method, device, equipment and storage medium
CN113189989B (en) Vehicle intention prediction method, device, equipment and storage medium
CN112215120B (en) Method and device for determining visual search area and driving simulator
CN113052295B (en) Training method of neural network, object detection method, device and equipment
CN113326826A (en) Network model training method and device, electronic equipment and storage medium
CN114037645A (en) Coating defect detection method and device for pole piece, electronic equipment and readable medium
CN110097600B (en) Method and device for identifying traffic sign
CN115019060A (en) Target recognition method, and training method and device of target recognition model
CN109903308B (en) Method and device for acquiring information
Torres et al. Real-time human body pose estimation for in-car depth images
CN116561534B (en) Method and system for improving accuracy of road side sensor based on self-supervision learning
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN112926415A (en) Pedestrian avoiding system and pedestrian monitoring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant