CN109657593B - Road side information fusion method and system

Info

Publication number
CN109657593B
Authority
CN
China
Prior art keywords
ultrasonic sensor
ultrasonic
roadside image
coordinate system
Prior art date
Legal status
Active
Application number
CN201811518421.8A
Other languages
Chinese (zh)
Other versions
CN109657593A
Inventor
卢山 (Lu Shan)
阙昊懿 (Que Haoyi)
徐赵文 (Xu Zhaowen)
Current Assignee
Shenzhen Polytechnic
Original Assignee
Shenzhen Polytechnic
Priority date: 2018-12-12
Filing date: 2018-12-12
Publication date: 2023-04-28
Application filed by Shenzhen Polytechnic
Priority to CN201811518421.8A
Publication of CN109657593A (2019-04-19)
Application granted
Publication of CN109657593B (2023-04-28)
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/86 Combinations of sonar systems with lidar systems; Combinations of sonar systems with systems not using wave reflection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a roadside information fusion method and system. The method comprises the following steps: a machine vision sensor acquires a roadside image within a preset range; an ultrasonic sensor continuously transmits and receives ultrasonic signals and converts them into ultrasonic sensor data; the roadside image and the ultrasonic sensor data are transmitted to a server through a wireless communication module; the roadside image is preprocessed and the ultrasonic sensor data are denoised; the projection point of an effective target on the roadside image pixel plane is obtained from the denoised ultrasonic sensor data; and a region of interest is established around the projection point to complete the spatial fusion of the ultrasonic sensor data and the machine vision information. The method projects the ultrasonic sensor data onto the image pixel plane and builds a region of interest around the projection point, completing the spatial fusion of ultrasonic sensor data and machine vision information and achieving accurate identification and detection of roadside information.

Description

Road side information fusion method and system
Technical Field
The invention relates to the technical field of data acquisition and processing, in particular to a road side information fusion method and system.
Background
At present, with the increasing penetration of Internet of Things applications across industries, the Internet of Vehicles has gradually become one of its important applications. The Internet of Vehicles is a real-time, accurate and efficient integrated traffic management and control system, built by organically applying advanced sensor technology, communication technology, data processing technology, network technology, automatic control technology and the like to the entire traffic management system. The industry is currently highly valued abroad and is regarded as a major application of radio-frequency Internet of Things technology.
The Internet of Vehicles comprises vehicle-to-vehicle and vehicle-to-road interconnection; achieving automatic driving is the ultimate goal, and vehicle-to-road interconnection must be realized first. Vehicle-road information systems have long been an important field of intelligent transportation development. Internationally, systems such as IVHS in the United States and VICS in Japan have achieved intelligent traffic management and information services by establishing effective information communication between vehicles and roads. In China, the highway sector, with its high level of informatization, is the most important carrier for the rapid deployment of the Internet of Vehicles.
The Internet of Vehicles comprises numerous services oriented to public security, transportation, the Internet, individual users and other fields, and the massive data collected must be processed concurrently. Making full use of data middleware technology in the vehicle-road information service platform can therefore effectively improve the efficiency of mass data processing and sharing.
However, the computing power currently applied to information processing and fusion still leaves room for improvement.
Disclosure of Invention
In view of the above, the invention provides a roadside information fusion method and system. The method senses roadside information through an ultrasonic sensor and a vision sensor respectively and provides a data fusion method, thereby improving recognition accuracy.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
In a first aspect, the present invention provides a roadside information fusion method, comprising the following steps:
during vehicle operation, the machine vision sensor continuously acquires roadside images within a preset range; the ultrasonic sensor continuously transmits and receives ultrasonic signals and converts them into ultrasonic sensor data;
transmitting the roadside image and the ultrasonic sensor data to a server through a wireless communication module;
preprocessing the roadside image and denoising the ultrasonic sensor data;
obtaining the projection point of an effective target on the roadside image pixel plane from the denoised ultrasonic sensor data;
and establishing a region of interest around the projection point to complete the spatial fusion of the ultrasonic sensor data and the machine vision information.
In one embodiment, the wireless communication module is a 5G mobile communication network module.
In one embodiment, preprocessing the roadside image includes image graying and filtering.
In one embodiment, denoising the ultrasonic sensor data includes:
acquiring a mathematical model f(t) = s(t) + r(t) of the noisy ultrasonic echo signal, wherein s(t) is the attenuated ultrasonic echo signal received by the probe and r(t) is all noise, including structural noise;
selecting a basic wavelet and calculating its scale function and filtering function;
and convolving the ultrasonic echo signal model with the filtering function of the basic wavelet to obtain the denoised ultrasonic signal.
In one embodiment, obtaining the projection point of the effective target on the roadside image pixel plane from the denoised ultrasonic sensor data includes:
obtaining a first conversion relation between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
obtaining a second conversion relation between the machine vision sensor coordinate system and the pixel coordinate system;
and realizing the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor.
In one embodiment, establishing a region of interest around the projection point to accomplish the spatial fusion of ultrasonic sensor data and machine vision information comprises:
taking the projection point as the center and establishing, according to the inverted pyramid model, a region of interest that decreases as the effective target distance increases.
In a second aspect, the present invention further provides a roadside information fusion system, comprising:
an acquisition module, used for the machine vision sensor to continuously acquire roadside images within a preset range during vehicle operation, and for the ultrasonic sensor to continuously transmit and receive ultrasonic signals and convert them into ultrasonic sensor data;
a sending module, used for sending the roadside image and the ultrasonic sensor data to a server through the wireless communication module;
a processing module, used for preprocessing the roadside image and denoising the ultrasonic sensor data;
a projection module, used for obtaining the projection point of the effective target on the roadside image pixel plane from the denoised ultrasonic sensor data;
and a fusion module, used for establishing a region of interest around the projection point and completing the spatial fusion of the ultrasonic sensor data and the machine vision information.
In one embodiment, in the transmitting module, the wireless communication module is a 5G mobile communication network module.
In one embodiment, the processing module comprises: a preprocessing submodule and a denoising submodule;
the denoising submodule includes:
an acquisition subunit, configured to acquire a mathematical model f(t) = s(t) + r(t) of the noisy ultrasonic echo signal, where s(t) is the attenuated ultrasonic echo signal received by the probe, and r(t) is all noise including structural noise;
a selecting subunit, configured to select a basic wavelet and calculate a scale function and a filtering function of the basic wavelet;
and the convolution subunit is used for convolving the ultrasonic echo signal mathematical model with the filtering function of the basic wavelet to obtain a denoised ultrasonic signal.
In one embodiment, the projection module includes:
the first acquisition submodule is used for acquiring a first conversion relation between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
the second acquisition submodule is used for acquiring a second conversion relation between the machine vision sensor coordinate system and the pixel coordinate system;
and the conversion sub-module is used for realizing the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor.
In one embodiment, the fusion module is specifically configured to establish, according to the inverted pyramid model and with the projection point as the center, a region of interest that decreases as the effective target distance increases.
Compared with the prior art, the invention discloses an information fusion method based on ultrasonic waves and machine vision, which senses roadside information through an ultrasonic sensor and a vision sensor respectively and provides a data fusion method, thereby improving recognition accuracy.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a road side information fusion method provided by the invention.
Fig. 2 is a schematic diagram of the installation of the ultrasonic sensor and the machine vision sensor provided by the invention.
Fig. 3 is a schematic diagram of the working principle of the ultrasonic sensor provided by the invention.
Fig. 4 is a schematic diagram of the detection of the ultrasonic sensor provided by the invention.
Fig. 5 is a schematic structural diagram of a piezoelectric ultrasonic sensor according to the present invention.
Fig. 6 is a schematic diagram of a structure of a CCD image sensor according to the present invention.
Fig. 7 is a flowchart of denoising ultrasonic sensor data according to the present invention.
Fig. 8 is a flowchart of obtaining the projection point of the effective target on the roadside image pixel plane from the ultrasonic sensor data, as provided by the invention.
Fig. 9 is a block diagram of a road side information fusion device provided by the invention.
Fig. 10 is a block diagram of a processing module 93 provided in the present invention.
Fig. 11 is a block diagram of an ultrasonic and machine vision based information fusion device provided by the invention.
In the figure: 1-an ultrasonic sensor; 2-machine vision sensor.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings, in which it is evident that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention discloses a roadside information fusion method which, referring to FIG. 1, comprises S101-S105:
S101, during the running of the ego vehicle, the machine vision sensor continuously acquires roadside images within a preset range; the ultrasonic sensor continuously transmits and receives ultrasonic signals and converts them into ultrasonic sensor data;
S102, transmitting the roadside image and the ultrasonic sensor data to a server through a wireless communication module;
S103, preprocessing the roadside image and denoising the ultrasonic sensor data;
S104, obtaining the projection point of an effective target on the roadside image pixel plane from the denoised ultrasonic sensor data;
S105, establishing a region of interest around the projection point to complete the spatial fusion of the ultrasonic sensor data and the machine vision information.
Referring to fig. 2, the vehicle is provided with an ultrasonic sensor 1 and a machine vision sensor 2. Ultrasonic waves are mechanical waves with frequencies above 20 kHz, characterized by high frequency, short wavelength and little diffraction. Their most notable properties are good directivity, strong penetrating power, and pronounced reflection and refraction at medium interfaces, which make them applicable to the detection of roadside information, such as roadbeds, road signs, signboards, steep slopes, grade-separated intersections, parallel roads with height differences, lanes, lane center lines, lane side lines, road boundaries, pavement markers, signal lights, sidewalks, objects around the road, and the like.
Because the detection angle of a single ultrasonic sensor is small, a vehicle needs to be equipped with several of them at different angles. Beyond this, the ultrasonic approach has the following advantages: (1) it is waterproof and dustproof, and a small amount of silt does not impair it; (2) a probe with a metal housing integrates well with the vehicle body shell; (3) the minimum monitoring distance can reach 0.1-0.3 m. For the common 40 kHz ultrasonic sensor, ranging accuracy is about 1-3 cm (depending on the back-end circuitry and the server's data processing capability).
Ultrasonic waves generally include longitudinal waves, transverse waves, and surface waves, and when an ultrasonic wave propagates in a medium, energy is gradually attenuated as the propagation distance increases. The attenuation of energy is determined by the diffusion, scattering and absorption of ultrasound waves.
Specifically, in this embodiment, the main technical parameters of the ultrasonic sensor are as follows:
operating voltage: DC 5 V;
quiescent current: less than 2 mA;
level output: high, 5 V;
level output: low, 0 V;
induction angle: no greater than 15 degrees;
detection distance: 2 cm to 450 cm, with accuracy up to 0.2 cm;
ports: VCC (power supply), Trig (control terminal), Echo (receiving terminal), GND (ground).
The working process is as follows: as shown in fig. 3-4;
(1) the I/O triggers ranging with a high-level signal of at least 10 µs;
(2) the module automatically sends eight 40 kHz square-wave pulses and automatically detects whether a signal returns;
(3) if a signal returns, a high level is output through the I/O, and the duration of the high level is the time from the transmission of the ultrasonic wave to its return.
Test distance = (high-level duration × speed of sound (340 m/s)) / 2.
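For illustration, the timing arithmetic above can be expressed as a minimal Python sketch; the function name and the example echo duration are illustrative assumptions, not part of the original disclosure:

    def ultrasonic_distance_m(high_level_s: float, speed_of_sound: float = 340.0) -> float:
        """Convert the Echo pin's high-level duration into a one-way distance.

        The high level spans the round trip (transmit -> reflect -> receive),
        so the travel time is halved.
        """
        return high_level_s * speed_of_sound / 2.0

    # Example: a 5.88 ms high level corresponds to roughly 1 m.
    print(f"{ultrasonic_distance_m(5.88e-3):.2f} m")  # -> 1.00 m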
In particular, in the present embodiment, the ultrasonic sensor may be a piezoelectric sensor, a magnetostrictive sensor, an electromagnetic sensor, or the like, and the present embodiment is not limited thereto.
Taking a piezoelectric ultrasonic sensor as an example:
as shown in fig. 5, the structure of the piezoelectric ultrasonic sensor is shown, and the piezoelectric ultrasonic sensor is operated by using the piezoelectric effect principle of piezoelectric materials. The common sensitive element materials mainly comprise piezoelectric crystals and piezoelectric ceramics.
According to whether the positive or inverse piezoelectric effect is used, piezoelectric ultrasonic transducers are classified into two types: generators (transmitting probes) and receivers (receiving probes). According to structure and the wave mode used, they can be classified into straight probes, surface wave probes, Lamb wave probes, variable-angle probes, bimorph probes, focusing probes, water-immersion probes, water-spray probes, special probes, and the like.
The piezoelectric ultrasonic generator converts high-frequency electric vibration into high-frequency mechanical vibration by utilizing the principle of inverse piezoelectric effect, so as to generate ultrasonic waves. Resonance occurs when the frequency of the applied alternating voltage is equal to the natural frequency of the piezoelectric material, and the ultrasonic wave generated at this time is strongest. Piezoelectric ultrasonic transducers can generate high frequency ultrasonic waves of tens of kilohertz to tens of megahertz with sound intensities up to tens of watts per square centimeter.
The piezoelectric ultrasonic receiver works on the principle of the positive piezoelectric effect. When an ultrasonic wave acts on the piezoelectric wafer and makes it expand and contract, charges of opposite polarity are generated on the two surfaces of the wafer; these charges are converted into a voltage, amplified, sent to a measuring circuit, and finally recorded or displayed. The piezoelectric ultrasonic receiver has essentially the same structure as the generator, and sometimes the same sensor serves as both.
The ultrasonic sensor structure mainly comprises a piezoelectric wafer, an absorption block (damping block), a protective film, and the like. The piezoelectric wafer is usually a circular plate whose ultrasonic frequency is inversely proportional to its thickness. Both sides of the wafer are plated with silver layers serving as conductive plates; the bottom surface is grounded and the top surface is connected to the lead-out wire. To prevent the wafer from wearing through direct contact with the measured object, a protective film is bonded below it. The absorption block reduces the mechanical quality factor of the piezoelectric wafer and absorbs ultrasonic energy.
In step S101, the image acquisition unit of the machine vision sensor mainly consists of a CCD/CMOS camera, an optical system, an illumination system and an image acquisition card; it converts the optical image into a digital image and transmits it to the image processing unit. Commonly used image sensors are mainly of two types: CCD and CMOS.
This embodiment takes a CCD image sensor as an example. Referring to fig. 6, the CCD image sensor consists of three layers: a micro lens, a color filter, and a photosensitive element. Each photosensitive element consists of a photodiode and a storage unit that controls adjacent charges, and captures photons and converts them into electrons; the more light is collected, the more electrons are generated and the stronger the electronic signal, which is more easily recorded and less easily lost, yielding richer image detail.
In implementation, the data acquired by the machine vision sensor and the ultrasonic sensor while the vehicle is running are transmitted to the server through the wireless communication module. After processing, the server projects the ultrasonic sensor data onto the image pixel plane and builds a region of interest around the projection point, completing the spatial fusion of the ultrasonic sensor data and the machine vision information, achieving accurate identification and detection of roadside information, and providing a solid guarantee for subsequent automatic obstacle avoidance by the vehicle.
The following describes the above steps in detail, respectively.
In one embodiment, in step S102, the wireless communication module is preferably a 5G mobile communication network module; 5G offers higher data rates, wider bandwidth and therefore higher reliability. This embodiment mainly exploits its lower latency, which can meet the specific requirements of safe driving, such as automatic driving.
In one embodiment, in step S103, the roadside image is preprocessed; the preprocessing mainly comprises image graying and filtering. The image acquired by the CCD image sensor is a three-channel RGB color image; it is first grayed, i.e. converted from a color image to a grayscale image. For any pixel I, the conversion formula is:

I_gray = 0.299 × I_R + 0.587 × I_G + 0.114 × I_B

After the grayscale image is obtained, it is filtered to remove noise interference in the image. For example, mean filtering can be used: a template consisting of several pixels adjacent to the target pixel is placed over the target pixel on the image, the average of all pixels in the template is computed, and this average is assigned to the current pixel (x, y) as the gray level g(x, y) of the processed image at that point, i.e.

g(x, y) = (1/M) × Σ_{(m,n)∈S} I(m, n)

where M is the total number of pixels in the template, including the current pixel, S denotes the template area, and I(m, n) is the gray value at point (m, n). To filter interference information effectively, especially line interference, a slightly larger template may be selected, e.g. M ∈ {36, 49, 64}.
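As a concrete sketch of this preprocessing, assuming OpenCV and NumPy are available (the image path and the 7x7 template, i.e. M = 49, are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("roadside.jpg")             # placeholder path; OpenCV loads BGR
    b, g, r = cv2.split(img.astype(np.float32))

    # Graying with the weights from the formula above (note the BGR channel order).
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

    # Mean filtering with a slightly larger template, here 7x7 (M = 49),
    # to suppress line interference.
    filtered = cv2.blur(gray, (7, 7))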
In one embodiment, denoising the ultrasonic sensor data, referring to fig. 7, includes:
S701, acquiring a mathematical model f(t) = s(t) + r(t) of the noisy ultrasonic echo signal, wherein s(t) is the attenuated ultrasonic echo signal received by the probe, and r(t) is all noise including structural noise;
s702, selecting a basic wavelet and calculating a scale function and a filtering function of the basic wavelet;
s703, convolving the ultrasonic echo signal mathematical model with a filtering function of the basic wavelet to obtain a denoised ultrasonic signal.
In this embodiment, the mathematical model of the defective ultrasonic echo signal can be expressed as

f(t) = s(t) + r(t)    (1)

where s(t) is the attenuated ultrasonic echo signal received by the probe, and r(t) is all noise including structural noise, typically taken as r(t) = b·r_n(t), where b is the noise figure and r_n(t) is the structural noise.
Then a member of the Daubechies wavelet family is chosen as the basic wavelet (also called the mother wavelet), with filtering function ψ(t). In the invention, the order N of the dbN wavelet, as well as the threshold rule and size, are selected according to the attenuation of the ultrasonic echo signal and the noise level; preferably, an order between 4 and 6 is chosen.
For the continuous wavelet transform filter coefficients h_0(n) and h_1(n), the scale function φ(t) and the filtering function ψ(t) are solved. Let Φ(w), Ψ(w), H_0(w) and H_1(w) denote the Fourier transforms of φ(t), ψ(t), h_0(n) and h_1(n), respectively. They are related by the two-scale equations

Φ(2w) = H_0(w)Φ(w)    (2)

Ψ(2w) = H_1(w)Φ(w)    (3)

Φ(w) = ∏_{j=1}^∞ H_0(w/2^j)    (4)

H_0(z) = h_0(0) + h_0(1)z^{-1} + h_0(2)z^{-2} + h_0(3)z^{-3} + ... + h_0(n)z^{-n}    (5)

H_0(z) is obtained from formula (5) and then z-transformed to obtain H_0(w); from H_0(w) and H_1(w), Φ(w) and Ψ(w) are solved, giving the scale function and the filtering function. Note that an analytical solution follows from formula (5) only in very few cases; in general no analytical solution exists, and φ(t) can only be obtained by iterative numerical convolution on h_0(n):

φ^{(i+1)}(t) = Σ_n h_0(n) φ^{(i)}(2t - n)    (6)

After the scale function is obtained, the filtering function is derived from it: the scale function φ(t) is a low-pass function, and ψ(t) is obtained as a shift weighting of φ(2t - k):

ψ(t) = Σ_{k=2-2N}^{1} g_k φ(2t - k)    (7)

where k runs from 2-2N to 1; for different N, the weights g_k also differ.

Finally, the basic wavelet function ψ(t) is shifted by τ and, at each scale α, the inner product with the ultrasonic echo signal model x(t) is taken:

WT(α, τ) = (1/√α) ∫ x(t) ψ*((t - τ)/α) dt    (8)

The equivalent frequency-domain expression is

WT(α, τ) = (√α/2π) ∫ X(w) Ψ*(αw) e^{jwτ} dw    (9)

The denoised ultrasonic signal is obtained after the operation of formula (8).
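As a runnable counterpart, the sketch below uses the PyWavelets library with a db4 mother wavelet. It applies standard soft-threshold wavelet denoising rather than the explicit Φ/Ψ construction above; the decomposition level, threshold rule, and synthetic echo are assumptions:

    import numpy as np
    import pywt

    def denoise_echo(f: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
        """Denoise a noisy echo f(t) = s(t) + r(t) by wavelet thresholding."""
        coeffs = pywt.wavedec(f, wavelet, level=level)
        # Universal threshold estimated from the finest detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(len(f)))
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(f)]

    # Example: a decaying 40 kHz echo buried in noise, sampled at 1 MHz.
    t = np.arange(0, 2e-3, 1e-6)
    s = np.exp(-2000 * t) * np.sin(2 * np.pi * 40e3 * t)
    f = s + 0.2 * np.random.randn(t.size)
    s_hat = denoise_echo(f)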
In one embodiment, in step S104, obtaining the projection point of the effective target on the roadside image pixel plane from the denoised ultrasonic sensor data, referring to fig. 8, includes:
S801, obtaining a first conversion relation between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
S802, obtaining a second conversion relation between the machine vision sensor coordinate system and the pixel coordinate system;
S803, realizing the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, and obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor.
In this embodiment, the spatial fusion of the ultrasonic sensor data and the machine vision detection information is completed through the conversion relations between the ultrasonic sensor, the machine vision sensor, and the various coordinate systems.
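To make S801-S803 concrete, a minimal sketch under a pinhole-camera assumption follows; the rotation R, translation t and intrinsic matrix K are placeholder calibration values, not values given in the patent:

    import numpy as np

    # First conversion: ultrasonic 2D plane (x lateral, y forward) -> camera frame.
    # Placeholder axis remap plus mounting offset; real values come from calibration.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])   # ultrasonic (x, y, 0) -> camera (x, 0, y)
    t = np.array([0.0, 0.2, 0.0])     # placeholder mounting offset between sensors (m)

    # Second conversion: camera frame -> pixel coordinates via the intrinsics.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def project_ultrasonic_point(x: float, y: float) -> np.ndarray:
        """Map an ultrasonic detection (x, y) to its projection point (u, v)."""
        p_cam = R @ np.array([x, y, 0.0]) + t   # first conversion relation
        uvw = K @ p_cam                         # second conversion relation
        return uvw[:2] / uvw[2]                 # homogeneous -> pixel coordinates

    print(project_ultrasonic_point(0.5, 3.0))   # e.g. -> [453.33 293.33]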
In one embodiment, establishing a region of interest around the projection point to accomplish the spatial fusion of ultrasonic sensor data and machine vision information comprises:
taking the projection point as the center and establishing, according to the inverted pyramid model, a region of interest that decreases as the effective target distance increases.
The data fusion of the invention is divided into two parts:
(1) First fusion, extracting R based on the ultrasonic sensor: the vehicle detection process is performed within R, where R is a polygon mask defined on the scene. The image mask returns 1 inside the R polygon and 0 elsewhere. By defining R, the processing area of the video can be limited, thereby reducing the computation time and memory consumption required for network training.
Furthermore, the R region eliminates interference from the surrounding environment and concentrates the processing areas on the road. Using image multiplication, the areas outside the mask return 0, which eliminates areas outside the lane. Within the regions of interest derived from the ultrasonic sensor and the roadside image, a classifier trained with a convolutional neural network judges whether useful roadside information is present (e.g., roadbeds, road signs, signboards, steep slopes, grade-separated intersections, parallel roads with height differences, lanes, lane center lines, lane side lines, road boundaries, pavement markers, signal lights, sidewalks, objects around the road, and the like). The definition of the region of interest therefore directly affects the fusion result.
(2) Second fusion, candidate regions: classification judges whether a given detection window contains roadside information, i.e., detection windows are divided into two categories, with and without roadside information; localization points out the specific position of the effective target in the image. Fast-RCNN adopts image regionalization, extracts several candidate regions for training, and integrates the results obtained by region classification. R, fused with the ultrasonic sensor data, reduces the original pixel area, and the size of the candidate regions can be adjusted automatically according to the target distance measured by the ultrasonic sensor: if the distance is far, R is large and the region is enlarged; if the target is near, the region is contracted.
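A sketch of the first-fusion masking step described above, assuming OpenCV; the polygon vertices are placeholders for the scene-specific definition of R:

    import cv2
    import numpy as np

    def apply_road_mask(gray: np.ndarray, polygon_px: np.ndarray) -> np.ndarray:
        """Build the R polygon mask (1 inside, 0 outside) and multiply it in,
        so that everything outside the road region returns 0."""
        mask = np.zeros(gray.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [polygon_px.astype(np.int32)], 1)
        return gray * mask

    # Placeholder trapezoid roughly covering the road ahead in a 640x480 frame.
    road_polygon = np.array([[100, 470], [540, 470], [400, 250], [240, 250]])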
Based on the same inventive concept, the invention further provides a roadside information fusion system, as shown in fig. 9, comprising:
an acquisition module 91, configured to have the machine vision sensor continuously acquire roadside images within a preset range during the running of the ego vehicle, and the ultrasonic sensor continuously transmit and receive ultrasonic signals and convert them into ultrasonic sensor data;
a transmitting module 92, configured to transmit the roadside image and the ultrasonic sensor data to a server through a wireless communication module;
a processing module 93, configured to preprocess the roadside image and denoise the ultrasonic sensor data;
a projection module 94, configured to obtain the projection point of the effective target on the roadside image pixel plane from the denoised ultrasonic sensor data;
and a fusion module 95, configured to establish a region of interest around the projection point and complete the spatial fusion of the ultrasonic sensor data and the machine vision information.
In one embodiment, in the sending module 92, the wireless communication module is a 5G mobile communication network module.
In one embodiment, the processing module 93, referring to fig. 10, includes a preprocessing submodule 931 and a denoising submodule 932;
the denoising submodule 932 includes:
an acquisition subunit 9311, configured to acquire a mathematical model f(t) = s(t) + r(t) of the noisy ultrasonic echo signal, where s(t) is the attenuated ultrasonic echo signal received by the probe, and r(t) is all noise including structural noise;
a selecting subunit 9312, configured to select a basic wavelet and calculate its scale function and filtering function;
and a convolution subunit 9313, configured to convolve the ultrasonic echo signal model with the filtering function of the basic wavelet to obtain the denoised ultrasonic signal.
In one embodiment, the projection module 94, as shown with reference to fig. 11, includes:
a first obtaining sub-module 941, configured to obtain a first conversion relationship between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
a second obtaining sub-module 942, configured to obtain a second conversion relationship between the machine vision sensor coordinate system and the pixel coordinate system;
and a conversion sub-module 943, configured to realize the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor.
In one embodiment, the fusion module 95 is specifically configured to establish, according to the inverted pyramid model and with the projection point as the center, a region of interest that decreases as the effective target distance increases.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and for identical or similar parts reference may be made between the embodiments. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A roadside information fusion method, characterized by comprising:
during vehicle operation, the machine vision sensor continuously acquires roadside images within a preset range; the ultrasonic sensor continuously transmits and receives ultrasonic signals and converts them into ultrasonic sensor data;
transmitting the roadside image and the ultrasonic sensor data to a server through a wireless communication module;
preprocessing the roadside image and denoising the ultrasonic sensor data;
obtaining the projection point of an effective target on the roadside image pixel plane from the denoised ultrasonic sensor data, comprising:
obtaining a first conversion relation between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
obtaining a second conversion relation between the machine vision sensor coordinate system and the pixel coordinate system;
realizing the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, and obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor;
establishing a region of interest around the projection point to complete the spatial fusion of the ultrasonic sensor data and the machine vision information, comprising: taking the projection point as the center and establishing, according to the inverted pyramid model, a region of interest that decreases as the effective target distance increases;
extracting R based on the ultrasonic sensor: the vehicle detection process is performed within R, where R is a polygon mask defined on the scene; the image mask returns 1 inside the R polygon and 0 elsewhere; by defining R, the processing area of the video can be limited, reducing the computation time and memory consumption required for network training;
the R region eliminates interference from the surrounding environment and concentrates the processing areas on the road; using image multiplication, the areas outside the mask return 0 to eliminate areas outside the lane; within the regions of interest derived from the ultrasonic sensor and the roadside image, a classifier trained with a convolutional neural network judges whether useful roadside information is present (roadbeds, road signs, signboards, steep slopes, grade-separated intersections, parallel roads with height differences, lanes, lane center lines, lane side lines, road boundaries, pavement markers, signal lights, sidewalks, objects around the road).
2. The method of claim 1, wherein the wireless communication module is a 5G mobile communication network module.
3. The method of claim 1, wherein preprocessing the roadside image comprises image graying and filtering.
4. The roadside information fusion method according to claim 1, wherein denoising the ultrasonic sensor data comprises:
acquiring a mathematical model f(t) = s(t) + r(t) of the noisy ultrasonic echo signal, wherein s(t) is the attenuated ultrasonic echo signal received by the probe and r(t) is all noise, including structural noise;
selecting a basic wavelet and calculating its scale function and filtering function;
and convolving the ultrasonic echo signal model with the filtering function of the basic wavelet to obtain the denoised ultrasonic signal.
5. A roadside information fusion system, comprising:
the acquisition module, used for the machine vision sensor to continuously acquire roadside images within a preset range during vehicle operation, and for the ultrasonic sensor to continuously transmit and receive ultrasonic signals and convert them into ultrasonic sensor data;
the sending module, used for sending the roadside image and the ultrasonic sensor data to a server through the wireless communication module;
the processing module, used for preprocessing the roadside image and denoising the ultrasonic sensor data;
the projection module, used for obtaining the projection point of the effective target on the roadside image pixel plane from the denoised ultrasonic sensor data, comprising:
obtaining a first conversion relation between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
obtaining a second conversion relation between the machine vision sensor coordinate system and the pixel coordinate system;
realizing the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, and obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor;
the fusion module, used for establishing a region of interest around the projection point and completing the spatial fusion of the ultrasonic sensor data and the machine vision information, comprising: taking the projection point as the center and establishing, according to the inverted pyramid model, a region of interest that decreases as the effective target distance increases;
extracting R based on the ultrasonic sensor: the vehicle detection process is performed within R, where R is a polygon mask defined on the scene; the image mask returns 1 inside the R polygon and 0 elsewhere; by defining R, the processing area of the video can be limited, reducing the computation time and memory consumption required for network training;
the R region eliminates interference from the surrounding environment and concentrates the processing areas on the road; using image multiplication, the areas outside the mask return 0 to eliminate areas outside the lane; within the regions of interest derived from the ultrasonic sensor and the roadside image, a classifier trained with a convolutional neural network judges whether useful roadside information is present (roadbeds, road signs, signboards, steep slopes, grade-separated intersections, parallel roads with height differences, lanes, lane center lines, lane side lines, road boundaries, pavement markers, signal lights, sidewalks, objects around the road).
6. The roadside information fusion system according to claim 5 wherein the processing module comprises: a preprocessing submodule and a denoising submodule;
the denoising submodule includes:
an acquisition subunit, configured to acquire a mathematical model f(t) = s(t) + r(t) of the noisy ultrasonic echo signal, where s(t) is the attenuated ultrasonic echo signal received by the probe, and r(t) is all noise including structural noise;
a selecting subunit, configured to select a basic wavelet and calculate a scale function and a filtering function of the basic wavelet;
and the convolution subunit is used for convolving the ultrasonic echo signal mathematical model with the filtering function of the basic wavelet to obtain a denoised ultrasonic signal.
7. The roadside information fusion system according to claim 5 wherein the projection module comprises:
the first acquisition submodule is used for acquiring a first conversion relation between the two-dimensional plane coordinate system of the ultrasonic sensor and the coordinate system of the machine vision sensor;
the second acquisition submodule is used for acquiring a second conversion relation between the machine vision sensor coordinate system and the pixel coordinate system;
and the conversion sub-module, used for realizing the conversion between the ultrasonic sensor coordinate system and the pixel coordinates according to the first conversion relation and the second conversion relation, and obtaining the projection point, on the roadside image pixel plane, of the effective target detected by the ultrasonic sensor.
CN201811518421.8A 2018-12-12 2018-12-12 Road side information fusion method and system Active CN109657593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811518421.8A CN109657593B (en) 2018-12-12 2018-12-12 Road side information fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811518421.8A CN109657593B (en) 2018-12-12 2018-12-12 Road side information fusion method and system

Publications (2)

Publication Number Publication Date
CN109657593A 2019-04-19
CN109657593B 2023-04-28

Family

ID=66113836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811518421.8A Active CN109657593B (en) 2018-12-12 2018-12-12 Road side information fusion method and system

Country Status (1)

Country Link
CN (1) CN109657593B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device
CN112535476B (en) * 2020-12-01 2022-11-22 业成科技(成都)有限公司 Fall detection system and method thereof
CN113362395A (en) * 2021-06-15 2021-09-07 上海追势科技有限公司 Sensor fusion-based environment sensing method
CN113963060B (en) * 2021-09-22 2022-03-18 腾讯科技(深圳)有限公司 Vehicle information image processing method and device based on artificial intelligence and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574329B (en) * 2013-10-09 2018-03-09 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic fusion of imaging method, ultrasonic fusion of imaging navigation system


Also Published As

Publication number Publication date
CN109657593A 2019-04-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant