CN112562061A - Driving vision enhancement system and method based on laser radar image

Driving vision enhancement system and method based on laser radar image

Info

Publication number
CN112562061A
CN112562061A (application CN202011367786.2A)
Authority
CN
China
Prior art keywords
road
coordinates
coordinate system
dimensional
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011367786.2A
Other languages
Chinese (zh)
Inventor
刘力源
吴宏涛
王俊骅
姚晓峰
孟颖
何琨
孟泽彬
牛秉青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Shanxi Transportation Technology Research and Development Co Ltd
Original Assignee
Tongji University
Shanxi Transportation Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University, Shanxi Transportation Technology Research and Development Co Ltd filed Critical Tongji University
Priority to CN202011367786.2A
Publication of CN112562061A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a driving vision enhancement system and method based on lidar images. An unmanned aerial vehicle carrying high-precision lidar equipment scans point clouds of the road environment to construct a road alignment three-dimensional calculation model, and also scans the road traffic state. While the vehicle is driving, edge computing combines the road alignment model with on-board real-time high-precision GPS positioning data to generate a parallel traffic environment, and a perspective view of the road and lane edge lines of the section observed ahead is constructed from the driver's viewpoint, together with a method for generating the traffic environment around the vehicle and issuing early warnings. The perspective view is projected in real time onto a projection plane at a fixed position in front of the driver through a head-up display (HUD) device, so that, from the driver's viewing angle, the projected view overlaps the actual road edge lines and the road condition is updated in real time. This compensates for the driver's loss of visual information under unknown conditions, risk environments, or adverse conditions such as fog and night, and provides navigation and early warning for the driver.

Description

Driving vision enhancement system and method based on laser radar image
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to a driving vision enhancement system and method based on a laser radar image.
Background
Dangerous driving behavior caused by an unknown environment is one of the major causes of traffic accidents. On unfamiliar roads and in abnormal traffic environments such as fog, heavy rain and night, the driving environment is hard to recognize, visibility drops, the visual range shortens, the driver's physiological and psychological stress increases, and the road and traffic information obtained visually is easily lost or distorted. When the road alignment changes, drivers often misjudge the direction of the road ahead and the corresponding maneuvers, and accidents such as rear-end collisions, collisions with roadside guardrails, and running off the roadway easily occur, with serious consequences. According to statistics, traffic accidents under adverse weather conditions account for about 15% of all road accidents each year, while their fatalities account for more than 47%; about 70% of drivers experience excessive psychological stress when entering a fog region, and about 85% feel fatigued when driving in fog. Improving driving conditions in adverse environments and strengthening drivers' perception of road information is therefore an urgent need: safe driving safeguards or driving-assistance technologies for unknown and abnormal environments must be developed to improve driving safety in these environments.
Apart from temporarily closing the highway to suspend traffic, current technical measures for reducing or preventing traffic accidents under unknown conditions or risk scenarios fall into two main categories: intelligent guidance systems based on road infrastructure, and safety-assisted driving systems based on the vehicle.
Infrastructure-based intelligent guidance systems build an intelligent electronic guidance system for the expressway out of monitoring, detection, communication, computing, display and control-center facilities, and guide and control vehicles entering an abnormal environment (such as fog or a construction zone) in terms of flow management, speed control and headway control. The problem with such systems is their high construction and maintenance cost: they are generally deployed only on major expressways or on sections with frequent abnormal weather, and full coverage is impractical.
Vehicle-mounted safety-assisted driving systems are a technology that has developed rapidly in recent years. Their core idea is to augment the driver's perception with sensors: thermal infrared imagers, CCD cameras, lidar, millimeter-wave radar and the like detect the road environment ahead and the surrounding vehicles, and present this information to the driver as images in real time, enhancing the driver's perception of the driving environment. The main existing technologies are night-vision driving assistance based on infrared thermal imaging, detection of surrounding vehicles based on millimeter-wave radar or lidar, and image defogging based on a monocular infrared camera. First, these single-sensor technologies cannot yet satisfy the accuracy, reliability and environmental-adaptability requirements of such a system and remain far from practical application. Second, in multi-sensor fusion, the equipment involved is complex and expensive, research and application still require further exploration and development, and the advantages of the technology are hard to realize in the short term. Third, the sensors and lidar devices carried on the vehicle itself have a limited range and cannot generate a warning system with guidance advice over a wider, full-coverage area, so the warning and display effect is poor.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned deficiencies in the prior art and providing a system and method for driving vision enhancement based on lidar images.
The technical solution is as follows:
A driving vision enhancement system based on lidar images, comprising: a data acquisition module, a data storage module, a GPS positioning module, a control processing module and a head-up display module, wherein:
the data acquisition module scans the road and vehicle environment in real time through lidar equipment mounted on an unmanned aerial vehicle and generates an editable database through edge computing, and at the same time receives the data acquired in real time and constructs a road alignment three-dimensional calculation model and a database of parallel vehicle operation; the GPS positioning module receives positioning signals from the global satellite positioning system and outputs high-precision point coordinates to the control processing module; the control processing module receives the point coordinates, retrieves from the data storage module the road alignment three-dimensional calculation model corresponding to the current position, calculates the coordinates of the lane edge feature points within a certain range ahead of the driver, generates a perspective view through coordinate conversion and outputs it to the head-up display module in real time, and at the same time matches the database coordinate system to the positions and operating conditions of the other vehicles in the database and updates it in real time; the head-up display module receives the perspective view and uses the HUD to project it in real time onto a projection surface a fixed distance in front of the driver.
The system also comprises a power supply control module which provides a 12V power supply for the data acquisition module, the storage module, the GPS positioning module, the control processing module and the head-up display module.
A driving vision enhancement method based on laser radar images comprises the following steps:
scanning the road and traffic environment ahead of the vehicle in real time using lidar equipment mounted on an unmanned aerial vehicle, to form a real-time updated database of the road ahead and of vehicle operation;
collecting road and vehicle point cloud data with the unmanned aerial vehicle, and constructing a road alignment three-dimensional calculation model and a traffic flow operation database;
determining the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system, using the GPS positioning data and the road alignment three-dimensional calculation model in the database;
according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system, determining the three-dimensional coordinates (x, y, z), in the local coordinate system, of the feature points on the lane edge lines of the road section observed ahead of the driver, using the road alignment three-dimensional calculation model corresponding to the current position in the database;
according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (x, y, z) of the lane edge feature points in the local coordinate system, dynamically generating a road perspective view of the observed section in real time;
according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (xn, yn, zn), in the local coordinate system, of the real-time operation feature points of the vehicles ahead, dynamically generating the traffic flow condition of the observed section in real time;
and dynamically displaying the road perspective view in real time through the HUD (an overview sketch of one display cycle follows).
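Taken together, the steps above amount to a per-frame display cycle. The sketch below (Python) is a hedged overview only: every callable it wires together (gps_read, lookup_model, lane_edge_points, to_visual_axis, project, hud_show) is a hypothetical placeholder for a component the method describes functionally, not an API defined by the patent.

```python
def display_cycle(gps_read, lookup_model, lane_edge_points,
                  to_visual_axis, project, hud_show,
                  lookahead_m: float = 200.0) -> None:
    """One hypothetical display cycle; all injected callables are
    placeholders, since the patent fixes no concrete implementation."""
    x0, y0 = gps_read()                        # on-board high-precision GPS fix
    model = lookup_model(x0, y0)               # alignment model for this section
    s0, w0, h0 = model.locate(x0, y0)          # station, offset, viewpoint height
    viewpoint = (x0, y0, h0)
    edges = lane_edge_points(model, s0, lookahead_m)  # 3D lane edge feature points
    eye_pts = to_visual_axis(edges, viewpoint)        # local -> visual-axis frame
    hud_show(project(eye_pts))                 # perspective view onto the HUD plane
```

The "Further" paragraphs below, and the sketches that follow them, fill in the individual stages of this cycle.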
Further, the road and traffic environment database obtained by scanning with the lidar equipment mounted on the unmanned aerial vehicle specifically comprises:
(1) scanning and collecting the road geometry and the operating data of the vehicles ahead in real time;
(2) constructing a three-dimensional coordinate calculation model of any point on the road centerline;
(3) constructing the operating data of the vehicles in the lanes ahead of the driven vehicle, with the calculation completed in real time;
(4) constructing a three-dimensional coordinate calculation model of any point on the road.
Further, constructing the database of the road alignment and surrounding-environment three-dimensional calculation model specifically comprises:
(1) scanning the point cloud data of the road and the environment ahead of the vehicle with the unmanned aerial vehicle;
(2) constructing, in real time, a three-dimensional coordinate calculation model of any point on the road centerline from the point cloud data acquired by the unmanned aerial vehicle (one possible form of such a model is sketched below);
(3) collecting real-time road-domain vehicle data with the unmanned aerial vehicle, matching it to the three-dimensional coordinate model of the lane, and constructing a computer three-dimensional model of the real-time operation of traffic flow in the driving environment.
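The patent does not give the form of this "three-dimensional coordinate calculation model". As one plausible realisation, the sketch below represents the centerline as station-indexed (s, x, y, z) samples assumed to have been extracted from the drone point cloud, and interpolates coordinates, heading and lane edge points at any station. The class name, its fields, and the flat cross-section default are illustrative assumptions, not part of the patent.

```python
import numpy as np

class CenterlineModel:
    """Piecewise-linear model of the road centerline, indexed by station s.

    `samples` is an (N, 4) array of rows (s, x, y, z), assumed extracted
    from the drone lidar point cloud and sorted by increasing station.
    """

    def __init__(self, samples: np.ndarray):
        self.s = samples[:, 0]
        self.xyz = samples[:, 1:4]

    def point_at(self, s: float) -> np.ndarray:
        """Interpolate the 3D centerline coordinate at station s."""
        return np.array([np.interp(s, self.s, self.xyz[:, k]) for k in range(3)])

    def heading_at(self, s: float, ds: float = 0.5) -> float:
        """Plan heading (rad) of the centerline at station s, by finite difference."""
        p0, p1 = self.point_at(s - ds), self.point_at(s + ds)
        return float(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))

    def edge_point(self, s: float, offset: float, cross_slope: float = 0.0) -> np.ndarray:
        """Lane edge point at station s, `offset` metres left (+) of the centerline.

        `cross_slope` would model superelevation; 0 assumes a flat cross-section.
        """
        c = self.point_at(s)
        th = self.heading_at(s)
        normal = np.array([-np.sin(th), np.cos(th), 0.0])  # left normal in plan
        return c + offset * normal + np.array([0.0, 0.0, offset * cross_slope])

# Toy usage: a straight 100 m road climbing at 2 %.
samples = np.array([[s, s, 0.0, 0.02 * s] for s in range(0, 101, 10)], dtype=float)
road = CenterlineModel(samples)
print(road.edge_point(s=50.0, offset=1.75))   # left lane edge at station 50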
Further, determining the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system using the GPS positioning data and the road alignment three-dimensional calculation model in the database specifically comprises:
(1) determining the plane coordinates (x0, y0) of the driver's current viewpoint from the vehicle-mounted GPS positioning data;
(2) back-calculating, from the plane coordinates (x0, y0) of the current viewpoint and the road alignment three-dimensional calculation model corresponding to the current position in the database, the road mileage stake number (station) s0 and the offset w0 corresponding to the current viewpoint (i.e. the plane distance from the viewpoint to the road centerline; likewise below);
(3) determining the elevation h0 of the driver's viewpoint from the station s0 and the offset w0 (a sketch of this back-calculation follows the list).
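Steps (1)-(3) can be realised, for example, by projecting the GPS fix onto the centerline. The sketch below assumes the CenterlineModel from the previous sketch (or any model exposing s, point_at and heading_at); the coarse grid search, the sign convention for w0, and the eye-height constant are assumptions, not values taken from the patent.

```python
import numpy as np

def station_offset(model, x0: float, y0: float, ds: float = 1.0):
    """Back-calculate station s0 and signed offset w0 for plane position (x0, y0).

    Coarse nearest-point search along the centerline. w0 > 0 means the
    viewpoint lies left of the centerline (same convention as edge_point).
    """
    grid = np.arange(model.s[0], model.s[-1], ds)
    pts = np.array([model.point_at(s)[:2] for s in grid])
    i = int(np.argmin(np.sum((pts - np.array([x0, y0])) ** 2, axis=1)))
    s0 = float(grid[i])
    th = model.heading_at(s0)
    dx, dy = x0 - pts[i, 0], y0 - pts[i, 1]
    w0 = float(-np.sin(th) * dx + np.cos(th) * dy)   # signed lateral distance
    return s0, w0

def viewpoint_elevation(model, s0: float, w0: float,
                        cross_slope: float = 0.0, eye_height: float = 1.2) -> float:
    """h0 = road surface elevation at (s0, w0) plus an assumed driver eye height."""
    return float(model.point_at(s0)[2] + w0 * cross_slope + eye_height)
```

A finer result could be obtained by refining the search around the coarse minimum; the grid step ds trades accuracy against computation.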
Further, dynamically generating the road perspective view of the observed section in real time according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (x, y, z) of the lane edge feature points in the local coordinate system specifically comprises:
(1) converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze);
(2) converting the visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) into plane rectangular coordinates (xc, yc) on the projection surface;
(3) projecting the plane rectangular coordinates (xc, yc) of the projection surface into the geometric space (x'p, y'p) of the HUD display screen and converting them into image coordinates (xp, yp), the image coordinates of the successive points being connected to form the road perspective view (steps (2) and (3) are sketched below).
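For steps (2) and (3), a central (pinhole) projection is the natural reading: a point (Xe, Ye, Ze) in the eye frame maps to (xc, yc) = (d·Xe/Ze, d·Ye/Ze) on a projection plane d metres ahead of the eye, and a linear pixel mapping takes (xc, yc) into the HUD raster. The sketch below follows that reading; the pixel scale, the centred principal point and the y-down raster convention are assumptions the patent leaves open.

```python
import numpy as np

def project_to_plane(pts_eye: np.ndarray, d: float) -> np.ndarray:
    """Central projection of visual-axis points (Xe, Ye, Ze) onto the
    projection plane Ze = d (the HUD combiner, d metres ahead of the eye).

    Returns (M, 2) plane coordinates (xc, yc); points behind the eye
    (Ze <= 0) are dropped, since they cannot appear in the view.
    """
    pts = pts_eye[pts_eye[:, 2] > 0.0]
    xc = d * pts[:, 0] / pts[:, 2]
    yc = d * pts[:, 1] / pts[:, 2]
    return np.stack([xc, yc], axis=1)

def plane_to_image(pts_plane: np.ndarray, px_per_m: float,
                   width_px: int, height_px: int) -> np.ndarray:
    """Map plane coordinates (xc, yc) into HUD image pixels (xp, yp).

    Assumes the optical axis hits the image centre and y grows downward,
    as in common raster conventions; the patent leaves this mapping open.
    """
    xp = width_px / 2.0 + px_per_m * pts_plane[:, 0]
    yp = height_px / 2.0 - px_per_m * pts_plane[:, 1]
    return np.stack([xp, yp], axis=1)
```

Connecting the projected image coordinates of consecutive edge feature points with polyline segments then yields the road perspective view that the HUD overlays on the real edge lines.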
Further, converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) specifically comprises:
1) constructing the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze);
2) calculating the local coordinates (xs, ys, zs) corresponding to the current principal point from the coordinates (x0, y0, h0) of the current viewpoint in the local coordinate system;
3) determining, from the local coordinates (xs, ys, zs) corresponding to the current principal point, the conversion parameters between the local coordinate system and the visual-axis coordinate system;
4) converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) using the determined conversion parameters (a sketch of this conversion follows).
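A minimal sketch of steps 1)-4), assuming a local frame with x east, y north, z up, and an eye frame with Xe to the driver's right, Ye up, and Ze forward along the sight line. The patent does not fix these conventions, nor how the sight-line heading is obtained; the sketch folds the principal-point construction of steps 2)-3) into a heading and pitch passed in from outside (e.g. the road heading at station s0).

```python
import numpy as np

def local_to_visual_axis(points_xyz: np.ndarray,
                         viewpoint,
                         heading: float,
                         pitch: float = 0.0) -> np.ndarray:
    """Convert (N, 3) local-coordinate points into the visual-axis frame.

    `heading` is the plan azimuth of the sight line (rad, from +x toward +y),
    `pitch` its inclination above the horizon; the viewpoint (x0, y0, h0)
    becomes the origin of the eye frame.
    """
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R = np.array([
        [ sh,      -ch,      0.0],   # Xe: driver's right
        [-ch * sp, -sh * sp, cp ],   # Ye: up, tilted by pitch
        [ ch * cp,  sh * cp, sp ],   # Ze: forward along the sight line
    ])
    rel = np.asarray(points_xyz, dtype=float) - np.asarray(viewpoint, dtype=float)
    return rel @ R.T
```

With heading = 0 and pitch = 0, a point 10 m due east of the viewpoint maps to (0, 0, 10): straight ahead at depth 10 m, as expected.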
Further, converting the coordinates (x, y, z), in the local coordinate system, of the operation feature points of the vehicles ahead obtained by scanning the lane into coordinates (Xn, Yn, Zn) in the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze) specifically comprises:
1) calculating the local coordinates (xs, ys, zs) corresponding to the current principal point from the coordinates (x0, y0, h0) of the current viewpoint in the local coordinate system;
2) determining, from the local coordinates (xs, ys, zs) corresponding to the current principal point, the conversion parameters between the local coordinate system and the visual-axis coordinate system, and converting the vehicle-to-vehicle distances obtained from the unmanned aerial vehicle's scan images into relative distances in the visual-axis coordinate system;
3) converting the coordinates (x, y, z) of the surrounding vehicles' feature points in the local coordinate system into coordinates (Xn, Yn, Zn) in the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze) using the conversion parameters and distance parameters determined above (see the sketch after this list).
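The same rigid transform applies unchanged to the scanned vehicle feature points; what the early-warning step adds is a proximity test in the eye frame. The short sketch below reuses local_to_visual_axis from the previous sketch; the 50 m threshold and the "ahead only" filter are illustrative assumptions, not values from the patent.

```python
import numpy as np

def vehicles_for_warning(vehicle_pts_local: np.ndarray, viewpoint, heading: float,
                         warn_dist_m: float = 50.0):
    """Transform scanned vehicle feature points into visual-axis coordinates
    (Xn, Yn, Zn) and flag those ahead of the driver and within warn_dist_m.

    Reuses local_to_visual_axis() from the sketch above.
    """
    eye = local_to_visual_axis(vehicle_pts_local, viewpoint, heading)
    ahead = eye[:, 2] > 0.0                      # in front of the viewpoint
    close = np.linalg.norm(eye, axis=1) < warn_dist_m
    return eye, ahead & close
```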
The driving vision enhancement system and method based on the laser radar image in the embodiment of the invention have the following beneficial effects:
1) The invention constructs a three-dimensional model of the road alignment space from the horizontal and vertical alignment data of the road being driven, collects road and traffic data in real time with an unmanned aerial vehicle, and determines the spatial relationship between the driver and the road by fusing high-precision GPS positioning data. It thereby generates, in real time, a perspective view of the alignment of the section observed ahead and of the traffic operation, based on the position of the driver's viewpoint; the size and imaging position of the perspective view are controlled precisely according to the distance between the HUD projection plane and the driver's viewpoint, so that the road alignment perspective observed from the driver's view angle finally overlaps the actual road edge lines. The visual information provided by the navigation system constructed by the invention helps the driver grasp the alignment of the section observed ahead and prevents the vehicle from colliding with the roadside guardrail or running off the road. At the same time, by analyzing the operation of the traffic flow ahead, early warnings can be provided to the vehicle in real time. The method can be used in environments with low visibility, poor sight distance or abnormal traffic flow, such as unknown roads, fog, night and dark curves.
2) Based on the now widely applied HUD technology, the invention projects a colored perspective curve together with the real-time traffic flow condition, achieving visual enhancement of the true alignment ahead and of vehicle operation under unknown-road or low-visibility conditions. The method is technically simple and feasible, provides good assistance for safe driving in dangerous driving environments, and helps ensure the safe operation of the vehicle.
Drawings
FIG. 1 is a functional block diagram of a laser radar image-based driving vision enhancement system according to an embodiment of the present invention;
fig. 2 is a flowchart of a driving vision enhancement method based on a lidar image according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following describes a driving vision enhancement system and method based on lidar images in detail with reference to the embodiments. The following examples are intended to illustrate the invention only and are not intended to limit the scope of the invention.
The invention provides a driving vision enhancement system and method based on lidar images, involving unmanned aerial vehicle technology, lidar technology and automotive navigation. It addresses traffic safety problems such as an unclear traffic environment, abnormal external environments and accident scenarios: unclear road conditions, low visibility, short sight distance, risk of secondary accidents, and the tendency of vehicles to cross the lane edge line or run into roadside guardrails and sidewalks. Using an unmanned aerial vehicle as the carrier for high-precision lidar equipment, point clouds of the road environment are scanned to form road plan, profile and cross-section data and to construct a road alignment three-dimensional calculation model, and the road traffic state is scanned as well. During driving, edge computing combines the road alignment model with on-board real-time high-precision GPS positioning data to generate a parallel traffic environment, and a perspective view of the road and lane edge lines of the section observed ahead is constructed from the driver's viewpoint, together with a method for generating the traffic environment around the vehicle and issuing early warnings. The perspective view is projected in real time onto a projection plane at a fixed position in front of the driver through the head-up display (HUD) device, so that, from the driver's viewing angle, the projected view overlaps the actual road edge lines and the road condition is updated in real time. This compensates for the driver's loss of visual information under unknown conditions, risk environments or adverse conditions such as fog and night, and provides navigation and early warning for the driver.
Example 1
Referring to fig. 1, an embodiment of the present invention provides a driving vision enhancement system based on lidar images, comprising a data acquisition module, a data storage module, a GPS positioning module, a control processing module and a head-up display module. The data acquisition module scans the road and vehicle environment in real time through lidar equipment mounted on an unmanned aerial vehicle and generates an editable database through edge computing; at the same time it receives the global vehicle data acquired in real time and constructs a road alignment three-dimensional calculation model and data on parallel vehicle operation. The GPS positioning module receives positioning signals from the global satellite positioning system and outputs high-precision point coordinates to the control processing module. The control processing module receives the point coordinates, retrieves from the data storage module the road alignment three-dimensional calculation model corresponding to the current position, calculates the coordinates of the lane edge feature points within a certain range ahead of the driver, generates a perspective view through coordinate conversion and outputs it to the head-up display module in real time, and at the same time matches the database coordinate system to the positions and operating conditions of the other vehicles in the database and updates it in real time. The head-up display module receives the perspective view and uses the HUD to project it in real time onto a projection surface a fixed distance in front of the driver. The system further comprises a power supply control module that provides 12 V power to the data acquisition module, the storage module, the GPS positioning module, the control processing module and the head-up display module.
Based on the same inventive concept, an embodiment of the invention also provides a driving vision enhancement method based on lidar images; for the implementation of the method, reference is made to the implementation of the system, and repeated details are not restated. The method comprises the following steps:
Step one: obtaining the road and traffic environment database by scanning with the lidar equipment mounted on the unmanned aerial vehicle, which specifically comprises:
(1) scanning and collecting the road geometry and the operating data of the vehicles ahead in real time;
(2) constructing a three-dimensional coordinate calculation model of any point on the road centerline;
(3) constructing the operating data of the vehicles in the lanes ahead of the driven vehicle, with the calculation completed in real time;
(4) constructing a three-dimensional coordinate calculation model of any point on the road.
Step two: constructing the database of the road alignment three-dimensional calculation model from the horizontal and vertical alignment data of the road, which specifically comprises:
(1) collecting or collating the geometric alignment data of the roads;
(2) constructing, in real time, a three-dimensional coordinate calculation model of any point on the road centerline by comparing the stored data with the data collected by the unmanned aerial vehicle;
(3) establishing a three-dimensional coordinate calculation model of any point on the roadway by comparing the stored data with the data collected by the unmanned aerial vehicle;
(4) collecting real-time traffic flow data with the unmanned aerial vehicle, matching it to the three-dimensional coordinate model of the lane, and constructing a model of the real-time operation of traffic flow in the driving environment.
Step three: determining the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system using the GPS positioning data and the road alignment three-dimensional calculation model in the database, which specifically comprises:
(1) determining the plane coordinates (x0, y0) of the driver's current viewpoint from the vehicle-mounted GPS positioning data;
(2) back-calculating, from the plane coordinates (x0, y0) of the current viewpoint and the road alignment three-dimensional calculation model corresponding to the current position in the database, the station s0 and the offset w0 corresponding to the current viewpoint;
(3) determining the elevation h0 of the driver's viewpoint from the station s0 and the offset w0.
Step four: dynamically generating the road perspective view of the observed section in real time according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (x, y, z) of the lane edge feature points in the local coordinate system, which specifically comprises:
(1) converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze);
(2) converting the visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) into plane rectangular coordinates (xc, yc) on the projection surface;
(3) projecting the plane rectangular coordinates (xc, yc) of the projection surface into the geometric space (x'p, y'p) of the HUD display screen and converting them into image coordinates (xp, yp), the image coordinates of the successive points being connected to form the road perspective view.
Step five: converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze), which specifically comprises:
(1) constructing the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze);
(2) calculating the local coordinates (xs, ys, zs) corresponding to the current principal point from the coordinates (x0, y0, h0) of the current viewpoint in the local coordinate system;
(3) determining, from the local coordinates (xs, ys, zs) corresponding to the current principal point, the conversion parameters between the local coordinate system and the visual-axis coordinate system;
(4) converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) using the determined conversion parameters.
Step six: converting the coordinates (x, y, z), in the local coordinate system, of the operation feature points of the vehicles ahead obtained by scanning the lane into coordinates (Xn, Yn, Zn) in the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze), which specifically comprises:
(1) calculating the local coordinates (xs, ys, zs) corresponding to the current principal point from the coordinates (x0, y0, h0) of the current viewpoint in the local coordinate system;
(2) determining, from the local coordinates (xs, ys, zs) corresponding to the current principal point, the conversion parameters between the local coordinate system and the visual-axis coordinate system, and converting the vehicle-to-vehicle distances obtained from the unmanned aerial vehicle's scan images into relative distances in the visual-axis coordinate system;
(3) converting the coordinates (x, y, z) of the surrounding vehicles' feature points in the local coordinate system into coordinates (Xn, Yn, Zn) in the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze) using the conversion parameters and distance parameters determined above.
Step seven: dynamically displaying the road perspective view in real time through the HUD.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The present invention is not limited to the above-described examples, and various changes can be made without departing from the spirit and scope of the present invention within the knowledge of those skilled in the art.

Claims (9)

1. A system for laser radar image based driving vision enhancement, comprising: a data acquisition module, a data storage module, a GPS positioning module, a control processing module and a head-up display module, wherein:
the data acquisition module scans the road and vehicle environment in real time through lidar equipment mounted on an unmanned aerial vehicle and generates an editable database through edge computing, and at the same time receives the data acquired in real time and constructs a road alignment three-dimensional calculation model and a database of parallel vehicle operation; the GPS positioning module receives positioning signals from the global satellite positioning system and outputs high-precision point coordinates to the control processing module; the control processing module receives the point coordinates, retrieves from the data storage module the road alignment three-dimensional calculation model corresponding to the current position, calculates the coordinates of the lane edge feature points within a certain range ahead of the driver, generates a perspective view through coordinate conversion and outputs it to the head-up display module in real time, and at the same time matches the database coordinate system to the positions and operating conditions of the other vehicles in the database and updates it in real time; the head-up display module receives the perspective view and uses the HUD to project it in real time onto a projection surface a fixed distance in front of the driver.
2. The lidar image based driving vision enhancement system of claim 1, further comprising a power control module providing 12V power to the data acquisition module, the memory module, the GPS location module, the control processing module, and the heads-up display module.
3. A driving vision enhancement method based on laser radar images is characterized by comprising the following steps:
scanning the road and traffic environment ahead of the vehicle in real time using lidar equipment mounted on an unmanned aerial vehicle, to form a real-time updated database of the road ahead and of vehicle operation;
collecting road and vehicle point cloud data with the unmanned aerial vehicle, and constructing a road alignment three-dimensional calculation model and a traffic flow operation database;
determining the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system, using the GPS positioning data and the road alignment three-dimensional calculation model in the database;
according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system, determining the three-dimensional coordinates (x, y, z), in the local coordinate system, of the feature points on the lane edge lines of the road section observed ahead of the driver, using the road alignment three-dimensional calculation model corresponding to the current position in the database;
according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (x, y, z) of the lane edge feature points in the local coordinate system, dynamically generating a road perspective view of the observed section in real time;
according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (xn, yn, zn), in the local coordinate system, of the real-time operation feature points of the vehicles ahead, dynamically generating the traffic flow condition of the observed section in real time;
and dynamically displaying the road perspective view in real time through the HUD.
4. The lidar image based driving vision enhancement method of claim 3, wherein the road and traffic environment database obtained by scanning with the lidar equipment mounted on the unmanned aerial vehicle specifically comprises:
(1) scanning and collecting the road geometry and the operating data of the vehicles ahead in real time;
(2) constructing a three-dimensional coordinate calculation model of any point on the road centerline;
(3) constructing the operating data of the vehicles in the lanes ahead of the driven vehicle, with the calculation completed in real time;
(4) constructing a three-dimensional coordinate calculation model of any point on the road.
5. The lidar image-based driving vision enhancement method according to claim 4, wherein constructing the database of the road alignment and surrounding-environment three-dimensional calculation model specifically comprises:
(1) scanning the point cloud data of the road and the environment ahead of the vehicle with the unmanned aerial vehicle;
(2) constructing, in real time, a three-dimensional coordinate calculation model of any point on the road centerline from the point cloud data acquired by the unmanned aerial vehicle;
(3) collecting real-time road-domain vehicle data with the unmanned aerial vehicle, matching it to the three-dimensional coordinate model of the lane, and constructing a computer three-dimensional model of the real-time operation of traffic flow in the driving environment.
6. The lidar image-based driving vision enhancement method according to claim 5, wherein determining the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system using the GPS positioning data and the road alignment three-dimensional calculation model in the database specifically comprises:
(1) determining the plane coordinates (x0, y0) of the driver's current viewpoint from the vehicle-mounted GPS positioning data;
(2) back-calculating, from the plane coordinates (x0, y0) of the current viewpoint and the road alignment three-dimensional calculation model corresponding to the current position in the database, the road mileage stake number (station) s0 and the offset w0 corresponding to the current viewpoint (i.e. the plane distance from the viewpoint to the road centerline; likewise below);
(3) determining the elevation h0 of the driver's viewpoint from the station s0 and the offset w0.
7. The lidar image-based driving vision enhancement method according to claim 6, wherein dynamically generating the road perspective view of the observed section in real time according to the coordinates (x0, y0, h0) of the driver's current viewpoint in the local coordinate system and the three-dimensional coordinates (x, y, z) of the lane edge feature points in the local coordinate system specifically comprises:
(1) converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze);
(2) converting the visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) into plane rectangular coordinates (xc, yc) on the projection surface;
(3) projecting the plane rectangular coordinates (xc, yc) of the projection surface into the geometric space (x'p, y'p) of the HUD display screen and converting them into image coordinates (xp, yp), the image coordinates of the successive points being connected to form the road perspective view.
8. The lidar image-based driving vision enhancement method according to claim 7, wherein converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) specifically comprises:
1) constructing the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze);
2) calculating the local coordinates (xs, ys, zs) corresponding to the current principal point from the coordinates (x0, y0, h0) of the current viewpoint in the local coordinate system;
3) determining, from the local coordinates (xs, ys, zs) corresponding to the current principal point, the conversion parameters between the local coordinate system and the visual-axis coordinate system;
4) converting the coordinates (x, y, z) of the lane edge feature points in the local coordinate system into visual-axis three-dimensional rectangular coordinates (Xe, Ye, Ze) using the determined conversion parameters.
9. The lidar image-based driving vision enhancement method according to claim 8, wherein converting the coordinates (x, y, z), in the local coordinate system, of the operation feature points of the vehicles ahead obtained by scanning the lane into coordinates (Xn, Yn, Zn) in the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze) specifically comprises:
1) calculating the local coordinates (xs, ys, zs) corresponding to the current principal point from the coordinates (x0, y0, h0) of the current viewpoint in the local coordinate system;
2) determining, from the local coordinates (xs, ys, zs) corresponding to the current principal point, the conversion parameters between the local coordinate system and the visual-axis coordinate system, and converting the vehicle-to-vehicle distances obtained from the unmanned aerial vehicle's scan images into relative distances in the visual-axis coordinate system;
3) converting the coordinates (x, y, z) of the surrounding vehicles' feature points in the local coordinate system into coordinates (Xn, Yn, Zn) in the visual-axis three-dimensional rectangular coordinate system (Xe, Ye, Ze) using the conversion parameters and distance parameters determined above.
CN202011367786.2A (priority date: 2020-11-28; filing date: 2020-11-28; status: Pending): Driving vision enhancement system and method based on laser radar image, published as CN112562061A

Priority Applications (1)

CN202011367786.2A (priority date: 2020-11-28; filing date: 2020-11-28): Driving vision enhancement system and method based on laser radar image

Applications Claiming Priority (1)

CN202011367786.2A (priority date: 2020-11-28; filing date: 2020-11-28): Driving vision enhancement system and method based on laser radar image

Publications (1)

Publication number: CN112562061A. Publication date: 2021-03-26.

Family

Family ID: 75046649

Family Applications (1)

CN202011367786.2A (priority date: 2020-11-28; filing date: 2020-11-28; status: Pending): Driving vision enhancement system and method based on laser radar image

Country Status (1)

CN (1): CN112562061A

Cited By (2)

* Cited by examiner, † Cited by third party
CN114355946A * (assignee: 哈尔滨工业大学; priority date: 2022-01-07; publication date: 2022-04-15): Vehicle driving guide system
CN117008122A * (assignee: 江苏苏港智能装备产业创新中心有限公司; priority date: 2023-08-04; publication date: 2023-11-07): Method and system for positioning surrounding objects of engineering mechanical equipment based on multi-radar fusion


Similar Documents

Publication Publication Date Title
US10502955B2 (en) Head-up display device, navigation device, and display method
US10600250B2 (en) Display system, information presentation system, method for controlling display system, computer-readable recording medium, and mobile body
CN109624974B (en) Vehicle control device, vehicle control method, and storage medium
JP3619628B2 (en) Driving environment recognition device
CN106909152B (en) Automobile-used environmental perception system and car
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
US11248925B2 (en) Augmented road line detection and display system
EP2461305B1 (en) Road shape recognition device
CN110356325B (en) Urban traffic passenger vehicle blind area early warning system
Hu et al. UV-disparity: an efficient algorithm for stereovision based scene analysis
CN103455144B (en) Vehicle-mounted man-machine interaction system and method
CN102685516A (en) Active safety type assistant driving method based on stereoscopic vision
CN110065494B (en) Vehicle anti-collision method based on wheel detection
CN106324618B (en) Realize the method based on laser radar detection lane line system
CN117441113A (en) Vehicle-road cooperation-oriented perception information fusion representation and target detection method
KR20220134754A (en) Lane Detection and Tracking Techniques for Imaging Systems
CN110491156A (en) A kind of cognitive method, apparatus and system
JP3857698B2 (en) Driving environment recognition device
Moras et al. Drivable space characterization using automotive lidar and georeferenced map information
CN112562061A (en) Driving vision enhancement system and method based on laser radar image
CN116564116A (en) Intelligent auxiliary driving guiding system and method driven by digital twin
CN116337102A (en) Unmanned environment sensing and navigation method based on digital twin technology
González-Jorge et al. Evaluation of driver visibility from mobile lidar data and weather conditions
CN113884090A (en) Intelligent platform vehicle environment sensing system and data fusion method thereof
CN108981740B (en) Blind driving navigation system and method under low visibility condition

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination