CN116106931A - Wading early warning method and related device - Google Patents


Info

Publication number
CN116106931A
CN116106931A (application number CN202111320824.3A)
Authority
CN
China
Prior art keywords
water accumulation
depth
early warning
coordinate
wading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111320824.3A
Other languages
Chinese (zh)
Inventor
刘畅
伍朝晖
黄韬
李国维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111320824.3A priority Critical patent/CN116106931A/en
Publication of CN116106931A publication Critical patent/CN116106931A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The embodiment of the application provides a wading early warning method and device, relating to the technical field of intelligent driving and used for acquiring water accumulation information of a water accumulation area. The wading early warning method comprises the following steps: acquiring an optical image and a depth image of a water accumulation area, wherein the optical image and the depth image are obtained by a calibrated optical range finder and a calibrated acoustic range finder, respectively; determining a maximum water accumulation depth of the water accumulation area based on the optical image and the depth image; judging whether the maximum water accumulation depth is greater than the safe wading depth; and if the maximum water accumulation depth is greater than the safe wading depth, sending out early warning prompt information. Because an optical range finder and an acoustic range finder are adopted, the vehicle can acquire the water accumulation information of the water accumulation area without the vehicle body entering it, so that the driving strategy is safer and more efficient.

Description

Wading early warning method and related device
Technical Field
The application relates to the technical field of intelligent driving, and in particular to a wading early warning method and a related device.
Background
In rainy weather, a large amount of accumulated water may be present on the lane, and when the vehicle owner cannot predict the water depth of the road ahead, driving on a flooded road carries a great potential safety hazard.
In the prior art, the following two types of schemes are mainly used for detecting the water accumulation depth of the road ahead.
In one prior art solution, a detection device for measuring the water depth is installed at the bottom of the vehicle, and the current wading depth can only be measured once the detection device has entered the water. By the time the vehicle learns the depth of the accumulated water in the area ahead, it may already have driven into a deeper pool, so that stalling accidents occur and the driving safety of the vehicle is affected.
In another existing technical scheme, the real-time information transmission technology of an Internet of Vehicles system is adopted to realize mutual coordination and cooperation among people, vehicles, and road infrastructure. For example, a sensor for measuring the water depth is arranged on or below the road surface in advance, and when a vehicle reaches the nearby area, wading depth information of the flooded road surface ahead is transmitted to the vehicle in real time through the network, thereby achieving the aim of early warning. However, this solution is too costly, which limits its range of application.
Disclosure of Invention
The application provides a wading early warning method, which solves the problem of obtaining the maximum water accumulation depth of a water accumulation area within a preset distance ahead without the vehicle body entering the water accumulation area. In order to achieve the above purpose, the present application provides the following technical solutions:
In a first aspect, the present application provides a wading early warning method, including the steps of:
acquiring an optical image and a depth image of a ponding region, wherein the optical image and the depth image are respectively obtained by a calibrated optical range finder and an acoustic range finder;
determining a maximum water accumulation depth of the water accumulation region based on the optical image and the depth image;
judging whether the maximum water accumulation depth is larger than the safe wading depth;
and if the maximum ponding depth is greater than the safe wading depth, sending out early warning prompt information.
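As an illustration of the overall flow, the following Python sketch strings the four steps together. The array-based interface and all numeric values are assumptions for demonstration, not part of the claimed method; in practice the matched point sets would come from the calibrated range finders:

```python
import numpy as np

def wading_early_warning(surface_points: np.ndarray,
                         road_points: np.ndarray,
                         safe_wading_depth: float) -> bool:
    """Given matched water-surface and road-surface points (N x 3 arrays of
    (X, Y, Z) in a common vehicle-body frame, Z pointing down toward the
    road), warn if the maximum water accumulation depth exceeds the safe
    wading depth."""
    max_depth = float(np.max(road_points[:, 2] - surface_points[:, 2]))
    if max_depth > safe_wading_depth:
        print(f"WADING WARNING: max depth {max_depth:.2f} m "
              f"exceeds safe wading depth {safe_wading_depth:.2f} m")
        return True
    return False

# Synthetic example: a puddle up to 0.5 m deep, safe wading depth 0.4 m.
surface = np.array([[0.1, 2.0, 1.5], [0.2, 2.0, 1.5]])
road = np.array([[0.1, 2.0, 2.0], [0.2, 2.0, 1.9]])
wading_early_warning(surface, road, safe_wading_depth=0.4)
```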
Because the traditional method can only acquire the water accumulation depth of the water accumulation area ahead after part of the vehicle body has already entered it, the depth information arrives too late to be useful. The present application determines the three-dimensional coordinate information of the water accumulation area through the optical range finder and the acoustic range finder, and thereby determines the maximum water accumulation depth of the water accumulation area; this is safer and more efficient, and provides more driving decision time for the driver.
Calibrating the optical range finder and the acoustic range finder converts the measurements of the two range finders into a unified world coordinate system, ensuring the accuracy of the measured values. Meanwhile, during driving, vibration and other causes may offset a range finder from its original mounting position, so calibrating the optical range finder and the acoustic range finder allows the three-dimensional coordinate information of a water accumulation area to be acquired more accurately.
In one possible implementation, determining the maximum water accumulation depth of the water accumulation region based on the optical image and the depth image of the water accumulation region includes:
determining three-dimensional coordinate information of the ponding region based on the optical image and the depth image;
and obtaining the maximum water accumulation depth of the water accumulation area based on the three-dimensional coordinate information of the water accumulation area.
The optical image reflects the three-dimensional coordinate information of the water surface pixel points of the water accumulation area, while the depth image penetrates the water surface of the water accumulation area to obtain the three-dimensional coordinate information of the road surface pixel points. The maximum water accumulation depth of the water accumulation area can then be obtained from the water surface pixel points and the road surface pixel points.
In one possible implementation, determining three-dimensional coordinate information of the water accumulation region based on the optical image and the depth image includes:
based on the optical image, a water surface pixel point coordinate set is determined and is recorded as T = {(X_1^t, Y_1^t, Z_1^t), …, (X_i^t, Y_i^t, Z_i^t)};
based on the depth image, a road surface pixel point coordinate set is determined and is recorded as D = {(X_1^d, Y_1^d, Z_1^d), …, (X_j^d, Y_j^d, Z_j^d)};
The three-dimensional coordinate information of the water accumulation area comprises the water surface pixel point coordinate set and the road surface pixel point coordinate set. The three-dimensional coordinates lie in a coordinate system comprising an X axis, a Y axis, and a Z axis; the coordinates in the water surface pixel point coordinate set and the coordinates in the road surface pixel point coordinate set have a one-to-one correspondence, which indicates that a water surface pixel point and its corresponding road surface pixel point have the same X and Y coordinate values in the three-dimensional coordinate system but different Z coordinate values.
In one possible implementation method, obtaining the maximum water accumulation depth of the water accumulation region based on the three-dimensional coordinate information of the water accumulation region includes:
subtracting the coordinate points with the one-to-one correspondence in the water surface pixel point coordinate set T and the road surface pixel point coordinate set D to obtain a coordinate difference set K = {(0, 0, ΔZ_1), …, (0, 0, ΔZ_k)};
and determining the maximum water accumulation depth according to the coordinate difference set K, wherein the maximum water accumulation depth is the maximum ΔZ value in the coordinate difference set K.
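A minimal sketch of this pairing and subtraction, assuming the sets T and D are given as numpy arrays and that the one-to-one correspondence is established by quantizing the shared (X, Y) coordinates (the quantization grid is an assumption; the text only states that the correspondence exists):

```python
import numpy as np

def coordinate_difference_set(T: np.ndarray, D: np.ndarray,
                              grid: float = 0.01) -> np.ndarray:
    """Pair water-surface points T with road-surface points D sharing the
    same (X, Y) (quantized to `grid` meters), returning K = {(0, 0, dZ)}."""
    def key(p):
        return (round(p[0] / grid), round(p[1] / grid))
    road_by_xy = {key(p): p for p in D}
    K = []
    for p in T:
        q = road_by_xy.get(key(p))            # the one-to-one correspondence
        if q is not None:
            K.append((0.0, 0.0, q[2] - p[2])) # dZ = Z^d - Z^t
    return np.array(K)

T = np.array([[0.1, 2.0, 1.5], [0.3, 2.1, 1.4]])
D = np.array([[0.1, 2.0, 2.0], [0.3, 2.1, 1.8]])
K = coordinate_difference_set(T, D)
max_ponding_depth = K[:, 2].max()             # 0.5 m, the maximum dZ in K
```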
In one possible implementation method, the method further includes:
based on the optical image of the water accumulation region, determining a space coordinate set F = {(X_1^f, Y_1^f, Z_1^f), …, (X_i^f, Y_i^f, Z_i^f)} of pixel points positioned at the water accumulation edge;
and determining at least one of the maximum water accumulation width, the maximum wading distance, or the water accumulation area according to the space coordinate set F.
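The following sketch derives the three wading quantities from F, assuming X is the distance ahead, Y the lateral offset, and that the edge points are ordered along the boundary (an assumption the claim does not state, needed for the shoelace area formula):

```python
import numpy as np

def wading_info_from_edge(F: np.ndarray) -> dict:
    """F: (N, 3) array of water accumulation edge points (X, Y, Z)."""
    x, y = F[:, 0], F[:, 1]
    max_width = y.max() - y.min()            # maximum water accumulation width
    max_wading_distance = x.max() - x.min()  # maximum wading distance
    # Shoelace formula over the ordered (X, Y) boundary for the area.
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return {"max_width": max_width,
            "max_wading_distance": max_wading_distance,
            "area": area}

F = np.array([[5.0, -1.0, 0.0], [8.0, -1.0, 0.0],
              [8.0, 1.5, 0.0], [5.0, 1.5, 0.0]])  # a 3 m x 2.5 m puddle
print(wading_info_from_edge(F))  # width 2.5, distance 3.0, area 7.5
```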
In one possible implementation, the early warning prompt information includes one or a combination of visual early warning, sound early warning, light early warning, text early warning, and the like.
In one possible implementation, the visual early warning includes displaying, with a visual effect, at least one of the water accumulation area, the water accumulation width, and the water accumulation depth.
In a second aspect, an embodiment of the present application further provides a wading early warning device, including:
the sensing module is used for acquiring an optical image and a depth image of the ponding region, wherein the optical image and the depth image are respectively acquired by a calibrated optical range finder and an acoustic range finder;
the processing module is used for determining the maximum ponding depth of the ponding region based on the optical image and the depth image;
the judging module is used for judging whether the maximum water accumulation depth is larger than the safe wading depth;
and the early warning module is used for sending out early warning prompt information if the maximum ponding depth is larger than the safe wading depth.
In a third aspect, an embodiment of the present application further provides a wading early warning device, including:
at least one processor;
at least one memory storing a computer program which, when executed by the processor, implements any one of the wading early warning methods of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising: computer program code which, when run, causes a processor to perform any of the methods of the first aspect.
In a fifth aspect, embodiments of the present application provide a vehicle comprising a travel system, a sensing system, a control system, and a computer system, wherein the computer system is configured to perform any of the methods of the first aspect.
Drawings
Fig. 1 is a schematic functional block diagram of a vehicle 100 according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of wading early warning provided in the embodiment of the present application;
fig. 3 is a schematic flow chart of another wading early warning provided in an embodiment of the present application;
FIG. 4 is a schematic view of one possible range finder installation location provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a vehicle body coordinate system according to an embodiment of the present application;
fig. 6a is a schematic diagram of a correspondence relationship between a water surface pixel point and a road surface pixel point according to an embodiment of the present application;
FIG. 6b is a schematic diagram of calculating a water accumulation depth according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a visual early warning provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a wading early warning device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another wading early warning device according to an embodiment of the present disclosure;
fig. 10 is a schematic hardware structure diagram of a wading early warning device provided in an embodiment of the present application.
Detailed Description
Embodiments of the present invention are described below with reference to the accompanying drawings. The terminology used in describing the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to limit the invention.
Embodiments of the present application are described below with reference to the accompanying drawings. As one of ordinary skill in the art can appreciate, with the development of technology and the appearance of new scenes, the technical solutions provided in the embodiments of the present application are applicable to similar technical problems.
The terms "first", "second", and the like in the description, the claims, and the above-described figures of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, and are merely a way of distinguishing objects of the same nature when describing the embodiments of the application. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application provides a wading early warning method and a related device thereof, which can obtain the information of a ponding area within a preset distance in front under the condition that a vehicle does not enter the ponding area, reserve enough driving response time for a driver, and provide a basis for a subsequent driving decision of the driver. The wading early warning method and the related device can also provide judgment information for automatic driving. In the path planning of automatic driving, in order to make the driving strategy safer and more efficient, the vehicle needs to make a decision on the information based on the water accumulation area, for example, avoid the water accumulation area or pass through the water accumulation area, so that the vehicle runs along the optimal track.
The vehicle represents a motor vehicle traveling on a lane. The embodiments of the present application are explained by taking a vehicle as an example only, but are not limited to motor vehicles traveling on lanes. For example, the solutions may also apply to a head-mounted device, a smart terminal, a robot, or the like, which is not limited herein. Similarly, the lanes in the embodiments of the present application are not limited to traffic scenes and may also be non-traffic scenes, such as a scene in which a logistics robot delivers goods on a campus, or an outdoor scene with a head-mounted user device.
Some terms of art appearing in the examples of the present application are explained below:
optical image: also known as analog images (analog images), refer to images that vary continuously in gray scale and color. In general, an optical image is an image obtained by using an optical photographing system and using a photosensitive film as a medium. For example, visible black and white or full color photographs, color infrared photographs, multiband photographic photographs, and thermal infrared photographic photographs, all belong to the optical images.
Wading depth: i.e., the safe maximum wading depth, refers to the maximum depth through which a moving subject can wade under safe driving conditions, i.e., without damage to the moving subject. Taking a vehicle as an example, the safe wading depth can be one of the exhaust port ground clearance, half the wheel height above the ground, the door frame ground clearance, and the engine air inlet ground clearance. The safe wading depths of vehicles of different models can differ.
Calibrating: under the condition that the world coordinates and the pixel coordinates of the calibration control points are known, a mapping relation from the world coordinate system to the pixel coordinate system is established. After the mapping relation is obtained, the world coordinates of a specific point can be derived in reverse from its pixel coordinates, and measurement operations can then be performed in world coordinates. In the embodiment of the application, the optical range finder and the acoustic range finder are calibrated so that the 2D color image and the 3D depth image acquired by the two range finders are unified under the same coordinate system; that is, the position of a certain water accumulation area in the world coordinate system can be obtained given the pixel coordinates of that water accumulation area. It should be noted that the technical effect of the embodiments of the present application can be achieved by replacing the world coordinate system with any coordinate system that reflects the three-dimensional spatial relationship between the vehicle body and the surrounding environment, for example, the vehicle body coordinate system.
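As a worked illustration of the mapping that calibration establishes between world (or vehicle body) coordinates and pixel coordinates, a minimal pinhole-projection sketch follows; all camera parameters are made-up values for demonstration, not taken from the patent:

```python
import numpy as np

# Assumed intrinsics: focal length in pixels and principal point.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)     # assumed extrinsics: body frame -> camera frame
t = np.zeros(3)

def world_to_pixel(p: np.ndarray) -> tuple[float, float]:
    """Calibration establishes s*[u, v, 1]^T = K (R p + t); this applies it."""
    uvs = K @ (R @ p + t)
    return uvs[0] / uvs[2], uvs[1] / uvs[2]

# A point 4 m ahead (camera z) and 0.5 m to the right projects to:
print(world_to_pixel(np.array([0.5, 0.0, 4.0])))  # (740.0, 360.0)
```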
Visual early warning: taking a vehicle as an example, when a water accumulation area appears within a preset distance in front of the vehicle, the visual early warning presents water accumulation information to the user by means of one of a central control display screen, a head-up display (HUD), or a dashboard display, wherein the water accumulation information comprises at least one of a maximum water accumulation depth, a maximum water accumulation width, a maximum water accumulation length, or a water accumulation area.
World coordinate system: the absolute coordinate system reflecting the object in the objective three-dimensional world, also called the objective coordinate system, is a three-dimensional rectangular coordinate system, with the unit being the unit of length. Taking an automobile as an example, the origin of the coordinate system can be at the central point of the locomotive, and the X axis, the Y axis and the Z axis are mutually perpendicular, wherein the X axis is horizontally right parallel to the road surface, the Y axis points to the running direction, and the Z axis points to the road surface. The points in the coordinate system can be represented by (X t ,Y t ,Z t ) Expressed in meters.
Vehicle body coordinate system: the coordinate system used to describe the relationship between the vehicle and its surrounding environment, which may also be called the whole-vehicle coordinate system or the own-vehicle coordinate system. Some vehicle body coordinate systems take the center of gravity as the origin; others, defined with respect to the IMU, take the IMU position as the origin. The position of the origin can be customized according to different scene requirements.
Image coordinate system: the image coordinate system may also be referred to as an optical image coordinate system, the origin of coordinates being the top left corner vertex of the image plane, the X-axis and the Y-axis being parallel to the X-axis and the Y-axis, respectively, of the image physical coordinate system. Points in the image pixel coordinate system are denoted by (u, v). The pixel coordinate system is an image coordinate system in units of pixels.
Point cloud coordinate system: take lidar as an example. A multi-line lidar can basically be seen as multiple high-speed laser range finders bound together at different angles and rotating continuously. The lidar emits laser beams in multiple vertical directions at substantially the same time, and the distance from the lidar to an object's surface can be calculated from the time of flight (TOF) of the reflected laser light in the air. The vertically distributed laser beams rotate with the sensor head, thereby completing a 360-degree scan of the environment. The large number of data points plotted in 3-dimensional space form a cloud-like distribution, known as a laser point cloud (Point Cloud), and the coordinates of each laser point can be represented by (X_d, Y_d, Z_d).
Fig. 1 is a functional block diagram illustration of a vehicle 100 provided in an embodiment of the present application. The vehicle 100 may be configured in a fully or partially autonomous mode. For example: the vehicle 100 may acquire environmental information of its surroundings through the perception system 120 and derive an automatic driving strategy based on analysis of the surrounding environmental information to achieve full automatic driving, or present the analysis result to the user to achieve partial automatic driving.
Vehicle 100 may include various subsystems such as an information system 110, a perception system 120, a decision control system 130, a drive system 140, a computing platform 150, and a power supply 160. Alternatively, vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of the vehicle 100 may be interconnected by wire or wirelessly.
In some embodiments, information system 110 may include a communication system 111, a touch screen 112, a microphone 113, a speaker 114, and a navigation system 115.
The communication system 111 may comprise a wireless communication system that may communicate wirelessly with one or more devices directly or via a communication network. For example, the wireless communication system may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The touch screen 112, microphone 113, and speaker 114 may form an entertainment system on which a user can listen to broadcasts and play music in the vehicle; alternatively, a mobile phone may communicate with the vehicle and mirror its screen on the touch screen. In some cases, the user's voice signal may be acquired through the microphone, and certain controls of the vehicle 100 by the user may be implemented based on analysis of the voice signal, such as adjusting the temperature inside the vehicle. In other cases, music may be played to the user through the speaker. In this embodiment of the present application, the accumulated water information within the preset distance ahead, sent by the early warning module, may be fed back to the user through the touch screen 112, the microphone 113, and the speaker 114.
The navigation system 115 may include map services provided by map providers to provide navigation of travel routes for the vehicle 100, and the navigation system 115 may be used with the vehicle's global positioning system 121, inertial measurement unit 122. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 120 may include several types of sensors that sense information about the environment surrounding the vehicle 100. For example, the perception system 120 may include a global positioning system 121 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a laser radar 123, a millimeter-wave radar 124, an ultrasonic radar 125, a camera device 126, and a range finder 127, where the camera device 126 may also be referred to as a camera. The perception system 120 may also include sensors of the internal systems of the monitored vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification are key functions for the safe operation of the vehicle 100. In the embodiment of the present application, the optical range finder is the camera device 126, and the acoustic range finder is the range finder 127.
The global positioning system 121 may be used to estimate the geographic location of the vehicle 100.
The inertial measurement unit 122 is used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In some embodiments, inertial measurement unit 122 may be a combination of an accelerometer and a gyroscope.
Lidar 123 may utilize a laser to sense objects in the environment in which vehicle 100 is located. In some embodiments, lidar 123 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
Millimeter-wave radar 124 may utilize radio signals to sense objects within the surrounding environment of vehicle 100. In some embodiments, in addition to sensing an object, the millimeter-wave radar 124 may be used to sense the speed and/or heading of the object.
The ultrasonic radar 125 may utilize ultrasonic signals to sense objects around the vehicle 100.
The camera 126 may be used to capture image information of the surrounding environment of the vehicle 100. The image capturing device 126 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information obtained by the image capturing device 126 may include still images or video stream information. In the embodiment of the present application, the image capturing device 126 is an optical distance meter, and is configured to obtain a color image of the ponding area within a preset distance in front.
The range finder 127 is used to measure distance. The range finder may include a laser range finder, an ultrasonic range finder, an infrared range finder, and the like. In the embodiment of the present application, taking an ultrasonic range finder as an example, the principle of ultrasonic distance measurement is to treat the propagation speed of ultrasonic waves in air as known, measure the time for an ultrasonic pulse to be reflected back after encountering an obstacle, and calculate the actual distance from the emission point to the obstacle. Note that the speed of the ultrasonic wave is temperature-dependent; if the temperature does not change much, the speed can be regarded as constant. If the temperature variation is large, it can be corrected by a temperature compensation method.
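A minimal sketch of this time-of-flight calculation with temperature compensation, using the standard approximation for the speed of sound in air (c ≈ 331.3 + 0.606·T m/s, T in °C); the numeric example is purely illustrative:

```python
def ultrasonic_distance(round_trip_s: float, temp_c: float = 20.0) -> float:
    """Distance from the emission point to the obstacle, in meters.
    The echo travels out and back, hence the division by two."""
    c = 331.3 + 0.606 * temp_c  # temperature-compensated speed of sound
    return c * round_trip_s / 2.0

# A 10 ms round trip at 20 degrees C gives roughly 1.72 m.
print(ultrasonic_distance(0.010, 20.0))
```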
The depth image and the color image may be acquired by a sensor disposed in the perception system 120, for example, by the lidar 123 and the camera device 126, respectively.
The decision control system 130 includes a computing system 131 that makes an analytical decision based on information acquired by the perception system 120, and the decision control system 130 further includes a vehicle controller 132 that controls the powertrain of the vehicle 100, and a steering system 133, throttle 134, and braking system 135 for controlling the vehicle 100.
The computing system 131 may be operable to process and analyze various information acquired by the perception system 120 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 100. The object may include a pedestrian or an animal and the object and/or feature may include traffic signals, road boundaries, and obstacles. The computing system 131 may use object recognition algorithms, in-motion restoration structure (Structure from Motion, SFM) algorithms, video tracking, and the like. In some embodiments, computing system 131 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 131 may analyze the acquired various information and derive a control strategy for the vehicle.
The vehicle controller 132 may be configured to coordinate control of the power battery and the engine 141 of the vehicle to enhance the power performance of the vehicle 100.
Steering system 133 is operable to adjust the heading of vehicle 100. For example, in one embodiment may be a steering wheel system.
The throttle 134 is used to control the operating speed of the engine 141 and thus the speed of the vehicle 100.
The brake system 135 is used to control the deceleration of the vehicle 100. The braking system 135 may use friction to slow the wheels 144. In some embodiments, the braking system 135 may convert the kinetic energy of the wheels 144 into electrical current. The braking system 135 may take other forms to slow the rotational speed of the wheels 144 to control the speed of the vehicle 100.
The drive system 140 may include components that provide powered movement of the vehicle 100. In one embodiment, the drive system 140 may include an engine 141, an energy source 142, a transmission 143, and wheels 144. The engine 141 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. Engine 141 converts energy source 142 into mechanical energy.
Examples of energy sources 142 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 142 may also provide energy to other systems of the vehicle 100.
The transmission 143 may transmit mechanical power from the engine 141 to the wheels 144. The transmission 143 may include a gearbox, a differential, and a driveshaft. In one embodiment, the transmission 143 may also include other devices, such as clutches. Wherein the drive shaft may comprise one or more axles that may be coupled to one or more wheels 144.
Some or all of the functions of the vehicle 100 are controlled by the computing platform 150. Computing platform 150 may include at least one processor 151, and processor 151 may execute instructions 153 stored in a non-transitory computer readable medium such as memory 152. In some embodiments, computing platform 150 may also be a plurality of computing devices that control individual components or subsystems of vehicle 100 in a distributed manner.
Processor 151 may be any conventional processor, such as a commercially available CPU. Alternatively, processor 151 may also include, for example, a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SOC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 1 functionally illustrates the processor, the memory, and the other elements of the computing platform 150 in the same block, it will be understood by those of ordinary skill in the art that the processor, computer, or memory may in fact comprise a plurality of processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or other storage medium located in a housing different from that of the computing platform 150. Thus, references to a processor or computer will be understood to include references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only calculations related to the component-specific functions.
In various aspects described herein, the processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle and others are performed by a remote processor, including taking the necessary steps to perform a single maneuver.
In some embodiments, memory 152 may contain instructions 153 (e.g., program logic) that instructions 153 may be executed by processor 151 to perform various functions of vehicle 100. The memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110, the perception system 120, the decision control system 130, and the drive system 140.
In addition to instructions 153, the memory 152 may also store data such as road maps, route information, vehicle position, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computing platform 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (e.g., the drive system 140, the perception system 120, and the decision control system 130). For example, computing platform 150 may utilize inputs from decision control system 130 in order to control steering system 133 to avoid obstacles detected by perception system 120. In some embodiments, computing platform 150 is operable to provide control over many aspects of vehicle 100 and its subsystems.
In this embodiment, the computing platform 150 may obtain the depth image and the color image from the perception system 120, perform a repair process on the color image using the depth information in the depth image to obtain an enhanced color image, and in particular, the implementation of the repair process may be stored in the memory 152 in the form of software, and the processor 151 invokes the instructions 153 in the memory 152 to perform the repair process. After the enhanced color image is acquired, the computing platform 150 may output the enhanced color image to other systems for further processing, such as to the infotainment system 110 for viewing by a driver, or to the decision control system 130 for relevant decision processing.
Alternatively, one or more of these components may be mounted separately from or associated with vehicle 100. For example, the memory 152 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Alternatively, the above components are just an example, and in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 1 should not be construed as limiting the embodiments of the present application.
An autonomous car traveling on a road, such as the vehicle 100 above, may identify objects within its surrounding environment to determine adjustments to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently and based on its respective characteristics, such as its current speed, acceleration, spacing from the vehicle, etc., may be used to determine the speed at which the autonomous car is to adjust.
Alternatively, the vehicle 100 or a sensing and computing device associated with the vehicle 100 (e.g., computing system 131, computing platform 150) may predict the behavior of the identified object based on characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on a road, etc.). Alternatively, each identified object depends on each other's behavior, so all of the identified objects can also be considered together to predict the behavior of a single identified object. The vehicle 100 is able to adjust its speed based on the predicted behavior of the identified object. In other words, an autonomous car is able to determine what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, the speed of the vehicle 100 may also be determined in consideration of other factors, such as the lateral position of the vehicle 100 in the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so forth.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., cars in adjacent lanes on the roadway).
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a lawn mower, a piece of construction equipment, a trolley, a golf cart, a train, or the like; the embodiment of the present application is not particularly limited.
Application scenarios of the embodiments of the present application:
In the embodiments of the application, the vehicle represents an object traveling on a lane, is not limited to a motor vehicle traveling on a road, and the solutions can be widely applied to other devices. For example, the embodiments of the application can also be applied to robots in industrial or logistics scenes; they can also be applied to smart wearable devices or smart terminals containing the relevant module, which is not limited herein.
Note that the embodiments of the present application are described by taking traffic scenes as examples, but this is not intended to limit the application scope of the embodiments of the present application. Accordingly, the "vehicle, self-vehicle, or vehicle body" in the embodiments of the present application may be replaced by a robot, a smart wearable device, or a smart terminal device, which is not limited herein.
The wading early warning unit 200 is configured to collect wading early warning information for a certain distance in front of the vehicle without depending on external devices. Particularly in rainy weather, when a vehicle passes through a wading road section, whether to enter the wading area needs to be determined in advance from wading early warning information; blindly entering a wading area without such information risks personal safety and property loss. In prior art schemes, a wading early warning unit cannot provide wading information to the vehicle before it enters the wading area.
In prior art schemes, information about a water accumulation area can only be obtained once the vehicle has waded into it, so the vehicle may already be wading too deep by the time it learns the information, causing traffic accidents. If the depth of the water accumulation area exceeds the safe wading depth of the vehicle, the vehicle may stall, causing loss of life and property.
Based on the above defects in the prior art, the embodiment of the application provides a wading early warning method and a wading early warning device that solve these problems. The wading early warning method provided by the embodiment of the application is described next.
Referring to fig. 2, which is a schematic flow chart of wading early warning provided in an embodiment of the present application, the method comprises the following steps:
s310: and calibrating the ranging sensors, and unifying the environmental data collected by all the ranging sensors under the same coordinate system.
The ranging sensors comprise an optical range finder and an acoustic range finder; refer to fig. 4, a schematic diagram of ranging sensor coordinates according to an embodiment of the present application. Before the calibration step is completed, the installation positions and orientations of the ranging sensor 1 and the ranging sensor 2 differ, so the positions of the same target pixel point in the coordinate systems of the respective ranging sensors are different.
The two independent ranging sensor coordinate systems can be associated through the step S310, so that the environmental data collected by the two independent ranging sensor coordinate systems are unified on the same coordinate system, that is, the coordinates corresponding to the pixel points of the two sensors reflecting the same physical world position are the same.
It should be noted that fig. 4 is only schematic and does not limit the types, number, or installation positions of the ranging sensors. The plurality of ranging sensors may be mounted inside or outside the vehicle, facing the traveling direction. For example, a ranging sensor may be mounted on the rear surface area of the inside mirror, on the roof area, or on the inside windshield area, without limitation. The ranging sensors may also each be installed in a different area, which is not limited herein. For example, an optical range finder may be mounted on the rear area of the interior mirror while an acoustic range finder is mounted at the front bumper.
Taking the ranging sensor 1 as an acoustic range finder for example, the acoustic range finder can be a sonar range finder, an ultrasonic range finder, or another sensor capable of penetrating the water surface to complete the ranging task. Taking an ultrasonic range finder as an example, the obtained point cloud exists in a three-dimensional ultrasonic coordinate system with coordinates expressed as (X_S, Y_S, Z_S), in meters. A rigid transformation of the three-dimensional ultrasonic coordinate system yields the coordinates of the point cloud in a world coordinate system or a vehicle body coordinate system.
Taking the ranging sensor 2 as an optical range finder for example, the optical range finder can be composed of two or more optical cameras. Taking a binocular vision camera as an example, the coordinate systems of the optical images acquired by the two cameras are pixel coordinate systems, with coordinates expressed as (u, v) in pixels; the upper left corner of the optical image is taken as the origin, and the horizontal coordinate u and the vertical coordinate v index the column and the row of the image, respectively.
The binocular camera is calibrated to obtain the internal parameters and external parameters of the optical range finder. The internal parameters are parameters related to the camera's own characteristics, such as the focal length f, the pixel size, and the lens distortion. The external parameters are the parameters of a single camera coordinate system relative to other coordinate systems, such as the position and pose between the two cameras, typically represented by a rotation matrix R and a translation vector T of the camera coordinate system relative to the other coordinate system. In this embodiment, the internal and external parameters of the optical range finder may be computed by a calibration algorithm, which is prior art and will not be described here. The original images are corrected according to the internal and external parameters so that the two corrected images lie in the same plane and are parallel to each other. Pixel matching is then performed on the two images so that some pixel points of the two images correspond one-to-one through a mapping relation, and the depth of each pixel point is calculated from the matching result, thereby obtaining the three-dimensional coordinates of the pixel point under the same coordinate system.
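Once the pair is rectified, the depth of a matched pixel follows from its disparity via Z = f·B/d, after which X and Y are back-projected. A minimal numpy sketch, with illustrative focal length, baseline, and principal point (none of these values come from the patent):

```python
import numpy as np

f = 1000.0             # focal length, pixels (assumed)
B = 0.12               # baseline between the two cameras, meters (assumed)
cx, cy = 640.0, 360.0  # principal point (assumed)

def pixel_to_3d(u: float, v: float, disparity: float) -> np.ndarray:
    """Triangulate one matched pixel of a rectified stereo pair."""
    Z = f * B / disparity  # depth from disparity
    X = (u - cx) * Z / f   # back-project the image coordinates
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])  # camera frame, meters

# A 24-pixel disparity 100 px right of the principal point -> (0.5, 0, 5) m.
print(pixel_to_3d(740.0, 360.0, 24.0))
```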
It is worth mentioning that the same coordinate system may be the world coordinate system (X_w, Y_w, Z_w) or the vehicle body coordinate system (X_v, Y_v, Z_v); the effects of the embodiments of the present application can be achieved in any such coordinate system, without limitation. The world coordinate system and the vehicle body coordinate system can be interconverted by a rigid body transformation.
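The rigid body transformation mentioned here is simply p' = R·p + t; the sketch below converts a point from the vehicle body frame to the world frame with an illustrative pose (the yaw angle and origin are made-up values):

```python
import numpy as np

def rigid_transform(p: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the rigid-body transform p' = R p + t."""
    return R @ p + t

yaw = np.pi / 2  # vehicle heading, 90 degrees in the world frame
R_wv = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0,          0.0,         1.0]])
t_wv = np.array([10.0, 5.0, 0.0])      # vehicle origin in the world frame

p_vehicle = np.array([2.0, 0.0, 0.0])  # a point 2 m ahead of the vehicle
print(rigid_transform(p_vehicle, R_wv, t_wv))  # -> [10. 7. 0.]
```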
Referring to fig. 5, a schematic diagram of a vehicle body coordinate system is provided. The vehicle body coordinate system is artificially defined: the origin can be the midpoint of the forwardmost end of the vehicle head, and the plane formed by the x and y axes of the coordinate system is flush with the vehicle chassis. The x axis points in the traveling direction of the vehicle body, the y axis is perpendicular to the x axis and points to the right, and the z axis is perpendicular to the plane constructed by the x and y axes and points toward the road surface. For a clearer explanation, the embodiments of the present application take the vehicle body coordinate system as the common coordinate system.
It should be noted that step S310 is an optional step, and if the calibration of the vehicle is completed at the time of shipment, this step may be omitted.
S321: and acquiring water surface three-dimensional coordinate information T of the water accumulation area within the front preset distance by an optical distance meter.
The optical range finder may be composed of two or more optical cameras and is used to acquire an optical image of the water accumulation area within the preset distance ahead. Note that the optical range finder has completed calibration in step S310, so the three-dimensional coordinate information of the water surface of the water accumulation area can be obtained from the optical image and recorded as a coordinate set of water surface pixel points: T = {(X_1^t, Y_1^t, Z_1^t), …, (X_i^t, Y_i^t, Z_i^t)}, where X_i^t represents the linear distance from the water surface pixel point to the origin of the vehicle body coordinate system, Y_i^t represents the lateral offset of the water surface pixel point from the origin, and Z_i^t represents the depth distance of the water surface pixel point from the origin.
S322: and acquiring the three-dimensional coordinate information D of the road surface of the ponding area in the front preset distance through an acoustic range finder.
The acoustic range finder can be a sonar range finder, an ultrasonic range finder, or another sensor capable of penetrating the water surface to complete the ranging task, and is used for acquiring a depth image of the water accumulation area within the preset distance ahead. Similarly, the acoustic range finder has completed calibration in step S310, and the three-dimensional coordinate information of the road surface in the water accumulation area can be obtained from the depth image and recorded as a coordinate set of road surface pixel points: D = {(X_1^d, Y_1^d, Z_1^d), …, (X_j^d, Y_j^d, Z_j^d)}.
It should be noted that, referring to fig. 6a, the coordinates of the water surface pixel points and the coordinates of the road surface pixel points are in one-to-one correspondence. For example, if the world coordinates of a water surface pixel point are (X_1^t, Y_1^t, Z_1^t), the world coordinates of the corresponding road surface pixel point are (X_1^d, Y_1^d, Z_1^d), where X_1^t = X_1^d, Y_1^t = Y_1^d, and Z_1^t ≠ Z_1^d. As can be seen from fig. 6a, the corresponding water surface and road surface pixel points have the same X and Y coordinate values in the vehicle body coordinates but different Z coordinate values.
In another possible implementation, referring to fig. 3, step S322 may be replaced by step S323, which also achieves the effects of the embodiments of the present application. That is, the road surface three-dimensional coordinate information D = {(X_1^d, Y_1^d, Z_1^d), …, (X_j^d, Y_j^d, Z_j^d)} of the water accumulation area can be acquired by combining high-precision map information with GPS data. The road surface three-dimensional coordinate information D of the water accumulation area can also be reported by users. For example, when one or more vehicles pass through the wading area, a user obtains the road surface height information of the wading area through a sensing device and reports it to the system through a communication device. The system then distributes the road surface height information of the wading area to all users, so that users can obtain the road surface three-dimensional coordinate information D of the wading area.
S330: calculating the maximum water accumulation depth of the water accumulation area ahead according to the water-surface three-dimensional coordinate information T and the road-surface three-dimensional coordinate information D.
Referring to fig. 6b, fig. 6b is a schematic diagram of calculating the water accumulation depth according to an embodiment of the present application. Before the vehicle enters the water accumulation area, it obtains, through the sensing module, the vehicle body coordinates of a single water surface pixel point within the preset distance ahead and the vehicle body coordinates of the corresponding road surface pixel point, denoted (X1t, Y1t, Z1t) and (X1d, Y1d, Z1d) respectively. Subtracting the two coordinates yields (ΔX1, ΔY1, ΔZ1), and the water accumulation depth at that position is ΔZ1. For example, in step S330, if the vehicle body coordinate of a water surface pixel point in the water accumulation area is (X1t = 0.1 m, Y1t = 2 m, Z1t = 1.5 m) and the vehicle body coordinate of the corresponding road surface pixel point is (X1d = 0.1 m, Y1d = 2 m, Z1d = 2 m), the coordinate difference is (ΔX1 = 0, ΔY1 = 0, ΔZ1 = 0.5 m), i.e., the water depth at that pixel point is 0.5 m.
All pixel points in the water accumulation area are traversed, and the Z values of the one-to-one corresponding pixel points in set T and set D are subtracted to obtain the coordinate difference set K = {(0, 0, ΔZ1), ..., (0, 0, ΔZk)}. The maximum water accumulation depth of the water accumulation area is the maximum ΔZ value in set K.
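A minimal Python sketch of this step, assuming the surface and road points have already been paired as above (each pair holding a surface coordinate and its corresponding road coordinate):

    # Sketch of step S330: subtract the Z value of each paired surface point
    # from that of its road point; the maximum difference is the maximum
    # water accumulation depth.
    def max_water_depth(pairs):
        depths = [zd - zt for (_, _, zt), (_, _, zd) in pairs]
        return max(depths) if depths else 0.0

    # Worked example from the text: surface Z = 1.5 m, road Z = 2.0 m.
    pairs = [((0.1, 2.0, 1.5), (0.1, 2.0, 2.0))]
    print(max_water_depth(pairs))  # 0.5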
Fig. 6a and fig. 6b are only schematic and do not reflect the real-world scale between the vehicle body and the water accumulation area.
S340: judging whether the water accumulation depth exceeds the safe wading depth.
Whether the maximum water accumulation depth exceeds the safe wading depth is judged by comparing the two values. The safe wading depth may be one of the exhaust outlet ground clearance, half the wheel height, the door sill ground clearance, or the engine air intake ground clearance. For example, the safe wading depth is typically 100 to 120 cm for a heavy truck, 45 to 80 cm for an ordinary truck, about 60 cm for an off-road jeep, and no more than 40 cm for a minibus.
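As a rough illustration of this comparison, the sketch below encodes the example figures above as per-vehicle thresholds; the dictionary keys and the use of each range's lower bound are assumptions for demonstration only.

    # Illustrative thresholds from the figures above; keys and the choice of
    # each range's lower bound are assumptions for demonstration.
    SAFE_WADING_DEPTH_M = {
        "heavy_truck": 1.00,     # 100-120 cm
        "ordinary_truck": 0.45,  # 45-80 cm
        "off_road_jeep": 0.60,
        "minibus": 0.40,
    }

    def exceeds_safe_depth(max_depth_m, vehicle_type):
        return max_depth_m > SAFE_WADING_DEPTH_M[vehicle_type]

    print(exceeds_safe_depth(0.5, "minibus"))  # True -> issue a warning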
S350: if the maximum water accumulation depth exceeds the safe wading depth, sending out early warning prompt information.
The early warning prompt information includes one or a combination of several types of early warning, such as visual early warning, acoustic early warning, light early warning, and tactile early warning. The early warning prompt information includes at least one of the distance to the water accumulation area ahead, the water accumulation area, the maximum water accumulation width, the maximum wading distance, or the maximum water accumulation depth, presented with a visual effect.
For example, the acoustic early warning may broadcast the warning message to the driver through the speaker 114. The light early warning may inform the driver whether the water depth of the water accumulation area ahead exceeds the safe wading depth through changes of the in-vehicle ambient lights, such as switching on and off or flashing. The tactile early warning may inform the driver whether the water depth of the water accumulation area ahead exceeds the safe wading depth through vibration of the steering wheel. The visual early warning may present visual warning information to the driver through at least one of a central control display screen, an instrument display screen, a head-up display (HUD), or another vehicle-mounted display device.
Referring to fig. 7, a schematic diagram of visual early warning is provided in an embodiment of the present application. When the vehicle detects a water accumulation area within the preset distance ahead and the maximum water accumulation depth of that area exceeds the safe wading depth, warning information about the water accumulation area is displayed on at least one vehicle-mounted display device; the warning information may take the form of images, text, or a combination of both.
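A minimal sketch of how step S350 might fan the warning out over the supported channels; the channel objects and their warn method are hypothetical stand-ins for the display, speaker 114, ambient lights, and steering wheel described above.

    # Hypothetical warning fan-out: each channel object stands in for one of
    # the display, speaker 114, ambient lights, or steering wheel vibration.
    def send_warning(channels, distance_m, max_depth_m):
        message = (f"Water accumulation {distance_m:.0f} m ahead, max depth "
                   f"{max_depth_m:.2f} m exceeds the safe wading depth")
        for channel in channels:
            channel.warn(message)

    class PrintChannel:
        """Minimal stand-in channel that just prints the message."""
        def warn(self, message):
            print(message)

    send_warning([PrintChannel()], distance_m=20, max_depth_m=0.5)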
Fig. 8, 9 and 10 are schematic structural diagrams of possible wading early warning devices according to embodiments of the present application. The wading early warning devices can be used to implement the functions of the foregoing method embodiments and achieve their beneficial effects.
As shown in fig. 8, the wading early warning device 800 includes a sensing module 810, a processing module 820, a judging module 830, and an early warning module 840.
In the embodiment of the present application, the wading early warning device 800 may be part of a vehicle. Besides being part of a vehicle, the wading early warning device 800 may relate to the vehicle in other ways; for example, it may not belong to the vehicle but instead provide wading early warning information to a target vehicle through some coupling relationship. The modules may also be located in other devices connected to the vehicle wirelessly or by wire, which is not limited herein. Optionally, the device modules are integrated in a mobile terminal that displays or announces the warning about the water accumulation area ahead to the driver, so that the vehicle can decide whether to change its driving strategy to cope with water within the preset distance ahead. Optionally, the device modules may form a wading early warning module on a cloud server that exchanges data with the vehicle wirelessly and determines the information of the water accumulation area within the preset distance ahead of the vehicle.
The sensing module 810 includes an optical rangefinder and an acoustic rangefinder that have completed calibration. The optical rangefinder and the acoustic rangefinder respectively acquire an optical image and a depth image of the water accumulation area within the preset distance ahead, and pixel points at the same position in the optical image and the depth image correspond to the same real-world physical coordinates.
The processing module 820 is configured to obtain water accumulation information of the water accumulation area from the acquired optical image and depth image, where the water accumulation information includes at least one of a maximum water accumulation depth, a maximum water accumulation width, a maximum water accumulation length, or a water accumulation area. The procedure for calculating the maximum water accumulation width, maximum water accumulation length, or water accumulation area is similar to the procedure for calculating the maximum water accumulation depth described above and is not repeated herein; a sketch is given below.
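The sketch below shows one plausible reading, assuming the edge points of the water accumulation area are available as (X, Y, Z) tuples; the axis-aligned bounding-box computation is an assumption by analogy with the depth calculation, not the patent's exact formula.

    # Assumed bounding-box reading: X runs along the direction of travel and
    # Y across the lane, following the body-coordinate convention above.
    def water_extent(edge_pts):
        xs = [p[0] for p in edge_pts]
        ys = [p[1] for p in edge_pts]
        max_length = max(xs) - min(xs)   # maximum water accumulation length
        max_width = max(ys) - min(ys)    # maximum water accumulation width
        return max_length, max_width, max_length * max_width  # coarse area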
The determining module 830 is configured to determine whether the maximum water accumulation depth exceeds the safe wading depth.
The early warning module 840 is configured to send early warning prompt information, where the early warning information includes one or more of visual early warning, acoustic early warning, light early warning, and tactile early warning. When the wading early warning device 800 is used to implement the functions of the method embodiment shown in fig. 3, the sensing module 810 is used to execute steps S321 and S322; the processing module 820 is configured to execute step S330; the judging module 830 is configured to execute step S340; and the early warning module 840 is configured to execute step S350. More detailed descriptions of the sensing module 810, processing module 820, judging module 830, and early warning module 840 can be obtained directly from the related descriptions in the method embodiment shown in fig. 3 and are not repeated herein.
It should be understood that the above division of the units of the wading early warning device 800 is merely a logical function division; in actual implementation, other division manners may exist.
Fig. 9 is a schematic diagram of another wading early warning device 900 according to an embodiment of the present application, which includes a sensing module 910, a positioning module 911, a communication module 912, a processing module 920, a judging module 930, and an early warning module 940.
When the wading early warning device 900 is used to implement the functions of the method embodiment shown in fig. 3, the sensing module 910, the positioning module 911, and the communication module 912 are used to execute steps S321 and S323; the processing module 920 is configured to execute step S330; the judging module 930 is configured to execute step S340; and the early warning module 940 is configured to execute step S350. More detailed descriptions of the sensing module 910, positioning module 911, communication module 912, processing module 920, judging module 930, and early warning module 940 can be obtained directly from the related descriptions in the method embodiment shown in fig. 3 and are not repeated herein.
Fig. 10 is a schematic hardware structure of a wading early warning device according to an embodiment of the present application. The wading early warning device 1000 shown in fig. 10 includes a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004. The memory 1001, the processor 1002, and the communication interface 1003 are connected to each other by a bus 1004.
The memory 1001 may be a ROM, a static storage device, or a RAM. The memory 1001 may store a program; when the program stored in the memory 1001 is executed by the processor 1002, the processor 1002 and the communication interface 1003 are used to perform the steps of the wading early warning method of the embodiments of the present application.
The processor 1002 may be a general-purpose CPU, microprocessor, ASIC, GPU, or one or more integrated circuits, configured to execute related programs to implement functions required to be executed by units in the wading early warning device according to the embodiment of the present application, or execute the wading early warning method according to the embodiment of the present application.
The processor 1002 may also be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the wading early warning method in the embodiments of the present application may be completed by an integrated logic circuit of hardware in the processor 1002 or instructions in the form of software.
The processor 1002 may also be a general purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1001; the processor 1002 reads the information in the memory 1001 and, in combination with its hardware, implements the functions required to be executed by the units included in the wading early warning device of the embodiment of the present application, or executes the wading early warning method of the embodiment of the present application.
The communication interface 1003 uses a transceiver apparatus, such as but not limited to a transceiver, to enable communication between the wading early warning device 1000 and other devices or a communication network. For example, the optical image and the depth image to be processed may be acquired through the communication interface 1003.
Bus 1004 may include a path to transfer information between elements of device 1000 (e.g., memory 1001, processor 1002, communication interface 1003).
It should be appreciated that the processor in embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It should also be appreciated that the memory in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example but not limitation, many forms of random access memory (random access memory, RAM) are available, such as Static RAM (SRAM), dynamic Random Access Memory (DRAM), synchronous Dynamic Random Access Memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced Synchronous Dynamic Random Access Memory (ESDRAM), synchronous Link DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, or digital subscriber line) or wireless (e.g., infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more sets of available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media. The semiconductor media may be solid state disks.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. In addition, the character "/" herein generally indicates that the associated object is an "or" relationship, but may also indicate an "and/or" relationship, and may be understood by referring to the context.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A method of wading early warning, comprising:
acquiring an optical image and a depth image of a ponding region, wherein the optical image and the depth image are respectively obtained by a calibrated optical range finder and an acoustic range finder;
determining a maximum water accumulation depth of the water accumulation region based on the optical image and the depth image;
judging whether the maximum water accumulation depth is larger than a safe wading depth or not;
and if the maximum ponding depth is greater than the safe wading depth, sending out early warning prompt information.
2. The method of claim 1, wherein the determining the maximum water depth for the water accumulation region based on the optical image and the depth image comprises:
determining three-dimensional coordinate information of the ponding region based on the optical image and the depth image;
And obtaining the maximum water accumulation depth of the water accumulation area based on the three-dimensional coordinate information of the water accumulation area.
3. The method of claim 2, wherein the determining three-dimensional coordinate information of the water accumulation region based on the optical image and the depth image comprises:
determining a water surface pixel point coordinate set based on the optical image;
determining a pavement pixel point coordinate set based on the depth image;
the three-dimensional coordinate information of the water accumulation area comprises the water surface pixel point coordinate set and the road surface pixel point coordinate set, wherein the three-dimensional coordinates are given in a coordinate system comprising an X axis, a Y axis and a Z axis, the coordinates in the water surface pixel point coordinate set and the coordinates in the road surface pixel point coordinate set have a one-to-one correspondence, and the one-to-one correspondence indicates that a water surface pixel point and its corresponding road surface pixel point have the same X and Y coordinate values but different Z coordinate values in the three-dimensional coordinate system.
4. The method of claim 3, wherein obtaining the maximum water accumulation depth of the water accumulation region based on the three-dimensional coordinate information of the water accumulation region comprises:
subtracting two coordinate points with the one-to-one correspondence from the water surface pixel point coordinate set and the road surface pixel point coordinate set respectively to obtain a coordinate difference set;
And determining the maximum water accumulation depth according to the coordinate difference value set, wherein the maximum water accumulation depth is the maximum value of Z coordinates in the coordinate difference value set.
5. The method according to any one of claims 1-4, wherein the method further comprises:
determining a space coordinate set of pixel points positioned at the water accumulation edge based on the optical image of the water accumulation area;
and determining at least one piece of wading information in the maximum water accumulation width, the maximum wading distance or the water accumulation area according to the space coordinate set.
6. The method according to any one of claims 1-5, wherein the early warning prompt information comprises one or a combination of several of visual early warning, acoustic early warning, light early warning, and tactile early warning.
7. The method of claim 6, wherein the visual early warning comprises at least one of, or a combination of two or more of, a water accumulation area, a water accumulation width, and a water accumulation depth presented with a visual effect.
8. A wading early warning device, characterized by comprising:
the sensing module is used for acquiring an optical image and a depth image of the ponding area, wherein the optical image and the depth image are respectively acquired by a calibrated optical range finder and an acoustic range finder;
The processing module is used for determining the maximum ponding depth of the ponding region based on the optical image and the depth image;
the judging module is used for judging whether the maximum water accumulation depth is larger than the safe wading depth;
and the early warning module is used for sending out early warning prompt information if the maximum ponding depth is larger than the safe wading depth.
9. The wading early warning device according to claim 8, wherein the processing module is specifically configured to:
determining three-dimensional coordinate information of the ponding region based on the optical image and the depth image;
and obtaining the maximum water accumulation depth of the water accumulation area based on the three-dimensional coordinate information of the water accumulation area.
10. The wading early warning device according to claim 9, wherein the processing module is specifically configured to determine the three-dimensional coordinate information of the water accumulation area based on the optical image and the depth image by:
determining a water surface pixel point coordinate set based on the optical image;
determining a pavement pixel point coordinate set based on the depth image;
the three-dimensional coordinate information of the water accumulation area comprises the water surface pixel point coordinate set and the road surface pixel point coordinate set, wherein the three-dimensional coordinates are given in a coordinate system comprising an X axis, a Y axis and a Z axis, the coordinates in the water surface pixel point coordinate set and the coordinates in the road surface pixel point coordinate set have a one-to-one correspondence, and the one-to-one correspondence indicates that a water surface pixel point and its corresponding road surface pixel point have the same X and Y coordinate values but different Z coordinate values in the three-dimensional coordinate system.
11. The wading early warning device according to claim 10, wherein the maximum water accumulation depth of the water accumulation area is obtained based on the three-dimensional coordinate information of the water accumulation area by:
Subtracting two coordinate points with the one-to-one correspondence from the water surface pixel point coordinate set and the road surface pixel point coordinate set respectively to obtain a coordinate difference set;
and determining the maximum water accumulation depth according to the coordinate difference value set, wherein the maximum water accumulation depth is the maximum value of Z coordinates in the coordinate difference value set.
12. The apparatus according to any one of claims 8-11, wherein the processing module is further configured to obtain water accumulation information based on the three-dimensional coordinate information of the water accumulation area, the water accumulation information including at least one of a maximum water accumulation width, a maximum wading distance, or a water accumulation area, by:
determining a space coordinate set of pixel points positioned at the edge of the accumulated water based on the optical image;
and determining at least one piece of wading information among the maximum water accumulation width, the maximum wading distance, or the water accumulation area according to the space coordinate set.
13. The wading early warning device according to any one of claims 8-12, wherein the early warning prompt information comprises one of, or a combination of two or more of, visual early warning, acoustic early warning, light early warning, and tactile early warning.
14. The wading early warning device according to claim 13, wherein the visual early warning includes at least one of, or a combination of two or more of, a water accumulation area, a water accumulation width, and a water accumulation depth presented with a visual effect.
15. A wading early warning device, characterized by comprising:
at least one processor;
at least one memory storing a computer program which, when executed by the processor, implements the wading early warning method according to any one of claims 1-7.
16. A computer program product, the computer program product comprising: computer program code which, when run, causes a processor to perform the method of any one of claims 1-7.
17. A vehicle comprising a travel system, a sensing system, a control system, and a computer system, wherein the computer system is configured to perform the method of any one of claims 1-7.
CN202111320824.3A 2021-11-09 2021-11-09 Wading early warning method and related device Pending CN116106931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111320824.3A CN116106931A (en) 2021-11-09 2021-11-09 Wading early warning method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111320824.3A CN116106931A (en) 2021-11-09 2021-11-09 Wading early warning method and related device

Publications (1)

Publication Number Publication Date
CN116106931A true CN116106931A (en) 2023-05-12

Family

ID=86266044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111320824.3A Pending CN116106931A (en) 2021-11-09 2021-11-09 Wading early warning method and related device

Country Status (1)

Country Link
CN (1) CN116106931A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117636662A (en) * 2023-12-10 2024-03-01 广东东软学院 Anti-flooding traffic control method and system for wading road section
CN117636662B (en) * 2023-12-10 2024-07-26 广东东软学院 Anti-flooding traffic control method and system for wading road section


Similar Documents

Publication Publication Date Title
US12019443B2 (en) Use of detected objects for image processing
JP6622265B2 (en) A robust method for detecting traffic signals and their associated conditions
US10909396B2 (en) Use of relationship between activities of different traffic signals in a network to improve traffic signal state estimation
US9063548B1 (en) Use of previous detections for lane marker detection
US11634153B2 (en) Identification of proxy calibration targets for a fleet of vehicles
CN108290521B (en) Image information processing method and Augmented Reality (AR) device
US20180329423A1 (en) System To Optimize Sensor Parameters In An Autonomous Vehicle
JP2017152049A (en) Methods and systems for object detection using laser point clouds
US8838322B1 (en) System to automatically measure perception sensor latency in an autonomous vehicle
CN114842075B (en) Data labeling method and device, storage medium and vehicle
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN115205365A (en) Vehicle distance detection method and device, vehicle, readable storage medium and chip
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
US20220163675A1 (en) Methods of Using Background Images from a Light Detection and Ranging (LIDAR) Device
CN116106931A (en) Wading early warning method and related device
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN115082573B (en) Parameter calibration method and device, vehicle and storage medium
CN114842454A (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination