CN117292359A - Obstacle determination method, apparatus, and storage medium for driving assistance

Obstacle determination method, apparatus, and storage medium for driving assistance

Info

Publication number
CN117292359A
Authority
CN
China
Prior art keywords
obstacle
vehicle
information
driving
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311380001.9A
Other languages
Chinese (zh)
Inventor
陈建伟
戴红灿
赵玉超
田磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China National Heavy Duty Truck Group Jinan Power Co Ltd
Original Assignee
China National Heavy Duty Truck Group Jinan Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China National Heavy Duty Truck Group Jinan Power Co Ltd filed Critical China National Heavy Duty Truck Group Jinan Power Co Ltd
Priority to CN202311380001.9A priority Critical patent/CN117292359A/en
Publication of CN117292359A publication Critical patent/CN117292359A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The application provides an obstacle determination method, apparatus, and storage medium for driving assistance, relating to the technical field of obstacle recognition. The method comprises: acquiring image information captured by a monocular camera, the image information including lane information and at least one piece of obstacle information; determining, according to the lane information and the at least one piece of obstacle information, whether a target obstacle exists in the vehicle driving area; and outputting the target obstacle when it exists in the vehicle driving area. The application thus provides a monocular-camera-based method for screening target obstacles in intelligent assisted driving of commercial vehicles: the most critical camera target is screened out from the targets detected by the vehicle's monocular camera, and a single camera target is output at the current moment. This improves the overall performance of the perception fusion algorithm to a certain extent and avoids accidents caused by inaccurate target obstacle output during intelligent driving.

Description

Obstacle determination method, apparatus, and storage medium for driving assistance
Technical Field
The present application relates to the technical field of obstacle recognition, and in particular, to an obstacle determination method, apparatus, and storage medium for driving assistance.
Background
With continuing social, economic, and technological progress, people's travel demands keep growing and the number of vehicles on the road keeps increasing. Against this background, L2-level intelligent driving assistance for commercial vehicles is of great significance for reducing driver workload, improving traffic safety, and increasing traffic efficiency, and has become an important direction of development in the industry in recent years.
L2-level assisted driving means that the vehicle's automated driving system provides partial assistance functions while the driver must monitor the system and be ready to intervene at any time. Among existing L2 driving assistance functions, those related to longitudinal control place high requirements on the accuracy with which the perception fusion module outputs the closest in-path vehicle (CIPV) target in complex multi-target scenes.
Most current perception fusion schemes rely mainly on cameras, millimeter-wave radars, and lidars as sensors and fuse targets by combining the characteristics of these sensors. Screening the key camera target in complex scenes is a difficulty for perception modules based primarily on vision sensors: some schemes select targets based only on the motion state and position of the camera targets, ignoring analysis of the ego vehicle's own motion state and estimation of its driving trajectory.
Disclosure of Invention
The application provides an obstacle determination method, apparatus, and storage medium for driving assistance, intended to solve the problems that, during intelligent driving in complex scenes, the perception fusion module selects camera target obstacles unreasonably and outputs target obstacles inaccurately.
In one aspect, the present application provides an obstacle determining method for assisting driving, including:
acquiring image information captured by a monocular camera, wherein the image information comprises: lane information and at least one piece of obstacle information;
determining whether a target obstacle exists in a vehicle driving area according to the lane information and the at least one piece of obstacle information;
and outputting the target obstacle when the target obstacle exists in the vehicle driving area.
Optionally, determining whether the target obstacle exists in the vehicle driving area according to the lane information and the at least one piece of obstacle information includes:
judging whether lane line information exists in the lane information;
if yes, estimating the driving area of the vehicle according to the lane line information;
and determining whether a target obstacle exists in the driving area according to the at least one piece of obstacle information;
if not, acquiring driving information of the vehicle, and estimating the driving area of the vehicle according to the driving information;
and determining whether a target obstacle exists in the driving area according to the at least one piece of obstacle information.
Optionally, estimating the driving area of the vehicle according to the lane line information includes:
determining a first position of the lane line according to the lane line information;
carrying out translation processing on the lane line according to the first position of the lane line and the second position of the vehicle so as to enable the lane line to coincide with the vehicle, and obtaining a new lane line, wherein the new lane line is the lane line at the second position;
and estimating the running area of the vehicle according to the new lane line.
Optionally, the driving information includes the current running speed, yaw rate, and running direction of the vehicle, and estimating the driving area of the vehicle according to the driving information includes:
determining a running radius of the vehicle according to the running speed and the yaw rate;
and estimating the running area of the vehicle according to the running radius and the running direction.
Optionally, the obstacle information includes the current position, movement direction, and movement speed of the obstacle, and determining whether the target obstacle exists in the driving area according to the at least one piece of obstacle information includes:
estimating the target position of each obstacle in a preset period according to the current position, the movement direction and the movement speed of the at least one obstacle;
judging whether the target position is in the driving area or not;
if the target position is in the driving area, determining that an obstacle corresponding to the target position is a candidate obstacle;
if the number of the candidate obstacles is one, the candidate obstacle is taken as the target obstacle;
if the number of the candidate obstacles is multiple, determining an influence factor of each candidate obstacle according to target positions corresponding to the multiple candidate obstacles, wherein the influence factor is used for indicating the influence condition of the candidate obstacle on the running of the vehicle;
and taking the candidate obstacle with the largest influence factor among the plurality of candidate obstacles as the target obstacle.
Optionally, acquiring image information captured by the monocular camera includes:
acquiring the running state of the vehicle and judging whether the running state is normal;
when the running state of the vehicle is normal, judging whether the driving assistance function of the vehicle is activated;
and when the driving assistance function is activated, acquiring the image information captured by the monocular camera.
Optionally, the method further comprises:
judging whether a first operation of the vehicle by a user is detected;
and if the first operation of the user on the vehicle is detected, controlling the driving assistance function of the vehicle to be turned off.
In another aspect, the present application provides an obstacle determining device for assisting driving, the device including:
the acquisition module is used for acquiring image information captured by the monocular camera, and the image information comprises: lane information and at least one piece of obstacle information;
a determining module, configured to determine whether a target obstacle exists in a vehicle driving area according to the lane information and the at least one piece of obstacle information;
and the output module is used for outputting the target obstacle when the target obstacle exists in the vehicle running area.
Optionally, the apparatus further includes: a judging module and an estimating module;
The judging module is used for judging whether lane line information exists in the lane information or not;
the estimating module is used for estimating the running area of the vehicle according to the lane line information if the lane line information exists in the lane information;
the determining module is used for determining whether a target obstacle exists in the driving area according to the at least one piece of obstacle information;
the acquisition module is further used for acquiring driving information of the vehicle if lane line information does not exist in the lane information;
the estimating module is further used for estimating the driving area of the vehicle according to the driving information.
Optionally, the determining module is further configured to determine a first position of the lane line according to the lane line information;
the determining module is used for carrying out translation processing on the lane line according to the first position of the lane line and the second position of the vehicle so as to enable the lane line to coincide with the vehicle to obtain a new lane line, wherein the new lane line is the lane line at the second position;
the estimating module is further configured to estimate a driving area of the vehicle according to the new lane line.
Optionally, the determining module is further used for determining the running radius of the vehicle according to the running speed and the yaw rate;
the estimating module is further configured to estimate a driving area of the vehicle according to the driving radius and the driving direction.
Optionally, the estimating module may be configured to estimate, according to a current position, a movement direction, and a movement speed of the at least one obstacle, a target position where each obstacle is located within a preset period;
the judging module is further used for judging whether the target position is in the driving area;
the determining module is configured to determine that an obstacle corresponding to the target position is a candidate obstacle if the target position is in the driving area;
the determining module is configured to take the candidate obstacle as the target obstacle if the number of candidate obstacles is one;
the determining module is used for determining an influence factor of each candidate obstacle according to the target positions corresponding to the plurality of candidate obstacles if the number of the candidate obstacles is a plurality of, wherein the influence factor is used for indicating the influence condition of the candidate obstacle on the running of the vehicle;
And the determining module is used for taking the candidate obstacle with the largest influence factor among the plurality of candidate obstacles as the target obstacle.
Optionally, the acquiring module is further configured to acquire an operating state of the vehicle;
the judging module is further used for judging whether the running state is normal;
the judging module is further used for judging whether the driving assistance function of the vehicle is activated when the running state of the vehicle is normal;
the acquisition module is used for acquiring the image information captured by the monocular camera when the driving assistance function is activated.
Optionally, the apparatus further includes: a control module;
the judging module is further used for judging whether a first operation of the vehicle by a user is detected;
and the control module is used for controlling the driving assistance function of the vehicle to be turned off if the first operation of the user on the vehicle is detected.
In a third aspect, the present application provides an obstacle determining apparatus for assisting driving, the apparatus comprising:
a memory;
a processor;
wherein the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the obstacle determining method for driving assistance as described in the first aspect and the various possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program for execution by a processor to implement the obstacle determining method for driving assistance as described in the first aspect and various possible implementations of the first aspect.
According to the obstacle determination method for driving assistance provided by the application, image information captured by a monocular camera is acquired, the image information including lane information and at least one piece of obstacle information; whether a target obstacle exists in the vehicle driving area is determined according to the lane information and the at least one piece of obstacle information; and the target obstacle is output when it exists in the vehicle driving area. This provides an assistance function for automated driving of the vehicle: the target obstacle screening function based on the monocular camera can effectively improve the accuracy of target obstacle output in complex scenes and reduce collision accidents caused by inaccurate target obstacle output during automated driving.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of a scenario of an obstacle determining method for driving assistance provided in the present application;
fig. 2 is a flowchart of a method for determining an obstacle for driving assistance provided in the present application;
fig. 3 is a second schematic flow chart of the obstacle determining method for driving assistance provided in the present application;
fig. 4 is a schematic structural view of an obstacle determining device for driving assistance provided in the present application;
fig. 5 is a schematic structural view of an obstacle determining apparatus for driving assistance provided in the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, article, or apparatus.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
With continuing social, economic, and technological progress, people's travel demands keep growing and the number of vehicles on the road keeps increasing. Against this background, L2-level intelligent driving assistance for commercial vehicles is of great significance for reducing driver workload, improving traffic safety, and increasing traffic efficiency, and has become an important direction of development in the industry in recent years.
L2-level assisted driving means that the vehicle's automated driving system provides partial assistance functions while the driver must monitor the system and be ready to intervene at any time. Among existing L2 driving assistance functions, those related to longitudinal control place high requirements on the accuracy with which the perception fusion module outputs the closest in-path vehicle (CIPV) target in complex multi-target scenes.
However, most current perception fusion schemes rely mainly on cameras, millimeter-wave radars, and lidars as sensors and fuse targets by combining the characteristics of these sensors. Screening the key camera target in complex scenes is a difficulty for perception modules based primarily on vision sensors: some schemes select targets based only on the motion state and position of the camera targets, ignoring analysis of the ego vehicle's own motion state and estimation of its driving trajectory.
In view of the above, the present application proposes an obstacle determination method for driving assistance. The method applies in particular to the L2-level intelligent driving assistance function of commercial vehicles and focuses on accurately screening the key camera target in complex scenes during driving.
It can be appreciated that in urban traffic, driving scenes are complex and changeable, and lanes often contain a large number of obstacles such as pedestrians, bicycles, and traffic signs. Fig. 1 is a schematic view of a scenario of the obstacle determination method for driving assistance provided in the present application. As shown in fig. 1, obstacles such as a bicycle 2, a pedestrian 3, and another automobile 4 are in front of the vehicle. All of these obstacles affect the driving assistance function of the vehicle 1.
A monocular camera (not shown in fig. 1) is arranged at the lower edge of the front windshield of the vehicle 1 in fig. 1 and may be used to capture images of the road section on which the vehicle 1 is driving, so that the vehicle controller of the vehicle 1 outputs the corresponding obstacle according to the images captured by the monocular camera, thereby supporting the driving assistance function. To illustrate with the scenario shown in fig. 1: the image captured by the monocular camera contains three obstacles, a bicycle 2, a pedestrian 3, and another vehicle 4; at this moment the bicycle 2 is parked at the roadside in a static state, the other vehicle 4 is driving forward at a constant speed, and the pedestrian 3 is slowly crossing the road.
In the method, the most critical camera target is screened out via the monocular camera carried by the vehicle and output, so as to determine the obstacle with the largest influence on assisted driving. The method improves the output stability and accuracy of the target obstacle in multi-target scenes and improves the overall performance of the perception fusion algorithm to a certain extent.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart illustrating a method for determining an obstacle for driving assistance according to an embodiment of the present application. The execution body of the present embodiment may be, for example, a vehicle controller. As shown in fig. 2, the method includes:
s101: acquiring image information shot by a monocular camera, wherein the image information comprises: lane information and at least one obstacle information.
The monocular camera is arranged at the lower edge of the front windshield of the vehicle, so that the monocular camera can shoot images in front of the vehicle in the running process; namely: the image information includes lane information in front of the vehicle and at least one obstacle information in front of the vehicle during running of the vehicle. The lane information may include information such as the position, shape and color of a lane line, traffic light information of a lane in which a vehicle is located, construction information around the lane, and the like. The obstacle information may indicate the position, size, and movement condition of an object, pedestrian, or other vehicle, etc., in the image.
It can be understood that, in the driving process of the vehicle, the lane information is obtained to enable the vehicle to know whether the current road condition can be turned and can pass, the lane information is used for navigation and lane keeping of the vehicle, the building information is obtained to enable the vehicle to know whether the current vehicle can accelerate, and the vehicle is provided with a school around the lane, so that the current vehicle must slow down and travel slowly, pedestrians are obtained, and the motion states of the vehicle and other objects can avoid collision and plan the driving path of the vehicle.
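To make the data flow concrete, the following is a minimal sketch of how the image information described above might be represented; the class and field names are illustrative assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LaneInfo:
    """Lane description extracted from one monocular frame (assumed model)."""
    # Cubic lane-line model y = c0 + c1*x + c2*x^2 + c3*x^3 in vehicle
    # coordinates; None when no lane line is detected in the frame.
    left_line: Optional[List[float]] = None
    right_line: Optional[List[float]] = None

@dataclass
class ObstacleInfo:
    """One detected obstacle (pedestrian, vehicle, bicycle, ...)."""
    obstacle_id: int
    position_m: Tuple[float, float]   # (x, y) in vehicle coordinates, meters
    heading_rad: float                # movement direction
    speed_mps: float                  # movement speed
    size_m: Tuple[float, float]       # (length, width)
    category: str                     # e.g. "pedestrian", "car", "bicycle"

@dataclass
class ImageInfo:
    """Image information of step S101: lane info plus detected obstacles."""
    lane: LaneInfo
    obstacles: List[ObstacleInfo] = field(default_factory=list)
```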
S102: and determining whether a target obstacle exists in a vehicle driving area according to the lane information and the at least one obstacle information.
It will be appreciated that lane information provides road limits and directions for the vehicle to travel, while obstacle information indicates the position and movement status of objects or obstacles in the image. By analyzing the lane information, the area range in which the vehicle can travel, i.e., the lane width and the distance between the lane lines, can be determined. In this way, the obstacle in the image can be compared with the lane region to determine whether the target obstacle is present. Meanwhile, the position and the dangerous degree of the target obstacle can be further determined according to the attribute information such as the position and the size of the obstacle. For example, if an obstacle is located directly in front of the vehicle and is large in size, an emergency braking or avoidance strategy may need to be taken.
S103: and outputting the target obstacle when the target obstacle exists in the vehicle running area.
It can be understood that, according to the presence of the target obstacle in the vehicle driving area determined by the lane information, the monocular camera screening function performs fusion matching on the screened obstacle (target obstacle) and the forward millimeter wave radar, determines the information of the target obstacle and the information of the current environment, and transmits the acquired information to the auxiliary driving system in real time, and the auxiliary driving system makes a further decision according to the received information, so that the vehicle avoids the target obstacle according to the decision of the auxiliary driving system, and the risk of collision between the vehicle and the obstacle is reduced.
According to the obstacle determination method for driving assistance provided by the embodiment, the image information shot by the monocular camera is acquired, and the image information comprises: the vehicle obstacle detection system comprises lane information and at least one obstacle information, wherein whether a target obstacle exists in a vehicle driving area is determined according to the lane information and the at least one obstacle information, and when the target obstacle exists in the vehicle driving area, the target obstacle is output. In intelligent auxiliary driving, the target obstacle screening method of the camera can effectively improve the output stability and accuracy of target obstacles in a multi-target scene, improve the overall performance of a perception fusion algorithm to a certain extent, and avoid accidents caused by inaccurate output of the target obstacles in intelligent driving.
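As a reading aid, the following sketch strings steps S101-S103 together. It is an illustrative skeleton, not the patent's implementation: it assumes the data structures of the earlier sketch, and the three helper callables correspond to the step-level sketches given later in this description.

```python
def determine_and_output_target(image_info, vehicle_state,
                                area_from_lane_lines,
                                area_from_driving_info,
                                select_target_obstacle):
    """Skeleton of S101-S103 (illustrative; helpers are passed in)."""
    # S102: estimate the ego driving area, preferring detected lane lines
    # (S203-S205) and falling back to the vehicle's own driving info
    # (S206-S207) when no lane line is present.
    if image_info.lane.left_line or image_info.lane.right_line:
        driving_area = area_from_lane_lines(image_info.lane)
    else:
        driving_area = area_from_driving_info(vehicle_state)
    # S208-S213: screen all detected obstacles against the estimated area.
    target = select_target_obstacle(image_info.obstacles, driving_area)
    # S103: the target obstacle is output only when one exists.
    return target
```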
Fig. 3 is a second schematic flow chart of a method for determining an obstacle for driving assistance according to an embodiment of the present application. The present embodiment is a detailed description of an obstacle determination method for assisting driving based on the embodiment of fig. 2. As shown in fig. 3, the method includes:
s201: acquiring image information shot by a monocular camera, wherein the image information comprises: lane information and at least one obstacle information
Step S201 is similar to step S101, and will not be described again.
It will be appreciated that a monocular camera can only capture a usable image when it is in an operational state. Therefore, before acquiring the image information captured by the monocular camera, the running state of the vehicle needs to be acquired first and it is judged whether the running state is normal; when the running state of the vehicle is normal, it is then judged whether the driving assistance function of the vehicle is activated.
The running state of the vehicle includes a normal state and an abnormal state. The normal state means that the vehicle is currently driving forward; the abnormal state may include, for example, reversing, making a U-turn, or the engine being off.
The monocular camera is part of the vehicle's driving assistance function; that is, the purpose of capturing images with the monocular camera is to better realize assisted driving. Therefore, the monocular camera captures images only when the driving assistance function of the vehicle is activated (running). In other words, when the vehicle is in motion with the engine running and is not in reverse gear (i.e., it is driving forward), and the driving assistance function is activated, the monocular camera captures images, and the image information captured by it can then be acquired.
For example, the driving assistance function of the vehicle may be turned on and off by the user's operation of the vehicle. Specifically, it may first be judged whether a first operation of the vehicle by the user is detected;
if the first operation of the user on the vehicle is detected, the driving assistance function of the vehicle is controlled to be turned off.
It will be appreciated that the first operation may be, for example, the user clicking the control that turns off the driving assistance function, the user shifting into reverse, or the user operating the turn signal or the steering wheel. The present application is not limited in this regard.
While the target obstacle screening function for intelligent assisted driving of the commercial vehicle runs normally, i.e., while target obstacles are being screened, it is continuously detected whether the user operates the vehicle. For example, when a large obstacle is detected in the forward driving area, the target obstacle screening function inputs the target obstacle to the vehicle driving system and the driving system prepares to brake; but if, before the target obstacle is input, the user has already seen the obstacle and performs a steering or reversing operation, the driving assistance function is turned off once that operation is detected, the steering or reversing operation being the first operation performed by the user. If the user takes no action, the driving assistance function remains activated, the monocular camera captures image information of the road ahead, the target obstacle is screened out from the image information, and reasonable measures are taken.
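The preconditions above can be summarized in a small gating sketch. This is an illustrative assumption about how the checks could be combined, not the patent's logic, and the parameter names are invented for the example.

```python
def may_acquire_camera_frame(engine_running: bool, gear: str,
                             assist_active: bool,
                             first_operation_detected: bool):
    """Gate for step S201 (illustrative): decide whether to acquire a frame."""
    if first_operation_detected:
        # The user's 'first operation' (off control, reverse gear, steering,
        # turn signal) turns the driving assistance function off.
        return False, "assist turned off: user operation detected"
    if not engine_running or gear == "reverse":
        return False, "abnormal running state"
    if not assist_active:
        return False, "driving assistance function not activated"
    return True, "acquire monocular camera frame"
```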
S202: and judging whether lane line information exists in the lane information.
The lane information refers to information about a lane position, a road sign, an instruction, and the like in a road on which a driving vehicle is traveling. By way of example, the number of lanes may be: single lane, two lane or multilane. Lane boundary lines may also be: solid, dashed or spaced lines. Lane markings may also be: including road markings or markings drawn on the lane, indicating the use or restriction of the driver's lane. Lane indications may also be: an indication is provided as to the direction of travel or steering behaviour of the lane, for example an arrow mark indicating a straight, left or right turn. Lane line information refers to a marking or boundary line between lanes that may provide a driver with important indications of the location and restriction of the lanes.
It will be appreciated that the intelligent driving assistance system may acquire road images using monocular cameras and identify and extract lane line information through image processing techniques. The driving assistance system can use the lane information to help the vehicle keep in a proper lane or change the driving track of the vehicle according to the known lane information so as to avoid collision risk.
S203: and lane line information exists in the lane information, and the first position of the lane line is determined according to the lane line information.
S204: and carrying out translation processing on the lane line according to the first position of the lane line and the second position of the vehicle so as to enable the lane line to coincide with the vehicle to obtain a new lane line, wherein the new lane line is the lane line at the second position.
The first position of the lane line refers to the position of the lane line extracted from the picture information shot by the monocular camera, the position of the lane line is fixed and must be observed, and the second position of the vehicle refers to the position of the central line of the vehicle.
It can be understood that the image shot by the monocular camera contains lane line information, the lane line information is clear and visible, and the vehicle can select the most suitable route according to the lane line information, change the existing driving route and estimate the driving area of the vehicle according to the new driving route.
It can be understood that the lane line information is extracted according to the picture information, the lane line information is input into the intelligent driving system, the system carries out translation processing on the lane line according to the first position of the lane line, the driving route of the vehicle is changed, the vehicle is controlled to translate to the first position of the lane line, the first position of the lane line is enabled to coincide with the central line of the vehicle, the central line of the vehicle is the second position of the vehicle, at the moment, the second position of the vehicle and the first position of the lane line are the same position, the intelligent driving system can estimate the driving area of the vehicle according to the position of the lane line conveniently, and the target barrier is screened.
S205: and estimating the running area of the vehicle according to the new lane line.
It can be understood that, because the vehicle changes its own driving route according to the lane line information, the position of the lane line is the latest driving route of the vehicle, the driving track of the vehicle is estimated according to the lane line information, the driving area of the vehicle is calculated, whether the object exists in the driving area or not is detected, and whether the collision risk exists or not.
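The following sketch shows one way steps S203-S205 could be realized. It assumes the common cubic lane-line model y = c0 + c1*x + c2*x^2 + c3*x^3 in vehicle coordinates, where c0 is the lateral offset at the vehicle; this representation and the corridor half-width are assumptions for illustration, not details stated in the patent.

```python
def translate_lane_line(lane_coeffs, vehicle_center_offset=0.0):
    """S203-S204 sketch: shift the detected lane line laterally so that it
    passes through the vehicle's center line (the 'second position')."""
    c0, c1, c2, c3 = lane_coeffs
    # Keep the heading/curvature terms; move the lateral offset c0 onto
    # the vehicle center line.
    return [vehicle_center_offset, c1, c2, c3]

def area_from_lane_lines(lane_coeffs, half_width_m=1.5,
                         x_max_m=80.0, step_m=1.0):
    """S205 sketch: driving area as a corridor of +/- half_width_m around
    the translated lane line, sampled ahead of the vehicle."""
    c0, c1, c2, c3 = translate_lane_line(lane_coeffs)
    corridor = []
    x = 0.0
    while x <= x_max_m:
        y = c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3
        corridor.append((x, y - half_width_m, y + half_width_m))
        x += step_m
    return corridor  # list of (x, y_min, y_max) lateral slices
```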
S206: and determining the running radius of the vehicle according to the running speed and the yaw rate without lane line information in the lane information.
The yaw rate refers to the deflection of the vehicle about a vertical axis, the magnitude of which represents the stability of the vehicle. When the steering angle of the automobile is larger and the tire works in a nonlinear region, the steering intention cannot be realized by the steering system alone, and at the moment, the differential braking control is triggered to work, and the direct yaw moment control is realized by utilizing the differential braking, so that the driving intention of a driver is ensured, and the running stability control of the automobile is realized.
It can be understood that the lane line information does not exist in the picture information shot by the monocular camera, at this time, the vehicle intelligent driving system calculates the maximum steering angle of the vehicle running according to the yaw rate of the vehicle, calculates the running radius of the vehicle by combining the running speed, and can learn the running area of the vehicle according to the running speed, thereby detecting whether an obstacle exists in the running area.
S207: and estimating the running area of the vehicle according to the running radius and the running direction.
It can be understood that the intelligent driving system determines the running radius of the vehicle according to the running speed and the yaw rate, predicts the running area of the vehicle according to the running radius, the running maximum steering angle and the running direction, provides the screening area as large as possible for screening the target obstacle, and prevents collision accidents.
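A minimal sketch of S206-S207 follows. It uses the kinematic relation R = v/ω and sweeps the predicted center path over a short horizon; the corridor width, horizon, and time step are illustrative assumptions rather than values from the patent.

```python
import math

def area_from_driving_info(speed_mps, yaw_rate_rps, half_width_m=1.5,
                           horizon_s=4.0, step_s=0.1):
    """S206-S207 sketch: estimate the driving area without lane lines."""
    # S206: the turning radius implied by speed and yaw rate is
    # R = speed_mps / yaw_rate_rps; the constant-yaw-rate sweep below
    # traces an arc of exactly that radius (a straight line when the
    # yaw rate is zero).
    # S207: sweep the predicted center path under a constant-speed,
    # constant-yaw-rate motion model and widen it into a corridor.
    corridor, x, y, heading, t = [], 0.0, 0.0, 0.0, 0.0
    while t <= horizon_s:
        x += speed_mps * math.cos(heading) * step_s
        y += speed_mps * math.sin(heading) * step_s
        heading += yaw_rate_rps * step_s
        corridor.append((x, y - half_width_m, y + half_width_m))
        t += step_s
    return corridor  # same (x, y_min, y_max) format as the lane-line sketch
```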
S208: and estimating the target position of each obstacle in a preset period according to the current position, the movement direction and the movement speed of the at least one obstacle.
Wherein, the obstacle refers to pedestrians, animals, other vehicles and other objects which influence the running of the vehicles on the road.
It can be understood that the current position, the movement direction and the movement speed of the obstacle are extracted according to the image information shot by the monocular camera, and the driving area of the obstacle and the position of the obstacle after a certain time are estimated. For example, if a pedestrian is passing through a road in front of a vehicle driving route, whether the pedestrian can pass smoothly before the vehicle reaches a zebra crossing is expected according to the movement speed of the pedestrian, and if the pedestrian can pass, a non-target obstacle of the pedestrian is judged.
S209: and judging whether the target position is in the driving area or not.
S210: and if the target position is in the running area, determining that the obstacle corresponding to the target position is a candidate obstacle.
It can be understood that the target positions of all the obstacles in the image after the preset time are predicted according to the image information shot by the monocular camera, and whether the target positions are in the estimated vehicle running area or not is judged by combining the estimated vehicle running area. If the target positions of all the obstacles are not in the estimated running area of the vehicle, no target obstacle is output, and the vehicle runs normally. If the target position of the obstacle is in the estimated running area of the vehicle, all the obstacles are marked as candidate obstacles, and then the next screening is carried out to ensure that one obstacle which needs emergency treatment is screened out.
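A sketch of S208-S210 under a constant-velocity assumption follows, reusing the field names of the earlier data sketch and the (x, y_min, y_max) corridor format of the area sketches; all of these are illustrative assumptions.

```python
import math

def predict_target_position(obstacle, horizon_s):
    """S208 sketch: constant-velocity prediction over the preset period."""
    x, y = obstacle.position_m
    x += obstacle.speed_mps * math.cos(obstacle.heading_rad) * horizon_s
    y += obstacle.speed_mps * math.sin(obstacle.heading_rad) * horizon_s
    return (x, y)

def in_driving_area(point, corridor, slice_tol_m=0.5):
    """S209 sketch: point-in-corridor test against the sampled area."""
    px, py = point
    for x, y_min, y_max in corridor:
        if abs(px - x) <= slice_tol_m and y_min <= py <= y_max:
            return True
    return False

def candidate_obstacles(obstacles, corridor, horizon_s=3.0):
    """S210 sketch: obstacles whose predicted position falls in the area."""
    return [o for o in obstacles
            if in_driving_area(predict_target_position(o, horizon_s), corridor)]
```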
S211: and if the number of the candidate obstacles is one, taking the candidate obstacle as the target obstacle.
It can be understood that the image information shot by the monocular camera only contains one obstacle, so that the movement track of the obstacle is calculated, the target position of the obstacle after the preset time is judged, and the intelligent driving system adopts response measures to avoid the obstacle, so that collision can be prevented, and the aim of safe driving is fulfilled.
S212: and if the number of the candidate obstacles is multiple, determining an influence factor of each candidate obstacle according to the target positions corresponding to the multiple candidate obstacles, wherein the influence factor is used for indicating the influence condition of the candidate obstacle on the running of the vehicle.
The influence factors refer to all factors which influence the driving of the vehicle, such as the size, the category, the movement speed, the distance between the obstacle and the vehicle, and the like.
It can be understood that the image information shot by the monocular camera contains a plurality of obstacles, the vehicle driving system respectively judges the target positions of the obstacles after the preset time, determines the influence factor of each candidate obstacle, and for example, the obstacles such as a motorcycle, a pedestrian, a puppy and a huge stone exist in front of the vehicle, the speeds of the motorcycle, the pedestrian and the puppy and the size of the stone, the distance between each obstacle and the vehicle, and traffic lights in front of a lane are all influence factors. The influence degree of the influence factors at different moments may be different, the actual situation is required to be combined for judgment, the judgment process is required to be combined with a plurality of influence factors for judgment, the most critical targets are screened out in an omnibearing manner, and an exemplary bicycle ready to pass through a road and a bicycle on which the user rides on the head are all influence factors, but the collision risk of the bicycle on the head and the bicycle on the head is larger, so that the influence factors are larger.
S213: and taking the candidate obstacle with the largest influence factor among the plurality of candidate obstacles as the target obstacle.
It will be appreciated that a number of candidate obstacles may be identified as a most urgent obstacle-first treatment, and that, for example, there may be a motorcycle with an obstacle stopped in front of the vehicle, the motorcycle being at the edge of the vehicle's travel area, and a large stone in the center of the vehicle's travel area, the target obstacle being a stone, and the stone being input to the driving system to cause the driving system to take action on the vehicle to avoid the stone.
By way of example, a car traveling fast and a pedestrian walking are present in the self-car traveling area, the car traveling at a speed significantly greater than the speed of the pedestrian, the car is more likely to collide with the self-car at the next moment, and the car is the target obstacle at that time.
For example, there is a first pedestrian passing through the road in the middle of the road and a second pedestrian passing through the road in the traveling area of the own vehicle, after 5 seconds, that is, before the own vehicle reaches the zebra crossing, the first pedestrian reaches the edge of the road, and the second pedestrian is in the middle of the road, so that the influence factor of the second is larger, the collision is easier to happen, and the second output is the target obstacle.
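A sketch of S211-S213 follows. The scoring below is one plausible combination of the factors the text names (size, distance, speeds, lateral offset); the particular formula and weights are illustrative assumptions, not values given by the patent. The predictions argument is assumed to map each obstacle_id to the position predicted in S208.

```python
def influence_factor(obstacle, predicted_pos, ego_speed_mps):
    """S212 sketch: score a candidate's influence on the ego vehicle."""
    px, py = predicted_pos
    distance_m = max((px * px + py * py) ** 0.5, 0.1)
    size_m2 = obstacle.size_m[0] * obstacle.size_m[1]
    # Candidates nearer the path center weigh more than ones at the edge.
    centrality = 1.0 / (1.0 + abs(py))
    # Closer, larger, more central, and faster candidates score higher.
    return size_m2 * centrality * (ego_speed_mps + obstacle.speed_mps) / distance_m

def select_target_obstacle(candidates, predictions, ego_speed_mps):
    """S211-S213 sketch: one candidate -> output it directly; several ->
    output the candidate with the largest influence factor."""
    if not candidates:
        return None                      # no target obstacle is output
    if len(candidates) == 1:
        return candidates[0]             # S211
    return max(candidates,               # S212-S213
               key=lambda o: influence_factor(
                   o, predictions[o.obstacle_id], ego_speed_mps))
```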
According to the obstacle determination method for driving assistance provided by this embodiment, image information captured by a monocular camera is acquired; if lane line information exists in the image information, the vehicle driving area is estimated using the lane line information and the center line of the vehicle; if no lane line information exists, the driving area is estimated using the vehicle's running speed and yaw rate; and the target position of each obstacle after the preset period is then estimated from the obstacle information in the image. By comparing the driving area with the target positions, the most critical target is screened out as input to the intelligent driving system. In intelligent assisted driving, this monocular-camera target obstacle screening method can effectively improve the accuracy of target obstacle output in complex scenes, avoid collision accidents caused by inaccurate target obstacle output during intelligent driving, and improve the reliability and safety of intelligent driving.
Fig. 4 is a schematic structural view of an obstacle determining device for driving assistance provided in the present application. As shown in fig. 4, the obstacle determining device 300 for driving assistance provided in the present application includes:
the obtaining module 301 is configured to obtain image information captured by the monocular camera, where the image information includes: lane information and at least one piece of obstacle information;
a determining module 302, configured to determine whether a target obstacle exists in the vehicle driving area according to the lane information and the at least one piece of obstacle information;
and an output module 303, configured to output the target obstacle when the target obstacle exists in the vehicle driving area.
Optionally, the apparatus further includes: a judging module 304 and an estimating module 305;
a judging module 304, configured to judge whether lane line information exists in the lane information;
the estimating module 305 is configured to estimate a driving area of the vehicle according to lane line information if the lane line information exists in the lane information;
the determining module 302 is configured to determine whether a target obstacle exists in the driving area according to the at least one piece of obstacle information;
the acquiring module 301 is further configured to acquire driving information of the vehicle if lane line information does not exist in the lane information;
the estimating module 305 is further configured to estimate the driving area of the vehicle according to the driving information.
Optionally, the determining module 302 is configured to determine, according to the lane line information, a first position of the lane line;
the determining module 302 is further configured to perform a translation process on the lane line according to the first position of the lane line and the second position of the vehicle, so that the lane line coincides with the vehicle, and a new lane line is obtained, where the new lane line is the lane line at the second position;
The estimating module 305 is further configured to estimate a driving area of the vehicle according to the new lane line.
The determining module 302 is configured to determine a running radius of the vehicle according to the running speed and the yaw rate;
the estimating module 305 is further configured to estimate a driving area of the vehicle according to the driving radius and the driving direction.
Optionally, the estimating module 305 may be configured to estimate, according to the current position, the movement direction, and the movement speed of the at least one obstacle, a target position where each obstacle is located within a preset period of time;
the judging module 304 is further configured to judge whether the target position is in the driving area;
the determining module 302 is configured to determine that an obstacle corresponding to the target position is a candidate obstacle if the target position is in the driving area;
the determining module 302 is configured to take the candidate obstacle as the target obstacle if the number of candidate obstacles is one;
the determining module 302 is configured to determine, if the number of the candidate obstacles is multiple, an impact factor of each candidate obstacle according to target positions corresponding to the multiple candidate obstacles, where the impact factor is used to indicate an impact condition of the candidate obstacle on the vehicle running;
The determining module 302 is configured to take, as the target obstacle, a candidate obstacle with the largest influence factor among the plurality of candidate obstacles.
Optionally, the acquiring module 301 is further configured to acquire an operating state of the vehicle;
the judging module 304 is further configured to judge whether the running state is a normal state;
the judging module 304 is further configured to judge whether the driving assistance function of the vehicle is activated when the running state of the vehicle is normal;
the acquiring module 301 is configured to acquire image information captured by the monocular camera when the driving assistance function is activated.
Optionally, the apparatus further includes: a control module 306;
the judging module 304 is further configured to judge whether a first operation of the vehicle by a user is detected;
and the control module 306 is used for controlling the driving assistance function of the vehicle to be turned off if the first operation of the vehicle by the user is detected.
Fig. 5 is a schematic structural view of an obstacle determining apparatus for driving assistance provided in the present application. As shown in fig. 5, the present application provides an obstacle determining apparatus for assisting driving, the obstacle determining apparatus 400 for assisting driving including: a receiver 401, a transmitter 402, a processor 403 and a memory 404.
A receiver 401 for receiving instructions and data;
a transmitter 402 for transmitting instructions and data;
memory 404 for storing computer-executable instructions;
a processor 403 for executing the computer-executable instructions stored in the memory 404 to implement the steps of the obstacle determination method for driving assistance in the above embodiments. Reference may be made in particular to the description of the method embodiments above.
Alternatively, the memory 404 may be separate or integrated with the processor 403.
When the memory 404 is provided separately, the electronic device further comprises a bus for connecting the memory 404 and the processor 403.
The present application also provides a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement an obstacle determination method for driving assistance as performed by the obstacle determination device for driving assistance described above.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method for obstacle determination for driving assistance, the method comprising:
acquiring image information captured by a monocular camera, wherein the image information comprises: lane information and at least one piece of obstacle information;
determining whether a target obstacle exists in a vehicle driving area according to the lane information and the at least one piece of obstacle information;
and outputting the target obstacle when the target obstacle exists in the vehicle driving area.
2. The method of claim 1, wherein the determining whether a target obstacle exists in a vehicle driving area based on the lane information and the at least one piece of obstacle information comprises:
judging whether lane line information exists in the lane information;
if yes, estimating the running area of the vehicle according to the lane line information;
determining whether a target obstacle exists in the driving area according to the at least one piece of obstacle information;
if not, acquiring driving information of the vehicle, and estimating a driving area of the vehicle according to the driving information;
and determining whether a target obstacle exists in the driving area according to the at least one piece of obstacle information.
3. The method of claim 2, wherein the estimating the driving area of the vehicle based on the lane line information comprises:
determining a first position of the lane line according to the lane line information;
carrying out translation processing on the lane line according to the first position of the lane line and the second position of the vehicle so as to enable the lane line to coincide with the vehicle, and obtaining a new lane line, wherein the new lane line is the lane line at the second position;
And estimating the running area of the vehicle according to the new lane line.
4. The method of claim 2, wherein the driving information comprises the current running speed, yaw rate, and running direction of the vehicle, and the estimating the driving area of the vehicle according to the driving information comprises:
determining a running radius of the vehicle according to the running speed and the yaw rate;
and estimating the running area of the vehicle according to the running radius and the running direction.
5. The method of any of claims 2-4, wherein the obstacle information comprises the current position, movement direction, and movement speed of the obstacle, and the determining whether the target obstacle exists in the driving area according to the at least one piece of obstacle information includes:
estimating the target position of each obstacle in a preset period according to the current position, the movement direction and the movement speed of the at least one obstacle;
judging whether the target position is in the driving area or not;
if the target position is in the driving area, determining that an obstacle corresponding to the target position is a candidate obstacle;
If the number of the candidate obstacles is one, the candidate obstacle is taken as the target obstacle;
if the number of the candidate obstacles is multiple, determining an influence factor of each candidate obstacle according to target positions corresponding to the multiple candidate obstacles, wherein the influence factor is used for indicating the influence condition of the candidate obstacle on the running of the vehicle;
and taking the candidate obstacle with the largest influence factor among the plurality of candidate obstacles as the target obstacle.
6. The method according to claim 1, wherein the acquiring image information captured by the monocular camera includes:
acquiring the running state of the vehicle and judging whether the running state is normal;
judging whether the driving assistance function of the vehicle is activated when the running state of the vehicle is normal;
and when the driving assistance function is activated, acquiring the image information captured by the monocular camera.
7. The method of claim 6, wherein the method further comprises:
judging whether a first operation of the vehicle by a user is detected;
and if the first operation of the user on the vehicle is detected, controlling the driving assistance function of the vehicle to be turned off.
8. An obstacle determining device for assisting driving, the device comprising:
the acquisition module is used for acquiring image information captured by the monocular camera, and the image information comprises: lane information and at least one piece of obstacle information;
a determining module, configured to determine whether a target obstacle exists in a vehicle driving area according to the lane information and the at least one piece of obstacle information;
and the output module is used for outputting the target obstacle when the target obstacle exists in the vehicle running area.
9. An obstacle determining device for assisting driving, characterized by comprising:
a memory;
a processor;
wherein the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the obstacle determining method for driving assistance as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein computer-executable instructions, which when executed by a processor, are for implementing the obstacle determination method for driving assistance as claimed in any one of claims 1 to 7.
CN202311380001.9A 2023-10-23 2023-10-23 Obstacle determination method, apparatus, and storage medium for assisting driving Pending CN117292359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311380001.9A CN117292359A (en) 2023-10-23 2023-10-23 Obstacle determination method, apparatus, and storage medium for assisting driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311380001.9A CN117292359A (en) 2023-10-23 2023-10-23 Obstacle determination method, apparatus, and storage medium for assisting driving

Publications (1)

Publication Number Publication Date
CN117292359A true CN117292359A (en) 2023-12-26

Family

ID=89251706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311380001.9A Pending CN117292359A (en) 2023-10-23 2023-10-23 Obstacle determination method, apparatus, and storage medium for assisting driving

Country Status (1)

Country Link
CN (1) CN117292359A (en)

Similar Documents

Publication Publication Date Title
US10899345B1 (en) Predicting trajectories of objects based on contextual information
JP7121497B2 (en) Virtual roadway generation device and method
US10739780B1 (en) Detecting street parked vehicles
CN113631448B (en) Vehicle control method and vehicle control device
CN108437986B (en) Vehicle driving assistance system and assistance method
CN107077792B (en) Travel control system
CN103874931B (en) For the method and apparatus of the position of the object in the environment for asking for vehicle
CN110036426B (en) Control device and control method
US20210086768A1 (en) Driving assistance control apparatus, driving assistance system, and driving assistance control method for vehicle
US9026356B2 (en) Vehicle navigation system and method
CN112141114B (en) Narrow passage auxiliary system and method
US11987239B2 (en) Driving assistance device
CN106062852A (en) System for avoiding collision with multiple moving bodies
CN108734081B (en) Vehicle Lane Direction Detection
JP2009245120A (en) Intersection visibility detection device
US11348463B2 (en) Travel control device, travel control method, and storage medium storing program
US11279352B2 (en) Vehicle control device
JP7495179B2 (en) Driving Support Devices
JP2011209919A (en) Point map creating device and program for crossing point map creating device
EP3211618A1 (en) Adjacent lane verification for an automated vehicle
CN112758013A (en) Display device and display method for vehicle
JP2021131775A (en) Driving assistance system for vehicle
US20220080982A1 (en) Method and system for creating a road model
CN113302105A (en) Driving assistance method and driving assistance device
CN114194186A (en) Vehicle travel control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination