CN116985817A - Method, system, vehicle and computer program product for predicting an afflux scene - Google Patents


Info

Publication number
CN116985817A
Authority
CN
China
Prior art keywords
vehicle
lane
scene
entry
predicting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310967714.9A
Other languages
Chinese (zh)
Inventor
王德瑾 (Wang Dejin)
谢鹏 (Xie Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mercedes Benz Group AG
Original Assignee
Mercedes Benz Group AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mercedes Benz Group AG filed Critical Mercedes Benz Group AG
Priority to CN202310967714.9A priority Critical patent/CN116985817A/en
Publication of CN116985817A publication Critical patent/CN116985817A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18Propelling the vehicle
    • B60W30/18009Propelling the vehicle related to particular drive situations
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18Propelling the vehicle
    • B60W30/18009Propelling the vehicle related to particular drive situations
    • B60W30/18163Lane change; Overtaking manoeuvres
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0097Predicting future conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0015Planning or execution of driving tasks specially adapted for safety

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for predicting an entry scene of a lane, comprising: determining whether an occlusion structure that blocks the vehicle's field of view is present to the side of the vehicle, in particular to the front side; acquiring image data of surrounding objects located around the occlusion structure; and, when the vehicle's view is blocked, predicting on the basis of the image data whether an entry scene of a lane exists, by recognizing the structure and/or the type of the surrounding objects. The invention also relates to a system for predicting an entry scene of a lane, a vehicle and a computer program product. According to the invention, even if the vehicle or its detection devices, such as radar and cameras, cannot recognize the entry scene directly because an occlusion structure limits the field of view, a lane merging into the main road can be inferred indirectly and early, so that corresponding countermeasures, such as a lane change, can be taken in advance, thereby achieving safe driving and improving traffic efficiency.

Description

Method, system, vehicle and computer program product for predicting an afflux scene
Technical Field
The invention relates to a method for predicting an entry scene of a lane. The invention also relates to a system for predicting an entry scene of a lane, a vehicle and a computer program product.
Background
To allow vehicles to merge, merge entrances are often provided on roads. However, the merging of other vehicles can reduce the traffic efficiency of the current vehicle and may even pose a safety hazard to it. During automated driving it is therefore necessary to detect such entries and adjust the driving behavior of the vehicle accordingly.
In the related art, by locating the current position of the vehicle and using stored map data, entry information along the travel path can be determined, for example whether an entry exists on the current lane or within a predetermined distance. In addition, an autonomous vehicle can detect the extension or course of road markings and infer a road structure containing an entry from the relative positions of several markings, for example their intersection.
In actual driving, however, map data is not updated in real time, so entry information must be detected in real time; such real-time detection is often limited, for example because the "field of view" of an image sensor or radar is blocked by surrounding objects, so that the entry information cannot be detected and the vehicle cannot adjust its driving behavior in time or in advance.
Disclosure of Invention
The object of the invention is to predict, in particular indirectly, possible entry scenes to the side of a vehicle while it drives on a traffic lane, on the basis of the detection of the surroundings or of surrounding objects, especially when the field of view, in particular the lateral field of view, is limited, so that suitable countermeasures such as a lane change can be taken in advance in order to reduce the adverse effect on the vehicle's traffic efficiency or even to avoid accidents.
Within the scope of the invention, an "entry scene" (also "entry scene of a vehicle" or "entry scene of a lane") can be understood, for example, as a driving scene in which other vehicles change from another lane into the lane of the current vehicle, and/or in which the number of lanes is reduced or the road narrows. Typical examples are a ramp or an auxiliary road merging into a main road; a further example is a main road merging into an auxiliary road. In these driving scenes the traffic flow slows down or the vehicle may need to decelerate, which affects its traffic efficiency. In addition, collision accidents may occur when other vehicles cut in abruptly. For reasons of efficiency and safety it is therefore desirable to anticipate such entry scenes accurately and in good time and to take corresponding countermeasures in advance.
To achieve this object, a method for predicting an entry scene of a lane, a system for predicting an entry scene of a lane, a vehicle and a computer program product are proposed.
According to a first aspect of the invention, a method for predicting an entry scene of a lane is presented, the method comprising:
determining whether an occlusion structure that blocks the vehicle's field of view is present to the side of the vehicle, in particular to the front side;
acquiring image data about surrounding objects located around the occlusion structure; and
when the vehicle's view is blocked, predicting on the basis of the image data whether an entry scene of a lane exists, by recognizing the structure and/or the category of the surrounding objects.
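The three claimed steps can be condensed into a simple decision function. This is a minimal illustrative sketch, not the patent's implementation; the function name and its boolean inputs are assumptions standing in for the occlusion check, the gap recognition and the sign recognition that the description elaborates.

```python
# Illustrative sketch of the claimed prediction logic; all names are
# assumptions, not identifiers from the patent.

def predict_entry_scene(occlusion_present: bool,
                        gap_detected: bool,
                        sign_detected: bool) -> bool:
    """Predict a lane-entry (merging) scene.

    Indirect cues (a gap in the occlusion structure, or a traffic sign
    recognized near it) are only consulted when the vehicle's lateral
    view is actually blocked by an occlusion structure.
    """
    if not occlusion_present:
        # With a free view, direct detection applies; no indirect prediction.
        return False
    return gap_detected or sign_detected
```

A blocked view combined with either cue yields a positive prediction; with an unobstructed view the indirect path is skipped entirely.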
In actual driving, the field of view of the vehicle or of its on-board detection devices is affected by surrounding vehicles and infrastructure. For example, vehicles in the vicinity, particularly large ones (buses, trucks, etc.), and public infrastructure on both sides of the road (green belts, median strips, sound barriers, etc.) can block the forward and especially the lateral-forward view to such an extent that the vehicle cannot directly detect road conditions important for its travel, i.e. an entry scene: other vehicles with paths that compete or will compete with its own, upcoming complex road sections (particularly entries), road markings characterizing the entry scene, and so on.
In this case, i.e. when the lateral view of the vehicle is blocked, surrounding objects around the occlusion structure can be detected by a purpose-installed or already present detection device. The surrounding object is in spatial proximity, in particular in close proximity, to the occlusion structure; in other words, the horizontal and/or vertical distance between them is smaller than a predetermined threshold. Such a close spatial coupling means that the surrounding object can characterize or imply the occlusion structure, and thus the presence of a corresponding entry scene. For example, where an entry exists, infrastructure such as green belts, sound barriers or median strips on either side of the road is interrupted or terminated, visually forming a "gap structure", so the entry can be predicted by recognizing this gap. Likewise, apart from road markings, an entry can be indicated by traffic signs, for example a junction sign, a lane-reduction sign, a stop-and-give-way sign or a slow-down-and-give-way sign, so the entry scene can be predicted by recognizing such signs.
According to the invention, in a vehicle for automated or assisted driving, even if detection devices such as radar and/or camera cannot recognize, because of an occlusion such as a sound barrier, whether there is traffic waiting to merge on the far side of the occlusion facing away from the vehicle, this can still be determined indirectly. The recognition possibilities for entry scenes are thereby widened, and the effective spatial range of entry-scene recognition is enlarged to a certain extent, so corresponding countermeasures can be taken in advance.
A preferred embodiment of the method provides that the occlusion structure is identified on the basis of size features, color features and/or dynamic features. Examples of occlusion structures are sound barriers and median strips. The barrier or green belt can be distinguished from a vehicle (in particular a large vehicle) by size, since vehicles are smaller in width and/or height than such structures. Additionally or alternatively, color features can be used, since a green belt differs in color from an ordinary median strip besides being lower. Additionally or alternatively, dynamic features can be used: they distinguish static occlusion structures from moving vehicles, and the observed speeds also allow the driving scene (urban road, highway, etc.) to be inferred to some extent.
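The size/color/dynamic discrimination described above might be sketched as follows. The thresholds, the feature fields and the labels are illustrative assumptions, not values from the patent.

```python
# Hypothetical coarse classifier for occlusion candidates; thresholds are
# assumed placeholders chosen only to make the example concrete.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    height_m: float          # estimated height
    width_m: float           # estimated width
    speed_mps: float         # ~0 for static infrastructure
    dominant_color: str      # e.g. "green", "gray"

def classify_occlusion(obj: DetectedObject) -> str:
    """Return a coarse label: 'vehicle', 'sound_barrier', 'green_belt' or 'other'."""
    if obj.speed_mps > 1.0:
        return "vehicle"          # dynamic objects are not infrastructure
    if obj.height_m >= 3.0:
        return "sound_barrier"    # tall, static structure
    if obj.height_m >= 0.5 and obj.dominant_color == "green":
        return "green_belt"       # lower than a barrier, green vegetation
    return "other"
```

The dynamic check comes first, mirroring the observation that motion alone rules out static infrastructure.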
Preferably, the relevant surrounding object is determined on the basis of the occlusion structure. In other words, from the identification or classification of the occlusion structure, combined with other information, the environment the vehicle is currently in (highway, urban road, etc.) is determined, and from that the kinds of signage to be expected in that environment. For example, when a sound barrier is recognized, a merging-warning sign should additionally be recognized or detected for the case of a highway entry: a sign with a triangular outline, displayed in yellow and containing a Y-shaped pattern. When a green belt is recognized, it should be checked whether a stop-and-give-way or slow-down-and-give-way sign is present, displayed in red and white and containing text, for the case of urban-road merging, e.g. a vehicle merging from a main road into an auxiliary road. In this way the surrounding objects to be processed can be narrowed down to the specific driving or merging scene, which increases accuracy while reducing the complexity of the data processing, since the image-matching algorithm to be used subsequently (for shape, color, etc.) can already be chosen and the range of data to be processed is reduced.
Preferably, the image data to be acquired and/or processed are determined on the basis of the position of the occlusion structure. In particular, if the view to the right is limited or an occlusion structure is identified on the right, the right-mounted camera is activated, or the portion of the captured image corresponding to the right-hand occlusion structure is processed; the same applies when the view to the left is limited or the structure is identified on the left. Furthermore, only those surrounding objects in the image that are spatially closely coupled to the occlusion structure, in particular immediately adjacent or connected to it, are processed, in order to filter out irrelevant objects. This reduces energy consumption and the demand for computing power.
According to a preferred embodiment of the method, an entry scene of the lane is determined to exist if a surrounding object forms a gap structure relative to the occlusion structure. When a continuous sound barrier or median strip breaks off or ends, this usually means an intersection or entry, which in turn usually means that vehicles on other lanes are about to merge. The interruption or termination of the barrier visually forms a discontinuity, referred to here as a "gap structure"; a sound barrier can form such a gap with other environmental objects (e.g. the road surface, or another sound barrier belonging to other traffic lanes). In one embodiment the gap structure is identified by a height difference: the surrounding object is determined to form a gap structure relative to the occlusion structure when the height difference between them is larger than a predetermined threshold. Alternatively or additionally, the gap structure can be identified from the image data with algorithms known in the art, e.g. by pattern matching in terms of shape and/or color, since an abrupt change in contour or color can characterize the interruption or termination of the occlusion structure. The presence of an intersection or entry relevant to the entry scene is thus determined indirectly, at a greater distance, and early, through the recognition of the "gap structure".
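The height-difference criterion reduces to a one-line predicate. A minimal sketch follows; the 2 m default threshold is an assumed placeholder, since the patent leaves the threshold unspecified.

```python
# Sketch of the height-difference test for a "gap structure"; the
# threshold value is an illustrative assumption.

GAP_HEIGHT_THRESHOLD_M = 2.0

def forms_gap_structure(barrier_height_m: float,
                        neighbour_height_m: float,
                        threshold_m: float = GAP_HEIGHT_THRESHOLD_M) -> bool:
    """A surrounding object forms a 'gap structure' relative to the
    occlusion structure when their height difference exceeds the threshold."""
    return abs(barrier_height_m - neighbour_height_m) > threshold_m
```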
According to a preferred embodiment of the method, an entry scene of the lane is determined to exist if the surrounding object is classified as a traffic sign. In one embodiment the traffic sign is identified by its color features. Thus, even if the specific content of the sign is not clearly recognized, the presence of a lane merging into the main road can be inferred indirectly.
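A color-only sign check of the kind suggested above could look like this rough sketch. The RGB bounds and the pixel-fraction criterion are assumptions, and a production system would use a trained detector rather than hand-tuned thresholds.

```python
# Illustrative color heuristic for "looks like a yellow warning sign";
# threshold values are assumed, not taken from the patent.

def is_yellowish(rgb: tuple[int, int, int]) -> bool:
    """Yellow: red and green channels high and similar, blue clearly lower."""
    r, g, b = rgb
    return r > 150 and g > 120 and b < 100 and abs(r - g) < 80

def sign_candidate(pixels: list[tuple[int, int, int]],
                   min_fraction: float = 0.3) -> bool:
    """Classify an image patch as a sign candidate if enough pixels are yellow."""
    if not pixels:
        return False
    yellow = sum(1 for p in pixels if is_yellowish(p))
    return yellow / len(pixels) >= min_fraction
```

Because only the color distribution is evaluated, the check succeeds even when the sign's text or pictogram is unreadable, matching the idea of inferring the entry without recognizing the sign's specific content.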
A preferred embodiment of the method provides that, additionally, the presence of an entry scene is predicted from the deceleration behavior of preceding vehicles. Where an entry scene exists, vehicles on the main road decelerate when vehicles merge from the auxiliary road or ramp, and the traffic flow slows down. Predicting the entry scene from such dynamic changes in the traffic on a lane therefore further improves accuracy.
According to a preferred embodiment of the method, the lane in which the vehicle is currently located is determined. If the vehicle is currently on the entry lane, it is steered off that lane; if it is currently on a non-entry lane, it either remains on the current non-entry lane or is steered toward another non-entry lane.
Within the scope of the invention, an "entry lane" (merging lane) refers to a lane into which a vehicle from another lane is about to drive. For a multi-lane road, a "non-entry lane" is any lane of the road other than the entry lane.
That is, based on the current lane and the determined entry lane, the vehicle can be maneuvered so that it is not located on the entry lane, eliminating the adverse effect of the entry scene on the host vehicle's progress. Furthermore, on a multi-lane road a vehicle already on a non-entry lane can not only remain there but can also select another, more advantageous non-entry lane according to its driving requirements.
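The lane-selection logic can be sketched as below, with lanes indexed 0..n-1 from left to right. The "nearest non-entry lane" tie-break is an assumption; the patent only requires that the vehicle leave the entry lane.

```python
# Hypothetical lane-selection sketch; lane indexing and the tie-break
# rule are assumptions for illustration.

def choose_target_lane(current: int, entry_lane: int, num_lanes: int) -> int:
    """Return the lane the vehicle should occupy once an entry scene is
    predicted: leave the entry lane, otherwise keep the current lane."""
    if current != entry_lane:
        return current                 # already on a non-entry lane
    candidates = [l for l in range(num_lanes) if l != entry_lane]
    # Move to the nearest non-entry lane (fewest lane changes).
    return min(candidates, key=lambda l: abs(l - current))
```

For the three-lane highway of the embodiment below, a vehicle on the rightmost lane (index 2) with a right-side entry would move to the center lane (index 1).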
According to a preferred embodiment of the method, when the vehicle is no longer on the entry lane, information about the detected entry scene is transmitted to surrounding vehicles, in particular to following vehicles, for example via a TCU (Telematics Control Unit). This enables information sharing between vehicles. On the one hand, vehicles not equipped with the system according to the invention can obtain information about the upcoming entry scene by communication technologies such as V2X (Vehicle-to-Everything) or a TCU. On the other hand, transmitting the information specifically to following vehicles allows them to react to the entry scene earlier.
According to a second aspect of the invention, a system for predicting an entry scene of a lane is presented, whose control device is configured to perform the method for predicting an entry scene of a lane according to the invention.
According to a third aspect of the invention, a vehicle is proposed that is equipped with a system for predicting an entry scene of a lane according to the invention.
According to a fourth aspect of the invention, a computer program product, for example a computer-readable program medium, is proposed, comprising or storing computer program instructions which, when executed by a processor, carry out the method for predicting an entry scene of a lane according to the invention.
Drawings
The invention is described in more detail below by reference to the accompanying drawings. The drawings show:
FIG. 1 shows a schematic flow chart of one embodiment of a system for predicting an entry scene of a lane according to the invention; and
FIG. 2 shows a schematic block diagram of one embodiment of a method for predicting an entry scene of a lane according to the invention.
Detailed Description
When a vehicle drives autonomously or with assistance on a road, the "field of view" of the driver and/or of detection devices such as radar and cameras is adversely affected by occlusions such as sound barriers, green belts or median strips, making it difficult to determine an upcoming entry lane (merging traffic flow). Vehicles merging into the entry lane slow the traffic flow, reducing the traffic efficiency of the vehicles already on it, and in extreme cases collisions and safety hazards can result.
The invention is therefore based on the following idea: when the detection devices are blocked by an occlusion structure (e.g. a sound barrier or median strip) and cannot determine whether traffic to be merged exists on its far side, the presence of traffic merging into the main road is inferred indirectly from the surrounding objects around the occlusion structure. The surrounding object forms, for example, a "gap structure" with the occlusion structure. Alternatively or additionally, the surrounding object is a sign characterizing merging traffic, which is preferably used for the determination without its specific content being clearly identified. In this way, the vehicle or the system for autonomous/assisted driving can, with safety taken into account, perform a lane change, in particular via the steering device, so that the vehicle leaves the entry lane and/or moves to a (more) safe non-entry lane, thereby improving traffic efficiency and driving safety.
A schematic flow chart diagram of one embodiment of a system 100 for predicting an entry scenario for a lane in accordance with the present invention is shown in fig. 1.
The system 100 comprises a detection means 110, a control means 120 and an execution means 130.
The detection device 110 detects the surroundings of the vehicle or objects in them. It may be installed specifically for this purpose or already be present, and comprises, for example, radar, lidar, cameras, video sensors and ultrasonic sensors. It can be mounted at the front of the vehicle, e.g. above the bumper or on the windshield, or on the left and/or right side of the vehicle, e.g. near a rear-view mirror. The types, numbers and mounting locations given here are merely exemplary, not limiting, and can be varied according to actual requirements.
The control device 120 receives the data transmitted by the detection device 110 and analyzes them. It is configured to determine from these data whether the vehicle's view, in particular its lateral and/or lateral-forward view, is blocked. It is further configured to identify the occlusion structure and the relevant surrounding objects around it on the basis of these data, for which computer vision techniques, in particular neural networks, preferably deep neural networks, can be used. Finally, it is configured to determine or predict, from the occlusion structure and its surrounding objects, whether an entry scene of the lane exists.
The execution device 130 is divided here, by way of example, into two modules: a vehicle execution module 131 and an inter-vehicle communication module 132. The vehicle execution module 131 may be an actuator realizing a specific vehicle function, such as a steering device, brake device or light control device, or another control unit connected to the control device 120 via an in-vehicle data bus (e.g. CAN bus, LIN bus, Ethernet), such as another ECU (Electronic Control Unit) or a vehicle control unit. It receives data or command signals from the control device 120 and accordingly actuates the vehicle and its hardware components, for example to perform a lane change. The inter-vehicle communication module 132 exchanges data between the vehicle and other vehicles; in principle it may be any module or communication interface that enables inter-vehicle communication, and is preferably configured as a TCU. Data on the predicted entry scene are transmitted via the inter-vehicle communication module 132 to other vehicles in the surroundings, so that they learn of the entry scene in time, in particular earlier, and can take corresponding countermeasures sooner.
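A hypothetical payload that an inter-vehicle communication module such as 132 might broadcast is sketched below. The field names and the JSON encoding are illustrative assumptions, not a standardized V2X message format.

```python
# Hypothetical entry-scene broadcast message; fields and encoding are
# assumptions for illustration only.

import json
from dataclasses import dataclass, asdict

@dataclass
class EntrySceneMessage:
    entry_lane: int            # index of the predicted entry lane
    distance_m: float          # estimated distance to the entry
    source_vehicle_id: str     # identifier of the predicting vehicle

def encode(msg: EntrySceneMessage) -> str:
    """Serialize the message for transmission."""
    return json.dumps(asdict(msg))

def decode(payload: str) -> EntrySceneMessage:
    """Reconstruct the message on the receiving vehicle."""
    return EntrySceneMessage(**json.loads(payload))
```

A following vehicle that decodes such a message could react to the entry scene before its own sensors can see it.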
The devices and modules included in system 100 are illustratively presented herein for ease of illustration only, however, it should be understood that the system is not limited thereto, but can include other devices and modules as appropriate.
Fig. 2 shows a schematic block diagram of an exemplary embodiment of the method 1 according to the invention for predicting an entry scene of a lane. In the following, method 1 is described with reference to a scenario in which a ramp merges into a main road.
The automated/assisted-driving vehicle is currently traveling on a highway with three lanes: a left lane, a center lane and a right lane. A sound barrier, for example 5 meters high, runs along the left side of the left lane and along the right side of the right lane. An entry exists ahead on the right side of the road; however, the vehicle is still far from it and is surrounded by many other vehicles, so it also cannot detect the road markings associated with the entry.
In step S101, the surroundings of the vehicle or the surrounding objects are detected by the detection device 110. For example, detection and ranging with a radar, capturing an image of the surrounding environment with a camera, and so forth.
In step S102, the control device 120 receives the detected radar data and/or image data and analyzes them in order to determine whether an occlusion structure blocking the vehicle's field of view is present to the side of the vehicle, in particular to the front side.
In one embodiment, the following determination is made for a reflective object on the basis of the radar return: whether the elevation angle or elevation-angle range of the object relative to the vehicle is greater than a predetermined angle threshold, and/or whether its distance from the vehicle, in particular the lateral distance, is smaller than a predetermined distance threshold. If the elevation angle exceeds the angle threshold and/or the distance falls below the distance threshold, the field of view can be determined to be occluded.
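The two radar criteria can be combined into a simple predicate. Both thresholds below are assumed placeholders, since the patent does not give concrete values.

```python
# Sketch of the radar-based occlusion test; threshold values are
# illustrative assumptions.

ELEVATION_THRESHOLD_DEG = 25.0
LATERAL_GAP_THRESHOLD_M = 3.0

def view_occluded(elevation_deg: float, lateral_gap_m: float) -> bool:
    """The field of view counts as occluded when the reflecting object
    subtends a large elevation angle and/or sits very close laterally."""
    return (elevation_deg > ELEVATION_THRESHOLD_DEG
            or lateral_gap_m < LATERAL_GAP_THRESHOLD_M)
```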
It is conceivable that size and/or dynamic information about the reflective object can be obtained from radar point cloud data, in particular lidar point cloud data, so that it can be initially determined whether the object blocking the vehicle's view is an expected occlusion structure, i.e. whether it is a sound barrier whose size (height and/or width) is greater than a predetermined threshold and/or which is a static object.
Alternatively or additionally, in one embodiment it is determined from the camera's image data whether an expected occlusion structure is present in the surroundings. If pattern matching on the image data recognizes a sound barrier on the right side of the vehicle, it can be determined that an occlusion structure blocking the vehicle's view exists on the right.
If the result here is no (not explicitly shown), the process returns to step S101 and detection continues. Otherwise it proceeds to step S103, in which the control device 120 activates the camera to capture images of the sound barrier and the objects around it.
In step S104, it is determined whether a surrounding object forms a notch structure relative to the shielding structure. If an entry is present, the sound barrier belonging to the current road terminates on the one hand, and on the other hand the ramp and its associated infrastructure (e.g. another sound barrier) enter the vehicle's field of view, thereby visually forming a "notch structure".
Such notch structures appear as visual discontinuities, in particular as abrupt changes in height. If the height difference of the other sound barrier relative to the sound barrier is greater than a predetermined threshold, it is determined that the other sound barrier forms a notch structure relative to the sound barrier and that an entry scene therefore exists; the process then proceeds to step S106, i.e. the yes branch. Otherwise, the process returns to step S101 or, optionally, to step S103. Alternatively or additionally, it is determined on the basis of an image matching algorithm whether a "notch structure" exists. It is conceivable here that the notch structure is formed by the sound barrier belonging to the main road and another sound barrier belonging to the ramp, rather than by the sound barrier belonging to the main road and the road surface or some other background. The surrounding objects associated with the sound barrier (i.e. the other sound barrier) are therefore taken into account when determining the notch structure in a manner specific to the driving scene, which reduces false positives.
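The height-difference criterion can be sketched as a scan over a per-column profile of the barrier's top height along the driving direction, reporting the first abrupt drop. The threshold value and the profile representation are assumptions for illustration:

```python
HEIGHT_DROP_THRESHOLD_M = 1.5  # hypothetical minimum drop that counts as a notch

def find_notch(height_profile, threshold=HEIGHT_DROP_THRESHOLD_M):
    """Return the index where the barrier-top height drops abruptly, else None.

    height_profile: barrier-top heights (in metres) sampled at successive
    positions along the road, e.g. extracted from image columns.
    """
    for i in range(1, len(height_profile)):
        if height_profile[i - 1] - height_profile[i] > threshold:
            return i
    return None

# Main-road barrier (~4 m) gives way to the ramp's lower barrier (~2 m).
profile = [4.0, 4.0, 4.1, 4.0, 2.0, 2.0, 2.1]
print(find_notch(profile))
```

Because the drop is measured between the two barriers rather than against the road surface, a barrier that simply ends against open background would not match this scene-specific pattern, which is the false-positive reduction the description refers to.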
In step S105, it is determined whether a sign, for example a yellow merging-traffic sign, is present around the sound barrier, in particular above it. If the result is yes, it can be determined that a lane or traffic flow enters the main road ahead, and the process then proceeds to step S106. The corresponding determination can be made as soon as the sign is recognized, without further identifying its specific content: for example, it suffices to identify a yellow or triangular object above the sound barrier. This reduces the algorithmic complexity of entry-scene recognition while increasing the recognition speed.
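The content-free sign check can be sketched as a crude colour heuristic over a region of interest above the barrier top. The RGB thresholds and the minimum-fraction parameter are assumptions; a production system would more likely threshold in HSV space on rectified images:

```python
def is_yellowish(r: int, g: int, b: int) -> bool:
    """Crude RGB test for a yellow sign surface (hypothetical thresholds)."""
    return r > 150 and g > 120 and b < 100

def sign_candidate_above_barrier(roi_pixels, min_fraction=0.1) -> bool:
    """True if enough of the region above the barrier top looks yellow.

    roi_pixels: iterable of (r, g, b) tuples sampled from that region.
    """
    pixels = list(roi_pixels)
    if not pixels:
        return False
    yellow = sum(1 for p in pixels if is_yellowish(*p))
    return yellow / len(pixels) >= min_fraction

# 30% of the sampled region is yellow -> treat as a sign candidate.
roi = [(220, 190, 40)] * 3 + [(90, 90, 90)] * 7
print(sign_candidate_above_barrier(roi))
```

Skipping sign-content classification is exactly the complexity reduction the description claims: only colour (or, analogously, a triangular contour) is tested.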
It should be noted here that the order of the steps shown is merely exemplary; the steps can also be performed simultaneously or in other orders, as long as the object of the invention is achieved. Further, the process proceeds to step S106 when the result of step S104 and/or the result of step S105 is yes; in other words, depending on the application, the process proceeds to step S106 when either of the two results is yes, or only when both results are yes.
In step S106, the vehicle, the system or the control device 120 determines in which lane the vehicle is currently located, for example based on positioning or on the lane lines.
In step S107, the control device 120 generates an instruction to operate the vehicle in consideration of the lane in which the vehicle is currently located and the driving request.
In one embodiment, if the vehicle is currently in the entry lane, i.e., the right lane, the vehicle is maneuvered away from the right lane, e.g., toward the center lane or even toward the left lane.
In one embodiment, if the vehicle is currently in a non-entry lane, such as the center lane, the vehicle need not perform a steering operation and remains in the center lane. Alternatively, it is conceivable that, in order to further reduce the influence of the entry lane (for example, a vehicle in the right lane changing to the middle lane), the control device 120 steers the vehicle toward the left lane, thereby further reducing the adverse effect on traffic efficiency.
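The lane-dependent reactions of steps S106/S107 can be summarized as a small decision function. The lane labels assume a three-lane road with the entry on the right, and the `avoid_adjacent` flag standing for the optional extra-left move is an illustrative name:

```python
def plan_maneuver(current_lane: str, avoid_adjacent: bool = False) -> str:
    """Map the vehicle's current lane to an action once an entry scene is predicted.

    current_lane: one of "right" (the entry lane), "middle", "left".
    avoid_adjacent: optionally also vacate the lane next to the entry lane,
    anticipating vehicles merging out of the right lane.
    """
    if current_lane == "right":      # entry lane: leave it
        return "change_to_middle"
    if current_lane == "middle":
        return "change_to_left" if avoid_adjacent else "keep_lane"
    return "keep_lane"               # already in the left lane

print(plan_maneuver("right"))
print(plan_maneuver("middle"))
print(plan_maneuver("middle", avoid_adjacent=True))
```

A real planner would of course gate each lane change on gap availability and the driving request mentioned in step S107; the sketch only fixes the target-lane logic.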
In step S108, information that the right lane is an entry lane, or that an entry scene exists, is transmitted to surrounding vehicles, in particular to vehicles behind. This is achieved in particular by means of a TCU. Thus, even surrounding vehicles that are not equipped with the system according to the invention can obtain information about the entry scene in advance and take corresponding countermeasures, which improves traffic safety and traffic efficiency as a whole. The transmitted information can additionally include the distance of the entry from the vehicle, determined for example from a plurality of successive images, so that the surrounding vehicles can incorporate this distance or time information into their driving plans.
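The broadcast payload might look like the following sketch. All field names and the JSON encoding are assumptions; the patent only requires that the entry lane (or the existence of the entry scene) and, optionally, the distance to the entry be communicated via the TCU:

```python
import json
import time

def build_entry_scene_message(entry_distance_m: float, lane_id: str = "right") -> dict:
    """Assemble a hypothetical V2X payload announcing a predicted entry scene.

    entry_distance_m: distance from the ego vehicle to the entry point,
    e.g. estimated from a sequence of successive camera images.
    """
    return {
        "type": "ENTRY_SCENE_AHEAD",
        "entry_lane": lane_id,
        "distance_m": round(entry_distance_m, 1),
        "timestamp": int(time.time()),
    }

msg = build_entry_scene_message(184.2)
payload = json.dumps(msg)  # what the TCU would broadcast to vehicles behind
print(msg["entry_lane"], msg["distance_m"])
```

A receiving vehicle can convert `distance_m` into a time-to-entry using its own speed and fold that into its driving plan, as the description suggests.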
In summary, during automated/assisted driving on roads where shielding structures such as sound barriers on both sides of the road restrict the vehicle's field of view, upcoming merging traffic can be judged indirectly by means of other surrounding objects, and corresponding countermeasures (lane changes) can be taken in advance, so that the vehicle moves away from the entry lane and toward other lanes that are safer and offer higher traffic efficiency.
Although specific embodiments of the invention have been described in detail herein, they are presented for purposes of illustration only and are not to be construed as limiting the scope of the invention. Various alternatives and modifications can be devised without departing from the spirit and scope of the invention.

Claims (10)

1. A method for predicting an entry scene of a lane, the method comprising:
determining whether a shielding structure blocking the vehicle's view is present to the side of the vehicle, in particular to the front side;
acquiring image data about surrounding objects located around the shielding structure; and
when the vehicle's view is blocked, predicting, based on the image data, whether an entry scene of a lane exists according to the identification of the structure and/or the category of the surrounding object.
2. The method of claim 1, wherein the shielding structure is identified based on a dimensional feature, a color feature and/or a dynamic feature,
wherein the shielding structure comprises a sound barrier and a separation barrier,
wherein a relevant surrounding object is determined based on the shielding structure, and/or
wherein the image data to be acquired and/or processed are determined based on the position of the shielding structure.
3. The method according to claim 1 or 2, wherein, if the surrounding object forms a notch structure relative to the shielding structure, it is determined that an entry scene of a lane exists,
wherein the surrounding object is determined to form a notch structure relative to the shielding structure when the height difference of the surrounding object relative to the shielding structure is greater than a predetermined threshold; and/or
wherein whether a notch structure exists is determined based on an image matching algorithm, wherein matching is performed in terms of shape and/or color.
4. The method according to any one of claims 1 to 3, wherein, if the surrounding object is classified as a traffic sign, it is determined that an entry scene of a lane exists, wherein the traffic sign is identified based on a color feature.
5. The method according to any one of claims 1 to 4, wherein the presence of an entry scene of a lane is additionally predicted based on the deceleration behavior of a preceding vehicle.
6. The method according to any one of claims 1 to 5, wherein the lane in which the vehicle is currently located is determined,
wherein, if the vehicle is currently in the entry lane, the vehicle is controlled to leave the current lane, or
wherein, if the vehicle is currently in a non-entry lane, the vehicle is maneuvered such that it moves to another non-entry lane or remains in the current non-entry lane.
7. The method according to any one of claims 1 to 6, wherein, when the vehicle is no longer in an entry lane, information about the existing entry scene of the lane is transmitted, for example by a TCU, to surrounding vehicles, in particular to vehicles behind.
8. A system for predicting an entry scene of a lane, the system comprising a control device configured to perform the method according to any one of claims 1 to 7.
9. A vehicle equipped with the system of claim 8.
10. A computer program product, for example a computer-readable program medium, comprising or storing computer program instructions which, when executed by a processor, carry out the method according to any one of claims 1 to 7.
CN202310967714.9A 2023-08-02 2023-08-02 Method, system, vehicle and computer program product for predicting an afflux scene Pending CN116985817A (en)

Publication: CN116985817A, published 2023-11-03.
