CN114743395A - Signal lamp detection method, device, equipment and medium - Google Patents

Signal lamp detection method, device, equipment and medium Download PDF

Info

Publication number
CN114743395A
Authority
CN
China
Prior art keywords
information
lane
target
vehicle
signal lamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210279875.4A
Other languages
Chinese (zh)
Other versions
CN114743395B (en)
Inventor
李丰军
周剑光
高列
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Innovation Co Ltd
Original Assignee
China Automotive Innovation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Innovation Co Ltd
Priority to CN202210279875.4A
Publication of CN114743395A
Application granted
Publication of CN114743395B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G 1/096708 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G 1/096725 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a signal lamp detection method, apparatus, device, and medium. Lane line information, signal lamp information, and vehicle pose information are acquired, and the lane information of the target lane in which the vehicle is located is obtained from the vehicle pose information and the lane line information, so that more accurate lane information can be obtained. Target signal lamp information, i.e., the information of the target signal lamp closest to the vehicle, is determined from the vehicle pose information and the signal lamp information, and the target sub-signal lamp information corresponding to the lane information is determined from the at least one piece of sub-signal lamp information corresponding to the target signal lamp information, so that the delay in perceiving signal lamp elements is reduced and the accuracy of the signal lamp information is improved.

Description

Signal lamp detection method, device, equipment and medium
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a signal lamp detection method, apparatus, device, and medium.
Background
The core technology stack of automatic driving comprises perception, decision, and execution. As automatic driving technology develops, the perception module becomes increasingly important as the 'eyes' of the automatic driving vehicle. Traffic elements such as lane lines and signal lamps are indispensable recognition targets of the perception module; retrieving and identifying them correctly and quickly provides the basis for subsequent planning and control. In the related art, real-time image data can be acquired by a front-view camera mounted on the automatic driving vehicle, and the state of a signal lamp is analyzed from that image data. However, image-based acquisition is susceptible to interference in rainy and snowy weather, so the signal lamp information is not accurate enough.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a signal lamp detection method, apparatus, device, and medium, which can improve the accuracy of signal lamp information while reducing the delay in perceiving signal lamp elements.
According to a first aspect of the embodiments of the present disclosure, there is provided a signal lamp detection method, including:
acquiring lane line information, signal lamp information and vehicle pose information;
acquiring target lane information of a target lane where the vehicle is located according to the vehicle pose information and the lane line information;
determining target signal lamp information according to the vehicle pose information and the signal lamp information, wherein the target signal lamp information is the information of a target signal lamp closest to the vehicle;
and determining target sub-signal lamp information corresponding to the target lane information from the sub-signal lamp information corresponding to the target signal lamp information.
In one possible implementation, the lane line information includes lane stop line information corresponding to each of a plurality of lanes, and each lane stop line information includes first point set information of a plurality of points; the acquiring target lane information of a target lane where the vehicle is located according to the vehicle pose information and the lane line information includes:
determining first target point information closest to the vehicle from the first point set information according to the vehicle pose information;
determining the target lane according to the first target point information;
and acquiring the target lane information of the target lane.
In one possible implementation manner, the lane line information includes structure information of lane boundaries and second point set information respectively corresponding to lanes; the structure information represents the position structure relationship of each lane boundary, and the second point set information represents the position relationship of a plurality of points on each lane boundary;
the acquiring the target lane information of the target lane where the vehicle is located according to the vehicle pose information and the lane line information includes:
according to the vehicle pose information, the second point set information and the structure information, respectively determining a second target point and a third target point which are closest to the vehicle, wherein the second target point and the third target point are points on the boundary of two adjacent lanes;
determining the target lane according to the second target point and the third target point;
and acquiring the target lane information of the target lane.
In one possible implementation manner, the determining target signal light information according to the vehicle pose information and the signal light information includes:
extracting the vehicle position information and the vehicle course information from the vehicle pose information;
determining target signal lamp information which is closest to the vehicle within a preset yaw angle of the vehicle based on the vehicle position information, the vehicle course information and the signal lamp information; and the preset yaw angle is a positive and negative preset angle deviating from the course.
In one possible implementation, the method further includes:
collecting point set information of a lane stop line in advance;
constructing multi-dimensional tree information corresponding to the lane stop line based on the point set information of the lane stop line;
the determining a first target point closest to the vehicle according to the vehicle pose information and the point set information of the lane stop line includes:
and performing nearest neighbor search processing on the multidimensional tree information corresponding to the lane stop line based on the vehicle pose information to obtain the first target point closest to the vehicle.
In one possible implementation, the method further includes:
acquiring point set information of a signal lamp in advance;
constructing multi-dimensional tree information corresponding to the signal lamp based on the point set information of the signal lamp;
the step of determining the target signal lamp information which is closest to the vehicle within the preset yaw angle of the vehicle based on the vehicle position information, the course information and the signal lamp information comprises the following steps:
screening out multi-dimensional tree information within a preset yaw angle of the vehicle from the multi-dimensional tree information corresponding to the signal lamp based on the vehicle position information;
and carrying out nearest neighbor search processing on the multi-dimensional tree information within the preset yaw angle of the vehicle to obtain the target signal lamp information.
In one possible implementation manner, the target signal lamp information includes a corresponding relationship between different lane identification information and sub-signal lamp identification information, and sub-signal lamp state information corresponding to different sub-signal lamp identification information;
the step of matching, from the target signal light information, the target sub-signal light information corresponding to the lane in which the vehicle is located includes:
extracting identification information of the lane where the vehicle is located from the lane information;
matching target sub-signal lamp identification information corresponding to the identification information of the lane where the vehicle is located from the corresponding relation;
and extracting the target sub-signal lamp state information corresponding to the target sub-signal lamp identification information from the target signal lamp information.
In one possible implementation manner, after determining the target sub-signal light information corresponding to the lane information from the sub-signal light information corresponding to the target signal light information, the method further includes:
acquiring path planning information of the vehicle;
and performing automatic driving control on the vehicle based on the target sub-signal lamp information and the path planning information.
In one possible implementation manner, after determining the target sub-signal light information corresponding to the lane information from the at least one sub-signal light information corresponding to the target signal light information, the method further includes:
sending the target sub-signal lamp information to a related vehicle of the vehicle so that the related vehicle performs automatic driving control based on the target sub-signal lamp information and the path planning information of the related vehicle; the associated vehicle refers to a vehicle located in the same lane as the vehicle.
According to a second aspect of the embodiments of the present disclosure, there is provided a signal light detecting apparatus, which may include:
the first information acquisition module is used for acquiring at least one lane line information, signal lamp information and vehicle pose information;
the second information acquisition module is used for acquiring lane information of a target lane where the vehicle is located according to the vehicle pose information and the at least one lane line information;
the target signal lamp information determining module is used for determining target signal lamp information according to the vehicle pose information and the signal lamp information, wherein the target signal lamp information is the information of a target signal lamp closest to the vehicle;
and the target sub-signal lamp information determining module is used for determining target sub-signal lamp information corresponding to the lane information from at least one piece of sub-signal lamp information corresponding to the target signal lamp information.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any of the first aspect above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of the first aspect of the embodiments of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product for causing a computer to execute the method of any one of the first aspect of the embodiments of the present disclosure.
The application has the following beneficial effects:
according to the lane information acquisition method and the lane information acquisition device, lane information of a target lane where a vehicle is located is acquired by acquiring lane line information, signal lamp information and vehicle pose information and according to the vehicle pose information and the lane line information, and relatively accurate lane information can be obtained; and determining target signal lamp information according to the vehicle pose information and the signal lamp information, wherein the target signal lamp information is the information of a target signal lamp closest to the vehicle, and the target sub-signal lamp information corresponding to the lane information is determined from at least one sub-signal lamp information corresponding to the target signal lamp information, so that the time delay of sensing signal lamp elements is reduced, and the accuracy of the signal lamp information is improved.
Drawings
In order to illustrate the technical solutions of the present application more clearly, the drawings needed for the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flow diagram illustrating a signal light detection method according to an exemplary embodiment;
fig. 3 is a flowchart of a method for acquiring lane information of a target lane where a vehicle is located according to an embodiment of the present application;
fig. 4 is a schematic view of a lane line according to an embodiment of the present application;
fig. 5 is a schematic view of a lane line according to an embodiment of the present disclosure;
fig. 6 is a flowchart for acquiring lane information of a target lane where a vehicle is located according to an embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a method for determining target signal light information according to vehicle pose information and signal light information according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of determining target signal light information according to an embodiment of the present application;
fig. 9 is a flowchart illustrating a signal lamp detection method according to an embodiment of the present application;
FIG. 10 is a flow chart illustrating a signal lamp detection method according to an embodiment of the present disclosure;
fig. 11 is a flowchart illustrating a method for determining target sub-signal information corresponding to lane information from at least one sub-signal information corresponding to the target signal information according to an embodiment of the present application;
FIG. 12 is a diagram of a signal light detection device according to an embodiment of the present application;
fig. 13 is a block diagram of an electronic device for signal lamp detection according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To help more engineers understand and apply the technical solution of the present application, its working principle is further described below with reference to specific embodiments.
The present application may be applied to automatic driving control of a vehicle, and in particular, to a signal lamp detection method, apparatus, device, and medium.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present disclosure is shown, where the implementation environment may include:
at least one terminal 01 and at least one server 02. The at least one terminal 01 and the at least one server 02 may perform data communication through a network.
In an alternative embodiment, the terminal 01 may be the executor of the signal light detection method. Terminal 01 may include, but is not limited to, vehicle terminals, smart phones, desktop computers, tablet computers, laptop computers, smart speakers, digital assistants, Augmented Reality (AR)/Virtual Reality (VR) devices, smart wearable devices, and other types of electronic devices. The operating system running on terminal 01 may include, but is not limited to, Android, iOS, Linux, Windows, Unix, and the like. The server 02 may provide the terminal 01 with lane line information and signal light information. Optionally, the server 02 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, a cloud database, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms.
In an alternative embodiment, the terminal 01 may provide the vehicle pose information to the server 02; here the terminal 01 may be an on-vehicle terminal that supplies the vehicle pose information. The server 02 may then be the executor of the signal light detection method. Optionally, the server 02 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, a cloud database, cloud computing, cloud functions, cloud storage, web services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms.
It should be noted that the following figures show a possible sequence of steps, and in fact do not limit the order that must be followed. Some steps may be performed in parallel without dependency on each other. User information (including but not limited to user device information, user personal information, user behavior information, etc.) to which the present disclosure relates is information that is authorized by a user or is sufficiently authorized by various parties.
FIG. 2 is a flow chart illustrating a signal light detection method according to an exemplary embodiment. As shown in fig. 2, the signal lamp detection method includes the following steps:
in step S201, lane line information, signal light information, and vehicle pose information are acquired.
In this embodiment of the present description, the lane line information may be at least one of a plurality of pieces of preset lane line information, and the preset lane line information may refer to information of a preset lane line, and may include, for example, identification information of the preset lane line. The preset lane line herein may refer to a lane line for forming a preset lane, for example, a lane stop line, a lane dividing line, and the like.
In one possible implementation manner, at least one piece of preset lane line information within a preset range may be acquired as the lane line information; as an example, the number of pieces acquired may be a preset number. The preset range may be a range centered on the vehicle with a preset length as its radius, and the preset length only needs to be large enough to obtain a sufficient amount of lane line information, which is not limited in the present disclosure.
Optionally, when the high-precision map information includes lane line association information, the lane line information may be obtained by obtaining the high-precision map information. Specifically, after the high-precision map information is acquired, the lane line associated information in the high-precision map information may be extracted and integrated to obtain the lane line information.
In this embodiment, the signal light information may be at least one of a plurality of pieces of signal light information, where the plurality of pieces of signal light information refer to information of a plurality of signal lights, for example identification information and state information of those signal lights; the state information may include whether passage is allowed and countdown (remaining seconds) information.
In one possible implementation manner, a plurality of pieces of signal light information within a preset range may be acquired as the signal light information. As an example, the preset range may be a sector formed by offsetting the vehicle heading by a preset angle to the left and to the right of the vehicle. The number of signal lights acquired within this range may be a preset number.
Optionally, when the high-precision map information includes signal lamp related information, after the high-precision map information is acquired, signal lamp related information in the high-precision map information may be extracted and integrated to obtain signal lamp information. It should be noted that the high-precision map information is updated in real time, and according to the high-precision map information updated in real time, real-time signal lamp information can be acquired, so that timeliness of the signal lamp information in application is ensured.
In the embodiments of the present specification, the vehicle pose information may include position information and posture information of the vehicle. Specifically, the position information of the vehicle may be two-dimensional or three-dimensional vehicle coordinates, for example, the position information of the vehicle may be global satellite positioning data that can be obtained by the vehicle using the positioning module, and the global satellite positioning data may be latitude and longitude information of the vehicle. The attitude information of the vehicle may be heading angle information of the vehicle, which may characterize a direction of travel of the vehicle.
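As an illustration, the three inputs of step S201 could be represented by simple data structures such as the following Python sketch. All type and field names (LaneLineInfo, SubSignal, heading_rad, and so on) are assumptions made for illustration; the patent does not prescribe any concrete representation.

```python
# Illustrative only: every type and field name below is an assumption for this sketch.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (x, y) in a map frame, or (longitude, latitude)

@dataclass
class LaneLineInfo:
    lane_id: str                                   # e.g. "L1"
    stop_line_points: List[Point]                  # first point set information
    boundary_points: Dict[str, List[Point]] = field(default_factory=dict)  # second point set information per boundary

@dataclass
class SubSignal:
    sub_id: str            # e.g. left-turn / straight / right-turn head
    allow_pass: bool       # whether passage is allowed
    countdown_s: int       # remaining seconds (countdown information)

@dataclass
class SignalLightInfo:
    light_id: str
    position: Point
    lane_to_sub: Dict[str, str]        # lane id -> sub-signal lamp id
    sub_states: Dict[str, SubSignal]   # sub-signal lamp id -> state

@dataclass
class VehiclePose:
    position: Point        # e.g. from the positioning module (GNSS)
    heading_rad: float     # course angle, i.e. the direction of travel
```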
In step S202, lane information of a target lane in which the vehicle is located is acquired according to the vehicle pose information and the lane line information.
In the embodiment of the present specification, the lane information of the target lane may refer to lane information corresponding to a lane in which the vehicle is located, and may be, for example, lane identification information of the target lane, a traveling direction of the target lane, or the like. According to the vehicle pose information and the lane line information, lane line information corresponding to the vehicle is determined, and a target lane where the vehicle is located is determined based on the lane line information corresponding to the vehicle, so that the lane information is correspondingly obtained.
In some example embodiments, the lane line information may include lane stop line information corresponding to each of a plurality of lanes, and each piece of lane stop line information may include first point set information of a plurality of points. The first point set information may be the position information of the plurality of points, such as longitude and latitude information or horizontal and vertical coordinate information in a self-established coordinate system. On this basis, as shown in fig. 3, step S202 may include:
in step S301, first target point information closest to the vehicle is determined from the first point set information according to the vehicle pose information.
In the embodiments of the present specification, the distances between the vehicle and the plurality of points in the first point set information may be determined based on the vehicle pose information and the first point set information, and the first target point information closest to the vehicle is then determined from these distances.
In step S302, a target lane is determined based on the first target point information.
In this embodiment of the present specification, based on the correspondence between the plurality of points in the first point set information and the lanes, the target lane can be found by matching once the first target point information is obtained. Specifically, the correspondence between the plurality of points in the first point set information and the lanes may take the form of a mapping between the position information of the plurality of points and lane identification information.
In step S303, lane information of the target lane is acquired.
In the embodiment of the present specification, the lane information of the target lane may be lane identification information of the target lane.
Take as an example the case where the vehicle pose information is the two-dimensional coordinate (x14, ym) of the vehicle, and the three lanes are respectively the left-turn lane L1, the straight lane L2, and the right-turn lane L3 shown in FIG. 4, corresponding respectively to lane stop line information Ll1, lane stop line information Ll2, and lane stop line information Ll3. The lane line information may then be as shown in the following table:
TABLE 1 (reproduced as an image in the original publication; for each lane stop line Ll1, Ll2, and Ll3 it lists the coordinates of the points in its first point set and the corresponding lane L1, L2, or L3)
Here x denotes the x-direction coordinate and y the y-direction coordinate of the vehicle or of a point. From the two-dimensional coordinate (x14, ym) of the vehicle and the first point set information, which contains the point (x14, y14), it can be determined that the lane stop line identification corresponding to the vehicle is Ll1; based on this lane stop line identification Ll1, the lane identification corresponding to the vehicle is determined to be L1, i.e., the lane information of the target lane in which the vehicle is located.
By acquiring the lane stop line information corresponding to each of a plurality of lanes, where each piece of lane stop line information comprises first point set information of a plurality of points, the first target point information closest to the vehicle can be determined from the first point set information according to the vehicle pose information, and the lane information of the target lane in which the vehicle is located can be obtained accurately even at a location far from the lane stop line.
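As an illustration of steps S301 to S303, the following Python sketch finds the stop-line point closest to the vehicle and returns the lane that point belongs to. The data layout ({lane_id: list of (x, y) points}) and the coordinates in the usage example are assumptions, not values from the patent.

```python
# Sketch of steps S301-S303, assuming lane stop lines are given as
# {lane_id: [(x, y), ...]} and the vehicle position as an (x, y) tuple.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def target_lane_from_stop_lines(vehicle_xy: Point,
                                stop_lines: Dict[str, List[Point]]) -> str:
    """Return the lane id whose stop-line point set contains the point
    closest to the vehicle (the first target point)."""
    best_lane, best_d = None, float("inf")
    for lane_id, points in stop_lines.items():
        for px, py in points:
            d = math.hypot(px - vehicle_xy[0], py - vehicle_xy[1])
            if d < best_d:
                best_lane, best_d = lane_id, d
    return best_lane

# Usage, loosely mirroring the example of FIG. 4 (coordinates are made up):
stop_lines = {"L1": [(0.0, 10.0), (1.0, 10.0)],   # left-turn lane stop line Ll1
              "L2": [(4.0, 10.0), (5.0, 10.0)],   # straight lane stop line Ll2
              "L3": [(8.0, 10.0), (9.0, 10.0)]}   # right-turn lane stop line Ll3
print(target_lane_from_stop_lines((0.5, 2.0), stop_lines))  # -> "L1"
```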
In some exemplary embodiments, the lane line information may include structure information of the lane boundaries and second point set information. The structure information may represent the positional structure relationship of each lane boundary, and the second point set information may represent the positional relationship of a plurality of points on each lane boundary, such as longitude and latitude information or horizontal and vertical coordinate information in a self-established coordinate system. In particular, the lane boundaries may include a lane line between two adjacent lanes and a lane line between a motor lane and a non-motor lane. The structure information of the lane boundaries refers to the positional relationship between different lane boundaries and can represent the mapping relationship between lane boundaries and lanes. For example, FIG. 5 shows lane boundaries L41, L42, L51, and L52; the structure information of the lane boundaries and the second point set information may be as shown in the following table:
(Table reproduced as an image in the original publication; for each lane boundary L41, L42, L51, and L52 it lists the coordinates of its second point set and the lane or lanes that the boundary delimits.)
from the structure information of the lane boundary and the second point set information, the lane boundary L is known41And lane line L42Constituting a traffic lane L4Lane line of demarcation L42And lane line L51Form a lane L5Lane line of demarcation L51And lane line L52Form a lane L6Lane L4And a lane L5All contain a lane boundary L42Lane L5 and lane L6All contain a lane boundary L51. Here, a method of acquiring the structure information of the lane line and the second point set information in combination is described, that is, the structure information of the lane line and the second point set information may be acquired from the information table. In addition, the structure information of the lane boundary and the second point set information may be divided into two parts, the structure information of the lane boundary may be acquired from the information table 1,second point set information of the lane boundary is acquired from the information table 2. The lane line information can be deployed according to actual conditions.
Specifically, when the lane line information includes the structure information of the lane boundary and the second point set information, as shown in fig. 6, acquiring the lane information of the target lane where the vehicle is located according to the vehicle pose information and the lane line information may include the following steps:
in step S601, a second target point and a third target point that are closest to the vehicle are determined according to the vehicle pose information, the second point set information, and the structure information, respectively.
In the embodiment of the present specification, the second target point and the third target point are points on the boundary between two adjacent lanes, respectively.
Optionally, when the second target point and the third target point closest to the vehicle are respectively determined according to the vehicle pose information, the second point set information, and the structure information, the second target point closest to the vehicle may first be determined according to the vehicle pose information and the second point set information; the lane boundary corresponding to the second target point is then determined according to the second point set information and the structure information and recorded as the lane boundary LM; finally, the third target point closest to the vehicle is determined from the points other than those on the lane boundary LM.
Optionally, when the second target point and the third target point closest to the vehicle are respectively determined according to the vehicle pose information, the second point set information, and the structure information, the point closest to the vehicle on each lane boundary near the vehicle may be determined according to the vehicle pose information and the second point set information. For example, points D1, D2, D3, and D4 on four different lane boundaries within a preset range of the vehicle are determined, each being the point closest to the vehicle on its lane boundary; based on the respective distances of D1, D2, D3, and D4 from the vehicle, the points ranking first and second in distance from the vehicle, i.e., the second target point and the third target point, can be determined.
In step S602, a target lane is determined based on the second target point and the third target point.
In the embodiment of the present specification, based on the structure information of the lane boundaries, the target lane formed by the lane boundaries corresponding to the second target point and the third target point may be determined.
In the embodiment of the present description, if the points closest to the vehicle on the boundaries of two lanes are determined but those two points are not on the boundaries of adjacent lanes, the vehicle pose information may be acquired again and the steps of determining the second target point and the third target point repeated until the second target point and the third target point are obtained. If the number of times the vehicle pose information is re-acquired and the second and third target points are re-determined reaches a preset number, an error prompt instruction is generated, so as to avoid safety problems caused by errors in the vehicle positioning or the lane line information.
In step S603, lane information of the target lane is acquired.
In the embodiment of the present specification, the structure information of the lane boundary may include lane identification information. After the target lane is determined, lane identification information of the target lane, i.e., lane information of the target lane, may be extracted from the structure information of the lane boundary.
In this embodiment, accurate second and third target points can be obtained from the vehicle pose information, the second point set information, and the structure information; determining the target lane from the second and third target points means that the determination of the target lane corresponding to the vehicle is not affected by factors such as rain, snow, or nighttime, and the target lane can be determined accurately at any distance from the intersection.
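The following Python sketch illustrates steps S601 to S603, following the second optional approach described above (ranking the lane boundaries by the distance of their closest point to the vehicle and taking the two closest). The data layouts ({boundary_id: list of (x, y) points} and {lane_id: (left_boundary_id, right_boundary_id)}) are assumptions for illustration, not the patented implementation itself.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def nearest_point_distance(vehicle_xy: Point, points: List[Point]) -> float:
    return min(math.hypot(px - vehicle_xy[0], py - vehicle_xy[1]) for px, py in points)

def target_lane_from_boundaries(vehicle_xy: Point,
                                boundaries: Dict[str, List[Point]],
                                lane_structure: Dict[str, Tuple[str, str]]) -> str:
    """Pick the two boundaries whose nearest points rank first and second in
    distance to the vehicle, then return the lane they jointly delimit."""
    ranked = sorted(boundaries, key=lambda b: nearest_point_distance(vehicle_xy, boundaries[b]))
    second, third = ranked[0], ranked[1]          # boundaries holding the second/third target points
    for lane_id, (left, right) in lane_structure.items():
        if {left, right} == {second, third}:
            return lane_id
    # The two closest boundaries do not delimit a common lane: the method above
    # re-acquires the pose and retries (error prompt after a preset count).
    raise LookupError("second and third target points are not on adjacent lane boundaries")
```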
In step S203, target signal light information is determined from the vehicle pose information and the signal light information.
In this embodiment, the target signal light information may be information of a target signal light closest to the vehicle. It should be noted that the target signal lamp closest to the vehicle may be in the traveling direction of the vehicle, so as to avoid the signal lamp behind the vehicle from interfering with the signal lamp detection processing of the vehicle. Specifically, the vehicle pose information may include position information and pose information of the vehicle, where the pose information may characterize a direction of travel of the vehicle. Through the position information and the posture information of the vehicle, the target signal lamp closest to the vehicle can be determined in the driving direction of the vehicle, and the target signal lamp information can be obtained. The target signal lamp information may be identification information of the target signal lamp.
In step S204, the target sub-signal information corresponding to the lane information is specified from the at least one sub-signal information corresponding to the target signal information.
In this embodiment, the sub-signal lamp information may refer to the information of a sub-signal lamp within a signal lamp; a signal lamp with sub-signal lamps can indicate whether passage is allowed for several lanes simultaneously. For example, a signal lamp with a left-turn indicator, a straight-ahead indicator, and a right-turn indicator may simultaneously indicate that the left-turn lane has a green light with 15 seconds remaining, the straight lane has a red light with 55 seconds remaining, and the right-turn lane has a red light.
Optionally, the signal lamp information may include signal lamp identification information, sub-signal lamp identification information, signal lamp state information, sub-signal lamp state information, and relationship information between the signal lamp and its sub-signal lamps. Specifically, the sub-signal lamp information may include identification information of at least one sub-signal lamp, information on whether the sub-signal lamp allows passage, and countdown (remaining seconds) information of the sub-signal lamp. Determining the target sub-signal lamp information corresponding to the lane information from the at least one piece of sub-signal lamp information corresponding to the target signal lamp information may consist in extracting, from the target signal lamp information, the target sub-signal lamp identification information and the sub-signal lamp state information corresponding to the lane information.
Optionally, the signal lamp information may include signal lamp identification information and signal lamp state information. After the target signal lamp information is determined, at least one piece of sub-signal lamp information corresponding to the target signal lamp information can be acquired according to the correspondence between the signal lamp and its sub-signal lamps, the state information of the sub-signal lamps, and the relationship information between the signal lamps and the sub-signal lamps.
In this embodiment, the lane information of the target lane in which the vehicle is located is obtained by acquiring the lane line information, the signal lamp information, and the vehicle pose information and using the vehicle pose information and the lane line information, so that more accurate lane information can be obtained. The target signal lamp information, i.e., the information of the target signal lamp closest to the vehicle, is determined from the vehicle pose information and the signal lamp information, and the target sub-signal lamp information corresponding to the lane information is determined from the at least one piece of sub-signal lamp information corresponding to the target signal lamp information, so that the delay in perceiving signal lamp elements can be reduced and the accuracy of the signal lamp information can be improved.
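As a compact end-to-end illustration of steps S201 to S204, the toy Python example below looks up the target lane by the nearest stop-line point and the target signal lamp by the nearest lamp position (the heading-sector filter of FIG. 8 is omitted here for brevity). All data structures, key names, and coordinates are invented for the sketch.

```python
import math

def detect_sub_signal(vehicle_xy, stop_line_points, lights):
    """stop_line_points: list of ((x, y), lane_id) pairs.
    lights: list of dicts with 'position', 'lane_to_sub' and 'sub_states' keys
    (all names are assumptions made for this sketch)."""
    def dist(p):
        return math.hypot(p[0] - vehicle_xy[0], p[1] - vehicle_xy[1])

    # S202: target lane = lane of the stop-line point nearest the vehicle
    _, lane_id = min(stop_line_points, key=lambda pl: dist(pl[0]))

    # S203: target signal lamp = lamp nearest the vehicle (heading-sector filter omitted)
    light = min(lights, key=lambda lt: dist(lt["position"]))

    # S204: sub-signal lamp matching the target lane
    sub_id = light["lane_to_sub"][lane_id]
    return light["sub_states"][sub_id]

# Usage with made-up data (lane L1 is a left-turn lane with a green light):
lights = [{"position": (5.0, 12.0),
           "lane_to_sub": {"L1": "left", "L2": "straight", "L3": "right"},
           "sub_states": {"left": ("green", 15), "straight": ("red", 55), "right": ("red", 0)}}]
stop_points = [((0.0, 10.0), "L1"), ((4.0, 10.0), "L2"), ((8.0, 10.0), "L3")]
print(detect_sub_signal((0.5, 2.0), stop_points, lights))  # -> ('green', 15)
```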
In some exemplary embodiments, as shown in fig. 7, determining the target signal light information from the vehicle pose information and the signal light information may include:
in step S701, vehicle position information and vehicle heading information are extracted from the vehicle pose information.
In this embodiment, the vehicle heading information may be heading information measured by a gyroscope in the vehicle, and represents a driving direction of the vehicle.
In step S702, target signal light information closest to the vehicle within a preset yaw angle of the vehicle is determined based on the vehicle position information, the vehicle heading information, and the signal light information.
In the embodiment of the present specification, the preset yaw angle may be a positive or negative preset angle deviating from the heading. As shown in FIG. 8, taking a yaw angle of plus or minus α as an example, the information of the signal lamp S1 that is closest to the vehicle within the sector formed by plus and minus α about the vehicle heading is the target signal lamp information.
In this embodiment, the vehicle position information and the vehicle heading information are extracted from the vehicle pose information, and the signal lamp information closest to the vehicle within the preset yaw angle of the vehicle is determined based on the vehicle position information, the vehicle heading information, and the signal lamp information. This improves the accuracy of the signal lamp information corresponding to the vehicle and allows the target signal lamp information to be determined at any distance from the intersection, so the applicable scenarios are wide.
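A minimal Python sketch of steps S701 and S702 is given below: lamps whose bearing from the vehicle deviates from the heading by more than the preset yaw angle α are discarded, and the closest remaining lamp is selected. The angle conventions (radians, bearing-style heading) are assumptions for illustration.

```python
import math
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def closest_light_in_sector(vehicle_xy: Point, heading_rad: float,
                            lights: Dict[str, Point], alpha_rad: float) -> Optional[str]:
    """Return the id of the signal lamp closest to the vehicle among those lying
    within +/- alpha of the vehicle heading, or None if no lamp is in the sector."""
    def in_sector(light_xy: Point) -> bool:
        bearing = math.atan2(light_xy[1] - vehicle_xy[1], light_xy[0] - vehicle_xy[0])
        # wrap the angular difference into (-pi, pi]
        diff = math.atan2(math.sin(bearing - heading_rad), math.cos(bearing - heading_rad))
        return abs(diff) <= alpha_rad

    candidates = {lid: xy for lid, xy in lights.items() if in_sector(xy)}
    if not candidates:
        return None
    return min(candidates, key=lambda lid: math.hypot(candidates[lid][0] - vehicle_xy[0],
                                                      candidates[lid][1] - vehicle_xy[1]))
```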
In some exemplary embodiments, in a case where first target point information closest to the vehicle is determined from the first point set information according to the vehicle pose information, and the target lane is determined according to the first target point information, thereby acquiring lane information of the target lane, as shown in fig. 9, the method may further include:
in step S901, point set information of the lane stop line is collected in advance.
In step S902, multidimensional tree information corresponding to the lane stop line is constructed based on the point set information of the lane stop line.
In the embodiment of the present description, index information of a lane stop line and point set information may be constructed, and multidimensional tree information corresponding to the lane stop line may be constructed in combination with a structural relationship of the lane stop line.
Based on this, in step S301, determining the first target point information closest to the vehicle from the first point set information according to the vehicle pose information may include the steps of:
in step S903, based on the vehicle pose information, nearest neighbor search processing is performed on the multidimensional tree information corresponding to the lane stop line, so as to obtain a first target point closest to the vehicle.
In this embodiment of the specification, after the multi-dimensional tree information corresponding to the lane stop line is constructed, an area within a certain range from the vehicle may be determined based on the vehicle pose information and the multi-dimensional tree information, and a first target point closest to the vehicle may be further determined in the area.
In this embodiment, by constructing the multidimensional tree information corresponding to the lane stop lines and performing nearest neighbor search on it based on the vehicle pose information, the lane stop lines and their point set information can be stored conveniently, and the first target point closest to the vehicle can be obtained quickly through step-by-step retrieval when the tree is used.
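As an illustration of steps S901 to S903, the sketch below uses SciPy's k-d tree as a stand-in for the "multidimensional tree information"; the patent does not name any particular tree implementation, and the point coordinates and lane mapping are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

# S901: point set of the lane stop lines, collected in advance (made-up data)
stop_points = np.array([[0.0, 10.0], [1.0, 10.0],   # stop line Ll1
                        [4.0, 10.0], [5.0, 10.0],   # stop line Ll2
                        [8.0, 10.0], [9.0, 10.0]])  # stop line Ll3
point_lane = ["L1", "L1", "L2", "L2", "L3", "L3"]   # point index -> lane mapping

# S902: build the tree once, offline
tree = cKDTree(stop_points)

# S903: nearest-neighbour query against the vehicle position
vehicle_xy = (0.5, 2.0)
_, idx = tree.query(vehicle_xy)          # index of the first target point
print(point_lane[idx])                   # -> "L1", the target lane
```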
In some exemplary embodiments, as shown in fig. 10, the method may further include:
in step S1001, point set information of a signal lamp is collected in advance.
In this embodiment, the point set information of the signal lamp may refer to position information of points representing the signal lamp. The point set information of the signal lamp may include identification information of the signal lamp, position information of points characterizing the signal lamp, identification information of the sub-signal lamp, and status information of the sub-signal lamp.
In step S1002, multi-dimensional tree information corresponding to the signal lamp is constructed based on the point set information of the signal lamp.
In the embodiment of the present specification, index information of the signal lamps and of the signal lamp states may be established according to the identification information of the signal lamps and the position information of the points representing the signal lamps, and the multidimensional tree information is generated from this index information. For example, one dimension of the multidimensional tree information may represent the index from the identification information of a signal lamp to the position information of its points, and another dimension may represent the index from a signal lamp to its state.
Based on this, the step S702 determining the target signal light information closest to the vehicle within the preset yaw angle of the vehicle based on the vehicle position information, the heading information, and the signal light information may include:
in step S1003, multidimensional tree information within a preset yaw angle of the vehicle is screened from multidimensional tree information corresponding to the traffic light based on the vehicle position information.
In the embodiment of the description, after the multidimensional tree information corresponding to the signal lamp is constructed, multidimensional tree information corresponding to the signal lamp within the preset yaw angle range of the vehicle can be screened from the multidimensional tree information corresponding to the signal lamp based on the vehicle position information and the heading information.
In step S1004, nearest neighbor search processing is performed on the multidimensional tree information within the preset yaw angle of the vehicle, and target traffic light information is obtained.
In this embodiment, matching with the vehicle position information may be performed for each point in the multidimensional tree information within the preset yaw angle of the vehicle, and information of a point closest to the vehicle position information, that is, target signal lamp information, may be determined.
By collecting the point set information of the signal lamps in advance and constructing the multidimensional tree information corresponding to the signal lamps from it, nearest neighbor search can be performed in actual use on the multidimensional tree information within the preset yaw angle, so that the target signal lamp information is obtained quickly and accurately; this reduces the amount of data to be processed and improves the overall efficiency of the signal lamp detection processing.
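The following sketch illustrates steps S1003 and S1004 under the same assumptions: the signal-lamp points are first screened by the preset yaw-angle sector, and a nearest neighbor query is then run on the remaining points. The use of SciPy's cKDTree and all numeric values are assumptions for illustration.

```python
import math
import numpy as np
from scipy.spatial import cKDTree

def nearest_light_in_sector(vehicle_xy, heading_rad, light_xy, light_ids, alpha_rad):
    vx, vy = vehicle_xy
    bearings = np.arctan2(light_xy[:, 1] - vy, light_xy[:, 0] - vx)
    diff = np.arctan2(np.sin(bearings - heading_rad), np.cos(bearings - heading_rad))
    mask = np.abs(diff) <= alpha_rad                 # screening step (S1003)
    if not mask.any():
        return None
    sub_tree = cKDTree(light_xy[mask])               # tree restricted to the sector
    _, idx = sub_tree.query(vehicle_xy)              # nearest-neighbour search (S1004)
    return [lid for lid, keep in zip(light_ids, mask) if keep][idx]

lights = np.array([[5.0, 12.0], [5.0, -40.0]])       # one lamp ahead of the vehicle, one behind
print(nearest_light_in_sector((0.5, 2.0), math.pi / 2, lights, ["S1", "S2"], math.radians(30)))  # -> "S1"
```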
In some exemplary embodiments, the target signal light information may include correspondence of different lane identification information to the sub signal light identification information, and sub signal light state information corresponding to the different sub signal light identification information. In addition, as shown in fig. 11, the step S204 of determining the target sub-signal information corresponding to the lane information from the at least one sub-signal information corresponding to the target signal information may include:
in step S1101, identification information of the lane in which the vehicle is located is extracted from the lane information.
In the embodiment of the present specification, the lane information may further include identification information of at least one lane. After the lane in which the vehicle is located is determined, the identification information of the lane in which the vehicle is located may be extracted from the lane information.
In step S1102, the sub-signal lamp identification information corresponding to the identification information of the lane in which the vehicle is located is matched from the correspondence relationship.
In this embodiment, the target signal lamp information may include corresponding relationships between different lane identification information and sub signal lamp identification information, and the corresponding sub signal lamp identification information may be matched based on the identification information of the lane where the vehicle is located.
In step S1103, target sub-signal lamp state information corresponding to the target sub-signal lamp identification information is extracted from the target signal lamp information.
In this embodiment of this specification, the target signal lamp information further includes sub-signal lamp state information corresponding to different sub-signal lamp identification information. After the sub-signal lamp identification information is obtained, corresponding sub-signal lamp state information can be extracted.
Optionally, in some embodiments, the point set information of the signal lamps may further include identification information of the sub-signal lamps and state information of the sub-signal lamps. On this basis, index information of the signal lamps and their states, of the sub-signal lamps and their states, and of the relationship between signal lamps and sub-signal lamps can be established according to the identification information of the signal lamps, the position information of the points representing the signal lamps, the identification information of the sub-signal lamps, and the state information of the sub-signal lamps. For example, one dimension of the multidimensional tree information can represent the index from a signal lamp to its sub-signal lamps, and another dimension can represent the index from a sub-signal lamp to its state.
By extracting the identification information of the lane in which the vehicle is located from the lane information and matching, from the correspondence between different lane identification information and sub-signal lamp identification information, the target sub-signal lamp identification information corresponding to that lane, the target sub-signal lamp identification information can be obtained conveniently and efficiently. Extracting the target sub-signal lamp state information corresponding to the target sub-signal lamp identification information from the target signal lamp information improves the efficiency of determining the target sub-signal lamp state information; moreover, the target sub-signal lamp state information can be determined at any distance from the intersection, so the applicable scenarios are wide.
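Steps S1101 to S1103 amount to two look-ups over the target signal lamp information, as in the following sketch; the dictionary layout and key names are assumptions for illustration.

```python
# Sketch of S1101-S1103: two dictionary look-ups over the target signal lamp information.
target_light = {
    "lane_to_sub": {"L1": "left_head", "L2": "straight_head", "L3": "right_head"},
    "sub_states": {"left_head": {"allow_pass": True, "countdown_s": 15},
                   "straight_head": {"allow_pass": False, "countdown_s": 55},
                   "right_head": {"allow_pass": False, "countdown_s": 0}},
}

lane_id = "L1"                                         # S1101: lane the vehicle is in
sub_id = target_light["lane_to_sub"][lane_id]          # S1102: matching sub-signal lamp id
state = target_light["sub_states"][sub_id]             # S1103: its state information
print(sub_id, state)                                   # -> left_head {'allow_pass': True, 'countdown_s': 15}
```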
In some exemplary embodiments, acquiring lane line information and signal light information may include: and acquiring high-precision map information, and performing serialization processing on the high-precision map information to obtain lane line information and signal lamp information.
In the embodiments of the present specification, the information in the high-precision map information may be adjusted according to actual conditions. For example, the lane line information in the high-precision map information may be periodically adjusted according to a change in lane, and the signal light information may be updated in real time.
After the high-precision map information is acquired, it may be serialized: the high-precision map information is first decoded, and the decoded data is then serialized, i.e., stored according to preset fields, so as to obtain lane line information and signal lamp information that can be called directly.
Optionally, after the serialization processing, data compression may be performed on the data obtained after the serialization processing according to the lane line information attribute and the signal light information attribute, so as to save space occupied by data storage and optimize data storage.
By acquiring the high-precision map information and performing serialization processing on the high-precision map information to obtain the lane line information and the signal lamp information, a convenient mode for acquiring the lane line information and the signal lamp information can be provided, and timeliness of the lane line information and the signal lamp information is guaranteed.
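As one possible reading of the serialization and compression steps, the sketch below stores the decoded map data under preset fields and compresses the result. The JSON-plus-gzip choice and the field names are assumptions; the patent does not fix a storage format.

```python
import gzip
import json

def serialize_map(raw_hd_map: dict) -> bytes:
    """Keep only the preset fields needed here and compress the result."""
    payload = {"lane_lines": raw_hd_map.get("lane_lines", []),        # preset field 1 (assumed name)
               "signal_lights": raw_hd_map.get("signal_lights", [])}  # preset field 2 (assumed name)
    return gzip.compress(json.dumps(payload).encode("utf-8"))

def deserialize_map(blob: bytes) -> dict:
    return json.loads(gzip.decompress(blob).decode("utf-8"))
```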
In some exemplary embodiments, after determining the target sub-signal light information corresponding to the lane information from among the at least one sub-signal light information corresponding to the target signal light information, the method may further include: and acquiring the path planning information of the vehicle, and performing automatic driving control on the vehicle based on the target sub-signal lamp information and the path planning information. In this embodiment, the target sub-signal lamp information may provide an accurate real-time result of traffic elements for automatic driving control, and based on the accurate and real-time target sub-signal lamp information and the path planning information, the accuracy of automatic driving control may be improved, and the safety of automatic driving may be improved.
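As a toy illustration of combining the target sub-signal lamp information with the path planning information, the sketch below keeps the cruise speed when the matching head allows passage and otherwise ramps down to stop at the stop line. The control law, thresholds, and field names are invented for the sketch and are not part of the patent.

```python
def plan_speed(sub_state: dict, dist_to_stop_line_m: float, cruise_mps: float) -> float:
    if sub_state["allow_pass"]:
        return cruise_mps
    # not passable: slow down proportionally so the vehicle can stop at the line
    return min(cruise_mps, max(0.0, dist_to_stop_line_m / 3.0))

print(plan_speed({"allow_pass": False, "countdown_s": 55}, 30.0, 15.0))  # -> 10.0
```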
In some exemplary embodiments, after determining the target sub signal light information corresponding to the lane information from among the at least one sub signal light information corresponding to the target signal light information, the method may further include: and sending the target sub-signal lamp information to a related vehicle of the vehicle so that the related vehicle carries out automatic driving control based on the target sub-signal lamp information and the path planning information of the related vehicle. In particular, the associated vehicle may refer to a vehicle located in the same lane as the vehicle in the present application. The associated vehicle can compare and screen the sub-signal lamp information determined based on the self-vehicle position information with the received target sub-signal lamp information so as to ensure the accuracy of the sub-signal lamp information corresponding to the current lane; or the associated vehicle can perform automatic driving control by using the target sub-signal lamp information and the path planning information of the own vehicle after determining that the associated vehicle is in the same lane with the current vehicle.
The present application further provides a signal lamp detecting device, as shown in fig. 12, the device may include:
a first information obtaining module 1201, configured to obtain lane line information, signal lamp information, and vehicle pose information;
a second information obtaining module 1202, configured to obtain lane information of a target lane where a vehicle is located according to the vehicle pose information and the lane line information;
a target signal lamp information determining module 1203, configured to determine target signal lamp information according to the vehicle pose information and the signal lamp information, where the target signal lamp information is information of a target signal lamp closest to the vehicle;
a target sub-signal light information determining module 1204, configured to determine, from at least one piece of sub-signal light information corresponding to the target signal light information, target sub-signal light information corresponding to the lane information.
According to the above device, lane line information, signal lamp information and vehicle pose information are acquired, and lane information of a target lane where the vehicle is located is obtained according to the vehicle pose information and the lane line information, so that relatively accurate lane information can be obtained; target signal lamp information is determined according to the vehicle pose information and the signal lamp information, wherein the target signal lamp information is the information of the target signal lamp closest to the vehicle, and the target sub-signal lamp information corresponding to the lane information is determined from the at least one piece of sub-signal lamp information corresponding to the target signal lamp information, so that the time delay of sensing signal lamp elements is reduced and the accuracy of the signal lamp information is improved.
In some exemplary embodiments, the second information obtaining module 1202 may include:
a first target point information determining unit, configured to determine, according to the vehicle pose information, first target point information that is closest to the vehicle from the first point set information;
a target lane determining unit, configured to determine the target lane according to the first target point information;
a lane information acquisition unit for acquiring the lane information of the target lane.
In some exemplary embodiments, the lane line information includes structure information of a lane boundary and second point set information; the structure information represents the position structure relationship of each lane boundary, and the second point set information represents the position relationship of a plurality of points on each lane boundary; the second information obtaining module 1202 may include:
a second target point determining unit, configured to determine, according to the vehicle pose information, the second point set information, and the structure information, a second target point and a third target point that are closest to the vehicle, where the second target point and the third target point are points on boundaries between two adjacent lanes, respectively;
a target lane determining unit, configured to determine the target lane according to the second target point and the third target point;
a lane information acquisition unit for acquiring the lane information of the target lane.
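A hedged sketch of the boundary-based lane match performed by the units above is given below: the two boundaries with the closest sampled points are selected, and the structure information is used to look up the lane lying between them. The data layout and helper names are assumptions for illustration.

```python
# Illustrative boundary-based lane matching: rank boundaries by distance of
# their closest sampled point to the vehicle, then take the lane between the
# two nearest boundaries according to the structure information.
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def closest_point_dist(vehicle: Point, points: List[Point]) -> float:
    return min(math.dist(vehicle, p) for p in points)

def match_lane_by_boundaries(
    vehicle: Point,
    boundary_points: Dict[str, List[Point]],      # boundary id -> sampled points (second point set)
    lane_between: Dict[Tuple[str, str], str],     # (left id, right id) -> lane id (structure info)
) -> str:
    ranked = sorted(boundary_points,
                    key=lambda b: closest_point_dist(vehicle, boundary_points[b]))
    left, right = ranked[0], ranked[1]            # the two nearest boundaries
    return lane_between.get((left, right)) or lane_between[(right, left)]
```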
In some exemplary embodiments, the target signal light information determination module 1203 may include:
the information extraction unit is used for extracting the vehicle position information and the vehicle course information from the vehicle pose information;
the target signal lamp information determining unit is used for determining the target signal lamp information which is closest to the vehicle within a preset yaw angle of the vehicle based on the vehicle position information, the vehicle course information and the signal lamp information; and the preset yaw angle is a positive and negative preset angle deviating from the course.
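The preset-yaw-angle screen can be pictured as the following small sketch, which keeps a signal lamp only if the bearing from the vehicle to the lamp deviates from the vehicle heading by no more than the preset angle; the angle convention (radians, counter-clockwise) is an assumption.

```python
# Illustrative check: is the lamp within +/- preset yaw of the vehicle heading?
import math

def within_preset_yaw(vehicle_xy, heading_rad, light_xy, preset_yaw_rad) -> bool:
    bearing = math.atan2(light_xy[1] - vehicle_xy[1], light_xy[0] - vehicle_xy[0])
    deviation = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return abs(deviation) <= preset_yaw_rad
```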
In some exemplary embodiments, the apparatus may further include:
the first point set information acquisition module is used for acquiring point set information of the lane stop line in advance;
the first multi-dimensional tree information construction module is used for constructing multi-dimensional tree information corresponding to the lane stop line based on the point set information of the lane stop line;
the first target point information determining unit may be further configured to perform nearest neighbor search processing on the multidimensional tree information corresponding to the lane stop line based on the vehicle pose information, so as to obtain the first target point closest to the vehicle.
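A minimal sketch of the multidimensional tree over lane stop line points and the nearest neighbor search against the vehicle position is shown below, using scipy's k-d tree as a stand-in for the multidimensional tree information; the coordinates and lane identifiers are illustrative only.

```python
# Offline: build a k-d tree over pre-collected stop-line points; online: query
# the nearest point to the vehicle position to obtain the first target point.
import numpy as np
from scipy.spatial import cKDTree

stop_line_points = np.array([[10.0, 2.0], [10.0, 5.5], [10.0, 9.0]])  # illustrative x, y
point_lane_ids = ["lane_1", "lane_2", "lane_3"]                       # lane owning each point
stop_line_tree = cKDTree(stop_line_points)

vehicle_xy = np.array([8.5, 5.2])
_, idx = stop_line_tree.query(vehicle_xy)     # nearest-neighbour search
target_lane = point_lane_ids[idx]             # -> "lane_2"
```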
In some exemplary embodiments, the apparatus may further include:
the second point set information acquisition module is used for acquiring point set information of the signal lamp in advance;
the second multi-dimensional tree information construction module is used for constructing multi-dimensional tree information corresponding to the signal lamp based on the point set information of the signal lamp;
the target signal lamp information determining unit can be further used for screening out multi-dimensional tree information within a preset yaw angle of the vehicle from the multi-dimensional tree information corresponding to the signal lamp based on the vehicle position information;
and carrying out nearest neighbor search processing on the multi-dimensional tree information in the preset yaw angle of the vehicle to obtain the target signal lamp information.
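Putting the two steps of the above unit together, the following sketch first screens candidate signal lamps to those within the preset yaw angle of the heading and then runs a nearest neighbor query over the surviving points; the data layout is an assumption, and scipy's k-d tree again stands in for the multidimensional tree information.

```python
# Illustrative target-lamp lookup: yaw-angle screening followed by a
# nearest-neighbour query over the remaining lamp positions.
import math
import numpy as np
from scipy.spatial import cKDTree

def find_target_light(vehicle_xy, heading_rad, light_positions, light_ids, preset_yaw_rad):
    def ahead(p):  # same angle test as sketched above, inlined here
        bearing = math.atan2(p[1] - vehicle_xy[1], p[0] - vehicle_xy[0])
        dev = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
        return abs(dev) <= preset_yaw_rad

    keep = [i for i, p in enumerate(light_positions) if ahead(p)]
    if not keep:
        return None                                   # no lamp within the forward cone
    tree = cKDTree(np.asarray([light_positions[i] for i in keep]))
    _, j = tree.query(np.asarray(vehicle_xy))
    return light_ids[keep[j]]                         # identifier of the target signal lamp
```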
In some exemplary embodiments, the target signal lamp information includes correspondence of different lane identification information and sub signal lamp identification information, and sub signal lamp state information corresponding to different sub signal lamp identification information; the target sub-signal light information determination module 1204 may include:
the identification information extraction unit is used for extracting identification information of a lane where the vehicle is located from the lane information;
the matching unit is used for matching the target sub-signal lamp identification information corresponding to the identification information of the lane where the vehicle is located from the corresponding relation;
and the target sub-signal lamp state information extraction unit is used for extracting the target sub-signal lamp state information corresponding to the target sub-signal lamp identification information from the target signal lamp information.
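The final matching step amounts to two lookups, as in the sketch below: the lane identifier selects the target sub-signal lamp identifier from the correspondence, and that identifier selects the sub-signal lamp state; the field names and values are assumptions for illustration.

```python
# Illustrative matching: lane id -> sub-lamp id -> sub-lamp state.
target_light_info = {
    "lane_to_sub_light": {"lane_2": "left_arrow", "lane_3": "straight"},
    "sub_light_state": {"left_arrow": "red", "straight": "green"},
}

ego_lane_id = "lane_2"                                        # from the lane-matching step
sub_light_id = target_light_info["lane_to_sub_light"][ego_lane_id]
sub_light_state = target_light_info["sub_light_state"][sub_light_id]
print(sub_light_id, sub_light_state)                          # -> left_arrow red
```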
In some exemplary embodiments, the first information obtaining module 1201 may be further configured to obtain high-precision map information; and carrying out serialization processing on the high-precision map information to obtain the lane line information and the signal lamp information.
In some exemplary embodiments, the apparatus may further include:
the route planning information acquisition module is used for acquiring the route planning information of the vehicle;
and the automatic driving control module is used for carrying out automatic driving control on the vehicle based on the target sub-signal lamp information and the path planning information.
In some exemplary embodiments, the apparatus may further include:
the target sub-signal lamp information sending module is used for sending the target sub-signal lamp information to a related vehicle of the vehicle so as to enable the related vehicle to carry out automatic driving control based on the target sub-signal lamp information and the path planning information of the related vehicle; the associated vehicle refers to a vehicle located in the same lane as the vehicle.
FIG. 13 is a block diagram illustrating an electronic device for signal lamp detection according to an exemplary embodiment. The electronic device may be a server or a terminal, and its internal structure may be as shown in FIG. 13. The electronic device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the electronic device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a signal lamp detection method. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the electronic device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in FIG. 13 is merely a block diagram of some of the structures associated with the disclosed aspects and does not constitute a limitation on the electronic devices to which the disclosed aspects apply; a particular electronic device may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
The present application additionally provides an electronic device, which may include: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the executable instructions to implement the detection method in any of the above embodiments.
The present application additionally provides a computer-readable storage medium, wherein when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to implement the detection method in any of the above embodiments.
The present application additionally provides a computer program product comprising a computer program/instructions, which when executed by a processor, implement the detection method in any of the above embodiments.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include some but not all features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims of the present invention, any of the claimed embodiments may be used in any combination.
The present invention may also be embodied as apparatus or system programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several systems, several of these systems can be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (12)

1. A signal light detection method, characterized in that the method comprises:
acquiring lane line information, signal lamp information and vehicle pose information;
acquiring target lane information of a target lane where the vehicle is located according to the vehicle pose information and the lane line information;
determining target signal lamp information according to the vehicle pose information and the signal lamp information, wherein the target signal lamp information is the information of a target signal lamp closest to the vehicle;
and determining target sub-signal lamp information corresponding to the target lane information from the sub-signal lamp information corresponding to the target signal lamp information.
2. The method according to claim 1, wherein the lane line information includes lane stop line information respectively corresponding to lanes, each piece of lane stop line information including first point set information of a plurality of points; the acquiring target lane information of a target lane where the vehicle is located according to the vehicle pose information and the lane line information includes:
determining first target point information closest to the vehicle from the first point set information according to the vehicle pose information;
determining the target lane according to the first target point information;
and acquiring the target lane information of the target lane.
3. The method according to claim 1, wherein the lane line information includes structure information and second point set information of lane boundaries to which at least one lane corresponds respectively; the structure information represents the position structure relationship of each lane boundary, and the second point set information represents the position relationship of a plurality of points on each lane boundary;
the acquiring the target lane information of the target lane where the vehicle is located according to the vehicle pose information and the lane line information includes:
according to the vehicle pose information, the second point set information and the structure information, respectively determining a second target point and a third target point which are closest to the vehicle, wherein the second target point and the third target point are points on the boundary of two adjacent lanes;
determining the target lane according to the second target point and the third target point;
and acquiring the target lane information of the target lane.
4. The method of claim 1, wherein determining target signal light information from the vehicle pose information and the signal light information comprises:
extracting the vehicle position information and the vehicle course information from the vehicle pose information;
determining target signal lamp information which is closest to the vehicle within a preset yaw angle of the vehicle based on the vehicle position information, the vehicle course information and the signal lamp information; and the preset yaw angle is a positive and negative preset angle deviating from the course.
5. The method of claim 2, further comprising:
collecting point set information of a lane stop line in advance;
constructing multi-dimensional tree information corresponding to the lane stop line based on the point set information of the lane stop line;
the determining a first target point closest to the vehicle according to the vehicle pose information and the point set information of the lane stop line comprises:
and performing nearest neighbor search processing on the multidimensional tree information corresponding to the lane stop line based on the vehicle pose information to obtain the first target point closest to the vehicle.
6. The method of claim 4, further comprising:
collecting point set information of a signal lamp in advance;
constructing multi-dimensional tree information corresponding to the signal lamp based on the point set information of the signal lamp;
the step of determining the target signal lamp information which is closest to the vehicle within the preset yaw angle of the vehicle based on the vehicle position information, the course information and the signal lamp information comprises the following steps:
screening out multi-dimensional tree information within a preset yaw angle of the vehicle from the multi-dimensional tree information corresponding to the signal lamp based on the vehicle position information;
and carrying out nearest neighbor search processing on the multi-dimensional tree information within the preset yaw angle of the vehicle to obtain the target signal lamp information.
7. The method according to claim 1, wherein the target signal lamp information includes correspondence of different lane identification information and sub-signal lamp identification information, and sub-signal lamp state information corresponding to different sub-signal lamp identification information;
the determining the target sub-signal lamp information corresponding to the lane information from the at least one piece of sub-signal lamp information corresponding to the target signal lamp information comprises:
extracting identification information of a lane where the vehicle is located from the lane information;
matching target sub-signal lamp identification information corresponding to the identification information of the lane where the vehicle is located from the corresponding relation;
and extracting the target sub-signal lamp state information corresponding to the target sub-signal lamp identification information from the target signal lamp information.
8. The method according to claim 1, wherein after determining the target sub-signal light information corresponding to the lane information from among the at least one sub-signal light information corresponding to the target signal light information, the method further comprises:
acquiring path planning information of the vehicle;
and performing automatic driving control on the vehicle based on the target sub-signal lamp information and the path planning information.
9. The method according to claim 1, wherein after determining the target sub-signal light information corresponding to the lane information from among the at least one sub-signal light information corresponding to the target signal light information, the method further comprises:
sending the target sub-signal lamp information to a related vehicle of the vehicle so that the related vehicle performs automatic driving control based on the target sub-signal lamp information and the path planning information of the related vehicle; the associated vehicle refers to a vehicle located in the same lane as the vehicle.
10. A signal light detecting apparatus, characterized in that the apparatus comprises:
the first information acquisition module is used for acquiring lane line information, signal lamp information and vehicle pose information;
the second information acquisition module is used for acquiring target lane information of a target lane where the vehicle is located according to the vehicle pose information and the lane line information;
the target signal lamp information determining module is used for determining target signal lamp information according to the vehicle pose information and the signal lamp information, wherein the target signal lamp information is the information of a target signal lamp closest to the vehicle;
and the target sub-signal lamp information determining module is used for determining target sub-signal lamp information corresponding to the target lane information from at least one piece of sub-signal lamp information corresponding to the target signal lamp information.
11. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the signal light detection method of any one of claims 1 to 9.
12. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the signal light detection method of any one of claims 1 to 9.
CN202210279875.4A 2022-03-21 2022-03-21 Signal lamp detection method, device, equipment and medium Active CN114743395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210279875.4A CN114743395B (en) 2022-03-21 2022-03-21 Signal lamp detection method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114743395A true CN114743395A (en) 2022-07-12
CN114743395B CN114743395B (en) 2024-03-08

Family

ID=82277083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210279875.4A Active CN114743395B (en) 2022-03-21 2022-03-21 Signal lamp detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114743395B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218885A (en) * 2023-10-11 2023-12-12 南栖仙策(南京)高新技术有限公司 Traffic signal lamp fault detection method, device, equipment and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013029427A (en) * 2011-07-28 2013-02-07 Aisin Aw Co Ltd Stop line detection system, stop line detection device, stop line detection method, and computer program
JP2017091151A (en) * 2015-11-09 2017-05-25 トヨタ自動車株式会社 Drive support apparatus
CN108335510A (en) * 2018-03-21 2018-07-27 北京百度网讯科技有限公司 Traffic lights recognition methods, device and equipment
CN109325390A (en) * 2017-08-01 2019-02-12 郑州宇通客车股份有限公司 A kind of localization method combined based on map with FUSION WITH MULTISENSOR DETECTION and system
CN109829351A (en) * 2017-11-23 2019-05-31 华为技术有限公司 Detection method, device and the computer readable storage medium of lane information
CN110889965A (en) * 2019-11-22 2020-03-17 北京京东乾石科技有限公司 Unmanned vehicle control method and device and unmanned vehicle
CN111422204A (en) * 2020-03-24 2020-07-17 北京京东乾石科技有限公司 Automatic driving vehicle passing judgment method and related equipment
US20200272835A1 (en) * 2018-08-22 2020-08-27 Beijing Sensetime Technology Development Co., Ltd. Intelligent driving control method, electronic device, and medium
CN112580571A (en) * 2020-12-25 2021-03-30 北京百度网讯科技有限公司 Vehicle running control method and device and electronic equipment
WO2021094799A1 (en) * 2019-11-12 2021-05-20 日産自動車株式会社 Traffic signal recognition method and traffic signal recognition device
WO2021115455A1 (en) * 2019-12-13 2021-06-17 上海商汤临港智能科技有限公司 Traffic information identification and smart traveling method, device, apparatus, and storage medium
CN113129606A (en) * 2020-01-15 2021-07-16 宁波吉利汽车研究开发有限公司 Road signal lamp early warning method, device and medium
CN113428178A (en) * 2021-07-24 2021-09-24 中汽创智科技有限公司 Control method, device and medium for automatically driving vehicle and vehicle
CN113682313A (en) * 2021-08-11 2021-11-23 中汽创智科技有限公司 Lane line determination method, lane line determination device and storage medium
EP3944212A2 (en) * 2020-12-22 2022-01-26 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus of assisting vehicle driving, electronic device and storage medium

Also Published As

Publication number Publication date
CN114743395B (en) 2024-03-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant