CN116468868A - Traffic signal lamp graph building method, device, equipment and storage medium - Google Patents

Traffic signal lamp graph building method, device, equipment and storage medium

Info

Publication number
CN116468868A
Authority
CN
China
Prior art keywords
signal lamp
current
frame
boundary
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310478031.7A
Other languages
Chinese (zh)
Inventor
范云凤
张肖
崔留争
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202310478031.7A priority Critical patent/CN116468868A/en
Publication of CN116468868A publication Critical patent/CN116468868A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T17/05: Geographic models
          • G06T7/00: Image analysis
            • G06T7/70: Determining position or orientation of objects or cameras
          • G06T2207/00: Indexing scheme for image analysis or image enhancement
            • G06T2207/10: Image acquisition modality
              • G06T2207/10016: Video; image sequence
            • G06T2207/30: Subject of image; context of image processing
              • G06T2207/30236: Traffic on road, railway or crossing
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V20/00: Scenes; scene-specific elements
            • G06V20/50: Context or environment of the image
              • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
                • G06V20/54: Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
          • Y02T10/00: Road transport of goods or passengers
            • Y02T10/10: Internal combustion engine [ICE] based vehicles
              • Y02T10/40: Engine management systems


Abstract

The invention belongs to the technical field of vehicles and discloses a traffic signal lamp mapping method, device, equipment, and storage medium. The method comprises the following steps: acquiring a current observation image frame, and determining a plurality of current signal lamp bounding boxes and the current observation information of each current signal lamp bounding box; acquiring multi-frame historical observation image frames captured by the same shooting device, and determining the historical signal lamp bounding boxes; determining the signal lamp corresponding to each current signal lamp bounding box based on the current and historical signal lamp bounding boxes; when no mapping information exists for the signal lamps, determining the historical observation information of each signal lamp; and obtaining the mapping information of each signal lamp from the current observation information and the historical observation information, so that each mapping node binds signal lamps according to that mapping information. In this way, accurate geographic positions of the signal lamps and related online mapping content are obtained, signal lamp binding is achieved, and effective information is provided for constructing a high-precision map in the cloud.

Description

Traffic signal lamp graph building method, device, equipment and storage medium
Technical Field
The present invention relates to the field of vehicle technologies, and in particular, to a traffic signal lamp mapping method, device, equipment and storage medium.
Background
For advanced driver-assistance functions in urban scenes, an important capability is to quickly and accurately identify traffic lights and bind them to the corresponding lanes, so that the planning module can make the appropriate decision, such as waiting or going straight. When a high-precision map exists, a traffic signal lamp can be bound to the correct lane even without its 3D position and other mapping-related information. When no high-precision map exists, however, the 3D position and other mapping-related information are essential for binding the traffic signal lamp to the correct lane, and the correct 3D position and mapping-related information are likewise useful for building the corresponding cloud map. Therefore, a method is needed that constructs traffic signal lamp mapping information for subsequent signal lamp mapping in the absence of a high-precision map.
The foregoing is provided merely to facilitate understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a traffic signal lamp mapping method, device, equipment and storage medium, and aims to solve the technical problem of how to construct traffic signal lamp mapping information so as to carry out subsequent signal lamp mapping when a high-precision map does not exist.
In order to achieve the above purpose, the present invention provides a traffic signal lamp mapping method, which comprises the following steps:
acquiring a current observation image frame acquired by a shooting device, and determining a plurality of current signal lamp boundary frames and current observation information of each current signal lamp boundary frame existing in the current observation image frame;
acquiring multi-frame historical observation image frames acquired by the same shooting device, and determining a plurality of historical signal lamp boundary frames existing in each frame of historical observation image frame;
associating each current signal lamp boundary frame with a plurality of historical signal lamp boundary frames to determine signal lamps respectively corresponding to each current signal lamp boundary frame;
when the mapping information of each signal lamp does not exist, determining the historical observation information of each signal lamp according to multi-frame historical observation image frames;
and carrying out feature fusion according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp to obtain the map building information of each signal lamp, so that each map building node carries out signal lamp binding according to the map building information of each signal lamp.
Optionally, the acquiring the current observation image frame acquired by the shooting device, determining a plurality of current signal lamp bounding boxes existing in the current observation image frame and current observation information of each current signal lamp bounding box, includes:
determining the boundary frame sizes and the image coordinates of a plurality of current signal lamp boundary frames existing in the current observation image frame;
when the boundary frame size of each current signal lamp boundary frame is a preset size, determining the image edge position of each current signal lamp boundary frame according to the image coordinates of each current signal lamp boundary frame;
and when the image edge position of each current signal lamp boundary frame is not the preset edge position, determining the current observation information of each current signal lamp boundary frame according to the current observation image frame.
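As an illustrative aid (not part of the claims), the size and edge checks above can be sketched as follows; the function names, the preset size, and the tolerance and margin values are hypothetical:

```python
# Hypothetical sketch of the preprocessing step: keep a detected bounding
# box only if it matches the expected (preset) size and does not touch
# the image border. Sizes and margins are illustrative values.

def is_valid_box(box, frame_w, frame_h, preset_wh=(20, 60), tol=0.5, margin=5):
    """box = (x, y, w, h) in image coordinates, origin at a frame corner."""
    x, y, w, h = box
    pw, ph = preset_wh
    # size check: within a tolerance band around the preset size
    size_ok = abs(w - pw) <= tol * pw and abs(h - ph) <= tol * ph
    # edge check: the box must not lie at the image edge
    edge_ok = (x >= margin and y >= margin and
               x + w <= frame_w - margin and y + h <= frame_h - margin)
    return size_ok and edge_ok

def filter_boxes(boxes, frame_w, frame_h):
    """Drop unreasonable boxes; an empty result means the frame is skipped."""
    return [b for b in boxes if is_valid_box(b, frame_w, frame_h)]
```

A frame whose boxes all fail these checks would be discarded and the next observation image frame awaited, as the method describes.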
Optionally, the associating each current signal lamp boundary box with a plurality of history signal lamp boundary boxes, and determining the signal lamp corresponding to each current signal lamp boundary box respectively includes:
acquiring vehicle motion information;
determining the moving speeds of a plurality of historical signal lamp boundary boxes according to the vehicle motion information;
position prediction is carried out on each history signal lamp boundary frame according to the moving speed of the plurality of history signal lamp boundary frames, and the predicted boundary frame position of each history signal lamp boundary frame is determined;
and calculating the position relation between the position of each prediction boundary frame and the boundary frame of each current signal lamp, and determining the signal lamp respectively corresponding to each current signal lamp boundary frame.
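The association step above (ego-motion based position prediction followed by a positional comparison) can be sketched as follows; a constant-velocity image-plane model and a greedy IoU match are assumed here as stand-ins for the position relation calculation, which the claims leave unspecified:

```python
def predict_box(box, velocity, dt):
    """Shift a historical bounding box by the image-plane velocity induced
    by ego-motion (a simple constant-velocity model)."""
    x, y, w, h = box
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt, w, h)

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def associate(current_boxes, predicted_boxes, threshold=0.3):
    """Greedily match each current box to the best-overlapping predicted
    historical box; unmatched current boxes are treated as new lamps."""
    matches, used = {}, set()
    for i, cur in enumerate(current_boxes):
        best_j, best_iou = None, threshold
        for j, pred in enumerate(predicted_boxes):
            if j in used:
                continue
            v = iou(cur, pred)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

A matched pair means the current bounding box observes the same signal lamp as the historical bounding box it matched.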
Optionally, the feature fusion is performed according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp, so as to obtain the mapping information of each signal lamp, which includes:
determining a plurality of observation states of each signal lamp according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp;
determining the observation attribute of each signal lamp according to a plurality of observation states of each signal lamp;
and triangulating according to the observation attribute of each signal lamp to obtain the mapping information of each signal lamp.
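The reduction of many observation states to a single observation attribute can be illustrated by a per-field majority vote; this is a hypothetical stand-in for the patent's fusion, sensible for time-stable fields such as direction or lamp type (not the instantaneous colour), and the dict layout is illustrative:

```python
from collections import Counter

def fuse_states(states):
    """Collapse a lamp's observation states (current + historical) into a
    single observed attribute by majority vote per field. `states` is a
    list of dicts such as {"color": "red", "direction": "straight"}."""
    attribute = {}
    for field in {k for s in states for k in s}:
        values = [s[field] for s in states if s.get(field) is not None]
        if values:
            # most frequently observed value wins
            attribute[field] = Counter(values).most_common(1)[0][0]
    return attribute
```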
Optionally, triangulating according to the observation attribute of each signal lamp to obtain the mapping information of each signal lamp, including:
determining the observation position of each signal lamp according to the observation attribute of each signal lamp;
grouping the signal lamps according to the observation positions of the signal lamps, and detecting whether the signal lamps have the same group of signal lamps or not;
when the same group of signal lamps exist in each signal lamp, synchronous triangularization is carried out according to the same group position of the same group of signal lamps and the observation position of each signal lamp, and the optimized position of each signal lamp is output;
according to the optimized positions of the signal lamps, carrying out position updating on the observed attributes of the signal lamps to obtain optimized attributes of the signal lamps;
and carrying out attribute association according to the optimized attribute of each signal lamp to obtain the mapping information of each signal lamp.
Optionally, the grouping of the signal lamps according to the observing positions of the signal lamps, and after detecting whether the signal lamps have the same group of signal lamps, further includes:
when the same group of signal lamps do not exist in each signal lamp, acquiring a historical observation image frame and a current observation image frame of each signal lamp;
determining a boundary frame position queue of each signal lamp according to the historical observation image frame and the current observation image frame of each signal lamp;
constructing a position matrix according to the boundary frame position queues of the signal lamps and solving to obtain the optimized positions of the signal lamps;
and carrying out position update on the observation attribute of each signal lamp according to the optimized position of each signal lamp to obtain the optimized attribute of each signal lamp.
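The "position matrix" construction and solution described above is consistent with classical linear (DLT) triangulation, sketched below under the assumption that a 3x4 projection matrix is available for each observation in the bounding-box position queue:

```python
import numpy as np

def triangulate(projections, points):
    """Linear (DLT) triangulation: each observation of a lamp's bounding-box
    centre contributes two rows to a stacked matrix, and the 3D point is
    its null vector. `projections` are 3x4 camera matrices, `points` are
    normalised image coordinates (u, v) per observation."""
    rows = []
    for P, (u, v) in zip(projections, points):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # null vector of A = last right-singular vector
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenise to the optimized 3D position
```

With two or more frames observed from different vehicle poses, the solve yields the optimized position of the lamp that the method then writes back into its observation attribute.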
Optionally, the performing attribute association according to the optimized attribute of each signal lamp to obtain the mapping information of each signal lamp includes:
acquiring acquisition signal lamps and acquisition attributes of the acquisition signal lamps determined by other shooting devices;
performing signal lamp association on the acquisition signal lamps and each signal lamp, and determining acquisition attributes and optimization attributes of each signal lamp according to association results;
acquiring shooting parameters of the shooting devices corresponding to the acquisition attribute and the optimization attribute;
determining target attributes in acquisition attributes and optimization attributes of all signal lamps according to the shooting parameters;
and obtaining the mapping information of each signal lamp according to the target attribute of each signal lamp.
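Selecting a target attribute by shooting parameters can be illustrated with a simple heuristic, assumed here to prefer the camera with the longest focal length (a telephoto camera resolves distant lamps better); the data layout and the criterion are hypothetical:

```python
def select_target_attribute(candidates):
    """Pick, per lamp, the attribute whose source camera has the longest
    focal length: an illustrative stand-in for selection by shooting
    parameters. `candidates` maps a lamp id to a list of
    (focal_length_px, attribute_dict) pairs from different cameras."""
    return {
        lamp_id: max(options, key=lambda fo: fo[0])[1]
        for lamp_id, options in candidates.items()
    }
```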
In addition, in order to achieve the above purpose, the present invention also provides a traffic signal lamp mapping device, where the traffic signal lamp mapping device includes:
the acquisition module is used for acquiring the current observation image frames acquired by the shooting device and determining a plurality of current signal lamp boundary frames and current observation information of each current signal lamp boundary frame in the current observation image frames;
the acquisition module is also used for acquiring multi-frame historical observation image frames acquired by the same shooting device and determining a plurality of historical signal lamp boundary boxes existing in each frame of historical observation image frame;
the association module is used for associating each current signal lamp boundary frame with a plurality of historical signal lamp boundary frames and determining signal lamps corresponding to the current signal lamp boundary frames respectively;
the determining module is used for determining the historical observation information of each signal lamp according to the multi-frame historical observation image frame when the mapping information of each signal lamp does not exist;
and the fusion module is used for carrying out feature fusion according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp, so that each mapping node carries out signal lamp binding according to the mapping information of each signal lamp.
In addition, in order to achieve the above object, the present invention also provides a traffic signal lamp mapping apparatus, including: the system comprises a memory, a processor and a traffic light mapping program stored on the memory and capable of running on the processor, wherein the traffic light mapping program is configured to realize the traffic light mapping method.
In addition, in order to achieve the above object, the present invention also proposes a storage medium, on which a traffic signal mapping program is stored, which when executed by a processor, implements the traffic signal mapping method as described above.
The invention acquires the current observation image frame captured by a shooting device and determines a plurality of current signal lamp bounding boxes present in it and the current observation information of each; acquires multi-frame historical observation image frames captured by the same shooting device and determines the historical signal lamp bounding boxes present in each frame; associates each current bounding box with the historical bounding boxes to determine the signal lamp corresponding to each current bounding box; when no mapping information exists for the signal lamps, determines the historical observation information of each signal lamp from the multi-frame historical observation image frames; and performs feature fusion on the current observation information of each current bounding box and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp, so that each mapping node carries out signal lamp binding according to that mapping information.
In this way, after the current signal lamp bounding boxes are detected, signal lamp association is performed against the historical bounding boxes to determine the signal lamp corresponding to each current bounding box. When no mapping information exists for a signal lamp, its mapping information is determined from its historical observation information and the current observation information of its bounding box, yielding the accurate geographic position of each signal lamp and related online mapping content; signal lamp binding is thus achieved, and effective information is provided for building a high-precision map in the cloud.
Drawings
FIG. 1 is a schematic diagram of a construction device of a traffic signal lamp in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first embodiment of a method for mapping traffic signals according to the present invention;
FIG. 3 is a schematic diagram of a bounding box of an example of a method of mapping traffic signals according to the present invention;
FIG. 4 is a flow chart of a second embodiment of a method for mapping traffic signals according to the present invention;
FIG. 5 is a schematic overall flow chart of an embodiment of a method for mapping traffic signals according to the present invention;
fig. 6 is a block diagram of a first embodiment of a traffic signal mapping apparatus according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a construction device of a traffic signal lamp in a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the traffic signal lamp mapping apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the traffic signal mapping apparatus, and may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include an operating system, a network communication module, a user interface module, and a mapping program of traffic lights.
In the traffic light mapping apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The apparatus invokes the traffic signal lamp mapping program stored in the memory 1005 through the processor 1001 and executes the traffic signal lamp mapping method provided by the embodiments of the present invention.
The embodiment of the invention provides a traffic signal lamp mapping method, and referring to fig. 2, fig. 2 is a flow chart of a first embodiment of the traffic signal lamp mapping method.
In this embodiment, the method for mapping the traffic signal lamp includes the following steps:
step S10: and acquiring a current observation image frame acquired by a shooting device, and determining a plurality of current signal lamp boundary boxes and current observation information of each current signal lamp boundary box existing in the current observation image frame.
It should be noted that, the execution body of the present embodiment is a control terminal of a vehicle, and the vehicle includes, but is not limited to, the control terminal, a plurality of photographing devices, and a vehicle driving unit. The plurality of shooting devices are used for acquiring surrounding environment images and in-vehicle environment images in the running process of the vehicle.
It can be understood that a shooting device on the vehicle captures images and outputs the frame acquired at the current moment; this frame is the current observation image frame. Because one shooting device may capture several signal lamps in a single frame while the vehicle is moving, the contour of each signal lamp defines a signal lamp bounding box. When the control terminal acquires the current observation image frame captured by the shooting device, it performs contour recognition and determines the contours of the signal lamps present in the frame; these contours are the current signal lamp bounding boxes. For example, as shown in fig. 3, fig. 3 is a current observation image frame a captured by shooting device a in which 6 signal lamps appear, with every 3 lamps mounted on the same pole; contour recognition by the control terminal yields the 6 current signal lamp bounding boxes 1-6.
In a specific implementation, since a captured frame records the state displayed by each signal lamp at the current moment, the state information corresponding to each current signal lamp bounding box is determined from the current observation image frame, where the state information includes, but is not limited to, color, direction, and countdown. The state information of each signal lamp, the geographic position at which the shooting device captured the current observation image frame, and the image coordinates of each bounding box together form the current observation information. For example, suppose bounding boxes for 6 signal lamps are detected in the current observation image frame: two display green, with state information color: green; two display a red straight-ahead arrow, with state information color: red, direction: straight; and the remaining two display a 10 s countdown, with state information countdown: 10 s.
Specifically, the step S10 includes: determining the boundary frame sizes and the image coordinates of a plurality of current signal lamp boundary frames existing in the current observation image frame; when the boundary frame size of each current signal lamp boundary frame is a preset size, determining the image edge position of each current signal lamp boundary frame according to the image coordinates of each current signal lamp boundary frame; and when the image edge position of each current signal lamp boundary frame is not the preset edge position, determining the current observation information of each current signal lamp boundary frame according to the current observation image frame.
In order to ensure the accuracy of the subsequently determined mapping information, data preprocessing is required after the current observation image frame is acquired. When a current signal lamp bounding box in the frame is unreasonable, the frame is not processed further (i.e., no signal lamp attributes are generated); instead, the next observation image frame captured by the shooting device is acquired, and when the signal lamp bounding boxes in that frame are reasonable, attribute generation is performed on them and that frame is treated as the current observation image frame. This ensures the usability of the current observation image frame.
It will be appreciated that the reasonableness of a current signal lamp bounding box is checked specifically via its size and its position in the image. The sizes and image coordinates of the current signal lamp bounding boxes are determined from the current observation image frame; the coordinate system is two-dimensional and is established at a corner of the frame, for example the lower-left corner.
In a specific implementation, the preset size refers to the standard size of a signal lamp bounding box. When the sizes of all current signal lamp bounding boxes in the current observation image frame match the preset size, detection continues using the image coordinates of each box. When a bounding box does not match the preset size, the shooting device was abnormal when capturing the frame, or the signal lamp was not fully recorded in it; in that case no further reasonableness check is needed, and the next observation image frame from the shooting device is awaited.
After the bounding box sizes are confirmed to match the preset size, the position of each current signal lamp bounding box within the current observation image frame is determined from its image coordinates; this is the image edge position of the box. When the image edge position of each box is not the preset edge position, the boxes do not lie at the edge of the frame, and the frame can be used for attribute generation: the state information of each current signal lamp bounding box is determined from the frame, and the current observation information is obtained by combining it with the geographic position at which the shooting device captured the frame and the current geographic position of each signal lamp calculated from its image coordinates. That is, the current observation information includes, but is not limited to, the state information of each current signal lamp bounding box and the current geographic position of each signal lamp.
Step S20: and acquiring multi-frame historical observation image frames acquired by the same shooting device, and determining a plurality of historical signal lamp boundary boxes existing in each frame of historical observation image frame.
When the control terminal generates signal lamp attributes, it first processes the observation image frames of each shooting device separately and then associates information across shooting devices. For example, after 6 current signal lamp bounding boxes are detected in the current observation image frame of shooting device a, the signal attributes of the corresponding signal lamps are determined from the current and historical observation image frames captured by shooting device a. It is then determined whether signal lamps 1-6 also appear among the signal lamps detected by the other shooting devices; if so, the signal lamp attributes of lamps 1-6 determined from the other devices' observation image frames are acquired, attribute selection is performed between those and the attributes determined from shooting device a's frames, and the target attributes of lamps 1-6 are determined, thereby obtaining the mapping information of signal lamps 1-6.
It can be understood that after detecting the signal lamp bounding boxes in the current observation image frame of a photographing device, the control terminal acquires the historical observation image frames of the same device over a preceding period of time and outputs the historical signal lamp bounding boxes present in each historical frame. For example, if the acquisition time of the current observation image frame is 08:08:00, the frames acquired between 08:07:45 and 08:08:00 are retrieved. The length of this period need not be 15 s; the present embodiment is not limited thereto, and 15 s is used here only as an example.
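The frame retrieval described above can be sketched as a simple per-camera timestamped buffer. This is an illustrative assumption, not the patent's implementation; the class name and the 15 s window are made up for the example:

```python
from collections import deque


class FrameHistory:
    """Keep the last `window_s` seconds of observation frames from one camera."""

    def __init__(self, window_s: float = 15.0):
        self.window_s = window_s
        self.frames = deque()  # (timestamp, frame) pairs, oldest first

    def add(self, ts: float, frame) -> None:
        """Append the newest frame and evict frames older than the window."""
        self.frames.append((ts, frame))
        while self.frames and ts - self.frames[0][0] > self.window_s:
            self.frames.popleft()

    def history(self) -> list:
        """Return the retained frames, oldest first."""
        return [f for _, f in self.frames]
```

With a 15 s window, a frame stamped 0 s is evicted once a frame stamped 20 s arrives, while a frame stamped 10 s is kept.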
Step S30: and associating each current signal lamp boundary frame with a plurality of historical signal lamp boundary frames to determine signal lamps respectively corresponding to each current signal lamp boundary frame.
In addition, the signal lamp corresponding to a current signal lamp bounding box may already have been detected in the historical observation image frames. To find where each current bounding box appeared in those frames — that is, to determine which historical bounding boxes correspond to the same signal lamp as each current bounding box — each current signal lamp bounding box needs to be associated with the plurality of historical signal lamp bounding boxes.
Specifically, the step S30 includes: acquiring vehicle motion information; determining the moving speeds of a plurality of historical signal lamp boundary boxes according to the vehicle motion information; position prediction is carried out on each history signal lamp boundary frame according to the moving speed of the plurality of history signal lamp boundary frames, and the predicted boundary frame position of each history signal lamp boundary frame is determined; and calculating the position relation between the position of each prediction boundary frame and the boundary frame of each current signal lamp, and determining the signal lamp respectively corresponding to each current signal lamp boundary frame.
It can be understood that the vehicle motion information includes, but is not limited to, the running speed and running position of the vehicle at each moment. Since the photographing device moves with the vehicle, the vehicle's running speed at each moment can be taken as the moving speed of the photographing device, which in turn can be taken as the moving speed of each historical signal lamp bounding box. The position of each bounding box at the current moment can then be predicted from the image position of the historical bounding box in the previous frame and its moving speed; this predicted position at the current moment is the predicted bounding box position.
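As a minimal sketch of this prediction step, assuming a toy one-dimensional motion model in which the ego speed is converted to an apparent vertical drift of the box in the image (the `BBox` class, the pure vertical drift, and the `pixels_per_meter` scale are all illustrative assumptions, not the patent's projection model):

```python
from dataclasses import dataclass


@dataclass
class BBox:
    x: float  # image x of the top-left corner, in pixels
    y: float  # image y of the top-left corner, in pixels
    w: float  # width in pixels
    h: float  # height in pixels


def predict_bbox(prev: BBox, vehicle_speed_mps: float, dt: float,
                 pixels_per_meter: float = 40.0) -> BBox:
    """Shift a historical box by the camera's apparent motion.

    As the vehicle approaches a signal light, the light drifts upward in the
    image; the drift magnitude is approximated as speed * dt scaled into
    pixels. A real system would project through the camera intrinsics.
    """
    shift_px = vehicle_speed_mps * dt * pixels_per_meter
    return BBox(prev.x, prev.y - shift_px, prev.w, prev.h)
```

For instance, a box at image row 200 observed one frame (0.1 s) ago, with the vehicle at 10 m/s, is predicted 40 pixels higher in the current frame.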
In a specific implementation, the intersection-over-union (IoU) between each current signal lamp bounding box and each predicted bounding box position is calculated in turn, and the current bounding box with the largest IoU against a predicted position is obtained. The angle difference between the angle information defined by the predicted bounding box position in image coordinates and the angle information defined by the current bounding box in image coordinates is then calculated. When this difference falls within the preset range, the current bounding box and the historical bounding box can be associated, thereby determining the signal lamp, identified from the historical observation images, that corresponds to each current bounding box.
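A minimal sketch of the IoU-based association follows. It implements only the overlap test with a greedy best-match rule; the patent's additional angle-difference check is omitted here, and the threshold value is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def associate(current_boxes, predicted_boxes, iou_thresh=0.3):
    """Match each current box to the predicted box with the highest IoU.

    Returns {current_index: predicted_index} for matches clearing the
    threshold; unmatched current boxes are treated as newly seen lights.
    """
    matches = {}
    for i, cur in enumerate(current_boxes):
        best_j, best_iou = None, iou_thresh
        for j, pred in enumerate(predicted_boxes):
            v = iou(cur, pred)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matches[i] = best_j
    return matches
```

A current box at (0, 0, 10, 10) matches a predicted box at (1, 1, 11, 11) (IoU about 0.68) and ignores a distant one, so the corresponding historical light is reused rather than a new one created.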
Step S40: and when the mapping information of each signal lamp does not exist, determining the historical observation information of each signal lamp according to the multi-frame historical observation image frames.
It should be noted that the mapping information refers to the accurate 3D position (i.e., geographical position) of a signal lamp together with its indication attributes; the indication attributes include, but are not limited to, color, direction, and countdown. For example, the mapping information of signal lamp A might be: position xx, a straight-ahead color indicator lamp.
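The mapping record just described can be sketched as a small data structure. The field names and types below are illustrative assumptions for readability, not a format defined by the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class MappingInfo:
    """One signal light's mapping information: accurate 3D position plus
    indication attributes (color, direction, optional countdown)."""
    position_3d: Tuple[float, float, float]  # e.g. local ENU or geographic coords
    color: str                               # e.g. "red" / "green" / "yellow"
    direction: str                           # e.g. "straight" / "left" / "right"
    countdown: Optional[int] = None          # seconds, when the light shows one
```

A downstream mapping node would receive a list of such records and bind each light to its lane.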
It can be understood that after the association of the current signal lamp bounding boxes is completed and the signal lamp corresponding to each bounding box is determined, the control terminal determines whether mapping information for each signal lamp was already generated before the current moment. If so, the mapping information is sent directly to the downstream mapping nodes, so that each mapping node binds each signal lamp to its corresponding lane according to the mapping information. The mapping nodes include, but are not limited to, other vehicles and the cloud side; the cloud side can also use the mapping information of each signal lamp to build a high-precision map.
In a specific implementation, if no mapping information for a signal lamp was generated before the current time, all state information recorded for that lamp before the current time is detected from the multi-frame historical observation image frames. This state information, together with the geographical position information of the photographing device when each historical frame was acquired and the historical geographical position calculated from the image coordinates of each historical signal lamp bounding box, constitutes the historical observation information. That is, the historical observation information includes, but is not limited to, all state information of each signal lamp before the current time and the historical geographical positions of each signal lamp.
Step S50: and carrying out feature fusion according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp to obtain the map building information of each signal lamp, so that each map building node carries out signal lamp binding according to the map building information of each signal lamp.
It should be noted that the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp are fused to determine the observation attributes of each signal lamp. The observation attributes comprise the indication attributes of the lamp and the 3D position determined after preliminary fusion. The 3D position in the observation attributes is then refined by position optimization to obtain the accurate 3D position of each lamp, and the observation attributes are updated with it — that is, the accurate 3D position replaces the preliminarily fused 3D position — thereby yielding the mapping information of each signal lamp.
It can be understood that the mapping information of each signal lamp is sent to a plurality of mapping nodes, so that each mapping node binds each signal lamp to a corresponding lane according to the mapping information of each signal lamp, and the binding of the signal lamps is realized.
In this embodiment, a current observation image frame acquired by a photographing device is obtained, and the current signal lamp bounding boxes present in it, together with the current observation information of each bounding box, are determined; multi-frame historical observation image frames acquired by the same photographing device are obtained, and the historical signal lamp bounding boxes present in each historical frame are determined; each current bounding box is associated with the historical bounding boxes to determine the signal lamp corresponding to each current bounding box; when no mapping information exists for a signal lamp, its historical observation information is determined from the multi-frame historical observation image frames; and feature fusion is performed on the current observation information of each current bounding box and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp, so that each mapping node performs signal lamp binding according to the mapping information.
By the above method, after the current signal lamp bounding boxes are detected, signal lamp association is performed based on the historical bounding boxes to determine the signal lamp corresponding to each current bounding box. When no mapping information exists for a signal lamp, it is determined from the lamp's historical observation information and the current observation information of the current bounding box, yielding the accurate geographical position of each signal lamp and the related online mapping content. Signal lamp binding is thereby realized, and effective information is provided for cloud construction of a high-precision map.
Referring to fig. 4, fig. 4 is a flowchart of a second embodiment of a method for constructing a traffic signal according to the present invention.
Based on the above first embodiment, the step S50 in the traffic signal mapping method of the present embodiment includes:
step S501: and determining a plurality of observation states of each signal lamp according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp.
The observation state includes at least one indication state and at least one signal lamp position, where the indication state is the state information. Information statistics are performed on the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp, determining all indication states and signal lamp positions of each lamp recorded across the multi-frame historical observation image frames and the current observation image frame. For example, indication state (1) of signal lamp 1, determined from the current observation information, is color: red, direction: straight, position: xx; indication state (2), determined from the historical observation information, is color: green, direction: straight, position: xx.
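The statistics step above can be sketched by pooling the per-frame observations and counting distinct indication states; the tuple encoding of a state is an illustrative assumption:

```python
from collections import Counter


def aggregate_states(observations):
    """Count distinct indication states pooled from current and historical frames.

    observations: iterable of (color, direction) tuples, one per frame in
    which the light was seen. The counts show which states were recorded
    and how often, feeding the attribute determination in the next step.
    """
    return Counter(observations)
```

For signal lamp 1 above, pooling red/straight and green/straight observations yields two distinct indication states.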
It can be understood that the current geographic position of each signal lamp can be calculated by using the geographic position information of the shooting device in the current observation information when the current observation image frame is acquired and the image coordinates of the boundary frame of each current signal lamp, and the historical geographic position of each signal lamp can also be calculated by using the geographic position information of each historical observation image frame and the image coordinates of the boundary frame of each historical signal lamp. Since there may be errors in the position calculation, a plurality of signal positions may be obtained for each signal after statistics.
Step S502: and determining the observation attribute of each signal lamp according to the plurality of observation states of each signal lamp.
When a plurality of signal lamp positions exist, all positions are combined by a weighted average over the 3D position coordinates, and the preliminarily fused 3D position of each signal lamp is output; the indication attribute of each lamp is determined from the indication states among the plurality of observation states. For example, if signal lamp 1 has 2 indication states — (1) color: red, direction: straight; (2) color: green, direction: straight — then the indication attribute of signal lamp 1 is a straight-ahead color lamp (red/green).
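The weighted averaging of the repeated 3D observations can be sketched as follows; equal default weights are an assumption (a real system might weight by observation distance or detection confidence):

```python
def fuse_positions(positions, weights=None):
    """Weighted mean of repeated 3D observations of one signal light.

    positions: list of (x, y, z) tuples, one per observation
    weights:   optional per-observation weights; defaults to equal weighting
    Returns the preliminarily fused 3D position as an (x, y, z) tuple.
    """
    if weights is None:
        weights = [1.0] * len(positions)
    total = sum(weights)
    return tuple(
        sum(w * p[k] for w, p in zip(weights, positions)) / total
        for k in range(3)
    )
```

Two equally weighted observations at (0, 0, 0) and (2, 2, 2) fuse to (1, 1, 1); giving the first observation weight 3 pulls the fused point toward it.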
Step S503: and triangulating according to the observation attribute of each signal lamp to obtain the drawing information of each signal lamp.
In order to obtain the accurate 3D position of each signal lamp, triangulation is performed according to the observation attributes of each lamp, and position optimization is applied to the 3D position within those attributes. The observation attributes are then updated with the accurate 3D position — that is, the accurate 3D position replaces the preliminarily fused one — thereby yielding the mapping information of each signal lamp.
Specifically, the step S503 includes: determining the observation position of each signal lamp according to the observation attribute of each signal lamp; grouping the signal lamps according to the observation positions of the signal lamps, and detecting whether the signal lamps have the same group of signal lamps or not; when the same group of signal lamps exist in each signal lamp, synchronous triangularization is carried out according to the same group position of the same group of signal lamps and the observation position of each signal lamp, and the optimized position of each signal lamp is output; according to the optimized positions of the signal lamps, carrying out position updating on the observed attributes of the signal lamps to obtain optimized attributes of the signal lamps; and carrying out attribute association according to the optimized attribute of each signal lamp to obtain the drawing information of each signal lamp.
It can be understood that the observation position of each signal lamp is the preliminarily fused 3D position. Whether other signal lamps lie in the same row as a given lamp (that is, on the same signal pole) is judged from the observation positions; when other lamps exist on the same pole, the lamps belong to the same group.
In a specific implementation, a consistency constraint is imposed between the same-group positions of the same group of signal lamps and the observation position of each lamp, and the accurate 3D position of each lamp is obtained by joint optimization; this accurate 3D position is the optimized position of the signal lamp.
The observation positions in the observation attributes of each signal lamp are replaced by the optimized positions, yielding the position-optimized observation attributes of each lamp; these position-optimized observation attributes are the optimized attributes.
It can be understood that the optimized attributes are the attributes of each signal lamp determined by the control terminal from the observation image frames acquired by a single photographing device. In practice, while the vehicle is running, the same signal lamp may be captured by more than one photographing device, and the other devices will also output optimized attributes for it. After determining the optimized attributes of the lamps detected by the current device, the control terminal therefore needs to detect whether attributes for those same lamps were also generated from the observation image frames of other devices. If so, all attributes of each lamp detected in the current observation image frame of the current device need to be associated, and the attribute with the highest confidence among them is selected as the mapping information of that signal lamp.
For example, 3 signal lamps, numbered 1, 2, and 3, are detected in the current observation image frame of photographing device A, and their optimized attributes are determined from the current and historical observation image frames acquired by device A that contain them. The control terminal has also generated optimized attributes for lamps 1, 2, and 3 after processing the observation image frames of photographing devices B and C, so each of lamps 1, 2, and 3 now has 3 optimized attributes. Voting is performed over these attributes; if the highest-confidence attribute for each lamp is the one determined from the observation image frames of device B, then the optimized attributes of lamps 1, 2, and 3 determined from device B's frames are taken as the mapping information of lamps 1, 2, and 3.
Further, the grouping of the signal lamps according to the observation positions of the signal lamps, and detecting whether the signal lamps have the same group of signal lamps, further includes: when the same group of signal lamps do not exist in each signal lamp, acquiring a historical observation image frame and a current observation image frame of each signal lamp; determining a boundary frame position queue of each signal lamp according to the historical observation image frame and the current observation image frame of each signal lamp; constructing a position matrix according to the boundary frame position queues of the signal lamps and solving to obtain the optimized positions of the signal lamps; and carrying out position update on the observation attribute of each signal lamp according to the optimized position of each signal lamp to obtain the optimized attribute of each signal lamp.
In a specific implementation, when no same-group signal lamps exist, joint position optimization using the same-group positions and the observation positions cannot be performed, and the 3D position of each signal lamp must be triangulated normally. The specific process is as follows: the position of each lamp's bounding box in the multi-frame historical observation image frames and in the current observation image frame is determined, and an image-coordinate observation queue of the lamp's bounding box is output; this queue is the bounding box position queue of the lamp. Whether each lamp appears at different positions across the observation image frames is then detected from its position queue. If so, a triangulation solving matrix is constructed from the queue and decomposed by SVD, the optimal coordinates of all vertices of the lamp's bounding box are determined, and the calculation error and the size information of the bounding box are output. The position queue is filtered using the calculation error to remove observation positions with large errors, a triangulation solving matrix is constructed again from the filtered queue, and the optimal coordinates of all vertices of the bounding box are solved. These optimal coordinates are the optimized position of the lamp, and the coordinates at the optimized position are the geographical coordinates of all vertices of the lamp's bounding box.
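The "construct a solving matrix and decompose by SVD" step is the standard linear (DLT) triangulation. A minimal sketch for one point — one bounding box vertex — follows; the error-based filtering pass is omitted, and the function name is illustrative:

```python
import numpy as np


def triangulate_dlt(projections, points2d):
    """Linear (DLT) triangulation of one 3D point from N >= 2 views.

    projections: list of 3x4 camera projection matrices P_i (one per frame
                 in the observation queue, since the camera moves with the car)
    points2d:    list of (u, v) image observations, one per view
    Builds the homogeneous system A X = 0 from u*P[2]-P[0] and v*P[2]-P[1]
    rows, then takes the right-singular vector of the smallest singular
    value as the solution, as in classic multi-view geometry.
    """
    rows = []
    for P, (u, v) in zip(projections, points2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```

With two identity-intrinsics cameras one unit apart along x, a point at depth 5 on the first camera's axis projects to (0, 0) and (-0.2, 0), and the routine recovers (0, 0, 5).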
Further, the performing attribute association according to the optimized attribute of each signal lamp to obtain the mapping information of each signal lamp includes: acquiring acquisition signal lamps and acquisition attributes of the acquisition signal lamps determined by other shooting devices; performing signal lamp association on the acquisition signal lamps and each signal lamp, and determining acquisition attributes and optimization attributes of each signal lamp according to association results; acquiring shooting parameters of shooting devices corresponding to the acquisition attribute and the optimization attribute; determining target attributes in acquisition attributes and optimization attributes of all signal lamps according to the shooting parameters; and obtaining the mapping information of each signal lamp according to the target attribute of each signal lamp.
It should be noted that the control terminal may also detect a plurality of signal lamps from the observation image frames of other photographing devices and determine their optimized attributes; these lamps are the acquisition signal lamps, and their optimized attributes are the acquisition attributes. The acquisition signal lamps are associated with the signal lamps detected in the current observation image frame to determine whether each lamp detected in the current frame also appears among the acquisition signal lamps; when it does, the optimized attribute and the acquisition attribute corresponding to that lamp are obtained.
It can be understood that the shooting parameters of a photographing device during image acquisition affect the image quality of its observation image frames. The shooting parameters include, but are not limited to, pixel resolution, focal length range, and photosensitive element size; the better the shooting parameters, the better the image quality of the observation frames and the more accurate the generated signal lamp attributes. The shooting parameters can therefore be used to choose between the optimized attribute and the acquisition attribute: the attribute with the highest confidence among them is the target attribute, and the target attribute of each signal lamp is finally taken as its mapping information.
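A minimal sketch of this parameter-based selection follows. The scoring formula and its coefficients are invented for illustration — the patent only says better parameters imply higher confidence, not how the score is computed:

```python
def camera_score(resolution_mp: float, sensor_mm2: float) -> float:
    """Toy confidence from shooting parameters: higher resolution and a
    larger photosensitive element give higher-quality observations.
    The 0.7/0.3 weighting is an arbitrary illustrative choice."""
    return 0.7 * resolution_mp + 0.3 * sensor_mm2


def select_target_attribute(candidates):
    """candidates: list of (attribute, score) pairs, one per camera that
    observed the light; the attribute from the best-scored camera wins."""
    return max(candidates, key=lambda c: c[1])[0]
```

For example, an 8 MP camera with a larger sensor outscores a 2 MP one, so its attribute becomes the target attribute for the light.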
In a specific implementation, as shown in fig. 5, after acquiring a current observation image frame from a photographing device, the control terminal detects whether the frame is usable. When it is, the control terminal selects the same photographing device and acquires its historical observation image frames, determines the historical signal lamp bounding boxes, associates each current bounding box with the historical ones, and determines the signal lamp corresponding to each current bounding box. It then detects whether mapping information for each lamp has already been generated; when it has not, the observation attributes of each lamp are fused from the historical observation information and the current observation information, and position optimization is applied to output the optimized attributes. Finally, it detects whether association with other photographing devices is needed; if the observation image frames of other devices also yield optimized attributes for the same lamps, confidence voting is performed, and the mapping information of each signal lamp is finally obtained.
The embodiment determines a plurality of observation states of each signal lamp according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp; determining the observation attribute of each signal lamp according to a plurality of observation states of each signal lamp; and triangulating according to the observation attribute of each signal lamp to obtain the drawing information of each signal lamp. By the method, the observation attribute of each signal lamp is determined through the current observation information and the historical observation information of the boundary box of each current signal lamp, the observation position in the observation attribute of each signal lamp is optimized, the accuracy of the map building information is ensured, and a good foundation is laid for the subsequent use of the map building information of each signal lamp for signal lamp binding.
In addition, an embodiment of the invention further provides a storage medium, on which a traffic signal lamp mapping program is stored; when executed by a processor, the program implements the traffic signal lamp mapping method described above.
Because the storage medium adopts all the technical schemes of all the embodiments, the storage medium has at least all the beneficial effects brought by the technical schemes of the embodiments, and the description is omitted here.
Referring to fig. 6, fig. 6 is a block diagram of a first embodiment of a traffic signal lamp mapping device according to the present invention.
As shown in fig. 6, a traffic signal lamp mapping device provided by an embodiment of the present invention includes:
the acquisition module 10 is configured to acquire a current observation image frame acquired by the photographing device, and determine a plurality of current signal lamp bounding boxes and current observation information of each current signal lamp bounding box existing in the current observation image frame.
The acquiring module 10 is further configured to acquire multiple frames of historical observation image frames acquired by the same shooting device, and determine a plurality of historical signal lamp bounding boxes existing in each frame of historical observation image frame.
And the association module 20 is configured to associate each current signal lamp bounding box with a plurality of historical signal lamp bounding boxes, and determine signal lamps corresponding to each current signal lamp bounding box respectively.
The determining module 30 is configured to determine historical observation information of each signal lamp according to the multi-frame historical observation image frame when the mapping information of each signal lamp does not exist.
And the fusion module 40 is used for carrying out feature fusion according to the current observation information of the boundary box of each current signal lamp and the historical observation information of each signal lamp to obtain the map building information of each signal lamp, so that each map building node carries out signal lamp binding according to the map building information of each signal lamp.
It should be understood that the foregoing is illustrative only and is not limiting, and that in specific applications, those skilled in the art may set the invention as desired, and the invention is not limited thereto.
In this embodiment, a current observation image frame acquired by a photographing device is obtained, and the current signal lamp bounding boxes present in it, together with the current observation information of each bounding box, are determined; multi-frame historical observation image frames acquired by the same photographing device are obtained, and the historical signal lamp bounding boxes present in each historical frame are determined; each current bounding box is associated with the historical bounding boxes to determine the signal lamp corresponding to each current bounding box; when no mapping information exists for a signal lamp, its historical observation information is determined from the multi-frame historical observation image frames; and feature fusion is performed on the current observation information of each current bounding box and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp, so that each mapping node performs signal lamp binding according to the mapping information.
By the above method, after the current signal lamp bounding boxes are detected, signal lamp association is performed based on the historical bounding boxes to determine the signal lamp corresponding to each current bounding box. When no mapping information exists for a signal lamp, it is determined from the lamp's historical observation information and the current observation information of the current bounding box, yielding the accurate geographical position of each signal lamp and the related online mapping content. Signal lamp binding is thereby realized, and effective information is provided for cloud construction of a high-precision map.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, for technical details not described in detail in this embodiment, reference may be made to the traffic signal lamp mapping method provided in any embodiment of the present invention, and they are not repeated here.
In an embodiment, the acquisition module 10 is further configured to determine the bounding box sizes and image coordinates of a plurality of current signal lamp bounding boxes in the current observation image frame; when the bounding box size of each current signal lamp bounding box is a preset size, determine the image edge position of each current signal lamp bounding box according to its image coordinates; and when the image edge position of each current signal lamp bounding box is not a preset edge position, determine the current observation information of each current signal lamp bounding box according to the current observation image frame.
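The filtering step described above can be sketched as follows. This is an illustrative assumption of how the size and edge checks might be combined; the names `Box`, `MIN_SIZE`, and `EDGE_MARGIN` and the concrete thresholds are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left x in image coordinates (pixels)
    y: float  # top-left y
    w: float  # width
    h: float  # height

MIN_SIZE = 8     # assumed minimum side length in pixels
EDGE_MARGIN = 4  # assumed margin from the image border in pixels

def is_observable(box: Box, img_w: int, img_h: int) -> bool:
    """Keep a detection only if it is large enough and not clipped at the image edge."""
    big_enough = box.w >= MIN_SIZE and box.h >= MIN_SIZE
    inside = (box.x >= EDGE_MARGIN and box.y >= EDGE_MARGIN
              and box.x + box.w <= img_w - EDGE_MARGIN
              and box.y + box.h <= img_h - EDGE_MARGIN)
    return big_enough and inside
```

Boxes that fail either check are discarded before their observation information is extracted, since clipped or tiny detections give unreliable colour and position measurements.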
In an embodiment, the association module 20 is further configured to acquire vehicle motion information; determine the moving speeds of the plurality of historical signal lamp bounding boxes according to the vehicle motion information; perform position prediction on each historical signal lamp bounding box according to those moving speeds to determine the predicted bounding box position of each historical signal lamp bounding box; and calculate the positional relation between each predicted bounding box position and each current signal lamp bounding box to determine the signal lamp corresponding to each current signal lamp bounding box.
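As a minimal sketch of this association step, a historical bounding box can be shifted by its image-plane velocity and matched to the current boxes by overlap. Greedy IoU matching and the threshold value are illustrative assumptions; the patent only specifies that a positional relation between predicted and current boxes is computed.

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2); intersection-over-union overlap
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def predict_box(box, vx, vy, dt):
    # shift a historical box by its image-plane velocity (derived from ego motion)
    return (box[0] + vx * dt, box[1] + vy * dt,
            box[2] + vx * dt, box[3] + vy * dt)

def associate(current_boxes, history, dt=0.1, thresh=0.3):
    # history: list of (lamp_id, box, vx, vy); greedy best-IoU matching
    matches = {}
    for ci, cbox in enumerate(current_boxes):
        best_id, best_iou = None, thresh
        for lamp_id, hbox, vx, vy in history:
            score = iou(cbox, predict_box(hbox, vx, vy, dt))
            if score > best_iou:
                best_id, best_iou = lamp_id, score
        matches[ci] = best_id  # None means no historical match: a new lamp
    return matches
```

A current box that matches no prediction is treated as a newly observed signal lamp and starts a fresh track.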
In an embodiment, the fusion module 40 is further configured to determine a plurality of observation states of each signal lamp according to the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp; determine the observation attribute of each signal lamp according to its plurality of observation states; and perform triangulation according to the observation attribute of each signal lamp to obtain the mapping information of each signal lamp.
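The patent does not specify how the per-frame observation states are reduced to one observation attribute; a majority vote over the state sequence is one plausible concrete choice, sketched here for illustration.

```python
from collections import Counter

def fuse_observation_attribute(states):
    """Majority-vote fusion of per-frame observation states into one attribute.

    `states` is a list of per-frame observations for one lamp, e.g.
    (colour, shape) tuples; ties are broken by first occurrence.
    """
    attribute, _count = Counter(states).most_common(1)[0]
    return attribute
```

This suppresses single-frame misdetections (e.g. a lamp briefly read as "off" during a colour transition) before the attribute is used for triangulation.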
In an embodiment, the fusion module 40 is further configured to determine the observation position of each signal lamp according to its observation attribute; group the signal lamps according to their observation positions and detect whether any of the signal lamps belong to a same group; when a same group of signal lamps exists, perform synchronous triangulation according to the group position of the same group of signal lamps and the observation position of each signal lamp, and output the optimized position of each signal lamp; update the position in the observation attribute of each signal lamp according to its optimized position to obtain the optimized attribute of each signal lamp; and perform attribute association according to the optimized attribute of each signal lamp to obtain the mapping information of each signal lamp.
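One simple way to form the groups mentioned above is to cluster lamps whose observed positions lie close together (e.g. lamps mounted on the same pole or gantry). The greedy distance-based grouping and the 2-metre radius below are illustrative assumptions, not the patent's specified procedure.

```python
import math

def group_lamps(positions, radius=2.0):
    """Greedy grouping: a lamp joins the first group that already contains a
    member within `radius` metres of its observed 3-D position, otherwise it
    starts a new group.

    positions: dict mapping lamp_id -> (x, y, z) observed position.
    """
    groups = []
    for lamp_id, pos in positions.items():
        for group in groups:
            if any(math.dist(pos, positions[m]) <= radius for m in group):
                group.append(lamp_id)
                break
        else:
            groups.append([lamp_id])
    return groups
```

Groups with more than one member are then triangulated synchronously, so that co-mounted lamps constrain each other's positions.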
In an embodiment, the fusion module 40 is further configured to acquire the historical observation image frames and the current observation image frame of each signal lamp when no same group of signal lamps exists; determine a bounding box position queue for each signal lamp according to its historical observation image frames and current observation image frame; construct a position matrix according to the bounding box position queue of each signal lamp and solve it to obtain the optimized position of each signal lamp; and update the position in the observation attribute of each signal lamp according to its optimized position to obtain the optimized attribute of each signal lamp.
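A standard way to realize "constructing a position matrix and solving" for a single lamp is linear (DLT) triangulation over its bounding-box position queue: each frame contributes two rows to a matrix A, and A x = 0 is solved by SVD. This is a hedged sketch assuming per-frame camera projection matrices are available from vehicle pose; the patent does not name the exact solver.

```python
import numpy as np

def triangulate_lamp(proj_mats, centers):
    """Linear (DLT) triangulation of one signal lamp.

    proj_mats: list of 3x4 camera projection matrices, one per frame;
    centers:   matching list of (u, v) bounding-box centres in pixels.
    Stacks two rows per observation into a position matrix A, takes the
    null-space direction of A via SVD, and returns the Euclidean 3-D point.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, centers):
        rows.append(u * P[2] - P[0])  # u * (row 3) - (row 1)
        rows.append(v * P[2] - P[1])  # v * (row 3) - (row 2)
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]            # homogeneous solution minimizing ||A x||
    return X[:3] / X[3]   # homogeneous -> Euclidean
```

With noise-free synthetic projections the recovered point matches the ground truth; with real detections the SVD gives the least-squares position over the whole queue.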
In an embodiment, the fusion module 40 is further configured to acquire the acquisition signal lamps determined by other camera devices and the acquisition attribute of each acquisition signal lamp; perform signal lamp association between the acquisition signal lamps and each signal lamp, and determine the acquisition attribute and the optimized attribute of each signal lamp according to the association result; acquire the shooting parameters of the camera devices corresponding to the acquisition attribute and the optimized attribute; determine a target attribute from the acquisition attribute and the optimized attribute of each signal lamp according to the shooting parameters; and obtain the mapping information of each signal lamp according to the target attribute of each signal lamp.
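The patent only states that the target attribute is chosen "according to the shooting parameters". One assumed concrete criterion, shown here purely for illustration, is to prefer the attribute observed by the camera with the longer focal length, on the premise that it resolves the lamp in more detail.

```python
def select_target_attribute(candidates):
    """Pick the target attribute among observations of the same lamp made by
    different cameras, preferring the longer focal length.

    candidates: list of (attribute, focal_length_px) pairs, one per camera.
    The focal-length preference is an assumption; any shooting parameter
    (resolution, exposure, viewing distance) could play this role.
    """
    attribute, _focal = max(candidates, key=lambda c: c[1])
    return attribute
```

The selected target attribute of each lamp then forms its mapping information for the downstream binding step.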
Furthermore, it should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The numbering of the foregoing embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM (Read-Only Memory)/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present invention.
The foregoing description covers only preferred embodiments of the present invention and does not thereby limit its patent scope; any equivalent structural or equivalent process transformation made using the contents of this description, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.

Claims (10)

1. A traffic signal lamp mapping method, characterized by comprising the following steps:
acquiring a current observation image frame captured by a camera device, and determining a plurality of current signal lamp bounding boxes in the current observation image frame and the current observation information of each current signal lamp bounding box;
acquiring multiple frames of historical observation image frames captured by the same camera device, and determining a plurality of historical signal lamp bounding boxes in each frame of the historical observation image frames;
associating each current signal lamp bounding box with the plurality of historical signal lamp bounding boxes to determine the signal lamp corresponding to each current signal lamp bounding box;
when no mapping information exists for each signal lamp, determining the historical observation information of each signal lamp according to the multi-frame historical observation image frames; and
performing feature fusion according to the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp, so that each mapping node performs signal lamp binding according to the mapping information of each signal lamp.
2. The traffic signal lamp mapping method according to claim 1, wherein acquiring the current observation image frame captured by the camera device and determining the plurality of current signal lamp bounding boxes in the current observation image frame and the current observation information of each current signal lamp bounding box comprises:
determining the bounding box sizes and image coordinates of the plurality of current signal lamp bounding boxes in the current observation image frame;
when the bounding box size of each current signal lamp bounding box is a preset size, determining the image edge position of each current signal lamp bounding box according to its image coordinates; and
when the image edge position of each current signal lamp bounding box is not a preset edge position, determining the current observation information of each current signal lamp bounding box according to the current observation image frame.
3. The traffic signal lamp mapping method according to claim 1, wherein associating each current signal lamp bounding box with the plurality of historical signal lamp bounding boxes to determine the signal lamp corresponding to each current signal lamp bounding box comprises:
acquiring vehicle motion information;
determining the moving speeds of the plurality of historical signal lamp bounding boxes according to the vehicle motion information;
performing position prediction on each historical signal lamp bounding box according to the moving speeds of the plurality of historical signal lamp bounding boxes to determine the predicted bounding box position of each historical signal lamp bounding box; and
calculating the positional relation between each predicted bounding box position and each current signal lamp bounding box to determine the signal lamp corresponding to each current signal lamp bounding box.
4. The traffic signal lamp mapping method according to claim 1, wherein performing feature fusion according to the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp comprises:
determining a plurality of observation states of each signal lamp according to the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp;
determining the observation attribute of each signal lamp according to its plurality of observation states; and
performing triangulation according to the observation attribute of each signal lamp to obtain the mapping information of each signal lamp.
5. The traffic signal lamp mapping method according to claim 4, wherein performing triangulation according to the observation attribute of each signal lamp to obtain the mapping information of each signal lamp comprises:
determining the observation position of each signal lamp according to its observation attribute;
grouping the signal lamps according to their observation positions, and detecting whether any of the signal lamps belong to a same group;
when a same group of signal lamps exists, performing synchronous triangulation according to the group position of the same group of signal lamps and the observation position of each signal lamp, and outputting the optimized position of each signal lamp;
updating the position in the observation attribute of each signal lamp according to its optimized position to obtain the optimized attribute of each signal lamp; and
performing attribute association according to the optimized attribute of each signal lamp to obtain the mapping information of each signal lamp.
6. The traffic signal lamp mapping method according to claim 5, wherein after grouping the signal lamps according to their observation positions and detecting whether any of the signal lamps belong to a same group, the method further comprises:
when no same group of signal lamps exists, acquiring the historical observation image frames and the current observation image frame of each signal lamp;
determining a bounding box position queue for each signal lamp according to its historical observation image frames and current observation image frame;
constructing a position matrix according to the bounding box position queue of each signal lamp and solving it to obtain the optimized position of each signal lamp; and
updating the position in the observation attribute of each signal lamp according to its optimized position to obtain the optimized attribute of each signal lamp.
7. The traffic signal lamp mapping method according to claim 5, wherein performing attribute association according to the optimized attribute of each signal lamp to obtain the mapping information of each signal lamp comprises:
acquiring the acquisition signal lamps determined by other camera devices and the acquisition attribute of each acquisition signal lamp;
performing signal lamp association between the acquisition signal lamps and each signal lamp, and determining the acquisition attribute and the optimized attribute of each signal lamp according to the association result;
acquiring the shooting parameters of the camera devices corresponding to the acquisition attribute and the optimized attribute;
determining a target attribute from the acquisition attribute and the optimized attribute of each signal lamp according to the shooting parameters; and
obtaining the mapping information of each signal lamp according to the target attribute of each signal lamp.
8. A traffic signal lamp mapping device, characterized in that the traffic signal lamp mapping device comprises:
an acquisition module, configured to acquire a current observation image frame captured by a camera device, and determine a plurality of current signal lamp bounding boxes in the current observation image frame and the current observation information of each current signal lamp bounding box;
the acquisition module being further configured to acquire multiple frames of historical observation image frames captured by the same camera device, and determine a plurality of historical signal lamp bounding boxes in each frame of the historical observation image frames;
an association module, configured to associate each current signal lamp bounding box with the plurality of historical signal lamp bounding boxes and determine the signal lamp corresponding to each current signal lamp bounding box;
a determination module, configured to determine the historical observation information of each signal lamp according to the multi-frame historical observation image frames when no mapping information exists for each signal lamp; and
a fusion module, configured to perform feature fusion according to the current observation information of each current signal lamp bounding box and the historical observation information of each signal lamp to obtain the mapping information of each signal lamp, so that each mapping node performs signal lamp binding according to the mapping information of each signal lamp.
9. A traffic signal lamp mapping apparatus, characterized in that the apparatus comprises: a memory, a processor, and a traffic signal lamp mapping program stored in the memory and executable on the processor, the traffic signal lamp mapping program being configured to implement the traffic signal lamp mapping method according to any one of claims 1 to 7.
10. A storage medium, characterized in that a traffic signal lamp mapping program is stored thereon, and when the traffic signal lamp mapping program is executed by a processor, the traffic signal lamp mapping method according to any one of claims 1 to 7 is implemented.
CN202310478031.7A 2023-04-27 2023-04-27 Traffic signal lamp graph building method, device, equipment and storage medium Pending CN116468868A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310478031.7A CN116468868A (en) 2023-04-27 2023-04-27 Traffic signal lamp graph building method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310478031.7A CN116468868A (en) 2023-04-27 2023-04-27 Traffic signal lamp graph building method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116468868A true CN116468868A (en) 2023-07-21

Family

ID=87182384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310478031.7A Pending CN116468868A (en) 2023-04-27 2023-04-27 Traffic signal lamp graph building method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116468868A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930819A (en) * 2016-05-06 2016-09-07 西安交通大学 System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system
US20170131719A1 (en) * 2015-11-05 2017-05-11 Ford Global Technologies, Llc Autonomous Driving At Intersections Based On Perception Data
CN110688992A (en) * 2019-12-09 2020-01-14 中智行科技有限公司 Traffic signal identification method and device, vehicle navigation equipment and unmanned vehicle
CN112307840A (en) * 2019-07-31 2021-02-02 浙江商汤科技开发有限公司 Indicator light detection method, device, equipment and computer readable storage medium
CN112580460A (en) * 2020-12-11 2021-03-30 西人马帝言(北京)科技有限公司 Traffic signal lamp identification method, device, equipment and storage medium
CN112700410A (en) * 2020-12-28 2021-04-23 北京百度网讯科技有限公司 Signal lamp position determination method, signal lamp position determination device, storage medium, program, and road side device
CN113840765A (en) * 2019-05-29 2021-12-24 御眼视觉技术有限公司 System and method for vehicle navigation
CN115249407A (en) * 2021-05-27 2022-10-28 上海仙途智能科技有限公司 Indicating lamp state identification method and device, electronic equipment, storage medium and product
CN115984823A (en) * 2023-02-27 2023-04-18 安徽蔚来智驾科技有限公司 Traffic signal lamp sensing method, vehicle control method, device, medium and vehicle

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170131719A1 (en) * 2015-11-05 2017-05-11 Ford Global Technologies, Llc Autonomous Driving At Intersections Based On Perception Data
CN105930819A (en) * 2016-05-06 2016-09-07 西安交通大学 System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system
CN113840765A (en) * 2019-05-29 2021-12-24 御眼视觉技术有限公司 System and method for vehicle navigation
CN112307840A (en) * 2019-07-31 2021-02-02 浙江商汤科技开发有限公司 Indicator light detection method, device, equipment and computer readable storage medium
CN110688992A (en) * 2019-12-09 2020-01-14 中智行科技有限公司 Traffic signal identification method and device, vehicle navigation equipment and unmanned vehicle
CN112580460A (en) * 2020-12-11 2021-03-30 西人马帝言(北京)科技有限公司 Traffic signal lamp identification method, device, equipment and storage medium
CN112700410A (en) * 2020-12-28 2021-04-23 北京百度网讯科技有限公司 Signal lamp position determination method, signal lamp position determination device, storage medium, program, and road side device
US20210334980A1 (en) * 2020-12-28 2021-10-28 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus for determining location of signal light, storage medium, program and roadside device
CN115249407A (en) * 2021-05-27 2022-10-28 上海仙途智能科技有限公司 Indicating lamp state identification method and device, electronic equipment, storage medium and product
WO2022247299A1 (en) * 2021-05-27 2022-12-01 上海仙途智能科技有限公司 Indicator lamp state recognition
CN115984823A (en) * 2023-02-27 2023-04-18 安徽蔚来智驾科技有限公司 Traffic signal lamp sensing method, vehicle control method, device, medium and vehicle

Similar Documents

Publication Publication Date Title
CN112069856B (en) Map generation method, driving control device, electronic equipment and system
KR102145109B1 (en) Methods and apparatuses for map generation and moving entity localization
CN102208013B (en) Landscape coupling reference data generation system and position measuring system
US11657319B2 (en) Information processing apparatus, system, information processing method, and non-transitory computer-readable storage medium for obtaining position and/or orientation information
WO2021051344A1 (en) Method and apparatus for determining lane lines in high-precision map
CN111310708B (en) Traffic signal lamp state identification method, device, equipment and storage medium
JP7011472B2 (en) Information processing equipment, information processing method
CN112652065A (en) Three-dimensional community modeling method and device, computer equipment and storage medium
US20240077331A1 (en) Method of predicting road attributers, data processing system and computer executable code
CN114639085A (en) Traffic signal lamp identification method and device, computer equipment and storage medium
CN114140592A (en) High-precision map generation method, device, equipment, medium and automatic driving vehicle
CN115164918A (en) Semantic point cloud map construction method and device and electronic equipment
CN115344655A (en) Method and device for finding change of feature element, and storage medium
CN116543361A (en) Multi-mode fusion sensing method and device for vehicle, vehicle and storage medium
CN112507887B (en) Intersection sign extracting and associating method and device
CN112381876B (en) Traffic sign marking method and device and computer equipment
JP7265961B2 (en) ANNOTATION SUPPORT METHOD, ANNOTATION SUPPORT DEVICE, AND ANNOTATION SUPPORT PROGRAM
CN112507891A (en) Method and device for automatically identifying high-speed intersection and constructing intersection vector
CN117152265A (en) Traffic image calibration method and device based on region extraction
CN113064415A (en) Method and device for planning track, controller and intelligent vehicle
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN116245943A (en) Continuous frame point cloud data labeling method and device based on web
CN116468868A (en) Traffic signal lamp graph building method, device, equipment and storage medium
CN112632198A (en) Map data display method and device and electronic equipment
CN116958915B (en) Target detection method, target detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination