CN113963326A - Traffic sign detection method, device, equipment, medium and automatic driving vehicle - Google Patents


Info

Publication number
CN113963326A
Authority
CN
China
Prior art keywords
detection frame
detection
candidate
frames
traffic sign
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111039182.XA
Other languages
Chinese (zh)
Inventor
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111039182.XA priority Critical patent/CN113963326A/en
Publication of CN113963326A publication Critical patent/CN113963326A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a traffic sign detection method, device, equipment, medium and automatic driving vehicle, relating to the field of computer technology, and in particular to artificial intelligence fields such as automatic driving, intelligent traffic, computer vision and deep learning. The traffic sign detection method comprises the following steps: performing detection processing on an environment image acquired by an autonomous vehicle to obtain a plurality of candidate detection frames corresponding to a traffic sign in the environment image and information of the candidate detection frames, the information comprising an initial confidence; updating the initial confidences of the plurality of candidate detection frames based on the degrees of overlap between the plurality of candidate detection frames to obtain updated confidences of the plurality of candidate detection frames; and determining a final detection frame corresponding to the traffic sign based on the updated confidences of the plurality of candidate detection frames. In this way, traffic signs can be identified timely and accurately during automatic driving.

Description

Traffic sign detection method, device, equipment, medium and automatic driving vehicle
Technical Field
The present disclosure relates to the field of computer technology, in particular to artificial intelligence technologies such as automatic driving, intelligent transportation, computer vision, and deep learning, and specifically to a traffic sign detection method, apparatus, device, medium, and autonomous vehicle.
Background
An autonomous vehicle (self-driving vehicle), also called an unmanned vehicle, computer-driven vehicle, or wheeled mobile robot, is an intelligent vehicle that realizes unmanned driving through a computer system.
While an autonomous vehicle is moving, traffic signs need to be identified timely and accurately so that corresponding operations can be executed based on them.
Disclosure of Invention
The disclosure provides a traffic sign detection method, apparatus, device and storage medium.
According to an aspect of the present disclosure, there is provided a traffic sign detecting method including: detecting an environment image acquired by an automatic driving vehicle to obtain a plurality of candidate detection frames corresponding to traffic signs in the environment image and information of the candidate detection frames, wherein the information comprises an initial confidence level; updating the initial confidence degrees of the candidate detection frames based on the overlapping degrees of the candidate detection frames to obtain updated confidence degrees of the candidate detection frames; and determining a final detection frame corresponding to the traffic sign based on the updated confidence degrees of the candidate detection frames.
According to another aspect of the present disclosure, there is provided a traffic sign detecting device including: the detection module is used for detecting and processing an environment image acquired by an automatic driving vehicle to obtain a plurality of candidate detection frames corresponding to traffic signs in the environment image and information of the candidate detection frames, wherein the information comprises an initial confidence level; an updating module, configured to update the initial confidence degrees of the multiple candidate detection frames based on overlapping degrees between the multiple candidate detection frames to obtain updated confidence degrees of the multiple candidate detection frames; and the determining module is used for determining a final detection frame corresponding to the traffic sign based on the updated confidence degrees of the candidate detection frames.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided an autonomous vehicle including: an electronic device as claimed in any one of the preceding aspects.
According to the technical scheme, the traffic sign can be timely and accurately identified during automatic driving.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
fig. 7 is a schematic diagram of an electronic device for implementing any one of the traffic sign detection methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure, which provides a traffic sign detection method, including:
101. Performing detection processing on an environment image acquired by an autonomous vehicle to obtain candidate detection frames corresponding to a traffic sign in the environment image and initial information of the candidate detection frames.
102. Updating the initial information of the candidate detection frames based on the degrees of overlap between the candidate detection frames to obtain updated information of the candidate detection frames.
103. Determining a final detection frame corresponding to the traffic sign based on the updated information of the candidate detection frames.
In an automatic driving scenario, timely and accurate detection of traffic signs on the road plays an important role in the safety and smooth operation of automatic driving.
Traffic signs can be classified into warning signs, prohibition signs, indication signs, and the like. Several examples of traffic signs are shown in fig. 2.
In the automatic driving scene, a camera on the automatic driving vehicle can be adopted to acquire an environment image of the surrounding environment of the vehicle, and the environment image can comprise a traffic sign.
After the autonomous vehicle acquires the environment image, as shown in fig. 3, it can send the acquired image over the network to the cloud, and the cloud processes the image to detect the traffic sign in it. The cloud can then generate a corresponding control instruction based on the detected traffic sign and send it to the vehicle; for example, if the traffic sign is a left-turn sign, an instruction controlling the vehicle to turn left can be generated.
The above takes cloud-side processing and control as an example; it can be understood that if the autonomous vehicle is equipped with a high-performance processor, the processing and control may also be performed by the processor on the vehicle.
Taking cloud-side processing as an example, after the cloud receives the environment image acquired by the autonomous vehicle, the image can be processed with an existing target detection model, for example a YOLO model, to obtain a plurality of detection frames corresponding to the traffic sign in the environment image; the information of the detection frames can likewise be obtained through the YOLO model.
The information of a detection frame may include: position information of the detection frame, confidence of the detection frame, and the like. The position information may be represented as (x, y, w, h), where (x, y) is the position coordinate of the center point of the detection frame, and w and h are the width and height of the detection frame, respectively. The confidence of a detection frame is a value between 0 and 1.
For the sake of distinction, the detection box at this time may be referred to as a candidate detection box, and the confidence may be referred to as an initial confidence.
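The candidate-frame information described above can be modeled, for instance, as a small record type (the type and field names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class CandidateFrame:
    """One candidate detection frame: center-point position, size,
    and the initial confidence produced by the detector."""
    x: float           # center x-coordinate of the frame
    y: float           # center y-coordinate of the frame
    w: float           # width of the frame
    h: float           # height of the frame
    confidence: float  # initial confidence, a value between 0 and 1
```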
In some embodiments, the overlapping degree is a cross-over ratio, and the updating the initial confidence degrees of the candidate detection frames based on the overlapping degrees between the candidate detection frames includes: taking each candidate detection frame in the plurality of candidate detection frames as a current detection frame respectively; selecting a candidate detection frame with an initial confidence degree larger than that of the current detection frame from the plurality of candidate detection frames to obtain a selected detection frame; determining the selected detection frame with the largest intersection ratio as a comparison detection frame based on the intersection ratio between the selected detection frame and the current detection frame; determining a suppression coefficient of the current detection frame based on the intersection ratio between the comparison detection frame and the current detection frame; updating the initial confidence of the current detection frame based on the suppression coefficient.
Further, the information further includes: position information, after obtaining the selection detection box, the method further comprising: and determining the intersection ratio between the selection detection frame and the current detection frame based on the position information of the selection detection frame and the position information of the current detection frame.
Assume there are N candidate detection frames, where the jth candidate detection frame is denoted Bj with initial confidence Sj and position information Lj, the ith is denoted Bi with initial confidence Si and position information Li, the kth is denoted Bk with initial confidence Sk, and i, j, and k all lie in [1, N].
Detection frame Bj can be taken as the current detection frame; if Sk > Sj, Bk can be taken as a selection detection frame.
Then, the intersection ratio between Bj and Bk can be calculated.
Assuming that, among the detection frames whose initial confidence is greater than that of Bj, the one with the largest intersection ratio with Bj is Bi, detection frame Bi can be taken as the comparison detection frame.
The calculation of the intersection ratio between Bi and Bj is described as an example; the calculation between Bj and Bk is similar.
Here, the Intersection over Union (IoU) of Bi and Bj can be calculated based on Li and Lj; the intersection ratio refers to the ratio of the intersection of Bi and Bj to their union.
As shown in fig. 4, assuming that the overlapping portion of Bi and Bj is represented by Y, the portion of Bi other than Y is represented by X, and the portion of Bj other than Y is represented by Z, the calculation formula of the intersection ratio of Bi and Bj can be expressed as: Y/(X + Y + Z).
The above X, Y, Z can be calculated based on the position information of Bi and Bj, i.e., Li and Lj.
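The intersection-ratio computation just described can be sketched as follows (a minimal sketch; the function name and the use of center-format boxes are assumptions based on the (x, y, w, h) representation above):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, w, h),
    where (x, y) is the center point of the box."""
    # convert center format to corner coordinates
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # overlapping region (the Y part in fig. 4)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # union X + Y + Z = area(Bi) + area(Bj) - Y
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```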
The intersection ratio can thus be obtained from the position information; once it is obtained, the suppression coefficient of the current detection frame can be calculated based on it.
The calculation formula of the suppression coefficient can be expressed as:
decay_j = f(iou_{i,j})
where iou_{i,j} denotes the intersection ratio between detection frame Bi and detection frame Bj (the parameters related to k are similar), and decay_j denotes the suppression coefficient corresponding to detection frame Bj.
f(iou_{i,j}) can be computed in a linear or Gaussian manner. Taking the linear computation as an example, the formula can be: f(iou_{i,j}) = 1 - iou_{i,j}.
After the suppression coefficient is obtained, it may be multiplied by the initial confidence to obtain the updated confidence.
The updated confidence of detection frame Bj can accordingly be represented as: decay_j × Sj.
Obtaining the updated confidence as the product of the suppression coefficient and the initial confidence keeps the confidence update simple, convenient, and fast.
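The full update step described above — for each current frame, find the higher-confidence frame that overlaps it most, then multiply its confidence by the decay — can be sketched as follows (a simplified single-pass sketch using the linear decay; function names are illustrative):

```python
def update_confidences(boxes, scores):
    """Soft-suppression update: for each frame Bj, find the frame Bi with a
    higher initial confidence and the largest IoU with Bj, and set the updated
    confidence to f(iou_{i,j}) * Sj, with the linear decay f(v) = 1 - v."""

    def iou(a, b):
        # boxes are (x, y, w, h) with (x, y) the center point
        ax1, ay1, ax2, ay2 = a[0] - a[2]/2, a[1] - a[3]/2, a[0] + a[2]/2, a[1] + a[3]/2
        bx1, by1, bx2, by2 = b[0] - b[2]/2, b[1] - b[3]/2, b[0] + b[2]/2, b[1] + b[3]/2
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    updated = []
    for j, s_j in enumerate(scores):
        # selection frames: candidates with strictly higher initial confidence
        selected = [k for k in range(len(boxes)) if scores[k] > s_j]
        if not selected:
            updated.append(s_j)  # the top-confidence frame keeps its score
            continue
        # comparison frame: the selection frame with the largest IoU
        max_iou = max(iou(boxes[k], boxes[j]) for k in selected)
        updated.append((1.0 - max_iou) * s_j)  # suppressed, never hard-filtered
    return updated
```

Note that, unlike hard NMS, no frame is deleted here — every candidate keeps a (possibly reduced) confidence, which is what allows overlapping signs to survive to the final selection.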
In a road scene, traffic signs may overlap. For example, the environment image shown in fig. 5 includes three traffic signs, denoted traffic sign-1, traffic sign-2, and traffic sign-3; in this scene, traffic sign-2 and traffic sign-3 overlap.
If a general Non-Maximum Suppression (NMS) algorithm were used, detection frames with non-maximum confidence that overlap a higher-confidence frame too heavily would be directly filtered out; that is, the detection frame corresponding to traffic sign-3 would be filtered out, causing traffic sign-3 to be missed.
In this embodiment, the initial confidence of a detection frame is suppressed rather than the frame being directly filtered out, so missed detections can be avoided and the accuracy of traffic sign detection improved.
In some embodiments, the selecting the detection frame is multiple, and the determining the intersection ratio between the selection detection frame and the current detection frame based on the position information of the selection detection frame and the position information of the current detection frame includes: and performing parallel operation on the plurality of selection detection frames based on the position information of each selection detection frame in the plurality of selection detection frames and the position information of the current detection frame to respectively determine the intersection and parallel ratio of each selection detection frame in the plurality of selection detection frames and the current detection frame.
By parallel operation, the operation speed can be improved, and therefore the efficiency of traffic sign detection is improved.
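The parallel computation of the intersection ratios can, for instance, be vectorized so that one current frame is compared against all selection frames at once (a NumPy sketch; names are illustrative):

```python
import numpy as np

def iou_one_vs_many(current, selected):
    """IoU of one current frame against many selection frames in a single
    vectorized operation. Boxes are (x, y, w, h) with (x, y) the center."""
    sel = np.asarray(selected, dtype=float)   # shape (M, 4)
    cur = np.asarray(current, dtype=float)    # shape (4,)
    # corner coordinates of the selection frames and the current frame
    sx1, sy1 = sel[:, 0] - sel[:, 2] / 2, sel[:, 1] - sel[:, 3] / 2
    sx2, sy2 = sel[:, 0] + sel[:, 2] / 2, sel[:, 1] + sel[:, 3] / 2
    cx1, cy1 = cur[0] - cur[2] / 2, cur[1] - cur[3] / 2
    cx2, cy2 = cur[0] + cur[2] / 2, cur[1] + cur[3] / 2
    # element-wise overlap widths/heights, clipped at zero
    iw = np.clip(np.minimum(sx2, cx2) - np.maximum(sx1, cx1), 0.0, None)
    ih = np.clip(np.minimum(sy2, cy2) - np.maximum(sy1, cy1), 0.0, None)
    inter = iw * ih
    union = sel[:, 2] * sel[:, 3] + cur[2] * cur[3] - inter
    return np.where(union > 0, inter / np.where(union > 0, union, 1.0), 0.0)
```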
In some embodiments, the traffic sign is multiple, each of the multiple traffic signs corresponds to multiple candidate detection boxes, and determining the final detection box corresponding to the traffic sign based on the updated confidence degrees of the multiple candidate detection boxes includes: and corresponding to each traffic sign in the plurality of traffic signs, and taking the candidate detection frame with the highest update confidence coefficient in the plurality of candidate detection frames corresponding to each traffic sign as a final detection frame corresponding to each traffic sign.
In this embodiment, the initial confidence updating and other processing may be performed on each traffic sign distinguished by the YOLO model to obtain a final detection frame corresponding to each traffic sign.
For example, the environment image shown in fig. 5 includes a plurality of traffic signs, which are respectively a traffic sign-1, a traffic sign-2, and a traffic sign-3, and through the above processing, a corresponding final detection frame can be determined for each traffic sign, and the final detection frame corresponding to each traffic sign is represented by a thick line frame in fig. 5.
By processing the corresponding traffic signs in the plurality of traffic signs, the final detection frame corresponding to each traffic sign can be obtained, missing detection is avoided, and the detection accuracy of the traffic signs is improved.
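Choosing the final frame for each sign then amounts to an argmax over updated confidences (a sketch; the per-sign grouping of candidates is assumed to be provided by the detector):

```python
def final_frames(candidates_per_sign):
    """candidates_per_sign maps a sign id to a list of (box, updated_confidence)
    pairs; return, for each sign, the box with the highest updated confidence."""
    return {
        sign: max(candidates, key=lambda pair: pair[1])[0]
        for sign, candidates in candidates_per_sign.items()
    }
```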
In the embodiment of the disclosure, an environment image acquired by the autonomous vehicle is obtained, the initial confidences of the candidate detection frames corresponding to the traffic sign in the environment image are updated based on the degrees of overlap between the candidate detection frames to obtain updated confidences, and the final detection frame corresponding to the traffic sign is determined based on the updated confidences, so that traffic signs can be identified timely and accurately during automatic driving.
Fig. 6 is a schematic diagram according to a sixth embodiment of the present disclosure, which provides a traffic sign detecting device. As shown in fig. 6, the apparatus 600 includes: a detection module 601, an update module 602, and a determination module 603.
The detection module 601 is configured to perform detection processing on an environment image acquired by an autonomous vehicle to obtain a plurality of candidate detection frames corresponding to traffic signs in the environment image and information of the candidate detection frames, where the information includes an initial confidence level; the updating module 602 is configured to perform an updating process on the initial confidence levels of the plurality of candidate detection frames based on the overlapping degrees between the plurality of candidate detection frames to obtain updated confidence levels of the plurality of candidate detection frames; the determining module 603 is configured to determine a final detection frame corresponding to the traffic sign based on the updated confidence levels of the plurality of candidate detection frames.
In some embodiments, the overlap degree is an intersection ratio, and the updating module 602 is specifically configured to: taking each candidate detection frame in the plurality of candidate detection frames as a current detection frame respectively; selecting a candidate detection frame with an initial confidence degree larger than that of the current detection frame from the plurality of candidate detection frames to obtain a selected detection frame; determining the selected detection frame with the largest intersection ratio as a comparison detection frame based on the intersection ratio between the selected detection frame and the current detection frame; determining a suppression coefficient of the current detection frame based on the intersection ratio between the comparison detection frame and the current detection frame; updating the initial confidence of the current detection frame based on the suppression coefficient.
In some embodiments, the information further comprises: location information, the update module 602 is further configured to: and determining the intersection ratio between the selection detection frame and the current detection frame based on the position information of the selection detection frame and the position information of the current detection frame.
In some embodiments, the selection detection box is multiple, and the update module 602 is specifically configured to: and performing parallel operation on the plurality of selection detection frames based on the position information of each selection detection frame in the plurality of selection detection frames and the position information of the current detection frame to respectively determine the intersection and parallel ratio of each selection detection frame in the plurality of selection detection frames and the current detection frame.
In some embodiments, the update module 602 is specifically configured to: and taking the product of the suppression coefficient and the initial confidence of the current detection frame as the update confidence of the current detection frame.
In some embodiments, the traffic sign is multiple, each of the multiple traffic signs corresponds to multiple candidate detection frames, and the determining module 603 is specifically configured to: and corresponding to each traffic sign in the plurality of traffic signs, and taking the candidate detection frame with the highest update confidence coefficient in the plurality of candidate detection frames corresponding to each traffic sign as a final detection frame corresponding to each traffic sign.
In the embodiment of the disclosure, an environment image acquired by the autonomous vehicle is obtained, the initial confidences of the candidate detection frames corresponding to the traffic sign in the environment image are updated based on the degrees of overlap between the candidate detection frames to obtain updated confidences, and the final detection frame corresponding to the traffic sign is determined based on the updated confidences, so that traffic signs can be identified timely and accurately during automatic driving.
It is to be understood that in the disclosed embodiments, the same or similar elements in different embodiments may be referenced.
It is to be understood that "first", "second", and the like in the embodiments of the present disclosure are used for distinction only, and do not indicate the degree of importance, the order of timing, and the like.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 707 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
A number of components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 701 performs the respective methods and processes described above, such as the traffic sign detection method. For example, in some embodiments, the traffic sign detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the traffic sign detection method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the traffic sign detection method by any other suitable means (e.g. by means of firmware).
The present disclosure also provides an autonomous vehicle that may include an electronic device as shown in fig. 7, according to an embodiment of the present disclosure.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also known as a cloud computing server or cloud host), a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed herein can be achieved; the present disclosure is not limited in this respect.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (16)

1. A traffic sign detection method, comprising:
detecting an environment image acquired by an automatic driving vehicle to obtain a plurality of candidate detection frames corresponding to a traffic sign in the environment image and information of the candidate detection frames, wherein the information comprises an initial confidence;
updating the initial confidences of the plurality of candidate detection frames based on degrees of overlap between the plurality of candidate detection frames to obtain updated confidences of the plurality of candidate detection frames; and
determining a final detection frame corresponding to the traffic sign based on the updated confidences of the plurality of candidate detection frames.
2. The method of claim 1, wherein the degree of overlap is an intersection-over-union ratio, and updating the initial confidences of the plurality of candidate detection frames based on the degrees of overlap between the plurality of candidate detection frames comprises:
taking each candidate detection frame of the plurality of candidate detection frames in turn as a current detection frame;
selecting, from the plurality of candidate detection frames, a candidate detection frame whose initial confidence is greater than that of the current detection frame, to obtain a selected detection frame;
determining, based on the intersection-over-union ratios between the selected detection frames and the current detection frame, the selected detection frame with the largest intersection-over-union ratio as a comparison detection frame;
determining a suppression coefficient for the current detection frame based on the intersection-over-union ratio between the comparison detection frame and the current detection frame; and
updating the initial confidence of the current detection frame based on the suppression coefficient.
3. The method of claim 2, wherein the information further comprises position information, and after obtaining the selected detection frame, the method further comprises:
determining the intersection-over-union ratio between the selected detection frame and the current detection frame based on the position information of the selected detection frame and the position information of the current detection frame.
4. The method of claim 3, wherein there are a plurality of selected detection frames, and determining the intersection-over-union ratio between the selected detection frame and the current detection frame based on the position information of the selected detection frame and the position information of the current detection frame comprises:
performing a parallel operation over the plurality of selected detection frames, based on the position information of each selected detection frame and the position information of the current detection frame, to determine the intersection-over-union ratio between each of the plurality of selected detection frames and the current detection frame.
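The parallel intersection-over-union computation of claim 4 can be sketched in NumPy, where the "position information" is assumed to be `[x1, y1, x2, y2]` corner coordinates (a layout the claims do not specify):

```python
import numpy as np

def iou_batch(selected, current):
    """Compute the IoU between each selected detection frame and the
    current detection frame in one vectorized (parallel) operation.

    selected: (N, 4) array of [x1, y1, x2, y2] boxes
    current:  (4,) array for the current detection frame
    """
    # Intersection rectangle, broadcasting `current` over all rows
    ix1 = np.maximum(selected[:, 0], current[0])
    iy1 = np.maximum(selected[:, 1], current[1])
    ix2 = np.minimum(selected[:, 2], current[2])
    iy2 = np.minimum(selected[:, 3], current[3])

    # Clamp negative widths/heights to zero for non-overlapping boxes
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_sel = (selected[:, 2] - selected[:, 0]) * (selected[:, 3] - selected[:, 1])
    area_cur = (current[2] - current[0]) * (current[3] - current[1])
    union = area_sel + area_cur - inter
    return inter / union
```

For a current box `[0, 0, 2, 2]` against selected boxes `[[0, 0, 2, 2], [1, 1, 3, 3]]`, this yields IoUs of 1.0 and 1/7 in a single call, with no Python-level loop over the selected frames.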
5. The method of claim 2, wherein updating the initial confidence of the current detection frame based on the suppression coefficient comprises:
taking the product of the suppression coefficient and the initial confidence of the current detection frame as the updated confidence of the current detection frame.
6. The method of any of claims 1-5, wherein there are a plurality of traffic signs, each of the plurality of traffic signs corresponds to a plurality of candidate detection frames, and determining the final detection frame corresponding to the traffic sign based on the updated confidences of the plurality of candidate detection frames comprises:
for each traffic sign of the plurality of traffic signs, taking the candidate detection frame with the highest updated confidence among the plurality of candidate detection frames corresponding to that traffic sign as the final detection frame corresponding to that traffic sign.
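Taken together, claims 2, 5, and 6 describe a Soft-NMS-style confidence update: each candidate's score is scaled by a suppression coefficient derived from its largest intersection-over-union ratio with any higher-confidence candidate, and the highest-scoring candidate is kept per sign. A minimal sketch, under the assumption of a Gaussian penalty `exp(-iou**2 / sigma)` (the claims state only that the coefficient is based on the intersection-over-union ratio, not its exact form):

```python
import math

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def update_confidences(boxes, scores, sigma=0.5):
    """For each box, find the largest IoU against any box with a higher
    initial confidence (claim 2's 'comparison detection frame') and scale
    the box's score by a suppression coefficient derived from that IoU.

    The Gaussian penalty exp(-iou**2 / sigma) is an assumption; the
    claims do not fix the exact function.
    """
    updated = []
    for box, score in zip(boxes, scores):
        # Candidate frames with strictly higher initial confidence
        higher = [b for b, s in zip(boxes, scores) if s > score]
        if not higher:
            updated.append(score)  # nothing suppresses the top-scoring box
            continue
        max_iou = max(iou(box, h) for h in higher)
        coeff = math.exp(-(max_iou ** 2) / sigma)  # suppression coefficient
        updated.append(coeff * score)  # claim 5: product with initial score
    return updated

def final_box(boxes, scores):
    """Claim 6: keep the candidate with the highest updated confidence."""
    updated = update_confidences(boxes, scores)
    return boxes[max(range(len(boxes)), key=lambda i: updated[i])]
```

With a smooth penalty, overlapping duplicates of the same sign are down-weighted rather than discarded outright, which is what lets the final step simply pick the single highest-updated-confidence frame per sign.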
7. A traffic sign detection device comprising:
a detection module configured to perform detection on an environment image acquired by an automatic driving vehicle to obtain a plurality of candidate detection frames corresponding to a traffic sign in the environment image and information of the candidate detection frames, wherein the information comprises an initial confidence;
an updating module configured to update the initial confidences of the plurality of candidate detection frames based on degrees of overlap between the plurality of candidate detection frames to obtain updated confidences of the plurality of candidate detection frames; and
a determining module configured to determine a final detection frame corresponding to the traffic sign based on the updated confidences of the plurality of candidate detection frames.
8. The apparatus of claim 7, wherein the degree of overlap is an intersection-over-union ratio, and the updating module is specifically configured to:
take each candidate detection frame of the plurality of candidate detection frames in turn as a current detection frame;
select, from the plurality of candidate detection frames, a candidate detection frame whose initial confidence is greater than that of the current detection frame, to obtain a selected detection frame;
determine, based on the intersection-over-union ratios between the selected detection frames and the current detection frame, the selected detection frame with the largest intersection-over-union ratio as a comparison detection frame;
determine a suppression coefficient for the current detection frame based on the intersection-over-union ratio between the comparison detection frame and the current detection frame; and
update the initial confidence of the current detection frame based on the suppression coefficient.
9. The apparatus of claim 8, wherein the information further comprises position information, and the updating module is further configured to:
determine the intersection-over-union ratio between the selected detection frame and the current detection frame based on the position information of the selected detection frame and the position information of the current detection frame.
10. The apparatus of claim 9, wherein there are a plurality of selected detection frames, and the updating module is specifically configured to:
perform a parallel operation over the plurality of selected detection frames, based on the position information of each selected detection frame and the position information of the current detection frame, to determine the intersection-over-union ratio between each of the plurality of selected detection frames and the current detection frame.
11. The apparatus of claim 8, wherein the updating module is specifically configured to:
take the product of the suppression coefficient and the initial confidence of the current detection frame as the updated confidence of the current detection frame.
12. The apparatus of any one of claims 7-11, wherein there are a plurality of traffic signs, each of the plurality of traffic signs corresponds to a plurality of candidate detection frames, and the determining module is specifically configured to:
for each traffic sign of the plurality of traffic signs, take the candidate detection frame with the highest updated confidence among the plurality of candidate detection frames corresponding to that traffic sign as the final detection frame corresponding to that traffic sign.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
16. An autonomous vehicle comprising: the electronic device of claim 13.
CN202111039182.XA 2021-09-06 2021-09-06 Traffic sign detection method, device, equipment, medium and automatic driving vehicle Pending CN113963326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111039182.XA CN113963326A (en) 2021-09-06 2021-09-06 Traffic sign detection method, device, equipment, medium and automatic driving vehicle


Publications (1)

Publication Number Publication Date
CN113963326A true CN113963326A (en) 2022-01-21

Family

ID=79461158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111039182.XA Pending CN113963326A (en) 2021-09-06 2021-09-06 Traffic sign detection method, device, equipment, medium and automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN113963326A (en)

Similar Documents

Publication Publication Date Title
CN112560680A (en) Lane line processing method and device, electronic device and storage medium
CN114036253B (en) High-precision map data processing method, device, electronic equipment and medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN112597895A (en) Confidence determination method based on offset detection, road side equipment and cloud control platform
US20230072632A1 (en) Obstacle detection method, electronic device and storage medium
CN113313962A (en) Signal lamp fault monitoring method and device, electronic equipment and storage medium
CN113570727B (en) Scene file generation method and device, electronic equipment and storage medium
CN112595329B (en) Vehicle position determining method and device and electronic equipment
CN114299192B (en) Method, device, equipment and medium for positioning and mapping
CN113762397B (en) Method, equipment, medium and product for training detection model and updating high-precision map
CN115830268A (en) Data acquisition method and device for optimizing perception algorithm and storage medium
CN113963326A (en) Traffic sign detection method, device, equipment, medium and automatic driving vehicle
CN114647816A (en) Method, device and equipment for determining lane line and storage medium
CN114049615B (en) Traffic object fusion association method and device in driving environment and edge computing equipment
CN114506343B (en) Track planning method, device, equipment, storage medium and automatic driving vehicle
CN112710305B (en) Vehicle positioning method and device
CN114694138B (en) Road surface detection method, device and equipment applied to intelligent driving
CN113361379B (en) Method and device for generating target detection system and detecting target
CN113552879B (en) Control method and device of self-mobile device, electronic device and storage medium
CN114383600B (en) Processing method and device for map, electronic equipment and storage medium
CN117853614A (en) Method and device for detecting change condition of high-precision map element and vehicle
CN116382298A (en) Task processing system, method, electronic device and storage medium
CN116753965A (en) Map matching method, device, electronic equipment and storage medium
CN115855024A (en) Pose graph optimization method and device, electronic equipment and automatic driving vehicle
CN116642503A (en) Likelihood map updating method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination