TWI476701B - Light signal detection method and apparatus for light-triggered events - Google Patents

Light signal detection method and apparatus for light-triggered events

Info

Publication number
TWI476701B
TWI476701B TW101146704A
Authority
TW
Taiwan
Prior art keywords
color
light
input image
area
color intensity
Prior art date
Application number
TW101146704A
Other languages
Chinese (zh)
Other versions
TW201423606A (en)
Inventor
Duan Yu Chen
Chien Peng Ho
Yan Jie Peng
Jen Yu Yu
Original Assignee
Ind Tech Res Inst
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ind Tech Res Inst filed Critical Ind Tech Res Inst
Priority to TW101146704A priority Critical patent/TWI476701B/en
Publication of TW201423606A publication Critical patent/TW201423606A/en
Application granted granted Critical
Publication of TWI476701B publication Critical patent/TWI476701B/en

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Description

燈號觸發事件偵測方法與裝置Light trigger event detection method and device

本揭露係關於一種燈號觸發事件偵測方法與裝置。The disclosure relates to a method and device for detecting a light trigger event.

各式交通燈號，例如方向燈，指揮燈、方向指示燈等，是車輛必備的燈號之一，顏色通常以黃色為主。方向燈觸發時可預告汽車即將變換車道，可能往左邊或右邊方向行駛或發生狀況。方向燈的位置在車輛前頭燈、後照鏡(或前輪弧)及尾燈的左右側，左右側各有一組，閃爍頻率每分鐘在60次以上，120次以下。方向燈觸發事件偵測裝置之應用範圍非常廣泛，尤其在夜間較不明顯之環境，方向燈不容易辨識，可利用車輛方向燈觸發事件偵測方法即時掌握特定區域之車輛行進變化，作為避免轉彎時車輛太過靠近而發生車禍之預警，以達成交通安全的目的。Various traffic signal lamps, such as turn signals (direction indicators), are among the lamps every vehicle must carry, and they are usually yellow. When a turn signal is triggered, it announces that the vehicle is about to change lanes, move to the left or right, or that some situation is occurring. Turn signals are located on the left and right sides of the headlights, the side mirrors (or front wheel arches), and the taillights, one set on each side, and they blink at a rate of more than 60 and fewer than 120 times per minute. A turn-signal trigger event detection device has a very wide range of applications. Especially at night, when turn signals are harder to recognize, a turn-signal trigger event detection method can instantly track changes in vehicle movement within a specific area and provide an early warning against collisions caused by vehicles getting too close while turning, thereby improving traffic safety.

近年有幾個基於視訊分析之方向燈觸發事件偵測方法被提出,目前方向燈觸發事件偵測方法主要有兩種偵測方式。一種方式是以車道上前方車輛特徵為基礎的偵測方法,根據對車道上之前車偵測後,所得到的前車區塊,進行此區塊內像素點顏色分析,再用前後張車輛影像之車燈相對位置關係、車燈的所在位置、群聚性、及閃爍頻率等車輛特徵來偵測方向燈。此偵測方法通常第一個步驟就是做前車偵測,在前車偵測中把本車道、左車道、及右車道作為搜尋範圍,將車子的陰影當成去尋找車子的底線,接 著利用陰影線寬與車道寬的比例及車頂位置等資訊估計出興趣區塊(Regions of Interest,ROIs),用來偵測車輛的左右兩邊。為了增加處理速度和準確率,對已經偵測到的前車區塊進行追蹤,並輔以車寬與車道寬的比例、對稱性、及區塊色彩值的標準差,來判別找到的區塊是否為車輛。根據前車偵測與追蹤後,所得到的前車區塊進行區塊內像素點的顏色分析,找出鎖定顏色像素,如:橘色、黃色,最後利用車燈在連續影像上前後位置的差異、車燈的位置、群聚性、及閃爍頻率來偵測方向燈。In recent years, several directional light trigger event detection methods based on video analysis have been proposed. At present, there are mainly two detection methods for directional light trigger event detection methods. One method is a detection method based on the characteristics of the vehicle in front of the lane. According to the obtained front vehicle block after detecting the previous vehicle on the lane, the color analysis of the pixel in the block is performed, and the vehicle image is used before and after. The directional lights are detected by vehicle characteristics such as the relative positional relationship of the lights, the position of the lights, the clustering, and the flicker frequency. The first step of this detection method is to do the front car detection. In the front car detection, the lane, the left lane, and the right lane are used as the search range, and the shadow of the car is taken as the bottom line of the car to find the bottom line. Using information such as the ratio of the shadow line width to the lane width and the position of the roof, estimates of the Regions of Interest (ROIs) are used to detect the left and right sides of the vehicle. In order to increase the processing speed and accuracy, the detected front block is tracked, and the ratio of the vehicle width to the lane width, the symmetry, and the standard deviation of the block color values are used to discriminate the found block. Whether it is a vehicle. According to the detection and tracking of the preceding vehicle, the obtained front car block performs color analysis of the pixels in the block to find the locked color pixels, such as orange and yellow, and finally uses the headlights on the front and rear positions of the continuous image. The difference, the position of the lights, the clustering, and the blinking frequency are used to detect the directional lights.

在夜間偵測車輛及追蹤依賴於車輛的後方尾燈,此方法假設為車尾燈的顏色主要為紅色,並以此為特徵在HSV(色相/飽和度/色調,Hue/Saturation/Value)色彩模型下將車尾燈與其它區域作區隔開來。另外,針對車燈有成對之特性,亦即利用對稱性加上車燈彼此距離的限制,以此為特徵結合卡爾曼濾波(Kalman filtering)作車輛追蹤。另一些文件統計顏色特徵以偵測可能為車輛尾燈之區域,並且為了濾除其它非車燈所產生的雜訊,以車燈之圓形特徵作為過濾之依據;又一些文件則以車輛尾燈為主要偵測目標,首先亦使用HSV色彩模型設計一顏色過濾器,將可能是尾燈的區域凸顯出來,當中利用大多數尾燈之顏色特性,包含紅色以及白色為主。此外,尾燈之間距亦作為特徵,主要避免將分屬不同車輛之尾燈劃分為同一群體,以此濾除大部分雜訊,上述方法之準確性取決於車輛尾燈之 所呈現之外形以及對稱的特性,當多車輛接鄰時易造成誤判。又另一些文件以尾燈之顏色特性,尾燈之大小對稱等特徵來判斷車距,此種偵測方法,依據左右兩邊車燈之對稱性偵測車輛或者車燈大小以判斷車距,來決定方向燈位置,進而偵測方向燈觸發事件,此方法易因多車輛時前車車輛被部分遮蔽,造成車輛拍攝之完整度不足而產生誤差。Detecting vehicles at night and tracking the rear taillights that depend on the vehicle, this method assumes that the color of the taillights is predominantly red and is characterized by the HSV (Hue/Saturation/Value) color model. Separate the rear lights from other areas. In addition, for the car lights have a pair of characteristics, that is, the use of symmetry plus the limit of the distance between the lights, as a feature combined with Kalman filtering for vehicle tracking. Other documents collect color features to detect areas that may be taillights of vehicles, and to filter out noise generated by other non-lights, the circular features of the lights are used as a basis for filtering; others are based on vehicle taillights. The main detection target is to first design a color filter using the HSV color model to highlight the area that may be the tail light, which uses the color characteristics of most taillights, including red and white. In addition, the distance between the taillights is also a feature, which mainly avoids dividing the taillights belonging to different vehicles into the same group, thereby filtering out most of the noise. The accuracy of the above method depends on the taillights of the vehicle. The appearance of the shape and the symmetry are easy to cause misjudgment when multiple vehicles are connected. In addition, some documents judge the distance between the color characteristics of the tail light and the size of the tail light. This detection method determines the direction of the vehicle or the light according to the symmetry of the left and right lights to determine the distance. The position of the lamp, in turn, detects the triggering event of the directional light. This method is easy to partially obscured by the vehicle in front of the vehicle due to multiple vehicles, resulting in an error in the completeness of the vehicle shooting.

因此，一種基於各式車輛燈號特徵的主動偵測之方法，在日夜間環境下達成與前方車輛遠距離時提供預警，而於近距離時提供警示，以避免車輛碰撞意外事件發生，且適用於不同車型及排除天候影響，即使在部分可視之情況下，也能偵測燈號觸發事件，無需明確偵測畫面中之個別獨立車輛，避免多車輛追蹤時之誤判問題，進而提高準確度，在僅獲得單張影像時，若影像中之方向燈已觸發下，仍可偵測方向燈位置等將是迫切的需求。Therefore, there is an urgent need for an active detection method based on the characteristics of various vehicle signal lamps that provides an early warning when the preceding vehicle is far away and a warning when it is close, in both daytime and nighttime environments, so as to avoid collision accidents; that works for different vehicle models and is robust to weather; that can detect signal-lamp trigger events even when a lamp is only partially visible; that does not need to explicitly detect each individual vehicle in the frame, thus avoiding the misjudgment problems of multi-vehicle tracking and improving accuracy; and that can still locate a turn signal from a single image if the turn signal in that image has already been triggered.

本揭露實施例可提供一種使用視訊來源的燈號觸發事件偵測方法與裝置。The disclosed embodiments may provide a signal triggering event detecting method and apparatus using a video source.

在一實施例中，本揭露係關於一種燈號觸發事件偵測方法，此方法包含：取得一輸入影像；強化此輸入影像的至少一亮燈區與至少一背景的一色彩強度對比，利用一散射偵測，依據此色彩強度對比，決定出至少一燈號候選區，依據此燈號候選區，利用一色彩空間轉換與一線性劃分，以判斷出至少一方向燈或其它色燈區塊，以及，輸出一偵測結果。In one embodiment, the present disclosure is directed to a signal-lamp trigger event detection method. The method comprises: obtaining an input image; enhancing a color intensity contrast between at least one lit-lamp region of the input image and at least one background; using scatter detection, based on the color intensity contrast, to determine at least one signal-lamp candidate region; based on the candidate region, using a color space conversion and a linear partition to identify at least one turn-signal or other colored-lamp block; and outputting a detection result.

在另一實施例中，本揭露係關於一種方向燈觸發事件偵測裝置，此裝置包含至少一影像擷取裝置、一運算模組、以及至少一輸出裝置，此運算模組還包括一強化對比模組、一散射篩選模組、以及一顏色辨別模組。此影像擷取裝置取得一輸入影像；此強化對比模組強化此輸入影像的至少一亮燈區與至少一背景區的一色彩強度對比；此散射篩選模組利用一散射偵測，依據此色彩強度對比，決定出至少一燈號候選區；此顏色辨別模組依據此燈號候選區，利用一色彩空間轉換與一線性劃分，以判斷出至少一方向燈或其它色燈區塊；以及，此輸出裝置輸出一偵測結果。In another embodiment, the present disclosure is directed to a turn-signal trigger event detection device. The device comprises at least one image capture device, a computation module, and at least one output device, and the computation module further includes a contrast enhancement module, a scatter screening module, and a color discrimination module. The image capture device obtains an input image; the contrast enhancement module enhances a color intensity contrast between at least one lit-lamp region of the input image and at least one background region; the scatter screening module uses scatter detection, based on the color intensity contrast, to determine at least one signal-lamp candidate region; the color discrimination module, based on the candidate region, uses a color space conversion and a linear partition to identify at least one turn-signal or other colored-lamp block; and the output device outputs a detection result.

茲配合下列圖示、實施例之詳細說明及申請專利範圍,將上述及本揭露之其他優點詳述於後。The above and other advantages of the present disclosure will be described in detail below with reference to the following drawings, detailed description of the embodiments, and claims.

本揭露係關於一種燈號觸發事件偵測方法與裝置。The disclosure relates to a method and device for detecting a light trigger event.

本揭露各實施例中，舉例地運用了車輛尾燈及方向燈之不變性，為初步濾除大量雜訊，以顏色為基礎之過濾器為首要步驟，在獲得可能為車燈之區域後，再以較精準之特徵選取方向燈候選區，進而在影像原始空間域以及轉換域偵測方向燈觸發事件。除了於影像原始域探討色彩上的差異之外，更重要的特性在於方向燈觸發時與非方向燈之散射程度不同，對於被觸發的方向燈來說，有著較大的散射範圍，散射程度較大之區域在頻率域中所呈現出來的結果有著較大的響應，但由於不同種類之車輛以及不同角度所獲得之車燈影像將有或多或少之差異，本揭露實施例可以有效區隔方向燈與尾燈，以達成方向燈觸發事件偵測目的，但不受限於方向燈與尾燈，其他燈號亦均可適用。The embodiments of the present disclosure exploit, by way of example, invariant properties of vehicle taillights and turn signals. To first filter out a large amount of noise, a color-based filter is the initial step; after obtaining regions that may be vehicle lamps, more precise features are used to select turn-signal candidate regions, and turn-signal trigger events are then detected in the original spatial domain of the image as well as in a transform domain. Beyond examining color differences in the original image domain, the more important property is that a triggered turn signal scatters light differently from a non-triggered lamp: a triggered turn signal has a larger scattering range, and regions with stronger scattering produce a larger response in the frequency domain. Although lamp images obtained from different vehicle types and viewing angles vary to some extent, the disclosed embodiments can effectively distinguish turn signals from taillights to achieve turn-signal trigger event detection; the approach is not limited to turn signals and taillights, and other signal lamps are also applicable.

第一圖根據本揭露的一實施例,以車輛方向燈為範例說明燈號觸發事件偵測方法。如第一圖所示,此方法包含:取得一輸入影像,如步驟110所示;強化此輸入影像的至少一亮燈區與至少一背景的一色彩強度對比,如步驟120所示;利用一散射偵測,依據此色彩強度對比,決定出一燈號候選區,如步驟130所示,依據此燈號候選區,利用一色彩空間轉換與一線性劃分,以判斷出一方向燈或其它色燈區塊,如步驟140所示;以及輸出一偵測結果,如步驟150所示。The first figure illustrates a light source trigger event detecting method by using a vehicle direction light as an example according to an embodiment of the present disclosure. As shown in the first figure, the method includes: obtaining an input image, as shown in step 110; enhancing a color intensity contrast between the at least one lighting area of the input image and the at least one background, as shown in step 120; The scatter detection determines a candidate area for the light according to the contrast of the color intensity. As shown in step 130, a color space conversion and a linear division are used according to the candidate area of the light to determine a direction light or other color. The light block, as shown in step 140; and outputting a detection result, as shown in step 150.

第二圖是一流程示意圖，詳細說明第一圖中燈號觸發事件偵測方法。如第二圖所示，取得一輸入影像110步驟中，以取得一視訊來源之一輸入影像。此強化對比120包含一影像強化121與一背景對比122。影像強化121，例如經由一過濾方式，將此輸入影像濾除雜訊；一背景對比122，例如以逐級函數計算，提高此輸入影像的一亮燈區和一背景的對比。The second figure is a flow diagram detailing the signal-lamp trigger event detection method of the first figure. As shown in the second figure, in the step of obtaining an input image 110, an input image is obtained from a video source. The contrast enhancement 120 comprises an image enhancement 121 and a background contrast 122. The image enhancement 121 removes noise from the input image, for example via a filtering scheme; the background contrast 122, for example computed with a step function, raises the contrast between a lit-lamp region and a background of the input image.

散射篩選130還包括一參數計算131與一特徵篩選132。參數計算131，以建立模型化散射的特性，獲得該輸入影像的至少一個燈號候選區；一特徵篩選132，依據一特徵參數門檻值，判斷此多個燈號候選區的各個亮點，即是依散射程度大小上作出區分以判斷方向燈或其它色燈區塊。The scatter screening 130 further includes a parameter calculation 131 and a feature screening 132. The parameter calculation 131 establishes the modeled scattering characteristics to obtain at least one signal-lamp candidate region of the input image; the feature screening 132 judges the bright spots of the candidate regions according to a feature-parameter threshold, that is, it distinguishes by degree of scattering to identify turn-signal or other colored-lamp blocks.

顏色辨別140將燈號候選區經由一色彩空間轉換及線性劃分141以便於區隔散射程度較強的煞車燈，找出方向燈的區塊，進而由一判斷顏色142判斷該區塊的顏色。輸出一偵測結果150判斷上述步驟的結果，以決定方向燈觸發事件，例如區塊符合黃顏色的條件，則視為方向燈事件發生，以及輸出判斷結果，然其他顏色亦可適用，不因此而受限。The color discrimination 140 passes the signal-lamp candidate regions through a color space conversion and linear partition 141 so that brake lights, which also scatter strongly, can be separated out and the turn-signal blocks located; a color judgment 142 then determines the color of each block. The output of a detection result 150 evaluates the results of the above steps to decide on a turn-signal trigger event: for example, if a block satisfies the yellow-color condition, a turn-signal event is considered to have occurred and the judgment result is output; other colors are also applicable, and the method is not limited in this respect.

在取得一輸入影像110的步驟中，其中視訊來源是一視訊裝置、一影音檔案或網路串流等擷取RGB視訊之來源，RGB色彩模式是工業界的顏色標準之一，RGB代表紅、綠、藍三個顏色，一RGB三個數值代表在影像中一個像素的紅、綠、藍顏色的數值。影像強化121之目的是初步濾除大量雜訊；在本揭露實施例中，此影像強化121可經由以顏色為基礎之過濾方式，此顏色為基礎之過濾方式例如是把整張影像的每一個像素的RGB三個數值以下列式子做影像強化，In the step of obtaining an input image 110, the video source is a source of RGB video such as a video device, a video file, or a network stream. The RGB color model is one of the industry color standards; RGB stands for the three colors red, green, and blue, and an RGB triple represents the red, green, and blue values of one pixel in the image. The purpose of the image enhancement 121 is to first filter out a large amount of noise; in the disclosed embodiment, the image enhancement 121 may use a color-based filtering scheme, for example taking the three RGB values of every pixel of the whole image and enhancing the image with the following formula,
C(p) = Max(R(p), G(p), B(p)) / 255

其中Max(R,G,B)表示每一像素RGB三個數值中的最大值，即為該像素最有代表性的值，並且將其除以255作正規化，即選取該輸入影像的每一像素三原色數值的最大值為該像素的顏色值，而成為濾除雜訊的依據。背景對比122是以較精準之特徵，選取車輛亮燈區，在得到上述影像強化121強化後的影像後，提高此影像中亮燈區域和背景的對比度。由於在真實道路環境，除了來自車輛本身的車燈之外，尚有來自背景之光源，例如道路周遭之建築物、路燈以及反射光等干擾光源，當這些光源與車燈十分類似時，都可能對車輛亮燈例如尾燈或者方向燈的偵測造成極大之影響。因此，在本揭露實施例中，強化對比120包含的背景對比122針對上述影像強化121強化後的色彩強度影像，提高此影像中亮燈區和背景的對比度，以進一步減少非車輛亮燈的雜訊干擾。Here Max(R, G, B) denotes the maximum of the three RGB values of each pixel, which is the most representative value of that pixel, and it is divided by 255 for normalization; that is, the maximum of the three primary-color values of each pixel of the input image is taken as that pixel's color value and serves as the basis for filtering out noise. The background contrast 122 then selects lit vehicle-lamp regions using more precise features: after the image enhanced by the image enhancement 121 is obtained, the contrast between the lit regions and the background in this image is raised. In a real road environment there are, besides the lamps of the vehicles themselves, light sources from the background, such as buildings along the road, street lights, and reflections; when such sources closely resemble vehicle lamps, they can severely affect the detection of lit vehicle lamps such as taillights or turn signals. Therefore, in the disclosed embodiment, the background contrast 122 included in the contrast enhancement 120 raises the contrast between the lit-lamp regions and the background of the color intensity image produced by the image enhancement 121, in order to further reduce noise interference from non-vehicle light sources.
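The per-pixel enhancement described above is simple enough to sketch directly. The following Python/NumPy snippet is a minimal illustration of the Max(R, G, B)/255 normalization; the function name and the use of NumPy arrays are choices of this sketch, not part of the patent.

```python
import numpy as np

def enhance_color_intensity(rgb_image: np.ndarray) -> np.ndarray:
    """Per-pixel enhancement: keep the largest of the R, G, B values,
    normalized to [0, 1]. `rgb_image` is an (H, W, 3) uint8 array."""
    # Max(R, G, B) for every pixel, then divide by 255 to normalize.
    intensity = rgb_image.astype(np.float32).max(axis=2) / 255.0
    return intensity  # (H, W) map used as the basis for noise filtering
```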

如第三圖所示者是說明背景對比122如何提高影像中亮燈區和背景的對比度。背景對比122例如使用逐級函數(step function)處理，如下列所式：As shown in the third figure, the background contrast 122 improves the contrast between the lit-lamp regions and the background in the image. The background contrast 122 is processed, for example, with a step function as follows:
U(p) = C(p), if C(p) > θu; U(p) = 0, otherwise

其中參數θu是車輛亮燈的色彩強度門檻值，也就是說色彩強度若大於色彩強度門檻值θu的影像像素將列為處理的目標；因此在第二圖中，判斷該輸入影像中至少一像素之色彩強度是否高於色彩強度門檻值，若是高於此色彩強度門檻值，則取用該像素，若不是高於此色彩強度門檻值，則不取用該像素，最後即得到一去除雜訊的影像。Here the parameter θu is the color intensity threshold for lit vehicle lamps; that is, image pixels whose color intensity exceeds the threshold θu are retained as processing targets. Therefore, in the second figure, it is determined whether the color intensity of at least one pixel of the input image is higher than the color intensity threshold: if it is higher, the pixel is taken; if it is not, the pixel is discarded. The result is an image with the noise removed.
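A minimal sketch of this thresholding step follows; it assumes the enhanced intensity map from the previous snippet, and the numeric value of θu is an example only, since the patent does not fix one here.

```python
import numpy as np

def background_contrast(intensity: np.ndarray, theta_u: float = 0.7) -> np.ndarray:
    """Step-function contrast: keep pixels whose enhanced intensity exceeds
    the lamp threshold theta_u, and zero out everything else."""
    # theta_u = 0.7 is an assumed example value, not taken from the patent.
    return np.where(intensity > theta_u, intensity, 0.0)
```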

在散射篩選130還包含一參數計算131與一特徵篩選132。其中，參數計算131先建立一模型化散射的特性，舉例來說，運用中上(Nakagami)統計模型或其他模型以建立模型化散射的特性。中上統計模型的分佈模型如下列所式：The scatter screening 130 further comprises a parameter calculation 131 and a feature screening 132. The parameter calculation 131 first establishes a model of the scattering characteristics, for example using the Nakagami statistical model or another model. The distribution of the Nakagami statistical model is as follows:
f(R; m, Ω) = 2 m^m R^(2m-1) / (Γ(m) Ω^m) · exp(−m R² / Ω), R ≥ 0

其中，Γ(.)為gamma函數，m為中上參數，Ω為比率參數。比率參數Ω和中上分佈模型的中上參數m如下列所式：Here Γ(.) is the gamma function, m is the Nakagami (shape) parameter, and Ω is the ratio (scale) parameter. The ratio parameter Ω and the Nakagami parameter m of the Nakagami distribution are given by:
Ω = E(R²), m = Ω² / E[(R² − Ω)²]

其中R為被散射封包，E(.)為統計的平均值。中上參數m是形狀參數，從被散射封包之機率密度函數決定。Here R is the backscattered envelope and E(.) is the statistical mean. The Nakagami parameter m is a shape parameter determined from the probability density function of the backscattered envelope.
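The moment-based estimates above translate directly into code. The sketch below estimates Ω and m over a local window of enhanced intensities; treating each window of the intensity map as samples of the "backscattered envelope" is an interpretation made here for illustration.

```python
import numpy as np

def nakagami_parameters(window: np.ndarray, eps: float = 1e-6):
    """Moment estimators for the Nakagami model:
    Omega = E[R^2], m = Omega^2 / E[(R^2 - Omega)^2]."""
    r2 = window.astype(np.float64) ** 2
    omega = r2.mean()
    m = omega ** 2 / (np.mean((r2 - omega) ** 2) + eps)  # eps avoids division by zero
    return m, omega
```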

上述的中上分佈(Nakagami distribution)的特性是用來模型化尾燈，以找出可能為燈號的區域，應用中上m參數能夠在影像中的散射程度大小上作出區分，基本上閃爍的方向燈和剎車燈比起一般狀態的其他尾燈，其散射的程度會明顯的大上許多。在本揭露實施例中，定義兩個特徵函數DR與SR，在中上m參數的影像中，DR表示中上影像局部區域的密度，而SR表示局部區域中上參數的一致性，兩個特徵函數DR與SR如下列所示：The Nakagami distribution characteristics described above are used to model taillights in order to find regions that may be signal lamps. The Nakagami m parameter can distinguish different degrees of scattering in the image: a blinking turn signal or a brake light scatters noticeably more strongly than other taillights in their normal state. In the disclosed embodiment, two feature functions DR and SR are defined on the Nakagami m-parameter image: DR denotes the density of a local region of the m-parameter image, and SR denotes the consistency of the m parameter within the local region. The two feature functions DR and SR are as follows:

其中pi表示影像中第i個像素，B(pi<0.9)=0、B(pi≥0.9)=1，n²為正方形掃描視窗中像素的總數。再者藉由觀察在方向燈亮起跟未亮起的統計模型中兩個特徵函數DR與SR的視覺顯著圖(Visual Saliency Image)，可以發現到車輛尾燈的中上m值會明顯不同，其中視覺顯著圖例如影像視覺特徵圖(Visual Characteristics Map)，即是在一張影像上表示出視覺顯著區域的對應圖。因此在參數計算131中，藉由建立模型化散射的特性可以獲得該輸入影像的多個具備形狀參數(Nakagami m值)的燈號候選區。Here pi denotes the i-th pixel in the image, B(pi < 0.9) = 0 and B(pi ≥ 0.9) = 1, and n² is the total number of pixels in the square scan window. Furthermore, by observing the visual saliency images of the two feature functions DR and SR for the statistical model with the turn signal on versus off, it can be seen that the Nakagami m values of vehicle taillights differ markedly; a visual saliency image, for example a visual characteristics map, is a map that marks the visually salient regions of an image. Therefore, in the parameter calculation 131, a plurality of signal-lamp candidate regions of the input image, each characterized by a shape parameter (Nakagami m value), are obtained by establishing the modeled scattering characteristics.
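The exact expressions for DR and SR are not reproduced in this text, so the sketch below fills them in with plausible stand-ins consistent with the prose: DR as the fraction of window pixels whose m value passes the 0.9 indicator B(·), and SR as a simple consistency score based on the variance of m inside the window. Both definitions are assumptions made for illustration, not the patent's own formulas.

```python
import numpy as np

def local_lamp_features(m_map: np.ndarray, n: int = 9, b_thresh: float = 0.9):
    """Slide an n x n window over the Nakagami m-parameter map and compute,
    per window center, a density feature D_R and a consistency feature S_R."""
    h, w = m_map.shape
    half = n // 2
    d_r = np.zeros_like(m_map, dtype=np.float64)
    s_r = np.zeros_like(m_map, dtype=np.float64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = m_map[y - half:y + half + 1, x - half:x + half + 1]
            b = (win >= b_thresh).astype(np.float64)   # indicator B(p_i)
            d_r[y, x] = b.sum() / (n * n)              # density of strong-scatter pixels
            s_r[y, x] = 1.0 / (1.0 + win.var())        # higher when m is locally uniform
    return d_r, s_r
```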

其中，特徵篩選132可依據一特徵參數門檻值，判斷此多個燈號候選區的各個亮點。其中特徵參數門檻值是形狀參數(Nakagami m值)，即是依散射程度大小上作出區分以判斷燈號亮起。但是由於光線具有隨著距離增加而衰減的特性，車燈隨著與攝影機的距離愈遠，則車燈訊號將隨之衰減；因此，此特徵參數門檻值可依據一被拍攝物，例如車輛與一影像擷取裝置，例如攝影機之間的距離作調整，根據下列式子作適應性調整方向燈偵測之特徵參數門檻值，The feature screening 132 may judge the bright spots of the plurality of signal-lamp candidate regions according to a feature-parameter threshold. The feature-parameter threshold is on the shape parameter (the Nakagami m value); that is, lit lamps are identified by distinguishing the degree of scattering. However, since light attenuates with distance, the farther a vehicle lamp is from the camera, the weaker its signal becomes; the feature-parameter threshold may therefore be adjusted according to the distance between a photographed object, such as a vehicle, and an image capture device, such as a camera, and the threshold for turn-signal detection is adapted according to the following formula,

其中TM是特徵參數門檻值，Ddis表示距離，Hd表示距離上限，Ld表示距離下限。此特徵篩選132檢查中上m參數的矩陣，如果有大於特徵參數門檻值的像素出現，則記錄其座標，如第四圖所示。Here TM is the feature-parameter threshold, Ddis denotes the distance, Hd denotes the upper distance bound, and Ld denotes the lower distance bound. The feature screening 132 checks the matrix of Nakagami m parameters, and if pixels larger than the feature-parameter threshold appear, their coordinates are recorded, as shown in the fourth figure.
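The adaptive threshold formula itself is not reproduced in this text. The snippet below therefore assumes a simple linear interpolation between a near and a far threshold over the distance range [Ld, Hd], which matches the described intent (distant, attenuated lamps get a more permissive threshold) but is not necessarily the patent's exact expression; all numeric values are example assumptions.

```python
import numpy as np

def adaptive_m_threshold(d_dis: float, l_d: float = 5.0, h_d: float = 50.0,
                         t_near: float = 0.9, t_far: float = 0.6) -> float:
    """Assumed adaptive threshold T_M on the Nakagami m value: relax the
    threshold linearly as the lamp-to-camera distance grows from l_d to h_d."""
    alpha = np.clip((d_dis - l_d) / (h_d - l_d), 0.0, 1.0)
    return (1.0 - alpha) * t_near + alpha * t_far
```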

在找到可能為燈號亮點的區域後，顏色辨別140的色彩空間轉換及線性劃分141將上述處理過的輸入影像由RGB(紅綠藍三原色)色彩空間轉換至CIE(國際照明委員會，International Commission on Illumination)色彩空間，再使用顏色來區隔方向燈和同樣為散射程度強的煞車燈。在CIE XYZ色彩空間中，三色刺激值是一組稱為X、Y和Z的值，計算方式如下：X=xY/y，Z=((1-x-y)Y)/y，Y(XYZ)=Y(xyY)。After finding regions that may be lit signal lamps, the color space conversion and linear partition 141 of the color discrimination 140 converts the processed input image from the RGB (red, green, blue) color space to the CIE (International Commission on Illumination) color space, and color is then used to separate turn signals from brake lights, which also scatter strongly. In the CIE XYZ color space, the tristimulus values are a set of values called X, Y, and Z, computed as follows: X = xY/y, Z = ((1−x−y)Y)/y, and Y(XYZ) = Y(xyY).

其中,XYZ色彩空間可以與RGB色彩空間相互轉換,把RGB的輸入影像轉換至CIE的色彩空間,經過轉換矩陣M把RGB圖像的三通道r,g,b轉換成X,Y,Z的參數,如以下式子:[X Y Z]=[r g b][M],其中M為轉換矩陣, Among them, the XYZ color space can be converted with the RGB color space, and the RGB input image is converted into the CIE color space, and the three channels r, g, b of the RGB image are converted into X, Y, Z parameters by the conversion matrix M. , as in the following formula: [XYZ]=[rgb][M], where M is the transformation matrix,

而[r g b]滿足R=r^(1/γ)、G=g^(1/γ)、B=b^(1/γ)。And [r g b] satisfies R = r^(1/γ), G = g^(1/γ), B = b^(1/γ).
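The entries of the conversion matrix M and the value of γ are not reproduced in this text. The sketch below therefore uses the common sRGB-to-XYZ (D65) matrix and γ = 2.2 purely as stand-in values; the structure (gamma expansion followed by a 3×3 matrix multiply) is what the passage describes.

```python
import numpy as np

# Assumed stand-ins: sRGB D65 matrix and gamma; the patent's own M and gamma may differ.
M_RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                         [0.2126, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9505]])
GAMMA = 2.2

def rgb_to_xyz(rgb_pixel):
    """Convert one [R, G, B] pixel (0-255) to CIE XYZ via gamma expansion
    followed by the linear transform [X Y Z] = M @ [r g b]."""
    rgb = np.asarray(rgb_pixel, dtype=np.float64) / 255.0
    linear = rgb ** GAMMA            # r = R^gamma, i.e. R = r^(1/gamma)
    return M_RGB_TO_XYZ @ linear
```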

承上述，輸入影像經由上述的RGB色彩空間轉換至CIE色彩空間後，色彩空間轉換及線性劃分141再以線性的方式劃分出某特定顏色，例如黃色區塊是由判別達成黃色區塊的條件而定，即判斷該燈號候選區的顏色是依CIE色彩空間的XYZ參數符合某特定顏色的條件作判斷。Following the above, after the input image has been converted from the RGB color space to the CIE color space, the color space conversion and linear partition 141 carves out a particular color in a linear fashion; for example, a yellow block is identified by testing whether the conditions for a yellow block are met, i.e., the color of a signal-lamp candidate region is judged by whether its XYZ parameters in the CIE color space satisfy the conditions for that particular color.

如第五圖所示,是以線性的方式劃分出例如黃色的區塊以判別達成黃色區塊的條件。此條件為: As shown in the fifth figure, a block such as yellow is divided in a linear manner to discriminate the condition for reaching a yellow block. This condition is:

換句話說，轉換為CIE色彩空間的三個X,Y,Z參數若同時符合上述三式子，則經由判斷顏色142認定為黃色的方向燈，惟其它顏色可依此換算而得，如第六圖所示。In other words, if the three X, Y, Z parameters obtained in the CIE color space simultaneously satisfy the three conditions above, the judgment color 142 recognizes the block as a yellow turn signal; other colors can be derived by the same kind of conversion, as shown in the sixth figure.
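The three linear inequalities that define the yellow region appear only in the fifth figure and are not reproduced in this text, so the check below uses hypothetical example conditions on X, Y, Z merely to show the shape of such a linear partition; the actual coefficients would have to come from the patent's figure.

```python
def is_yellow_block(x: float, y: float, z: float) -> bool:
    """Hypothetical linear partition of CIE XYZ space for 'yellow'.
    The three inequalities below are placeholders, not the patent's values."""
    s = x + y + z
    return (x > 0.40 * s and   # strong red contribution
            y > 0.35 * s and   # strong green contribution
            z < 0.15 * s)      # little blue
```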

上述中取得的輸入影像經由散射篩選130和顏色辨別140後之結果，結果輸出150判斷上述程序的結果，以決定方向燈觸發事件，例如燈號候選區符合黃顏色的條件時，則視為方向燈事件發生，以及輸出判斷結果，若未符合則為其它色燈。After the input image obtained above has passed through the scatter screening 130 and the color discrimination 140, the result output 150 evaluates the results of these procedures to decide on a turn-signal trigger event: for example, when a signal-lamp candidate region satisfies the yellow-color condition, a turn-signal event is considered to have occurred and the judgment result is output; if the condition is not met, the region is treated as a lamp of another color.

第七圖是與本揭露另一實施範例一致的一示意圖，說明燈號觸發事件偵測方法的流程圖。如第七圖所示，取得一輸入影像，如步驟110所示；強化此輸入影像的至少一亮燈區與至少一背景的一色彩強度對比，如步驟120所示；利用一色彩空間轉換與一線性劃分，依據該色彩強度對比，決定出至少一燈號候選區，例如全為黃顏色燈號，如步驟140所示；依據該燈號候選區，利用一散射偵測，來判斷出至少一方向燈或其它色燈區塊，如步驟130所示；以及輸出一偵測結果，如步驟150所示；第七圖意在說明步驟130與步驟140之實施順序可與第二圖所示者前後更動，惟此並不影響二實施範例之功效。The seventh figure is a schematic diagram consistent with another embodiment of the present disclosure, illustrating a flowchart of the signal-lamp trigger event detection method. As shown in the seventh figure: obtain an input image, as in step 110; enhance a color intensity contrast between at least one lit-lamp region of the input image and at least one background, as in step 120; use a color space conversion and a linear partition, based on the color intensity contrast, to determine at least one signal-lamp candidate region, for example all yellow lamps, as in step 140; based on the candidate region, use scatter detection to identify at least one turn-signal or other colored-lamp block, as in step 130; and output a detection result, as in step 150. The seventh figure is intended to show that the order of steps 130 and 140 may be swapped relative to the second figure without affecting the effectiveness of either embodiment.

第八圖是在說明當上述判斷出一方向燈區塊後，如何決定車輛之左或右方向燈，左右方向燈判斷810採用以燈光星狀散射的概念，在經由散射篩選130和顏色辨別140找到方向燈區塊之後，以該方向燈區塊為中心向左下方、右下方或二者延伸尋找具備較弱散射值的區域，即保險桿或車牌。左右方向燈判斷程序810處理方式和特徵篩選132相同，依據不相同或相同之特徵參數門檻值判斷該區域的亮點，即使用Nakagami Imaging的方式處理，得到小區域的視覺顯著圖，此是因為前方車輛車牌以及保險桿的部分，因為反光而有較弱散射值，即分佈統計模型的形狀參數值(如：Nakagami的m值)。左右方向燈判斷810尋找到具備較弱散射值的一區域後，再以相對位置的特性來區分左方向燈或右方向燈，即若是在左下方處尋找到具備較弱散射值的區域，則判斷為右方向燈觸發事件，若是在右下方處尋找到具備較弱散射值的區域，則判斷為左方向燈觸發事件。最後將判斷結果輸出到結果輸出150。The eighth figure explains how, once a turn-signal block has been identified as above, the left or right turn signal of the vehicle is determined. The left/right turn-signal judgment 810 uses the concept of star-shaped light scattering: after the turn-signal block has been found through the scatter screening 130 and the color discrimination 140, the method searches from the turn-signal block as a center toward the lower left, the lower right, or both, for a region with a weaker scattering value, i.e., the bumper or the license plate. The left/right judgment procedure 810 is handled in the same way as the feature screening 132, judging the bright spots of that region against the same or a different feature-parameter threshold, i.e., processing it with Nakagami imaging to obtain a visual saliency map of the small region; this works because the license plate and bumper of the preceding vehicle are reflective and therefore have weaker scattering values, i.e., smaller shape-parameter values of the statistical distribution model (such as the Nakagami m value). After the left/right judgment 810 finds a region with a weaker scattering value, the left or right turn signal is distinguished by relative position: if the weaker-scattering region is found toward the lower left, a right turn-signal trigger event is declared; if it is found toward the lower right, a left turn-signal trigger event is declared. Finally, the judgment result is passed to the result output 150.
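A compact sketch of this left/right decision follows. The offsets, window size, and "weak scattering" threshold are illustrative assumptions; the rule itself (a weak-m region to the lower left implies the right turn signal, and vice versa) is taken from the passage above.

```python
import numpy as np

def classify_turn_side(m_map: np.ndarray, cx: int, cy: int,
                       offset: int = 20, win: int = 7, weak_m: float = 0.3) -> str:
    """Decide left vs. right turn signal for a detected lamp block at (cx, cy)
    by looking for a weak-scattering (low Nakagami m) region such as the
    license plate or bumper below and to one side of the lamp."""
    def mean_m(x: int, y: int) -> float:
        h, w = m_map.shape
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        return float(m_map[y0:y1, x0:x1].mean())

    lower_left = mean_m(cx - offset, cy + offset)
    lower_right = mean_m(cx + offset, cy + offset)
    if lower_left < weak_m and lower_left <= lower_right:
        return "right turn signal"   # plate/bumper lies to the lower left of the lamp
    if lower_right < weak_m:
        return "left turn signal"
    return "undetermined"
```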

第九圖根據本揭露的一實施例,說明燈號觸發事件偵測裝置。如第九圖所示,此裝置包含至少一影像擷取裝置910、一運算模組920、以及至少一輸出裝置930。運算模組920還包括一強化對比模組921、一散射篩選模組922、以及一顏色辨別模組923,請參考第十圖。影像擷取裝置910取得一輸入影像;強化對比模組921強化此輸入影像的至少一亮燈區與至少一背景的一色彩強度對比;散射篩選模組922利用一散射偵測,依據此色彩強度對比,決定出至少一燈號候選區;顏色辨別模組923依據此燈號候選區,利用一色彩空間轉換與一線性劃分,判斷出至少一方向燈或其它色燈區塊;以及,此輸出裝置930輸出一偵測 結果。The ninth figure illustrates a light source trigger event detecting apparatus according to an embodiment of the present disclosure. As shown in the ninth figure, the device includes at least one image capturing device 910, a computing module 920, and at least one output device 930. The computing module 920 further includes an enhanced contrast module 921, a scattering screening module 922, and a color discrimination module 923. Please refer to the tenth figure. The image capturing device 910 obtains an input image; the enhanced contrast module 921 enhances a color intensity contrast between the at least one lighting region of the input image and the at least one background; and the scattering screening module 922 utilizes a scattering detection according to the color intensity. Comparing, determining at least one signal candidate area; the color discrimination module 923 determines at least one direction light or other color light block by using a color space conversion and a linear division according to the light number candidate area; and, the output Device 930 outputs a detection result.

第十圖是一示意圖,詳細說明第九圖中方向燈觸發事件偵測裝置。如第十圖所示,運算模組920還包括一強化對比模組921、一散射篩選模組922、以及一顏色辨別模組923。視訊擷取裝置910取得一視訊來源的一輸入影像,並且將此輸入影像輸出到此運算模組920。強化對比模組921將此輸入影像經由一過濾方式將此輸入影像濾除部分雜訊,以及提高該輸入影像的亮燈區和背景的對比。散射篩選模組922建立模型化散射特性,並依據一特徵參數門檻值判斷出至少一燈號候選區。顏色辨別模組923再將該候選區做一色彩空間轉換與線性劃分以判斷該區的顏色。該結果輸出裝置930判斷方向燈觸發事件以及輸出判斷結果至外接裝置。此方向燈觸發事件偵測裝置之運算模組920還被配置以方向燈為中心向左下方、右下方或二者延伸尋找一區域,依據一較弱散射值特徵參數門檻值來判斷該區域的亮點,再以相對位置來區分左方向燈或右方向燈。The tenth figure is a schematic diagram illustrating the directional light trigger event detecting device in the ninth figure. As shown in the tenth figure, the computing module 920 further includes an enhanced contrast module 921, a scattering screening module 922, and a color discrimination module 923. The video capture device 910 obtains an input image of a video source and outputs the input image to the computing module 920. The enhanced contrast module 921 filters the input image to filter the partial noise by a filtering method, and improves the contrast between the lighting area and the background of the input image. The scatter screening module 922 establishes a modeled scatter characteristic and determines at least one signal candidate area based on a characteristic parameter threshold. The color discrimination module 923 then performs a color space conversion and linear division on the candidate area to determine the color of the area. The result output means 930 judges the directional light triggering event and outputs the determination result to the external device. The operation module 920 of the directional light triggering event detecting device is further configured to extend to the lower left, the lower right, or both of the direction lights to find an area, and determine the area according to a weak scatter value characteristic parameter threshold value. Highlights, then distinguish the left or right direction lights by relative position.

綜上所述，本揭露之一種燈號觸發事件偵測方法與裝置，舉例利用車輛尾燈及方向燈之不變性，先以顏色或強弱為基礎之過濾方式初步濾除大量雜訊，在獲得可能為車燈之區域後，再以較精準之特徵選取方向燈候選區，進而在影像原始空間域以及轉換域偵測方向燈觸發事件。以及利用方向燈觸發時與非方向燈之散射程度不同，並且有效區隔方向燈與尾燈間之差異，以達成方向燈觸發事件偵測目的。In summary, the signal-lamp trigger event detection method and device of the present disclosure exploit, by way of example, invariant properties of vehicle taillights and turn signals: a color- or intensity-based filter first removes a large amount of noise; after regions that may be vehicle lamps are obtained, more precise features select the turn-signal candidate regions, and turn-signal trigger events are then detected in the original spatial domain and in a transform domain. In addition, the method exploits the fact that a triggered turn signal scatters differently from non-turn-signal lamps and effectively separates turn signals from taillights, thereby achieving turn-signal trigger event detection.

以上所述者皆僅為本揭露實施例,不能依此限定本揭露實施之範圍。大凡本發明申請專利範圍所作之均等變化與修飾,皆應屬於本發明專利涵蓋之範圍。The above is only the embodiment of the disclosure, and the scope of the disclosure is not limited thereto. All changes and modifications made to the scope of the patent application of the present invention are intended to fall within the scope of the invention.

110‧‧‧取得一輸入影像110‧‧‧Get an input image

120‧‧‧強化此輸入影像的至少一亮燈區與至少一背景區的一色彩強度對比120‧‧‧Strengthen the intensity of at least one lighting area of the input image and at least one background area

121‧‧‧影像強化121‧‧·Image enhancement

122‧‧‧背景對比122‧‧‧Background comparison

130‧‧‧利用一散射偵測，依據此色彩強度對比決定出一燈號候選區130‧‧‧Use scatter detection to determine a signal-lamp candidate region based on the color intensity contrast

131‧‧‧參數計算131‧‧‧Parameter calculation

132‧‧‧特徵篩選132‧‧‧Feature screening

140‧‧‧依據此燈號候選區,利用一色彩空間轉換與一線性劃分,以判斷出一方向燈或其它色燈區塊140‧‧‧Based on the candidate area of the light, a color space conversion and a linear division are used to determine a direction light or other color light block

141‧‧‧色彩空間轉換及線性劃分141‧‧‧Color space conversion and linear division

142‧‧‧判斷顏色142‧‧‧Determination of color

150‧‧‧輸出一偵測結果150‧‧‧ Output a detection result

810‧‧‧左右方向燈判斷810‧‧‧ Left and right direction lights

910‧‧‧視訊擷取裝置910‧‧‧Video capture device

920‧‧‧運算模組920‧‧‧ Computing Module

921‧‧‧強化對比模組921‧‧‧Enhanced contrast module

922‧‧‧散射篩選模組922‧‧‧Drop screening module

923‧‧‧顏色辨別模組923‧‧‧Color Identification Module

930‧‧‧輸出裝置930‧‧‧ Output device

第一圖根據本揭露的一實施例,以方向燈為範例說明燈號觸發事件偵測方法。The first figure illustrates a method for detecting a light trigger event by using a direction light as an example according to an embodiment of the present disclosure.

第二圖是一流程示意圖,詳細說明第一圖中燈號觸發事件偵測方法。The second figure is a flow diagram illustrating the method for detecting the triggering event of the light in the first figure.

第三圖所示者是與本揭露的一實施範例一致的一示意圖,說明背景對比提高影像中亮燈區域和背景的對比度。The third figure is a schematic diagram consistent with an embodiment of the present disclosure, illustrating that the background contrast improves the contrast of the illuminated area and the background in the image.

第四圖是與所揭露的一實施範例一致的一示意圖,說明特徵篩選檢查中上m參數的矩陣,如果有大於特徵參數門檻值的像素出現,則記錄其座標。The fourth figure is a schematic diagram consistent with an embodiment of the disclosure, illustrating a matrix of upper m parameters in the feature screening check, and if there are pixels larger than the threshold of the feature parameter, the coordinates are recorded.

第五圖是與所揭露的一實施範例一致的一範例示意圖,說明以線性的方式劃分出黃色的區塊以判別達成黃色區塊的條件。The fifth figure is an exemplary diagram consistent with an embodiment of the disclosure, illustrating that the yellow blocks are divided in a linear manner to determine the condition for reaching the yellow block.

第六圖是與所揭露的一實施範例一致的一範例示意圖,說明轉換為CIE色彩空間的三個X,Y,Z參數若符合黃色的區塊的條件,則認定為黃色的方向燈。The sixth figure is an example schematic diagram consistent with an embodiment of the disclosure, illustrating that the three X, Y, and Z parameters converted to the CIE color space are considered to be yellow directional lights if they meet the conditions of the yellow block.

第七圖是與本揭露另一實施範例一致的一示意圖,說明燈號觸發事件偵測方法的流程圖。FIG. 7 is a schematic diagram consistent with another embodiment of the present disclosure, illustrating a flowchart of a method for detecting a trigger event of a light.

第八圖是與本揭露一實施範例一致的一示意圖,說明車輛左右方向燈觸發事件偵測方法的流程圖。The eighth figure is a schematic diagram consistent with an embodiment of the present disclosure, illustrating a flowchart of a method for detecting a trigger event of a left and right direction of a vehicle.

第九圖根據本揭露的一實施例，以方向燈為範例說明燈號觸發事件偵測裝置。The ninth figure illustrates a signal-lamp trigger event detecting device, using a turn signal as an example, according to an embodiment of the present disclosure.

第十圖是一示意圖，詳細說明第九圖中燈號觸發事件偵測裝置。The tenth figure is a schematic diagram detailing the signal-lamp trigger event detecting device of the ninth figure.

110‧‧‧取得一輸入影像110‧‧‧Get an input image

120‧‧‧強化此輸入影像的至少一亮燈區與至少一背景的一色彩強度對比120‧‧‧Enhance the contrast between at least one lighting area of the input image and at least one background color intensity

130‧‧‧利用一散射偵測,依據此色彩強度對比,決定出至少一燈號候選區130‧‧‧ Using a scatter detection, based on this color intensity contrast, determine at least one signal candidate area

140‧‧‧依據此燈號候選區,利用一色彩空間轉換與一線性劃分,以判斷出至少一方向燈或其它色燈區塊140‧‧‧According to the candidate area of the signal, a color space conversion and a linear division are used to determine at least one direction light or other color light block

150‧‧‧輸出一偵測結果150‧‧‧ Output a detection result

Claims (22)

一種燈號觸發事件偵測方法,該方法包含:取得一輸入影像;強化該輸入影像的至少一亮燈區與至少一背景的一色彩強度對比;利用一散射偵測,依據該色彩強度對比,決定出至少一燈號候選區;依據該燈號候選區,利用一色彩空間轉換與一線性劃分,以判斷出至少一色燈區塊;以及輸出一偵測結果。A method for detecting a triggering event of a light source, the method comprising: obtaining an input image; enhancing a color intensity contrast between at least one lighting area of the input image and at least one background; using a scattering detection, according to the color intensity contrast, Determining at least one signal candidate area; determining, according to the signal candidate area, a color space conversion and a linear division to determine at least one color light block; and outputting a detection result. 如申請專利範圍第1項所述之方法,其中該輸入影像是一影音檔案。The method of claim 1, wherein the input image is an audiovisual file. 如申請專利範圍第1項所述之方法,其中該色彩強度對比還包括一影像強化與一背景對比。The method of claim 1, wherein the color intensity comparison further comprises an image enhancement and a background contrast. 如申請專利範圍第3項所述之方法,其中該影像強化是經由一過濾方法將該輸入影像濾除雜訊。The method of claim 3, wherein the image enhancement is to filter the input image by a filtering method. 如申請專利範圍第4項所述之方法,其中該過濾方法是以顏色為基礎,選取該輸入影像每一像素三原色數值的最大值為該像素的強化後數值。The method of claim 4, wherein the filtering method is based on color, and the maximum value of the three primary colors of each pixel of the input image is selected as the enhanced value of the pixel. 如申請專利範圍第3項所述之方法,其中該背景對比是該輸入影像的該亮燈區和該背景的對比。The method of claim 3, wherein the background comparison is a comparison of the lighting area of the input image with the background. 如申請專利範圍第3項所述之方法,其中該背景對比是使用一逐級函數,判斷該輸入影像之至少一像素的一色彩強度。The method of claim 3, wherein the background comparison is to determine a color intensity of at least one pixel of the input image using a stepwise function. 如申請專利範圍第7項所述之方法,當該至少一像素的該色彩強度高於一色彩強度門檻值時,則取用該像素;當該至少一像素的該色彩強度低於該色彩強度門檻值時,則不取用該像素。The method of claim 7, wherein when the color intensity of the at least one pixel is higher than a color intensity threshold, the pixel is taken; when the color intensity of the at least one pixel is lower than the color intensity When the threshold is exceeded, the pixel is not taken. 如申請專利範圍第1項所述之方法,其中該散射偵測還包括一參數計算與一特徵篩選。The method of claim 1, wherein the scatter detection further comprises a parameter calculation and a feature selection. 如申請專利範圍第9項所述之方法,其中該參數計算是運用一中上(Nakagmi)統計模型以建立一模型化散射的特性,獲得該輸入影像的該燈號候選區。The method of claim 9, wherein the parameter calculation is to use a Nakagmi statistical model to establish a modeled scattering characteristic, and obtain the signal candidate area of the input image. 如申請專利範圍第9項所述之方法,其中該特徵篩選依據一特徵參數門檻值判斷該亮燈區。The method of claim 9, wherein the feature screening determines the lighting area based on a characteristic parameter threshold value. 如申請專利範圍第11項所述之方法,其中該特徵參數門檻值是該中上統計模型之一形狀參數值。The method of claim 11, wherein the characteristic parameter threshold is one of the shape parameter values of the upper middle statistical model. 如申請專利範圍第11項所述之方法,其中該特徵參數門檻值依據一被拍攝物與一影像擷取裝置之間的距離作調整。The method of claim 11, wherein the characteristic parameter threshold is adjusted according to a distance between a subject and an image capturing device. 如申請專利範圍第1項所述之方法,其中該色彩空間轉換與該線性劃分是用以判斷該燈號候選區的顏色。The method of claim 1, wherein the color space conversion and the linear division are used to determine a color of the light candidate area. 如申請專利範圍第14項所述之方法,其中該色彩空間轉換是將該輸入影像由一RGB色彩空間轉換至一CIE色彩空間。The method of claim 14, wherein the color space conversion is to convert the input image from an RGB color space to a CIE color space. 
如申請專利範圍第15項所述之方法,其中判斷該燈號候選區的顏色是依據該CIE色彩空間的一XYZ參數是否符合一特定顏色的條件作判斷。The method of claim 15, wherein determining the color of the candidate area of the light is determined according to whether a XYZ parameter of the CIE color space conforms to a specific color. 如申請專利範圍第1項所述之方法,還包含一左右方向判斷,以該至少一色燈區塊為中心,向左下方或右下方延伸尋找一區域,依據一較弱散射值特徵參數門檻值來判斷該區域,再以相對位置來區分左右方向。The method of claim 1, further comprising determining a left-right direction, centering on the at least one color light block, extending to the lower left or the lower right to find an area, according to a weaker scattering value characteristic parameter threshold value To judge the area, and then distinguish the left and right direction by relative position. 一種燈號觸發事件偵測之裝置,包含:至少一影像擷取裝置以取得至少一輸入影像;一運算模組,再包含:一強化對比模組用以強化該輸入影像的至少一亮燈區與至少一背景的一色彩強度對比;一散射篩選模組,以一散射偵測,依據該色彩強度對比,決定出至少一燈號候選區;以及一顏色辨別模組,依據該燈號候選區,以一色彩空間轉換與一線性劃分,判斷出至少一色燈區塊;以及至少一輸出裝置,輸出至少一偵測結果。A device for detecting a triggering event includes: at least one image capturing device for acquiring at least one input image; and an computing module, further comprising: an enhanced contrast module for enhancing at least one lighting region of the input image Comparing with a color intensity of at least one background; a scattering screening module determines a minimum candidate area according to the color intensity comparison according to the color intensity comparison; and a color discrimination module according to the light number candidate area And determining, by a color space conversion and a linear division, at least one color light block; and at least one output device outputting at least one detection result. 如申請專利範圍第18項所述之裝置,其中該影像擷取裝置是一視訊裝置、一攝影機或一照相機之其中之一。The device of claim 18, wherein the image capturing device is one of a video device, a camera or a camera. 如申請專利範圍第18項所述之裝置,其中該輸出裝置輸出至一外接裝置。The device of claim 18, wherein the output device is output to an external device. 一種燈號觸發事件偵測方法,該方法包含:取得一輸入影像;強化該輸入影像的至少一亮燈區與至少一背景的一色彩強度對比;利用一色彩空間轉換與一線性劃分,依據該色彩強度對比,決定出至少一燈號候選區; 依據該些燈號候選區,利用一散射偵測,以判斷出至少一色燈區塊;以及輸出一偵測結果。A method for detecting a signal triggering event, the method comprising: obtaining an input image; enhancing a color intensity contrast between at least one lighting area of the input image and at least one background; using a color space conversion and a linear division, according to the Color intensity comparison, determining at least one signal candidate area; According to the signal candidate regions, a scatter detection is used to determine at least one color light block; and a detection result is output. 如申請專利範圍第21項所述之方法,還包含一左右方向判斷,以該些色燈區塊為中心,向左下方或右下方延伸尋找一區域,依據一較弱散射值特徵參數門檻值來判斷該區域,再以相對位置來區分左右方向。The method of claim 21, further comprising determining a left-right direction, centering on the color light blocks, extending to the lower left or the lower right to find an area, according to a weaker scattering value characteristic parameter threshold value To judge the area, and then distinguish the left and right direction by relative position.
TW101146704A 2012-12-11 2012-12-11 Light signal detection method and apparatus for light-triggered events TWI476701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW101146704A TWI476701B (en) 2012-12-11 2012-12-11 Light signal detection method and apparatus for light-triggered events

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW101146704A TWI476701B (en) 2012-12-11 2012-12-11 Light signal detection method and apparatus for light-triggered events

Publications (2)

Publication Number Publication Date
TW201423606A TW201423606A (en) 2014-06-16
TWI476701B true TWI476701B (en) 2015-03-11

Family

ID=51394059

Family Applications (1)

Application Number Title Priority Date Filing Date
TW101146704A TWI476701B (en) 2012-12-11 2012-12-11 Light signal detection method and apparatus for light-triggered events

Country Status (1)

Country Link
TW (1) TWI476701B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040143380A1 (en) * 2002-08-21 2004-07-22 Stam Joseph S. Image acquisition and processing methods for automatic vehicular exterior lighting control
US20080180528A1 (en) * 2007-01-31 2008-07-31 Toru Saito Preceding Vehicle Detection System
US20080181461A1 (en) * 2007-01-31 2008-07-31 Toru Saito Monitoring System
TWI302879B (en) * 2006-05-12 2008-11-11 Univ Nat Chiao Tung Real-time nighttime vehicle detection and recognition system based on computer vision

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040143380A1 (en) * 2002-08-21 2004-07-22 Stam Joseph S. Image acquisition and processing methods for automatic vehicular exterior lighting control
TWI302879B (en) * 2006-05-12 2008-11-11 Univ Nat Chiao Tung Real-time nighttime vehicle detection and recognition system based on computer vision
US20080180528A1 (en) * 2007-01-31 2008-07-31 Toru Saito Preceding Vehicle Detection System
US20080181461A1 (en) * 2007-01-31 2008-07-31 Toru Saito Monitoring System

Also Published As

Publication number Publication date
TW201423606A (en) 2014-06-16

Similar Documents

Publication Publication Date Title
CN110197589B (en) Deep learning-based red light violation detection method
TWI302879B (en) Real-time nighttime vehicle detection and recognition system based on computer vision
O'Malley et al. Vehicle detection at night based on tail-light detection
EP3480057A1 (en) Rear obstruction detection
US9349070B2 (en) Vehicle external environment recognition device
JP3716623B2 (en) Thermal detector
CN102556021B (en) Control device for preventing cars from running red light
JP6068833B2 (en) Car color detector
JP6034923B1 (en) Outside environment recognition device
CN110688907B (en) Method and device for identifying object based on night road light source
JP6420650B2 (en) Outside environment recognition device
CN103544480A (en) Vehicle color recognition method
JP2009157492A (en) Vehicle detection device, vehicle detection system, and vehicle detection method
JP6508134B2 (en) Object discrimination device
CN104463170A (en) Unlicensed vehicle detecting method based on multiple detection under gate system
JP4936045B2 (en) Vehicle color discrimination device, method and program
CN102169583A (en) Vehicle shielding detection and segmentation method based on vehicle window positioning
CN109800693B (en) Night vehicle detection method based on color channel mixing characteristics
US10977500B2 (en) Street marking color recognition
TWI476701B (en) Light signal detection method and apparatus for light-triggered events
JP6329417B2 (en) Outside environment recognition device
US20230368545A1 (en) Method for processing images
JP6378547B2 (en) Outside environment recognition device
JP6654870B2 (en) Outside environment recognition device
JP2001216597A (en) Method and device for processing picture