WO2016165064A1 - Robust foreground detection method based on multi-view learning - Google Patents

Robust foreground detection method based on multi-view learning

Info

Publication number
WO2016165064A1
Authority
WO
WIPO (PCT)
Prior art keywords
foreground
background
feature
pixel
probability
Prior art date
Application number
PCT/CN2015/076533
Other languages
English (en)
French (fr)
Inventor
王坤峰
王飞跃
刘玉强
苟超
Original Assignee
中国科学院自动化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 filed Critical 中国科学院自动化研究所
Priority to PCT/CN2015/076533 priority Critical patent/WO2016165064A1/zh
Publication of WO2016165064A1 publication Critical patent/WO2016165064A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Definitions

  • the invention relates to an intelligent video monitoring technology, in particular to a robust foreground detection method based on multi-view learning.
  • Intelligent video surveillance is an important means of information collection, and foreground detection, or background subtraction, is a challenging low-level problem in intelligent video surveillance research.
  • other applications such as target tracking, recognition, and anomaly detection can be realized.
  • the basic principle of foreground detection is to compare the current image of the video scene with the background model to detect areas with significant differences.
  • foreground detection often encounters three challenges in practical applications: motion shadows, illumination changes, and image noise.
  • the motion shadow is caused by the occlusion of the light source by the foreground target, which is a hard shadow on a sunny day and a soft shadow on a cloudy day.
  • the motion shadow is easily detected as foreground, interfering with the extraction of the size and shape information of the segmented foreground object.
  • Light changes are common in traffic scenes. For example, as the sun moves through the sky, the light changes slowly; as the sun enters or moves out of the clouds, the light may change rapidly.
  • noise is inevitably introduced during image acquisition, compression, and transmission. If the signal-to-noise ratio is too low, it will be difficult to distinguish the foreground target from the background scene.
  • the sparse model mainly uses various variants of principal component analysis and matrix decomposition to model the background as a low rank representation and the foreground as a sparse outlier.
  • the parametric model uses a certain probability distribution to model the background.
  • Nonparametric models have greater flexibility in probability density estimation.
  • the machine learning model uses machine learning methods such as support vector machines and neural networks to classify foreground and background.
  • the prior art has the following problems. First, only the brightness feature is used, but the brightness feature is sensitive to illumination changes and motion shadows. Second, only a background model is established and foreground pixels are identified as outliers, making it difficult to distinguish foreground that is similar to the background in color. Third, the spatio-temporal consistency constraints in the video sequence are not exploited.
  • the robust foreground detection method based on multi-view learning provided by the invention can accurately realize the segmentation of the foreground and the background.
  • a robust foreground detection method based on multi-view learning including:
  • the energy function of the Markov random field model is constructed from the posterior probability of the foreground, the posterior probability of the background, and the spatio-temporal consistency constraint, and the energy function is minimized by a belief propagation algorithm to obtain the segmentation result of the foreground and the background.
  • the multi-view learning based robust foreground detection method can calculate the posterior probability of the foreground and the posterior probability of the background by using the Bayesian rule according to the foreground likelihood, the background likelihood and the prior probability, and
  • the energy function of the Markov random field model is constructed by the posterior probability of the foreground, the posterior probability of the background and the spatio-temporal consistency constraint, so that the foreground and background segmentation can be accurately realized by the belief propagation algorithm.
  • FIG. 1 is a flowchart of a method for detecting a robust foreground based on multi-view learning according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an input video image and a reference background image according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a pyramid search template according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of texture variation features based on iterative search and multi-scale fusion according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of an RGB color model according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of brightness variation characteristics and chromaticity change characteristics according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of a method for acquiring a candidate background according to an embodiment of the present invention.
  • FIG. 8 is a heterogeneous characteristic frequency histogram according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of image marking results according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of segmentation results of foreground and background according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for detecting a robust foreground based on multi-view learning according to an embodiment of the present invention.
  • in step S101, a reference background image is obtained from the input video by temporal median filtering, and iterative search and multi-scale fusion are performed on the current image and the reference background image to obtain heterogeneous features.
  • in step S102, the conditional probability densities of the foreground class and the background class are calculated using the conditional independence of the heterogeneous features, and the posterior probabilities of the foreground and the background are calculated by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability.
  • in step S103, the energy function of a Markov random field model is constructed from the posterior probability of the foreground, the posterior probability of the background, and a spatio-temporal consistency constraint, and the energy function is minimized with a belief propagation algorithm to obtain the segmentation result of the foreground and the background.
  • the obtaining of the reference background image from the input video by temporal median filtering includes: reading each frame of the input video; obtaining, by temporal median filtering, the median of each pixel within a threshold time window;
  • the reference background image is obtained from the median of each pixel.
  • the threshold time window is the duration of 500 frames; refer to FIG. 2, the input video image and reference background image provided by an embodiment of the present invention, where (a) is the input video image and (b) is the reference background image.
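The temporal median filtering above can be sketched as follows; a minimal NumPy illustration (array shapes and the toy 5-frame window are illustrative, standing in for the patent's 500-frame window):

```python
import numpy as np

def reference_background(frames):
    """Per-pixel temporal median over a window of frames.

    frames: array of shape (T, H, W) or (T, H, W, 3) -- the threshold
    time window (the embodiment uses a 500-frame window).
    """
    return np.median(np.asarray(frames), axis=0)

# Toy example: a static background of 10s with a transient "object" of 255.
frames = np.full((5, 2, 2), 10.0)
frames[2, 0, 0] = 255.0   # brief foreground occupies a minority of frames
bg = reference_background(frames)
```

Because the object is present in only one of the five frames, the per-pixel median recovers the background value at every pixel.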
  • the heterogeneous feature is a texture change feature
  • the iterative search and multi-scale fusion of the current image and the reference background image to obtain the heterogeneous feature includes:
  • TV i is the texture change feature
  • i is a current pixel
  • [I R (i), I G (i), I B (i)] are the RGB color values of the current pixel
  • j is the background pixel corresponding to the current pixel
  • [E R (j), E G (j), E B (j)] are the RGB color values of the background pixel
  • m ∈ N(i) is the spatial neighborhood of the current pixel, and n ∈ N(j) is the spatial neighborhood of the background pixel.
  • Texture variation features are robust to motion shadows and illumination changes, but are sensitive to dynamic backgrounds. If not properly processed, a swaying textured background area can result in large texture changes. Therefore, in order to solve the above problem, the pixel i in the current image is matched with the pixel j in the reference background image by an iterative search and a multi-scale fusion strategy.
  • FIG. 3 is a schematic diagram of a pyramid search template according to an embodiment of the present invention. As shown in FIG. 3, (a) is a large pyramid search template, and (b) is a small pyramid search template.
  • the specific search process is as follows: first, a coarse search is performed with the large pyramid template.
  • pixel i is initialized as the center of the search template; each iteration examines 9 positions, and the optimal position (i.e., the position minimizing TV i ) is set as the center of the next iteration; this iterative process is repeated until the optimal position is exactly the center of the search template. Second, a fine search is performed with the small pyramid template: only 5 positions need to be examined, and the pixel minimizing TV i is taken as the optimal position. Finally, the pixel j in the reference background image corresponding to pixel i of the current image is obtained.
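The two-stage pyramid search can be sketched as follows. This is a minimal illustration, not the patent's implementation: the matching cost is a plain neighborhood sum of absolute differences standing in for the texture variation TV i (formula (1) appears only as an image in the source), and the template offsets are assumed:

```python
import numpy as np

# Illustrative search templates: offsets examined around the current center.
LARGE_TEMPLATE = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
                  (-1, -1), (-1, 1), (1, -1), (1, 1)]        # 9 positions (coarse)
SMALL_TEMPLATE = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # 5 positions (fine)

def cost(img, bg, i, j, r=1):
    """Neighborhood SAD between pixel i of img and pixel j of bg (stand-in for TV)."""
    (yi, xi), (yj, xj) = i, j
    a = img[yi - r:yi + r + 1, xi - r:xi + r + 1]
    b = bg[yj - r:yj + r + 1, xj - r:xj + r + 1]
    return float(np.abs(a - b).sum())

def pyramid_search(img, bg, i, max_iter=20):
    """Coarse search with the large template until the center is optimal,
    then one fine pass with the small template."""
    h, w = bg.shape
    j = i
    for _ in range(max_iter):   # coarse, iterated
        cands = [(j[0] + dy, j[1] + dx) for dy, dx in LARGE_TEMPLATE]
        cands = [(y, x) for y, x in cands if 1 <= y < h - 1 and 1 <= x < w - 1]
        best = min(cands, key=lambda c: cost(img, bg, i, c))
        if best == j:           # optimum is the template center: stop
            break
        j = best
    cands = [(j[0] + dy, j[1] + dx) for dy, dx in SMALL_TEMPLATE]
    cands = [(y, x) for y, x in cands if 1 <= y < h - 1 and 1 <= x < w - 1]
    return min(cands, key=lambda c: cost(img, bg, i, c))

# Example: the background texture has swayed two pixels to the right.
img = np.zeros((7, 7)); img[2, 2] = 100.0
bg = np.zeros((7, 7)); bg[2, 4] = 100.0
match = pyramid_search(img, bg, (2, 2))
```

Matching against the swayed background position keeps the texture variation of the dynamic background small, which is exactly the motivation given in the text.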
  • the present invention uses the complementary information contained in multi-scale images for feature extraction; refer to FIG. 4, the texture variation feature based on iterative search and multi-scale fusion provided by an embodiment of the present invention.
  • the current image and the reference background image are scaled to 1/2 and 1/4 of the original size, and feature extraction is performed on the original-size image as well as the scaled images;
  • the features at the three scales are then fused at the original scale, with the median operator as the fusion operator.
  • the heterogeneous feature is a brightness change feature
  • the iterative search and multi-scale fusion of the current image and the reference background image to obtain the heterogeneous feature includes:
  • BV i is the brightness variation characteristic
  • ⁇ i is a ratio of a brightness of the current pixel to a brightness of the background pixel
  • E j is the RGB color value of the background pixel
  • the difference between the current pixel and the reference background pixel in the RGB space is decomposed into the luminance variation feature BV and the chrominance variation feature CV, with reference to the RGB color model diagram provided by the embodiment of the present invention as shown in FIG. 5 .
  • the change in luminance of I i with respect to the reference background pixel value E j is calculated.
  • [I R (i), I G (i), I B (i)] represent the RGB color value of the current pixel i
  • the specific process is: first, the ratio α i of the current pixel brightness to the background brightness is calculated according to formula (3); second, the brightness variation feature BV i of pixel i is the signed distance of α i E j relative to E j , given by formula (2):
  • the heterogeneous feature is a chroma change feature
  • the iterative search and multi-scale fusion of the current image and the reference background image to obtain the heterogeneous feature includes:
  • CV i is the chromaticity change characteristic
  • ⁇ i is a ratio of the brightness of the current pixel to the brightness of the background pixel
  • [E R (j), E G (j), E B (j)] are the RGB color values of the background pixel.
  • the specific process for the brightness variation and chromaticity variation features based on iterative search and multi-scale fusion is as follows: first, the current image and the reference background image are scaled to 1/2 and 1/4 of the original size, and feature extraction is performed on the original-size image as well as the scaled images; second, the features at the three scales are fused at the original scale to obtain the final brightness variation and chromaticity variation features. For details, refer to FIG. 6, the brightness variation feature and the chromaticity variation feature provided by an embodiment of the present invention.
  • Both BV i and CV i are distances in the RGB color space and have the same unit of measure.
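The brightness/chromaticity decomposition described above can be sketched as follows. Formulas (3) and (4) appear only as images in the source, so the choice of α as a projection ratio and of CV as the distance from the pixel to the brightness line are assumptions consistent with the FIG. 5 geometry; only BV = (α − 1)||OE j || is taken literally from formula (2):

```python
import numpy as np

def brightness_chroma_variation(I_i, E_j):
    """Decompose the RGB difference between current pixel I_i and background
    pixel E_j into brightness variation BV and chromaticity variation CV.

    alpha is taken as the projection ratio <I, E> / <E, E> (assumption for
    formula (3)); BV follows formula (2); CV is taken as the distance from
    I_i to the point alpha * E_j on the line OE_j (assumption for formula (4)).
    """
    I = np.asarray(I_i, dtype=float)
    E = np.asarray(E_j, dtype=float)
    alpha = float(I @ E) / float(E @ E)
    bv = (alpha - 1.0) * float(np.linalg.norm(E))   # signed brightness change
    cv = float(np.linalg.norm(I - alpha * E))       # residual off the brightness line
    return alpha, bv, cv

# A darker pixel with the same hue as the background (e.g. a shadow):
# brightness drops (BV < 0) while chromaticity barely changes (CV ~ 0).
alpha, bv, cv = brightness_chroma_variation([60, 60, 60], [120, 120, 120])
```

This is why the pair (BV, CV) separates shadows and illumination changes from true foreground: both are distances in RGB space with the same unit, as the text notes.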
  • the invention quantizes the values of these two features directly into integers, enabling highly efficient kernel density estimation.
  • since the brightness variation feature, the chromaticity variation feature, and the texture variation feature reflect different aspects of the image, the probability distributions of the three features are conditionally independent given the pixel class label C, as shown in formula (5):
  • the category tag C may be a foreground class or a background class.
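The factorization of formula (5) reduces the joint likelihood to a product of per-feature conditional densities; a minimal sketch with illustrative toy density tables (the values are not from the patent):

```python
def joint_likelihood(bv, cv, tv, p_bv, p_cv, p_tv):
    """p(BV, CV, TV | C) = p(BV|C) * p(CV|C) * p(TV|C), as in formula (5)."""
    return p_bv[bv] * p_cv[cv] * p_tv[tv]

# Toy per-feature conditional densities for one class C (quantized feature bins):
p_bv = {0: 0.5, 1: 0.5}
p_cv = {0: 0.8, 1: 0.2}
p_tv = {0: 0.9, 1: 0.1}
lik = joint_likelihood(1, 0, 0, p_bv, p_cv, p_tv)   # 0.5 * 0.8 * 0.9
```

The same product form is used for both the foreground class and the background class, with densities estimated separately for each.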
  • calculating the conditional probability density of the foreground class using the conditional independence of the heterogeneous features includes calculating, according to formula (6):
  • p(BV|FG) = p(BV | CV > τ CV or TV > τ TV ),
  • p(CV|FG) = p(CV | BV > τ BV or TV > τ TV ),
  • p(TV|FG) = p(TV | BV > τ BV or CV > τ CV ),
  • where FG is the foreground class; p(BV|FG), p(CV|FG), and p(TV|FG) are the probability densities of the brightness variation, chromaticity variation, and texture variation features under the condition of the foreground class; and τ BV , τ CV , and τ TV are the thresholds of the brightness variation, chromaticity variation, and texture variation features, respectively.
  • trusted foreground pixels are selected in the current image using the brightness, chromaticity, and texture variation features; the frequency histograms of the three features are accumulated and continuously updated, and the multi-view learning method is used to estimate the conditional probability density of the foreground class.
  • the calculating the conditional probability density of the foreground class and the conditional probability density of the background class by using the conditional independence of the heterogeneous feature further includes:
  • the area outside the dilated trusted foreground area is used as the candidate background area, and the conditional probability density of the background class is calculated from the candidate background area.
  • for the candidate background acquisition method, refer to FIG. 7, the flowchart of the candidate background acquisition method provided by an embodiment of the present invention. If the features of certain pixels in the current image satisfy BV > τ BV or CV > τ CV or TV > τ TV , those pixels belong to the trusted foreground region.
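The trusted-foreground / candidate-background split of FIG. 7 can be sketched as follows; the brightness threshold and the one-pixel 8-neighborhood dilation are illustrative (the patent's τ BV is image-adaptive and its dilation radius is not given in this text):

```python
import numpy as np

def split_regions(BV, CV, TV, t_bv, t_cv, t_tv):
    """Trusted foreground: any feature exceeds its threshold.
    Candidate background: everything outside the dilated trusted-foreground mask."""
    fg = (BV > t_bv) | (CV > t_cv) | (TV > t_tv)
    dil = fg.copy()   # 8-neighborhood dilation by one pixel via shifts
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # np.roll wraps at the borders; fine for this interior toy example.
            dil |= np.roll(np.roll(fg, dy, axis=0), dx, axis=1)
    return fg, ~dil

BV = np.zeros((5, 5)); CV = np.zeros((5, 5)); TV = np.zeros((5, 5))
BV[2, 2] = 50.0   # one strongly changed pixel
fg, cand_bg = split_regions(BV, CV, TV, t_bv=20.0, t_cv=20.0, t_tv=3.6)
```

The one-pixel guard band around the trusted foreground keeps mixed boundary pixels out of the background histograms.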
  • FIG. 8 is a heterogeneous characteristic frequency histogram according to an embodiment of the present invention.
  • figures a and d are luminance change characteristics
  • figures b and e are chromaticity change characteristics
  • figures c and f are texture change characteristics.
  • Figures a, b and c are feature frequency histograms based on ground-truth
  • figures d, e and f are feature frequency histograms based on multi-view learning.
  • kernel density estimation is used to model the foreground-class and background-class conditional probability densities: the brightness and chromaticity variation values are quantized into integers, the texture variation values into 0.1-wide bins, and a Gaussian kernel is used, with kernel widths σ BV = 2.0, σ CV = 2.0, and σ TV = 0.2 for the three features.
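The quantized kernel density estimate can be sketched as a 1-D Gaussian KDE over integer-quantized feature samples; the sample values are illustrative, while the kernel width σ BV = 2.0 follows the embodiment:

```python
import math

def kde(samples, sigma):
    """Return a function x -> smoothed density estimated from quantized samples."""
    n = len(samples)
    def p(x):
        s = sum(math.exp(-0.5 * ((x - v) / sigma) ** 2) for v in samples)
        return s / (n * sigma * math.sqrt(2 * math.pi))
    return p

# Brightness-variation samples quantized to integers; sigma_BV = 2.0 as in the text.
p_bv = kde([0, 0, 1, -1, 2, 0, -2, 1], sigma=2.0)
```

Quantizing to integer bins means the density only ever needs to be evaluated at a small set of points, which is what makes the estimation efficient.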
  • the calculating of the posterior probabilities of the foreground and the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability includes calculating the foreground posterior according to formula (7), Bayes' rule:
  • P i (FG|x) = p(x|FG) P i (FG) / [ p(x|FG) P i (FG) + p(x|BG) P i (BG) ]    (7)
  • where P i (FG|x) is the posterior probability of the foreground, p(x|C) is the foreground likelihood or background likelihood, and P i (C) is the prior probability of the foreground or the prior probability of the background.
  • the calculating further includes obtaining the background posterior according to formula (8): P i (BG|x) = 1 - P i (FG|x), where P i (FG|x) is the posterior probability of the foreground and P i (BG|x) is the posterior probability of the background.
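The Bayes step of formulas (7) and (8) can be sketched as follows (the likelihood and prior values are illustrative):

```python
def posterior_fg(lik_fg, lik_bg, prior_fg):
    """P(FG|x) by Bayes' rule; P(BG|x) = 1 - P(FG|x) as in formula (8)."""
    prior_bg = 1.0 - prior_fg
    num = lik_fg * prior_fg
    return num / (num + lik_bg * prior_bg)

# Foreground likelihood dominates, but the foreground prior is low:
p_fg = posterior_fg(lik_fg=0.09, lik_bg=0.01, prior_fg=0.2)
p_bg = 1.0 - p_fg
```

Even with a prior of only 0.2, the much larger foreground likelihood pushes the posterior above 0.5, illustrating how likelihood and prior trade off.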
  • the prior probability can vary spatially: compared with regions such as trees, buildings, and sky in the scene, the road region should have a larger foreground prior probability.
  • the prior probability can also change over time: if a pixel has been labeled as foreground more frequently in the recent period than before, its foreground prior probability increases; otherwise it decreases. Therefore, the present invention constructs a dynamic prior model based on the labeling results of previous images, given by formula (9):
  • P i,t+1 (FG) = (1 - ρ) P i,t (FG) + ρ L i,t    (9)
  • P i,t+1 (FG) is the foreground prior probability of pixel i at time t+1
  • P i,t (FG) is the foreground prior probability of pixel i at time t
  • L i,t is the label of pixel i at time t
  • ρ is the learning rate parameter.
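The dynamic prior update of formula (9) can be sketched directly, using the ρ = 0.001 and startup value 0.2 given in the embodiment:

```python
def update_prior(p_fg, label, rho=0.001):
    """Formula (9): P_{t+1}(FG) = (1 - rho) * P_t(FG) + rho * L_t."""
    return (1.0 - rho) * p_fg + rho * label

p = 0.2                  # startup value used in the embodiment
for _ in range(100):     # pixel labeled foreground for 100 consecutive frames
    p = update_prior(p, 1)
```

The exponential moving average slowly pulls the prior toward 1 while the pixel keeps being labeled foreground, and would pull it back down once labels return to background.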
  • FIG. 9 is a schematic diagram of image marking results according to an embodiment of the present invention.
  • graph a is the foreground prior probability of the pixel
  • graph b is the foreground posterior probability of the pixel.
  • the road area has a greater foreground prior probability than the tree area.
  • the real foreground target area has a greater foreground a posteriori probability than other areas.
  • the constructing of the energy function of the Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and the spatio-temporal consistency constraint includes calculating, according to formula (10):
  • E(f) = Σ i∈I D i (f i ) + Σ (i,u)∈N W(f i , f u )    (10)
  • where f is the labeling process, E(f) is the energy function, D i (f i ) is the data term, and W(f i , f u ) is the smoothing term.
  • let I be the set of pixels in the current image and L be the set of labels
  • a label is an estimate for each pixel: the foreground estimate is labeled 1 and the background estimate is labeled 0.
  • the labeling process f assigns a label f i ∈ L to each pixel i ∈ I.
  • labels change slowly in image space, but at some locations, such as target boundaries, labels can change rapidly; the quality of a labeling depends on the energy function E(f).
  • N represents the edge set in the graph model structure
  • D i (f i ) is the data item, which measures the cost of assigning the label f i to the pixel i
  • W(f i , f u ) is the smoothing term, which measures the cost of assigning labels f i and f u to two spatially adjacent pixels i and u.
  • the marker that minimizes the energy function corresponds to the maximum a posteriori estimate of the Markov random field.
  • the data term D i (f i ) consists of two parts.
  • the first part is related to the posterior probability that each pixel belongs to the foreground and the posterior probability that it belongs to the background, namely:
  • the data term D i (f i ) imposes a constraint on each pixel, encouraging the label to be consistent with the pixel observation.
  • the second part applies a temporal consistency constraint to the labels: it is assumed that a pair of associated pixels should have the same label in consecutive images.
  • when computing optical flow, the current image (i.e., the image at time t) is mapped backward onto the previous frame (i.e., the image at time t-1), associating each current pixel i ∈ I with a pixel v in the previous frame; since the label f v is known, the second part is given by formula (12):
  • ⁇ >0 is a weight parameter.
  • the smoothing term W(f i , f u ) encourages spatial consistency of the labels: if two spatially adjacent pixels have different labels, a cost is incurred, as given by formula (13):
  • ⁇ I is the variance parameter and ⁇ I is set to 400.
  • FIG. 10 is a schematic diagram of segmentation results of foreground and background according to an embodiment of the present invention.
  • the first column is the embodiment number
  • the second column is the original image
  • the third column is the foreground detection result
  • the fourth column is the ground-truth.
  • according to quantitative analysis, the average recall of the present invention is 0.8271, the average precision is 0.8316, and the average F-measure is 0.8252.
  • the figures include interference such as motion shadows, illumination changes, and image noise.
  • the robust foreground detection method based on multi-view learning proposed by the present invention has strong robustness, can overcome these interferences, and accurately obtain foreground detection results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The robust foreground detection method based on multi-view learning provided by the present invention comprises: obtaining a reference background image from the input video by temporal median filtering, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain heterogeneous features; calculating the conditional probability density of the foreground class and of the background class using the conditional independence of the heterogeneous features, and calculating the posterior probabilities of the foreground and the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability; constructing the energy function of a Markov random field model from the foreground posterior probability, the background posterior probability, and a spatio-temporal consistency constraint, and minimizing the energy function with a belief propagation algorithm to obtain the segmentation result of the foreground and the background. The present invention achieves robust foreground detection in complex and challenging environments.

Description

Robust Foreground Detection Method Based on Multi-View Learning — Technical Field
The present invention relates to intelligent video surveillance technology, and in particular to a robust foreground detection method based on multi-view learning.
Background Art
Intelligent video surveillance is an important means of information collection, and foreground detection, or background subtraction, is a challenging low-level problem in intelligent video surveillance research. On the basis of foreground detection, other applications such as target tracking, recognition, and anomaly detection can be realized. The basic principle of foreground detection is to compare the current image of the video scene with a background model and detect regions with significant differences. Although it seems simple, foreground detection often encounters three challenges in practical applications: motion shadows, illumination changes, and image noise. Motion shadows are caused by the foreground target occluding the light source; they are hard shadows on sunny days and soft shadows on cloudy days. In either form, motion shadows are easily detected as foreground, interfering with the extraction of size and shape information of the segmented foreground target. Illumination changes are common in traffic scenes. For example, as the sun moves across the sky, the illumination changes slowly; when the sun enters or leaves clouds, the illumination may change rapidly. In addition, noise is inevitably introduced during image acquisition, compression, and transmission. If the signal-to-noise ratio is too low, it becomes difficult to distinguish foreground targets from the background scene.
Foreground detection techniques can be divided into sparse models, parametric models, nonparametric models, and machine learning models. Sparse models mainly use variants of principal component analysis and matrix factorization, modeling the background as a low-rank representation and the foreground as sparse outliers. However, such methods have high computational complexity and have difficulty detecting foreground that is similar to the background in color. Parametric models use a probability distribution to model the background. Nonparametric models offer greater flexibility in probability density estimation. Machine learning models use methods such as support vector machines and neural networks to classify foreground and background.
The prior art has the following problems. First, only the brightness feature is used, but the brightness feature is sensitive to illumination changes and motion shadows. Second, only a background model is established and foreground pixels are identified as outliers, making it difficult to distinguish foreground that is similar to the background in color. Third, the spatio-temporal consistency constraints in the video sequence are not exploited.
Summary of the Invention
The robust foreground detection method based on multi-view learning provided by the present invention can accurately segment foreground and background.
According to one aspect of the present invention, a robust foreground detection method based on multi-view learning is provided, comprising:
obtaining a reference background image from the input video by temporal median filtering, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain heterogeneous features;
calculating the conditional probability density of the foreground class and of the background class using the conditional independence of the heterogeneous features, and calculating the posterior probabilities of the foreground and the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability;
constructing the energy function of a Markov random field model from the foreground posterior probability, the background posterior probability, and a spatio-temporal consistency constraint, and minimizing the energy function with a belief propagation algorithm to obtain the segmentation result of the foreground and the background.
The robust foreground detection method based on multi-view learning provided by embodiments of the present invention can calculate the posterior probabilities of the foreground and the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability, and construct the energy function of a Markov random field model from the foreground posterior probability, the background posterior probability, and a spatio-temporal consistency constraint, so that foreground/background segmentation can be achieved accurately with a belief propagation algorithm.
Brief Description of the Drawings
FIG. 1 is a flowchart of a robust foreground detection method based on multi-view learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an input video image and a reference background image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of pyramid search templates according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the texture variation feature based on iterative search and multi-scale fusion according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the RGB color model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the brightness variation feature and the chromaticity variation feature according to an embodiment of the present invention;
FIG. 7 is a flowchart of a candidate background acquisition method according to an embodiment of the present invention;
FIG. 8 shows heterogeneous-feature frequency histograms according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of image labeling results according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of foreground/background segmentation results according to an embodiment of the present invention.
Detailed Description
The robust foreground detection method based on multi-view learning provided by embodiments of the present invention is described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of the robust foreground detection method based on multi-view learning according to an embodiment of the present invention.
Referring to FIG. 1, in step S101, a reference background image is obtained from the input video by temporal median filtering, and iterative search and multi-scale fusion are performed on the current image and the reference background image to obtain heterogeneous features.
In step S102, the conditional probability densities of the foreground class and the background class are calculated using the conditional independence of the heterogeneous features, and the posterior probabilities of the foreground and the background are calculated by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability.
In step S103, the energy function of a Markov random field model is constructed from the foreground posterior probability, the background posterior probability, and a spatio-temporal consistency constraint, and the energy function is minimized with a belief propagation algorithm to obtain the segmentation result of the foreground and the background.
Further, obtaining the reference background image from the input video by temporal median filtering comprises:
reading each frame of the input video;
obtaining, by temporal median filtering, the median of each pixel within a threshold time window;
obtaining the reference background image from the median of each pixel.
Here, the threshold time window is the duration of 500 frames; refer to FIG. 2, the input video image and reference background image provided by an embodiment of the present invention, where (a) is the input video image and (b) is the reference background image.
Further, the heterogeneous feature is the texture variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
calculating the texture variation feature according to formula (1):
Figure PCTCN2015076533-appb-000001
where TVi is the texture variation feature, i is the current pixel, [IR(i),IG(i),IB(i)] are the RGB color values of the current pixel, j is the background pixel corresponding to the current pixel, [ER(j),EG(j),EB(j)] are the RGB color values of the background pixel, m∈N(i) is the spatial neighborhood of the current pixel, and n∈N(j) is the spatial neighborhood of the background pixel.
Here, for any pixel i of the current image, its spatial neighborhood N(i) is assumed to be the 8-neighborhood.
The texture variation feature is highly robust to motion shadows and illumination changes but sensitive to dynamic backgrounds. Without proper handling, a swaying textured background region can produce large texture variation. To solve this problem, pixel i in the current image is matched with pixel j in the reference background image through an iterative search and multi-scale fusion strategy.
FIG. 3 is a schematic diagram of the pyramid search templates according to an embodiment of the present invention. As shown in FIG. 3, (a) is the large pyramid search template and (b) is the small pyramid search template. The specific search process is as follows. First, a coarse search is performed with the large pyramid template: before the first iteration, pixel i is initialized as the center of the search template; each iteration examines at most 9 positions, and the optimal position (i.e., the position minimizing TVi) is set as the center of the next iteration; this process is repeated until the optimal position is exactly the center of the search template. Second, a fine search is performed with the small pyramid template: only 5 positions need to be examined, and the pixel minimizing TVi is taken as the optimal position. Finally, the pixel j in the reference background image corresponding to pixel i of the current image is obtained.
To handle the iterative search becoming trapped in local minima, the present invention uses the complementary information contained in multi-scale images for feature extraction; refer to FIG. 4, the schematic diagram of the texture variation feature based on iterative search and multi-scale fusion provided by an embodiment of the present invention.
First, the current image and the reference background image are scaled to 1/2 and 1/4 of the original size, and feature extraction is performed on the original-size image as well as the scaled images; second, the features at the three scales are fused at the original scale, with the median operator as the fusion operator.
Further, the heterogeneous feature is the brightness variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
calculating the brightness variation feature according to formula (2):
BVi=(αi-1)||OEj||             (2)
where BVi is the brightness variation feature, αi is the ratio of the brightness of the current pixel to the brightness of the background pixel, Ej is the RGB color value of the background pixel, and ||OEj|| is the straight-line distance between the origin O and Ej.
Here, the difference between the current pixel and the reference background pixel in RGB space is decomposed into the brightness variation feature BV and the chromaticity variation feature CV; refer to FIG. 5, the schematic diagram of the RGB color model provided by an embodiment of the present invention. As shown in FIG. 5, for a pixel i∈I in the current image I, the brightness change of Ii relative to the reference background pixel value Ej is calculated. Let [IR(i),IG(i),IB(i)] denote the RGB color values of the current pixel i and [ER(j),EG(j),EB(j)] the RGB color values of the corresponding background pixel j. The specific process is: first, the ratio αi of the current pixel brightness to the background brightness is calculated according to formula (3); second, the brightness variation feature BVi of pixel i is the signed distance of αiEj relative to Ej, given by formula (2):
Figure PCTCN2015076533-appb-000002
From formula (2), ||OEj|| denotes the straight-line distance between the origin O and Ej. If the current pixel brightness equals the background brightness, BVi = 0; if the current pixel brightness is less than the background brightness, BVi < 0; if it is greater, BVi > 0. The brightness variation BVi therefore reflects the difference in brightness between the current pixel and the corresponding background pixel.
Further, the heterogeneous feature is the chromaticity variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
calculating the chromaticity variation feature according to formula (4):
Figure PCTCN2015076533-appb-000003
where CVi is the chromaticity variation feature, αi is the ratio of the brightness of the current pixel to the brightness of the background pixel, [IR(i),IG(i),IB(i)] are the RGB color values of the current pixel, and [ER(j),EG(j),EB(j)] are the RGB color values of the background pixel.
Here, the specific process for the brightness variation feature and the chromaticity variation feature based on iterative search and multi-scale fusion is as follows: first, the current image and the reference background image are scaled to 1/2 and 1/4 of the original size, and feature extraction is performed on the original-size image as well as the scaled images; second, the features at the three scales are fused at the original scale to obtain the final brightness variation feature and chromaticity variation feature. Refer to FIG. 6, the schematic diagram of the brightness variation feature and the chromaticity variation feature provided by an embodiment of the present invention.
BVi and CVi are both distances in the RGB color space and share the same unit of measurement. The present invention quantizes the values of these two features directly into integers, enabling efficient kernel density estimation.
Since the brightness variation feature, the chromaticity variation feature, and the texture variation feature reflect different aspects of the image, the probability distributions of the three features are conditionally independent given the pixel class label C, as shown in formula (5):
p(BV,CV,TV|C)=p(BV|C)p(CV|C)p(TV|C)         (5)
where the class label C may be the foreground class or the background class.
Further, calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features comprises:
calculating the conditional probability density of the foreground class according to formula (6):
p(BV|FG)=p(BV|CV>τCV or TV>τTV),
p(CV|FG)=p(CV|BV>τBV or TV>τTV),           (6)
p(TV|FG)=p(TV|BV>τBV or CV>τCV),
where FG is the foreground class, p(BV|FG) is the probability density of the brightness variation feature conditioned on the foreground class, p(CV|FG) is the probability density of the chromaticity variation feature conditioned on the foreground class, p(TV|FG) is the probability density of the texture variation feature conditioned on the foreground class, τBV is the threshold of the brightness variation feature, τCV is the threshold of the chromaticity variation feature, and τTV is the threshold of the texture variation feature.
Here, trusted foreground pixels are selected in the current image using the brightness, chromaticity, and texture variation features; the frequency histograms of the three features are accumulated and continuously updated, and the multi-view learning method is used to estimate the conditional probability density of the foreground class.
From formula (6), if the value of one of the brightness, chromaticity, or texture variation features is large enough, the pixel is a trusted foreground pixel and can be added to the frequency histograms used to estimate the foreground-class conditional probability densities of the other features. In an embodiment of the present invention,
Figure PCTCN2015076533-appb-000004
τCV=20 and τTV=3.6 are set, where
Figure PCTCN2015076533-appb-000005
denotes the median of BV over the whole image, used to compensate for global brightness changes of the image.
Further, calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features further comprises:
obtaining a trusted foreground region from the current image;
dilating the trusted foreground region to obtain a dilated trusted foreground region;
taking, in the current image, the region outside the dilated trusted foreground region as the candidate background region, and calculating the conditional probability density of the background class from the candidate background region.
Here, for the candidate background acquisition method, refer to FIG. 7, the flowchart of the candidate background acquisition method provided by an embodiment of the present invention. If the features of certain pixels in the current image satisfy BV>τBV or CV>τCV or TV>τTV, those pixels belong to the trusted foreground region.
FIG. 8 shows the heterogeneous-feature frequency histograms according to an embodiment of the present invention. As shown in FIG. 8, panels (a) and (d) are the brightness variation feature, panels (b) and (e) the chromaticity variation feature, and panels (c) and (f) the texture variation feature. Panels (a), (b), and (c) are feature frequency histograms based on ground-truth; panels (d), (e), and (f) are feature frequency histograms based on multi-view learning.
Here, kernel density estimation is used to model the foreground-class and background-class conditional probability densities: the brightness and chromaticity variation values are quantized into integers, the texture variation values into 0.1-wide bins, and a Gaussian kernel is used, with kernel widths σBV=2.0, σCV=2.0, and σTV=0.2 for the three features.
Further, calculating the posterior probabilities of the foreground and the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability comprises:
calculating the posterior probability of the foreground according to formula (7):
Figure PCTCN2015076533-appb-000006
where Pi(FG|x) is the posterior probability of the foreground, p(x|C) is the foreground or background likelihood, and Pi(C) is the prior probability of the foreground or of the background.
Further, calculating the posterior probabilities of the foreground and the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability comprises:
calculating the posterior probability of the background according to formula (8):
Pi(BG|x)=1-Pi(FG|x)             (8)
where Pi(FG|x) is the posterior probability of the foreground and Pi(BG|x) is the posterior probability of the background.
Here, the prior probability can vary spatially: compared with regions such as trees, buildings, and sky in the scene, the road region should have a larger foreground prior probability. The prior probability can also change over time: if a pixel has been labeled as foreground more frequently in the recent period than before, its foreground prior probability increases; otherwise it decreases. Therefore, the present invention constructs a dynamic prior model based on the labeling results of previous images, given by formula (9):
Pi,t+1(FG)=(1-ρ)Pi,t(FG)+ρLi,t           (9)
where Pi,t+1(FG) is the foreground prior probability of pixel i at time t+1, Pi,t(FG) is the foreground prior probability of pixel i at time t, Li,t is the label of pixel i at time t, and ρ is the learning rate parameter.
If pixel i is labeled foreground at time t, then Li,t=1; if it is labeled background, then Li,t=0. ρ is the learning rate parameter and is set to 0.001. At system startup, Pi,t(FG) is set to 0.2.
FIG. 9 is a schematic diagram of image labeling results according to an embodiment of the present invention. As shown in FIG. 9, panel (a) is the foreground prior probability of the pixels and panel (b) is the foreground posterior probability. Panel (a) shows that the road region has a larger foreground prior probability than the tree region; panel (b) shows that the true foreground target region has a larger foreground posterior probability than other regions.
Further, constructing the energy function of the Markov random field model from the foreground posterior probability, the background posterior probability, and the spatio-temporal consistency constraint comprises:
calculating the energy function according to formula (10):
Figure PCTCN2015076533-appb-000007
where f is the labeling process, E(f) is the energy function, Di(fi) is the data term, and W(fi,fu) is the smoothing term.
Here, let I be the set of pixels in the current image and L the set of labels. A label is an estimate for each pixel: the foreground estimate is labeled 1 and the background estimate 0. The labeling process f assigns a label fi∈L to each pixel i∈I. Under the Markov random field framework, labels change slowly in image space, but at some locations, such as target boundaries, labels can change rapidly; the quality of a labeling depends on the energy function E(f).
From formula (10), N denotes the edge set of the graphical model structure, Di(fi) is the data term, which measures the cost of assigning label fi to pixel i, and W(fi,fu) is the smoothing term, which measures the cost of assigning labels fi and fu to two spatially adjacent pixels i and u. The labeling that minimizes the energy function corresponds to the maximum a posteriori estimate of the Markov random field.
The data term Di(fi) consists of two parts. The first part
Figure PCTCN2015076533-appb-000008
is related to the posterior probability that each pixel belongs to the foreground and the posterior probability that it belongs to the background, namely:
Figure PCTCN2015076533-appb-000009
The data term Di(fi) imposes a constraint on each pixel, encouraging the label to be consistent with the pixel observation.
The second part
Figure PCTCN2015076533-appb-000010
applies a temporal consistency constraint to the labels, assuming that a pair of associated pixels in consecutive images should have the same label. When computing optical flow, the current image (i.e., the image at time t) is mapped backward onto the previous frame (i.e., the image at time t-1), associating each current pixel i∈I with a pixel v in the previous frame. Since the label fv is known,
Figure PCTCN2015076533-appb-000011
is given by formula (12):
Figure PCTCN2015076533-appb-000012
where γ>0 is a weight parameter. Because of noise, large motion, boundary effects, and similar disturbances, γ is set to 0.5.
Combining the two parts, the data term becomes
Figure PCTCN2015076533-appb-000013
Note, however, that if the frame rate of the video is very low, the temporal consistency constraint is not usable, and then
Figure PCTCN2015076533-appb-000014
The smoothing term W(fi,fu) encourages spatial consistency of the labels: if two spatially adjacent pixels have different labels, a cost is incurred, as given by formula (13):
Figure PCTCN2015076533-appb-000015
where φ=5.0 is a weight parameter and Z(Ii,Iu) is a decreasing function controlled by the brightness difference of pixels i and u. The function Z is given by formula (14):
Figure PCTCN2015076533-appb-000016
where σI is the variance parameter, set to 400.
FIG. 10 is a schematic diagram of foreground/background segmentation results according to an embodiment of the present invention. As shown in FIG. 10, the first column is the embodiment number, the second column the original image, the third column the foreground detection result, and the fourth column the ground-truth. According to quantitative analysis, the average recall of the present invention is 0.8271, the average precision is 0.8316, and the average F-measure is 0.8252.
The figures include disturbances such as motion shadows, illumination changes, and image noise; the robust foreground detection method based on multi-view learning proposed by the present invention is highly robust, can overcome these disturbances, and obtains accurate foreground detection results.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A robust foreground detection method based on multi-view learning, characterized in that the method comprises:
    obtaining a reference background image from the input video by temporal median filtering, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain heterogeneous features;
    calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features, and calculating the posterior probability of the foreground and the posterior probability of the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probability;
    constructing the energy function of a Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and a spatio-temporal consistency constraint, and minimizing the energy function with a belief propagation algorithm to obtain the segmentation result of the foreground and the background.
  2. The method according to claim 1, characterized in that obtaining the reference background image from the input video by temporal median filtering comprises:
    reading each frame of the input video;
    obtaining, by temporal median filtering, the median of each pixel within a threshold time window;
    obtaining the reference background image from the median of each pixel.
  3. The method according to claim 1, characterized in that the heterogeneous feature is a texture variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
    calculating the texture variation feature according to the following formula:
    Figure PCTCN2015076533-appb-100001
    where TVi is the texture variation feature, i is the current pixel, [IR(i),IG(i),IB(i)] are the RGB color values of the current pixel, j is the background pixel corresponding to the current pixel, [ER(j),EG(j),EB(j)] are the RGB color values of the background pixel, m∈N(i) is the spatial neighborhood of the current pixel, and n∈N(j) is the spatial neighborhood of the background pixel.
  4. The method according to claim 3, characterized in that the heterogeneous feature is a brightness variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
    calculating the brightness variation feature according to the following formula:
    BVi=(αi-1)||OEj||
    where BVi is the brightness variation feature, αi is the ratio of the brightness of the current pixel to the brightness of the background pixel, Ej is the RGB color value of the background pixel, and ||OEj|| is the straight-line distance between the origin O and Ej.
  5. The method according to claim 4, characterized in that the heterogeneous feature is a chromaticity variation feature, and performing iterative search and multi-scale fusion on the current image and the reference background image to obtain the heterogeneous feature comprises:
    calculating the chromaticity variation feature according to the following formula:
    Figure PCTCN2015076533-appb-100002
    where CVi is the chromaticity variation feature, αi is the ratio of the brightness of the current pixel to the brightness of the background pixel, [IR(i),IG(i),IB(i)] are the RGB color values of the current pixel, and [ER(j),EG(j),EB(j)] are the RGB color values of the background pixel.
  6. The method according to claim 1, characterized in that calculating the conditional probability density of the foreground class and the conditional probability density of the background class using the conditional independence of the heterogeneous features comprises:
    calculating the conditional probability density of the foreground class according to the following formulas:
    p(BV|FG)=p(BV|CV>τCV or TV>τTV),
    p(CV|FG)=p(CV|BV>τBV or TV>τTV),
    p(TV|FG)=p(TV|BV>τBV or CV>τCV),
    where FG is the foreground class, p(BV|FG) is the probability density of the brightness variation feature conditioned on the foreground class, p(CV|FG) is the probability density of the chromaticity variation feature conditioned on the foreground class, p(TV|FG) is the probability density of the texture variation feature conditioned on the foreground class, τBV is the threshold of the brightness variation feature, τCV is the threshold of the chromaticity variation feature, and τTV is the threshold of the texture variation feature.
  7. 根据权利要求6所述的方法,其特征在于,所述利用所述异类特征的条件独立性计算前景类的条件概率密度和背景类的条件概率密度还包括:
    从所述当前图像中获取可信前景区域;
    对所述可信前景区域进行膨胀得到膨胀的可信前景区域;
    从所述当前图像中,将位于所述膨胀的可信前景区域之外的区域作为候选背景区域,并且根据所述候选背景区域计算所述背景类的条件概率密度。
  8. The method according to claim 1, characterized in that computing the posterior probability of the foreground and the posterior probability of the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probabilities comprises:
    computing the posterior probability of the foreground according to the following formula:
    P_i(FG|x) = p(x|FG)·P_i(FG) / [p(x|FG)·P_i(FG) + p(x|BG)·P_i(BG)]
    where P_i(FG|x) is the posterior probability of the foreground, p(x|C) is the foreground likelihood or the background likelihood, and P_i(C) is the prior probability of the foreground or the prior probability of the background.
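Under the conditional-independence assumption exploited in claim 1, the likelihood p(x|C) factorizes into a product of per-feature likelihoods, and the posterior of claim 8 follows by Bayes' rule. The sketch below is illustrative only; the function name and the per-feature factorization interface are our own:

```python
def foreground_posterior(likelihoods_fg, likelihoods_bg, prior_fg):
    """Posterior P(FG|x) by Bayes' rule.

    likelihoods_fg / likelihoods_bg: per-feature values p(feature|FG) and
    p(feature|BG), e.g. for BV, CV and TV; multiplied under conditional
    independence to form p(x|FG) and p(x|BG).
    prior_fg: prior probability P_i(FG); P_i(BG) = 1 - P_i(FG).
    """
    p_x_fg = 1.0
    for l in likelihoods_fg:
        p_x_fg *= l
    p_x_bg = 1.0
    for l in likelihoods_bg:
        p_x_bg *= l
    num = p_x_fg * prior_fg
    den = num + p_x_bg * (1.0 - prior_fg)
    return num / den if den > 0 else 0.5   # uninformative fallback
```

The background posterior is then its complement, P_i(BG|x) = 1 − P_i(FG|x), as in claim 9.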
  9. The method according to claim 8, characterized in that computing the posterior probability of the foreground and the posterior probability of the background by Bayes' rule from the foreground likelihood, the background likelihood, and the prior probabilities comprises:
    computing the posterior probability of the background according to the following formula:
    P_i(BG|x) = 1 − P_i(FG|x)
    where P_i(FG|x) is the posterior probability of the foreground, and P_i(BG|x) is the posterior probability of the background.
  10. The method according to claim 1, characterized in that constructing the energy function of the Markov random field model from the posterior probability of the foreground, the posterior probability of the background, and the spatio-temporal consistency constraint comprises:
    computing the energy function according to the following formula:
    E(f) = Σ_i D_i(f_i) + Σ_(i,u) W(f_i, f_u)
    where f is the labeling process, E(f) is the energy function, D_i(f_i) is the data term, and W(f_i, f_u) is the smoothness term.
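The energy of claim 10 on a 4-connected pixel grid can be sketched as follows. This is illustrative only: the Potts form of the smoothness term W and the convention that D_i(f_i) comes from the negative log posterior are assumptions on our part, and the patent minimizes E(f) with belief propagation, which is not reproduced here:

```python
import numpy as np

def mrf_energy(labels, data_cost, beta):
    """E(f) = sum_i D_i(f_i) + sum_{(i,u)} W(f_i, f_u) on a 4-connected grid.

    labels:    (H, W) int array of labels, 0 = background, 1 = foreground.
    data_cost: (H, W, 2) array, D_i(f_i) for each pixel and label,
               e.g. -log P_i(BG|x) and -log P_i(FG|x).
    beta:      Potts smoothness weight; W(f_i, f_u) = beta if the
               neighboring labels differ, 0 otherwise.
    """
    H, W_ = labels.shape
    # Data term: pick each pixel's cost for its assigned label.
    e = data_cost[np.arange(H)[:, None], np.arange(W_)[None, :], labels].sum()
    # Smoothness term over horizontal and vertical neighbor pairs.
    e += beta * (labels[:, 1:] != labels[:, :-1]).sum()
    e += beta * (labels[1:, :] != labels[:-1, :]).sum()
    return float(e)
```

A minimizer of this energy trades data fidelity against label smoothness, which is what suppresses isolated noise pixels in the final foreground mask.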
PCT/CN2015/076533 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning WO2016165064A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/076533 WO2016165064A1 (zh) 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning


Publications (1)

Publication Number Publication Date
WO2016165064A1 (zh)

Family

ID=57125468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/076533 WO2016165064A1 (zh) 2015-04-14 2015-04-14 Robust foreground detection method based on multi-view learning

Country Status (1)

Country Link
WO (1) WO2016165064A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108269273A (zh) * 2018-02-12 2018-07-10 Fuzhou University Belief propagation method for epipolar line matching in panoramic longitudinal roaming
CN111091540A (zh) * 2019-12-11 2020-05-01 Xi'an University of Science and Technology Active suspension control method based on Markov random field
CN111208568A (zh) * 2020-01-16 2020-05-29 Institute of Geology and Geophysics, Chinese Academy of Sciences Time-domain multi-scale full waveform inversion method and system
CN111368914A (zh) * 2020-03-04 2020-07-03 Xidian University Polarimetric synthetic aperture radar change detection method based on total probability collaborative segmentation
CN111461011A (zh) * 2020-04-01 2020-07-28 Xidian University Weak and small target detection method based on probabilistic pipeline filtering
CN113160098A (zh) * 2021-04-16 2021-07-23 Zhejiang University Method for processing dense particle images under uneven illumination
CN113947569A (zh) * 2021-09-30 2022-01-18 Xi'an Jiaotong University Computer-vision-based multi-scale weak damage localization method for beam structures
CN114155425A (zh) * 2021-12-13 2022-03-08 Institute of Optics and Electronics, Chinese Academy of Sciences Weak and small target detection method based on Gaussian Markov random field motion direction estimation
CN115082507A (zh) * 2022-07-22 2022-09-20 Liaocheng Yangfan Tianyi Machinery Co., Ltd. Intelligent regulation and control system for pavement cutting machine

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126933A1 (en) * 2004-12-15 2006-06-15 Porikli Fatih M Foreground detection using intrinsic images
CN102222214A (zh) * 2011-05-09 2011-10-19 Suzhou Yisikang Information Technology Co., Ltd. Fast object recognition algorithm
CN102509105A (zh) * 2011-09-30 2012-06-20 Beihang University Hierarchical image scene processing method based on Bayesian inference


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SONG, XIAOFENG ET AL.: "SAR Image Segmentation Using Markov Random Field Based on Regions and Bayes Belief Propagation", CHINESE JOURNAL OF ELECTRONICS, vol. 12, no. 38, 31 December 2010 (2010-12-31) *
WANG, ZHILING ET AL.: "Analysis of Robust Background Modeling Techniques for Different Information Levels", PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 2, no. 22, 30 April 2009 (2009-04-30), XP055321059 *
ZHU, YIPING ET AL.: "Video Foreground and Shadow Automatic Segmentation Based on Discriminative Model", PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol. 6, no. 21, 31 December 2008 (2008-12-31), XP055321061 *


Similar Documents

Publication Publication Date Title
WO2016165064A1 (zh) Robust foreground detection method based on multi-view learning
CN104766065A (zh) Robust foreground detection method based on multi-view learning
US10198823B1 (en) Segmentation of object image data from background image data
CN110119728B (zh) Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network
CN109241913B (zh) Ship detection method and system combining saliency detection and deep learning
US10497126B2 (en) Producing a segmented image using markov random field optimization
Feng et al. Local background enclosure for RGB-D salient object detection
Ju et al. Depth-aware salient object detection using anisotropic center-surround difference
CN110111338B (zh) Visual tracking method based on superpixel spatio-temporal saliency segmentation
CN102542571B (zh) Moving object detection method and device
KR20230084486A (ko) Segmentation for image effects
CN109919053A (zh) Deep learning vehicle parking detection method based on surveillance video
CN107506792B (zh) Semi-supervised salient object detection method
CN112465021B (zh) Pose trajectory estimation method based on image frame interpolation
CN110443228B (zh) Pedestrian matching method and apparatus, electronic device, and storage medium
CN108647605B (zh) Human eye gaze point extraction method combining global color and local structural features
EP3343504B1 (en) Producing a segmented image using markov random field optimization
CN107704864B (zh) Salient object detection method based on image objectness semantic detection
CN112037230B (zh) Forest-area image segmentation method based on superpixels and ultrametric contour map
Schulz et al. Object-class segmentation using deep convolutional neural networks
CN115601834A (zh) Fall detection method based on WiFi channel state information
Feng et al. HOSO: Histogram of surface orientation for RGB-D salient object detection
Lin et al. Foreground object detection in highly dynamic scenes using saliency
Wong et al. Development of a refined illumination and reflectance approach for optimal construction site interior image enhancement
Rahimi et al. Single image ground plane estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15888769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15888769

Country of ref document: EP

Kind code of ref document: A1