WO2016037422A1 - Method for detecting video scene changes - Google Patents

Method for detecting video scene changes

Info

Publication number
WO2016037422A1
WO2016037422A1 (PCT/CN2014/092640, CN2014092640W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
histogram
threshold
hue
pixel
Prior art date
Application number
PCT/CN2014/092640
Other languages
English (en)
French (fr)
Inventor
刘鹏
Original Assignee
刘鹏
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 刘鹏 filed Critical 刘鹏
Publication of WO2016037422A1 publication Critical patent/WO2016037422A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region

Definitions

  • the present invention relates to video image analysis technology, and in particular to a method for detecting video scene changes.
  • the content type of a video varies during playback.
  • type transitions of the video usually occur at the moments of video scene changes.
  • a scene change of the video usually causes the content type of the video to change.
  • Existing video scene detection methods mainly include:
  • Chinese patent application CN201310332133.4 proposes a dynamic video scene change detection method comprising the steps of: acquiring the current frame of a dynamic video image in real time; calculating the scene change feature parameter ti(n) of the current frame; calculating a dynamic threshold threshold(n) for the current frame from the scene change feature parameters of one or several preceding frames of the dynamic video image; and determining whether ti(n) is less than or equal to its corresponding dynamic threshold: if so, the current frame is judged not to be a scene change frame; otherwise, it is judged to be a scene change frame.
  • a scene detection method based on an undirected weighted graph.
  • the method treats each video sequence as a node (endpoint) of a graph, uses the spatial- and temporal-domain similarity between video sequences as the edge weights, and iteratively traverses the nodes in a tree-peeling manner, determining one most likely scene boundary per iteration, until all nodes of the graph have been stripped.
  • although the existing detection methods can all detect video scene changes, they suffer from complicated processing and low detection efficiency.
  • the present invention provides a video scene change detection method that is simple to implement and fast to detect.
  • the invention adopts the following technical solutions: a method for detecting a video scene change, which comprises the steps of:
  • the method for detecting a video scene change further includes the step of performing pixel preprocessing on the image frame of the video file before the step B.
  • the step of preprocessing the pixel specifically includes:
  • the saturation S of a certain pixel point is less than the preset first threshold T1 and the brightness V of the pixel is less than the preset second threshold T2, the pixel is discarded;
  • the remaining pixels in the image frame of the video sequence are preserved.
  • preferably, the first threshold T1 = 0.2, the second threshold T2 = 0.8, the third threshold T3 = 0.8, and the fourth threshold T4 = 0.2.
  • the hue H component histograms of the image frames are respectively superimposed and their mean taken, yielding the averaged hue histogram of each video sequence.
  • the step C specifically includes:
  • the total number of pixels counted in the averaged hue histogram is calculated
  • the pixel count of each level of the averaged hue histogram is divided by the total number of pixels, yielding the normalized averaged hue histogram.
  • the present invention has the following beneficial effects:
  • the invention provides a scene detection method based on averaged hue histograms, first determining the dominant hue of the background color of each video sequence from the cumulative histogram of the hue component representing the color categories of the sequence,
  • then achieving fast video scene detection from the dominant hue differences between adjacent video sequences.
  • the invention can also be further applied to other fields of image detection, and has high application value.
  • Figure 1 is a flow chart showing an embodiment of the present invention.
  • the present invention determines the dominant hue of the background color of each video sequence from the cumulative histogram of the hue component representing the color categories of the sequence, and implements fast video scene detection at the video-sequence level from the dominant hue differences between adjacent video sequences.
  • a preferred embodiment of the present invention includes the following implementation steps:
  • Step S1 Convert the image frame of the video file from the RGB space to the HSV space.
  • in computer image processing, the RGB color model is usually used; it adopts the three-primary-color mechanism and has a very clear physical meaning, but it does not match human visual perception well.
  • the HSV color model is more suitable for human visual features.
  • the HSV color model determines one color using three parameters: hue H (Hue), saturation S (Saturation), and brightness V (Value).
  • the hue H indicates the color category and directly corresponds to the color's wavelength in the spectrum, such as red, orange, yellow, green, blue, or purple.
  • the saturation S represents the vividness of a color and can be understood as the proportion of the white component in the color: the larger S, the smaller the white component and the more vivid the color.
  • the brightness V represents the lightness or darkness of a color and has no direct relationship with light intensity.
  • Step S2 performing pixel preprocessing on the image frame of the video file.
  • each image frame of the video sequence needs to be preprocessed in advance to select the pixels whose colors can be recognized by the human eye.
  • the pixel preprocessing decides whether a pixel can be recognized by setting thresholds on the saturation S and the brightness V: when the saturation S of a pixel is less than a preset first threshold T1 and the brightness V of the pixel is less than a preset second threshold T2, the pixel is discarded; when the saturation S of a pixel is greater than a preset third threshold T3 and the brightness V of the pixel is less than a preset fourth threshold T4, the pixel is discarded; the remaining pixels in the image frames of the video sequence are retained.
  • Step S3: dividing the video file into a plurality of video sequences, and calculating the averaged hue histogram of each video sequence.
  • the averaged hue histogram is the cumulative average histogram of the H component of a video sequence: it counts the total number of pixels falling into each hue level over the multiple frames within a given range.
  • the averaged hue histogram can also be regarded as the H component histogram taken over all pixels of a piece of video.
  • the entire video (or video file) is divided into a plurality of video sequences of a predetermined length, each containing N image frames; a larger N can be chosen for faster detection, and a relatively small N for more accurate results.
  • the averaged hue histogram Lm(K) of the video sequence can be expressed as the following formula (4):
  • for a video sequence containing N image frames, the hue H component histogram Hn(K) of each image frame is calculated, the histograms are superimposed, and their mean is taken, giving the averaged hue histogram Lm(K) of the sequence.
  • Step S4: the averaged hue histograms are normalized.
  • after the image frames are preprocessed in step S2, the number of remaining pixels differs from frame to frame, so the total numbers of pixels counted in the averaged hue histograms of the individual video sequences also differ. The averaged hue histogram of each video sequence therefore needs to be normalized so that the histograms of different sequences can be compared.
  • the present invention employs normalization by the total pixel count: after the averaged hue histogram of a video sequence has been obtained, the total number of pixels counted in it is calculated, and the pixel count H(K) of each level of the histogram is divided by that total, yielding the normalized averaged hue histogram.
  • Step S5: performing a matching calculation on the averaged hue histograms of the two adjacent video sequences to obtain a matching coefficient ξ.
  • the matching coefficient ξ measures how far the distribution of the averaged hue histogram H1 deviates from that of the averaged hue histogram H2; the smaller ξ, the lower the deviation and the better the two histograms H1 and H2 match.
  • Step S6: sequentially determining whether the matching coefficient ξ between the averaged hue histograms of two adjacent video sequences is greater than a preset matching threshold; if so, the two adjacent video sequences are regarded as video sequences of different scenes, and otherwise as video sequences of the same scene.
  • the invention provides a scene detection method based on averaged hue histograms, first determining the dominant hue of the background color of each video sequence from the cumulative histogram of the hue component representing the color categories of the sequence,
  • then achieving fast video scene detection from the dominant hue differences between adjacent video sequences.
  • the invention can also be further applied to other fields of image detection, and has high application value.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed is a method for detecting video scene changes, comprising the steps of: converting the image frames of a video file from RGB space into HSV space; dividing the video file into several video sequences and calculating the averaged hue histogram of each video sequence; normalizing the averaged hue histograms; performing a matching calculation on the averaged hue histograms of two adjacent video sequences to obtain a matching coefficient; and, if the matching coefficient between the averaged hue histograms of two adjacent video sequences is greater than a preset matching threshold, regarding the two adjacent video sequences as video sequences of different scenes, and otherwise as video sequences of the same scene. Fast video scene detection is achieved at the video-sequence level from the dominant hue differences between adjacent video sequences. The invention can also be applied to various other fields of image detection and has high application value.

Description

Method for detecting video scene changes
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. CN2014104612825, filed on September 11, 2014, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to video image analysis technology, and in particular to a method for detecting video scene changes.
Background Art
The content type of a video varies during playback; type transitions of the video usually occur at the moments of video scene changes, and a scene change usually causes the content type of the video to change. To keep a video visually coherent, fusion processing must be applied across different video scenes, which presupposes effective detection of video scene changes.
Existing video scene detection methods mainly include:
1. Methods based on inter-frame differences. For example, Chinese patent application CN201310332133.4 proposes a dynamic video scene change detection method comprising the steps of: acquiring the current frame of a dynamic video image in real time; calculating the scene change feature parameter ti(n) of the current frame; calculating a dynamic threshold threshold(n) for the current frame from the scene change feature parameters of one or several preceding frames of the dynamic video image; and determining whether ti(n) is less than or equal to its corresponding dynamic threshold: if so, the current frame is judged not to be a scene change frame; otherwise, it is judged to be a scene change frame.
2. Scene detection methods based on undirected weighted graphs. Such a method treats each video sequence as a node (endpoint) of a graph, uses the spatial- and temporal-domain similarity between video sequences as the edge weights, and iteratively traverses the nodes in a tree-peeling manner, determining one most likely scene boundary per iteration, until all nodes of the graph have been stripped.
Although the existing detection methods can all detect video scene changes, they suffer from complicated processing and low detection efficiency.
Summary of the Invention
To overcome the defects of the prior art, the present invention proposes a video scene change detection method that is simple to implement and fast in detection.
The invention adopts the following technical solution: a method for detecting video scene changes, comprising the steps of:
A. converting the image frames of a video file from RGB space into HSV space;
B. dividing the video file into several video sequences, and calculating the averaged hue histogram of each video sequence;
C. normalizing the averaged hue histograms;
D. performing a matching calculation on the averaged hue histograms of two adjacent video sequences to obtain a matching coefficient;
E. if the matching coefficient between the averaged hue histograms of two adjacent video sequences is greater than a preset matching threshold, regarding the two adjacent video sequences as video sequences of different scenes, and otherwise as video sequences of the same scene.
The method for detecting video scene changes further includes, before step B, a step of performing pixel preprocessing on the image frames of the video file.
The pixel preprocessing step specifically includes:
when the saturation S of a pixel is less than a preset first threshold T1 and the brightness V of the pixel is less than a preset second threshold T2, discarding the pixel;
when the saturation S of a pixel is greater than a preset third threshold T3 and the brightness V of the pixel is less than a preset fourth threshold T4, discarding the pixel;
retaining the remaining pixels in the image frames of the video sequence.
In one embodiment, the thresholds are preset to T1 = 0.2, T2 = 0.8, T3 = 0.8, and T4 = 0.2.
In another embodiment, the thresholds are preset to T1 = 0.14, T2 = 0.92, T3 = 0.94, and T4 = 0.13.
The step of calculating the averaged hue histogram of each video sequence specifically includes:
calculating the hue H component histogram of each image frame in each video sequence;
superimposing the hue H component histograms of the image frames and taking their mean, thereby obtaining the averaged hue histogram of each video sequence.
Step C specifically includes:
after the averaged hue histogram of a video sequence has been obtained, calculating the total number of pixels counted in that histogram;
dividing the pixel count of each level of the averaged hue histogram by the total number of pixels to obtain the normalized averaged hue histogram.
In step D, the matching coefficient ξ between the averaged hue histogram H1(K) and the averaged hue histogram H2(K) is calculated with the following formula:
ξ = Σ_{K=1}^{Q} |H1(K) - H2(K)|
where K denotes the hue level of a pixel, K = 1, 2, 3, …, Q, and Q is the maximum number of hue levels.
Compared with the prior art, the present invention has the following beneficial effects:
The invention provides a scene detection method based on averaged hue histograms. The dominant hue of the background color of each video sequence is first determined from the cumulative histogram of the hue component, which represents the color categories of the sequence; fast video scene detection is then achieved at the video-sequence level from the dominant hue differences between adjacent video sequences. The invention can also be applied to various other fields of image detection and has high application value.
Brief Description of the Drawings
Figure 1 is a schematic flow chart of one embodiment of the present invention.
Detailed Description
The video within one scene usually shares the same background environment, so the color tone of its frames is fairly consistent, whereas different scene environments differ considerably and have different background colors. The present invention therefore determines the dominant hue of the background color of each video sequence from the cumulative histogram of the hue component representing its color categories, and achieves fast video scene detection at the video-sequence level from the dominant hue differences between adjacent video sequences.
As shown in Figure 1, a preferred embodiment of the present invention includes the following steps:
Step S1: convert the image frames of the video file from RGB space into HSV space.
In computer image processing, the RGB color model is normally used. It is based on the three-primary-color mechanism and has a very clear physical meaning, but it does not match human visual perception well.
The HSV color model matches human visual perception better. It specifies a color with three parameters: hue H (Hue), saturation S (Saturation), and brightness V (Value). The hue H indicates the color category and directly corresponds to the color's wavelength in the spectrum, such as red, orange, yellow, green, blue, or purple. The saturation S represents the vividness of a color and can be understood as the proportion of the white component in the color: the larger S, the smaller the white component and the more vivid the color. The brightness V represents the lightness or darkness of a color and has no direct relationship with light intensity.
Taking 8-bit pixel values as an example, each pixel of an image frame is converted from RGB space into HSV space with the following formulas (with R, G, B first normalized from [0, 255] to [0, 1]):
V = max(R, G, B)  (1)
S = (V - min(R, G, B)) / V, with S = 0 when V = 0  (2)
H = 60° × (G - B) / (V - min(R, G, B)) when V = R; H = 60° × (2 + (B - R) / (V - min(R, G, B))) when V = G; H = 60° × (4 + (R - G) / (V - min(R, G, B))) when V = B; 360° is added when H < 0, and H is rescaled to the range [0, 2π]  (3)
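The conversion of step S1 can be sketched as follows. This is a minimal illustration of the standard RGB-to-HSV formulas with H expressed in radians over [0, 2π); the function name and the 8-bit input scaling are assumptions for illustration, since the published formula images are not reproduced here.

```python
import math

def rgb_to_hsv(r, g, b):
    """Convert one 8-bit RGB pixel to (H, S, V) with H in [0, 2*pi)
    and S, V in [0, 1] (the hue range used in this description)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    v = max(r, g, b)
    delta = v - min(r, g, b)
    s = 0.0 if v == 0 else delta / v
    if delta == 0:
        h_deg = 0.0                            # achromatic: hue undefined, use 0
    elif v == r:
        h_deg = (60 * (g - b) / delta) % 360   # wraps negative values into [0, 360)
    elif v == g:
        h_deg = 60 * (b - r) / delta + 120
    else:
        h_deg = 60 * (r - g) / delta + 240
    return math.radians(h_deg), s, v
```

For example, a pure red pixel (255, 0, 0) maps to H = 0, S = 1, V = 1.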
Step S2: perform pixel preprocessing on the image frames of the video file.
In the image frames of a video file, the color changes of some pixels cannot be perceived by the human eye; such pixels not only increase the computational cost of scene change detection but also reduce the accuracy of the detection result. Each image frame of the video sequence therefore needs to be preprocessed in advance to select the pixels whose colors can be recognized by the human eye.
The pixel preprocessing decides whether a pixel can be recognized by setting thresholds on the saturation S and the brightness V: when the saturation S of a pixel is less than a preset first threshold T1 and the brightness V of the pixel is less than a preset second threshold T2, the pixel is discarded; when the saturation S of a pixel is greater than a preset third threshold T3 and the brightness V of the pixel is less than a preset fourth threshold T4, the pixel is discarded; the remaining pixels in the image frames of the video sequence are retained.
Furthermore, a pixel with saturation S ∈ (0.8, 1] and brightness V ∈ [0, 0.2) is regarded as a black pixel, and a pixel with saturation S ∈ [0, 0.2) and brightness V ∈ (0.8, 1] as a white pixel. On this basis, the thresholds can be preset to T1 = 0.2, T2 = 0.8, T3 = 0.8, and T4 = 0.2.
In a preferred embodiment, the thresholds are preset to T1 = 0.14, T2 = 0.92, T3 = 0.94, and T4 = 0.13.
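A minimal sketch of the pixel pre-filter of step S2, implementing the two discard conditions exactly as stated; `keep_pixel` is a hypothetical name, and the defaults are the thresholds of the preferred embodiment.

```python
def keep_pixel(s, v, t1=0.14, t2=0.92, t3=0.94, t4=0.13):
    """Return True if the pixel survives preprocessing.
    Discard when S < T1 and V < T2 (low saturation with low brightness),
    or when S > T3 and V < T4 (saturated but nearly black)."""
    if s < t1 and v < t2:
        return False
    if s > t3 and v < t4:
        return False
    return True
```

A whole frame is then filtered by keeping only the (S, V, H) triples for which `keep_pixel(S, V)` is true.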
Step S3: divide the video file into several video sequences and calculate the averaged hue histogram of each video sequence.
Denote the hue H component histogram by H(K), where K is the hue level of a pixel, K = 1, 2, 3, …, Q, and Q is the total number of hue levels of the hue H (the maximum hue level); the hue H ranges over [0, 2π]. Because the human eye's ability to distinguish colors is limited, the hue H component can be quantized non-uniformly into Q levels according to that ability, each level representing one of Q colors distinguishable by the human eye; for example, if Q = 8, the levels range over [0, 7].
The averaged hue histogram is the cumulative average histogram of the H component of a video sequence: it counts the total number of pixels falling into each hue level over the multiple frames within a given range. It can also be regarded as the H component histogram taken over all pixels of a piece of video.
The whole video (or video file) is divided into several video sequences of a preset length, each containing N image frames. A larger N can be chosen for faster detection, and a relatively small N for more accurate results.
Suppose the m-th video sequence contains N image frames, and the hue H component histogram of the n-th image frame is Hn(K), n = 1, 2, 3, …, N. The averaged hue histogram Lm(K) of the m-th video sequence can then be expressed by the following formula (4):
Lm(K) = (1/N) Σ_{n=1}^{N} Hn(K)  (4)
That is, for a video sequence containing N image frames, the hue H component histogram Hn(K) of each image frame is calculated, the histograms are superimposed, and their mean is taken, giving the averaged hue histogram Lm(K) of the sequence.
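The computation of step S3 and formula (4) can be sketched as follows; the uniform quantization of H into Q levels is a simplifying assumption (the text allows non-uniform quantization), and the function names are illustrative.

```python
import math

def hue_level(h, q=8):
    """Quantize a hue value h in [0, 2*pi] into one of q levels 0..q-1.
    Uniform quantization is assumed here for simplicity."""
    return min(int(h / (2 * math.pi) * q), q - 1)

def averaged_hue_histogram(frames, q=8):
    """Formula (4): sum the per-frame hue histograms Hn(K) of the N frames
    of a sequence and divide by N. `frames` is a list of per-frame lists of
    hue values for the pixels that survived preprocessing."""
    n = len(frames)
    l_m = [0.0] * q
    for frame in frames:
        for h in frame:
            l_m[hue_level(h, q)] += 1
    return [count / n for count in l_m]
```

For example, two frames with hues [0.0, 0.0] and [0.0, 3.0] give a histogram whose level 0 holds 1.5 and level 3 holds 0.5 pixels per frame on average.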
Step S4: normalize the averaged hue histograms.
After the preprocessing of step S2, the number of remaining pixels differs from image frame to image frame, so the total numbers of pixels counted in the averaged hue histograms of the individual video sequences also differ. The averaged hue histogram of each video sequence therefore needs to be normalized so that the histograms of different sequences can be compared.
The present invention uses normalization by the total pixel count. After the averaged hue histogram of a video sequence has been obtained, the total number of pixels counted in it is calculated, and the pixel count H(K) of each level of the histogram is divided by that total, yielding the normalized averaged hue histogram.
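A minimal sketch of the normalization in step S4 (the function name is illustrative):

```python
def normalize_histogram(hist):
    """Divide every level of an averaged hue histogram by the total pixel
    count, so that sequences with different numbers of surviving pixels
    become comparable."""
    total = sum(hist)
    if total == 0:
        return [0.0] * len(hist)   # degenerate case: every pixel was filtered out
    return [count / total for count in hist]
```

After normalization, the levels of a non-empty histogram sum to 1.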
Step S5: perform a matching calculation on the averaged hue histograms of two adjacent video sequences to obtain the matching coefficient ξ.
For example, the matching coefficient ξ between the averaged hue histogram H1(K) and the averaged hue histogram H2(K) is calculated with the following formula (5):
ξ = Σ_{K=1}^{Q} |H1(K) - H2(K)|  (5)
The matching coefficient ξ measures how far the distribution of the averaged hue histogram H1 deviates from that of the averaged hue histogram H2; the smaller ξ, the lower the deviation and the better the two histograms H1 and H2 match.
Step S6: determine in turn whether the matching coefficient ξ between the averaged hue histograms of each pair of adjacent video sequences is greater than the preset matching threshold; if so, the two adjacent video sequences are regarded as video sequences of different scenes, and otherwise as video sequences of the same scene.
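Steps S5 and S6 can be sketched as follows. Since formula (5) is published only as an image, the L1 distance between normalized histograms is used here as one plausible reading of the matching coefficient: it is zero for identical distributions and grows with deviation, which matches the description of ξ; both function names are illustrative.

```python
def matching_coefficient(h1, h2):
    """Assumed form of formula (5): the L1 distance between two normalized
    averaged hue histograms. Smaller means the histograms match better."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def scene_boundaries(histograms, threshold):
    """Step S6: declare a scene change between adjacent sequences whose
    matching coefficient exceeds the preset matching threshold."""
    return [matching_coefficient(histograms[i], histograms[i + 1]) > threshold
            for i in range(len(histograms) - 1)]
```

For instance, two identical histograms yield ξ = 0 (same scene), while [0.5, 0.5] against [1.0, 0.0] yields ξ = 1.0 and is flagged as a scene change for any threshold below 1.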
The invention provides a scene detection method based on averaged hue histograms. The dominant hue of the background color of each video sequence is first determined from the cumulative histogram of the hue component, which represents the color categories of the sequence; fast video scene detection is then achieved at the video-sequence level from the dominant hue differences between adjacent video sequences. The invention can also be applied to various other fields of image detection and has high application value.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (8)

  1. A method for detecting video scene changes, characterized by comprising the steps of:
    A. converting the image frames of a video file from RGB space into HSV space;
    B. dividing the video file into several video sequences, and calculating the averaged hue histogram of each video sequence;
    C. normalizing the averaged hue histograms;
    D. performing a matching calculation on the averaged hue histograms of two adjacent video sequences to obtain a matching coefficient;
    E. if the matching coefficient between the averaged hue histograms of two adjacent video sequences is greater than a preset matching threshold, regarding the two adjacent video sequences as video sequences of different scenes, and otherwise as video sequences of the same scene.
  2. The method for detecting video scene changes according to claim 1, characterized in that, before step B, the method further comprises a step of performing pixel preprocessing on the image frames of the video file.
  3. The method for detecting video scene changes according to claim 2, characterized in that the pixel preprocessing step specifically comprises:
    when the saturation S of a pixel is less than a preset first threshold T1 and the brightness V of the pixel is less than a preset second threshold T2, discarding the pixel;
    when the saturation S of a pixel is greater than a preset third threshold T3 and the brightness V of the pixel is less than a preset fourth threshold T4, discarding the pixel;
    retaining the remaining pixels in the image frames of the video sequence.
  4. The method for detecting video scene changes according to claim 3, characterized in that the thresholds are preset to T1 = 0.2, T2 = 0.8, T3 = 0.8, and T4 = 0.2.
  5. The method for detecting video scene changes according to claim 3, characterized in that the thresholds are preset to T1 = 0.14, T2 = 0.92, T3 = 0.94, and T4 = 0.13.
  6. The method for detecting video scene changes according to claim 1, characterized in that the step of calculating the averaged hue histogram of each video sequence specifically comprises:
    calculating the hue H component histogram of each image frame in each video sequence;
    superimposing the hue H component histograms of the image frames and taking their mean, thereby obtaining the averaged hue histogram of each video sequence.
  7. The method for detecting video scene changes according to claim 1, characterized in that step C specifically comprises:
    after the averaged hue histogram of a video sequence has been obtained, calculating the total number of pixels counted in that histogram;
    dividing the pixel count of each level of the averaged hue histogram by the total number of pixels to obtain the normalized averaged hue histogram.
  8. The method for detecting video scene changes according to claim 1, characterized in that in step D the matching coefficient ξ between the averaged hue histogram H1(K) and the averaged hue histogram H2(K) is calculated with the following formula:
    ξ = Σ_{K=1}^{Q} |H1(K) - H2(K)|
    where K denotes the hue level of a pixel, K = 1, 2, 3, …, Q, and Q is the maximum number of hue levels.
PCT/CN2014/092640 2014-09-11 2014-12-01 Method for detecting video scene changes WO2016037422A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410461282.5A CN104184925A (zh) 2014-09-11 2014-09-11 Method for detecting video scene changes
CN201410461282.5 2014-09-11

Publications (1)

Publication Number Publication Date
WO2016037422A1 true WO2016037422A1 (zh) 2016-03-17

Family

ID=51965639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/092640 WO2016037422A1 (zh) 2014-09-11 2014-12-01 Method for detecting video scene changes

Country Status (2)

Country Link
CN (1) CN104184925A (zh)
WO (1) WO2016037422A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108184078A (zh) * 2017-12-28 2018-06-19 可贝熊(湖北)文化传媒股份有限公司 Video processing system and method therefor
CN110930464A (zh) * 2019-06-27 2020-03-27 北京中科慧眼科技有限公司 Color detection method, apparatus and system based on hue histogram statistics
CN113591564A (zh) * 2021-06-24 2021-11-02 贵州国致科技有限公司 Scene abnormal state detection method
CN114155254A (zh) * 2021-12-09 2022-03-08 成都智元汇信息技术股份有限公司 Image cropping method based on image correction, electronic device, and medium
CN116612110A (zh) * 2023-07-14 2023-08-18 微山县振龙纺织品有限公司 Intelligent quality evaluation method for gradient printing and dyeing effects

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005772B (zh) * 2015-07-20 2018-06-12 北京大学 Video scene detection method
CN105912981A (zh) * 2016-03-31 2016-08-31 乐视控股(北京)有限公司 Video transition determination method and apparatus
CN106686452B (zh) * 2016-12-29 2020-03-27 北京奇艺世纪科技有限公司 Method and apparatus for generating animated pictures
CN108280386B (zh) * 2017-01-05 2020-08-28 浙江宇视科技有限公司 Monitoring scene detection method and apparatus
CN107277650B (zh) * 2017-07-25 2020-01-21 中国华戎科技集团有限公司 Video file segmentation method and apparatus
CN114120197B (zh) * 2021-11-27 2024-03-29 中国传媒大学 Method for detecting anomalous signals in ultra-high-definition video transmitted in 2SI mode

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471306A (zh) * 2002-07-09 2004-01-28 三星电子株式会社 Scene change detector and method therefor
CN102333174A (zh) * 2011-09-02 2012-01-25 深圳市万兴软件有限公司 Video image processing method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8040361B2 (en) * 2005-04-11 2011-10-18 Systems Technology, Inc. Systems and methods for combining virtual and real-time physical environments
US8300890B1 (en) * 2007-01-29 2012-10-30 Intellivision Technologies Corporation Person/object image and screening

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471306A (zh) * 2002-07-09 2004-01-28 三星电子株式会社 Scene change detector and method therefor
CN102333174A (zh) * 2011-09-02 2012-01-25 深圳市万兴软件有限公司 Video image processing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, MENG ET AL.: "A Scene Detection Algorithm Based on Averaging Hue Histogram", CHINA SCIENCE PAPER, 26 March 2014 (2014-03-26) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108184078A (zh) * 2017-12-28 2018-06-19 可贝熊(湖北)文化传媒股份有限公司 Video processing system and method therefor
CN110930464A (zh) * 2019-06-27 2020-03-27 北京中科慧眼科技有限公司 Color detection method, apparatus and system based on hue histogram statistics
CN113591564A (zh) * 2021-06-24 2021-11-02 贵州国致科技有限公司 Scene abnormal state detection method
CN113591564B (zh) * 2021-06-24 2024-05-03 贵州国致科技有限公司 Scene abnormal state detection method
CN114155254A (zh) * 2021-12-09 2022-03-08 成都智元汇信息技术股份有限公司 Image cropping method based on image correction, electronic device, and medium
CN114155254B (zh) * 2021-12-09 2022-11-08 成都智元汇信息技术股份有限公司 Image cropping method based on image correction, electronic device, and medium
CN116612110A (zh) * 2023-07-14 2023-08-18 微山县振龙纺织品有限公司 Intelligent quality evaluation method for gradient printing and dyeing effects
CN116612110B (zh) * 2023-07-14 2023-10-24 微山县振龙纺织品有限公司 Intelligent quality evaluation method for gradient printing and dyeing effects

Also Published As

Publication number Publication date
CN104184925A (zh) 2014-12-03

Similar Documents

Publication Publication Date Title
WO2016037422A1 (zh) Method for detecting video scene changes
WO2016037423A1 (zh) Video scene change detection method based on an adaptive threshold
CN105631880B (zh) Lane line segmentation method and apparatus
CN108010024B (zh) Blind-reference tone-mapped image quality assessment method
US9152878B2 (en) Image processing apparatus, image processing method, and storage medium
WO2017092431A1 (zh) Skin-color-based human hand detection method and apparatus
CN107507144B (zh) Skin color enhancement processing method and apparatus, and image processing device
Marcial-Basilio et al. Detection of pornographic digital images
CN104504722B (zh) Method for correcting image color using gray points
CN108062554B (zh) Method and apparatus for recognizing the color of vehicle annual-inspection labels
WO2018010386A1 (zh) Component reversal detection method and system
CN103974053A (zh) Automatic white balance correction method based on gray point extraction
Lee et al. Color image enhancement using histogram equalization method without changing hue and saturation
Li et al. A color cast detection algorithm of robust performance
US9824454B2 (en) Image processing method and image processing apparatus
CN110599553B (zh) YCbCr-based skin color extraction and detection method
Zangana et al. A new algorithm for human face detection using skin color tone
WO2017101347A1 (zh) Animated video recognition and encoding method and apparatus
CN108550155B (zh) Target region segmentation method for color forest-fire remote sensing images
Yuan et al. Color image quality assessment with multi deep convolutional networks
Powar et al. Skin detection for forensic investigation
CN113658157A (zh) Color segmentation method and apparatus based on HSV space
Hong et al. Saliency-based feature learning for no-reference image quality assessment
CN106709425B (zh) Golden snub-nosed monkey face detection method based on incremental self-paced learning and regional color quantization
Tomaschitz et al. Skin detection applied to multi-racial images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14901711

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.07.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14901711

Country of ref document: EP

Kind code of ref document: A1