CN103533237A - Method for extracting video key frame from video - Google Patents
- Publication number
- CN103533237A CN103533237A CN201310456215.XA CN201310456215A CN103533237A CN 103533237 A CN103533237 A CN 103533237A CN 201310456215 A CN201310456215 A CN 201310456215A CN 103533237 A CN103533237 A CN 103533237A
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- information
- filming apparatus
- key frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Television Signal Processing For Recording (AREA)
Abstract
The invention relates to a method for extracting video key frames from a video, and belongs to the technical field of image processing. In this method, an operator shoots video of a scene of interest with a device. During shooting, the device synchronously records the video frames together with its acceleration, orientation and zoom-scale information. After shooting is finished, a weight is computed for each video frame directly from the acceleration, orientation and scale information, and the desired key frames are then extracted according to these weights and the desired number of key frames. The method can extract video key frames accurately with a comparatively small amount of computation.
Description
Technical field
The present invention relates to a method for extracting key frames from a video, and belongs to the technical field of image processing.
Background technology
With the growing number of hand-held capture devices (such as mobile phones, digital cameras and hand-held camcorders) in recent years, the number of videos shot by individual users on portable equipment has surged. Video-capture applications on smartphones, such as Instagram, have further stimulated the spread of these videos: within 24 hours of launching video support, Instagram received five million video uploads.
These applications therefore generate a huge number of videos every day. Unlike text, video cannot be searched directly, so finding useful information in a large collection of videos is extremely time-consuming. The prevailing approach is to inspect video content manually and annotate it, which is clearly inefficient.
Key frame extraction is therefore a highly valuable task. Existing key frame extraction algorithms fall mainly into the following categories: 1) extracting key frames at fixed time intervals; 2) computing the difference in colour (or grey level) between neighbouring frames to decide whether a frame is a key frame; 3) methods based on motion analysis.
Extracting key frames at fixed time intervals is the most intuitive and computationally simplest approach, but its drawback is equally clear: key frames are not necessarily evenly distributed in time. This class of methods therefore suits short videos with simple content.
Computing differences between neighbouring frames is more reasonable, but the threshold is hard to choose: too small a threshold selects too many key frames, while too large a threshold may miss some. This method is also computationally more involved.
Motion-analysis methods mainly compute the amount of motion within a shot by optical flow and select key frames where the motion is minimal; their drawback is likewise an excessive amount of computation.
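For comparison, the fixed-interval baseline described above can be sketched in a few lines of Python (an illustrative stand-in, not part of the patent's disclosure):

```python
def fixed_interval_keyframes(num_frames, num_keyframes):
    """Pick evenly spaced frame indices, the simplest baseline.

    Ignores content entirely, so key moments between samples are missed.
    """
    step = num_frames / num_keyframes
    return [int(i * step) for i in range(num_keyframes)]
```

For a 100-frame clip and 5 key frames this yields indices 0, 20, 40, 60 and 80 regardless of what the frames contain, which is exactly the drawback noted above.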
Summary of the invention
The present invention aims to propose a method for extracting key frames from a video that overcomes the shortcomings of existing key frame extraction methods. By combining the video content with the camera's zoom behaviour and with information recorded at each instant of shooting, such as the device's acceleration and the direction the capture device is facing, it computes key frames according to the intention of the person shooting the video.
The method for extracting video key frames from a video proposed by the present invention comprises the following steps:
(1) Shoot a scene with a video capture device to obtain a video; let the video contain T frames in total, and record the zoom-scale information of the capture device's camera at each shooting instant;
(2) At the same frequency as the video capture, record the linear acceleration of the capture device along the x, y and z axes of a rectangular coordinate system at each shooting instant;
(3) At the same frequency as the video capture, use an orientation sensor to record the orientation of the capture device in the above rectangular coordinate system at each shooting instant;
(4) Extract key frames from the video according to the recorded orientation, linear acceleration and scale information, as follows:
(4-1) Extract the characteristic information of the device at the moment the k-th video frame is shot, comprising: the orientation information of the capture device at that moment, o_k = [o_{x,k}, o_{y,k}, o_{z,k}]^T, where o_{x,k} is the roll angle of the capture device (the angle between the device's short edge and the horizontal plane), o_{y,k} is the pitch angle (the angle between the device's long edge and the horizontal plane), and o_{z,k} is the yaw angle (the angle between the direction in which the top edge of the device points and due north); the acceleration information of the capture device at that moment, a_k = [a_{x,k}, a_{y,k}, a_{z,k}]^T, where a_{x,k}, a_{y,k} and a_{z,k} are the device's accelerations along the x, y and z axes of the rectangular coordinate system; and the scale information s_k, the zoom scale of the camera when the k-th frame is shot;
(4-2) Apply the discrete cosine transform to the video obtained above to extract feature information, obtaining the frame feature f_k of the k-th video frame;
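Step (4-2) leaves the exact form of the DCT feature open; one plausible reading, sketched below under that assumption, takes the low-frequency block of a 2-D DCT of each grayscale frame as f_k. The function names and the `keep` parameter are illustrative, not from the patent:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def frame_feature(gray, keep=8):
    # 2-D DCT of a grayscale frame; the top-left (low-frequency)
    # keep x keep block, flattened, serves as the frame feature f_k.
    h, w = gray.shape
    coeffs = dct_matrix(h) @ gray @ dct_matrix(w).T
    return coeffs[:keep, :keep].ravel()
```

Low-frequency DCT coefficients summarize the coarse appearance of a frame, which is what the similarity comparison in step (4-16) needs.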
(4-3) Repeat steps (4-1) and (4-2) to obtain, for every frame of the video, the capture device's orientation information, its acceleration information, the camera's zoom scale and the frame feature;
(4-5) Compute the acceleration weight ω_{ak} of each video frame: ω_{ak} = exp(−λ₁‖a_k‖₂), where λ₁ is an acceleration adjustment parameter and ‖a_k‖₂ is the 2-norm of the acceleration vector a_k; the range of λ₁ can be chosen according to the order of magnitude of the acceleration, typically 0.1~1;
(4-6) Compute the scale weight ω_{sk} of each video frame: ω_{sk} = exp(λ₂s_k), where λ₂ is a scale adjustment parameter with range 0.5~1;
(4-7) Compute the total weight ω_k of each video frame: ω_k = ω_{ak}·ω_{sk};
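A minimal sketch of the weight computation in steps (4-5)–(4-7), assuming the acceleration exponent is negative so that steadier shooting yields a larger weight (consistent with the stated intent that low, steady acceleration marks important scenes); the sample readings below are hypothetical:

```python
import numpy as np

def frame_weights(a, s, lam1=0.5, lam2=0.5):
    # a: (T, 3) linear accelerations a_k; s: (T,) zoom scales s_k.
    a = np.asarray(a, dtype=float)
    s = np.asarray(s, dtype=float)
    w_a = np.exp(-lam1 * np.linalg.norm(a, axis=1))  # steadier frame -> larger weight
    w_s = np.exp(lam2 * s)                           # stronger zoom -> larger weight
    return w_a * w_s                                 # total weight omega_k
```

With lam1 and lam2 inside the stated ranges (0.1~1 and 0.5~1), a nearly motionless, zoomed-in frame receives the largest total weight.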
(4-8) Apply the K-means algorithm to cluster the capture-device orientation information of all video frames at their shooting moments, obtaining C cluster centres, where C is a parameter chosen according to information such as the video length, with range 1~T (T being the total number of video frames); assign every video frame to the class of the cluster centre closest to its capture device's orientation information;
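Step (4-8) can be sketched with a plain (unweighted) k-means over the per-frame orientation vectors o_k; the patent's actual clustering feeds the frame weights into the objective of step (4-9), so this is a simplified stand-in, with a deterministic initialization chosen for reproducibility:

```python
import numpy as np

def kmeans_orientations(o, C, iters=20):
    # Cluster orientation vectors o (T x 3) into C classes; each frame is
    # assigned to the nearest cluster centre, as in step (4-8).
    o = np.asarray(o, dtype=float)
    centres = o[:C].copy()  # deterministic init: first C orientation vectors
    labels = np.zeros(len(o), dtype=int)
    for _ in range(iters):
        dists = ((o[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(C):
            if np.any(labels == j):
                centres[j] = o[labels == j].mean(axis=0)
    return labels, centres
```

Frames shot while the device pointed in roughly the same direction end up in the same class, which is the grouping the later per-cluster key frame selection relies on.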
(4-9) Establish an optimization objective function as follows:
subject to the constraint:
where k is the index of the video frame, j is the index of the cluster centre, j ∈ [1, C], μ_{kj} is the parameter to be solved, υ_j is the j-th cluster centre, and p is the current iteration number;
(4-10) At initialization, set p = 0 and take the initial value vector of the j-th cluster centre;
(4-11) Compute μ_{kj}:
(4-12) According to the above result, update the value of μ_{kj} and recompute μ_{kj}:
(4-14) Set an iteration stop threshold ε; if the stopping criterion is not yet met, set p = p + 1 and return to step (4-11); otherwise proceed to step (4-15); the range of ε is 0.001~0.01;
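The iterate-update-check pattern of steps (4-11)–(4-14) (the exact update formulas are not reproduced in this text) follows a generic fixed-point loop, sketched here with a hypothetical `update` function standing in for the μ_{kj} update:

```python
import numpy as np

def iterate_until_converged(update, x0, eps=0.01, max_iter=100):
    # Repeat x <- update(x), incrementing the iteration counter p,
    # until the change falls below the stop threshold eps (step 4-14).
    x = np.asarray(x0, dtype=float)
    for p in range(max_iter):
        x_new = np.asarray(update(x), dtype=float)
        if np.linalg.norm(x_new - x) < eps:
            return x_new, p + 1
        x = x_new
    return x, max_iter
```

The threshold eps plays the role of ε in step (4-14), and the returned counter corresponds to the final iteration number p.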
(4-15) Obtain an initial key frame set K = {t_1, t_2, …, t_C} by the following formula, where j ∈ [1, C]:
(4-16) Compute the similarity of the frame features of every pair of video frames in the above initial key frame set K, where i, j ∈ [1, C];
(4-17) Set a similarity threshold δ; traverse every pair of frames in the initial key frame set K obtained in step (4-16), compute the similarity of their frame features, and compare it with the similarity threshold: depending on which condition holds, either delete t_j from the initial key frame set K, delete t_i from K, or retain both t_i and t_j in K. Repeat this step; the resulting set K is the set of video key frames. The range of δ is 0.2~0.3.
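Step (4-17)'s deletion rule can be sketched as follows, under the assumption (the precise conditions are not reproduced in this text) that when two candidate key frames are too similar, the lower-weight one is dropped; cosine similarity and the 0.8 threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def prune_similar(frames, feats, weights, thresh=0.8):
    # Keep frames in descending weight order; drop any frame whose
    # feature is too similar (cosine similarity > thresh) to one
    # already kept.
    feats = np.asarray(feats, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    kept = []
    for idx in np.argsort(weights)[::-1]:
        if all(float(feats[idx] @ feats[j]) <= thresh for j in kept):
            kept.append(idx)
    return sorted(frames[i] for i in kept)
```

Processing candidates by descending weight means that of two near-duplicate frames, the one the weighting scheme judged more important survives.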
The advantage of the method proposed by the present invention is that, while the user shoots a video, it records the motion, orientation and focal-length changes of the capture device and infers the user's intention at shooting time from this device information. For example, when the device's acceleration is small and stays near a constant value for some time, the user can be assumed to be deliberately filming a particular scene, which is therefore important to the user. On this assumption, video key frames can be extracted accurately from the video with a comparatively small amount of computation.
Accompanying drawing explanation
Fig. 1 is the flow chart of the method for extracting video key frames from a video proposed by the present invention.
Embodiment
The method for extracting video key frames from a video proposed by the present invention, whose flow chart is shown in Fig. 1, comprises steps (1) through (4-17) as described above in the summary of the invention.
In the method of the present invention, the operator shoots video of a scene of interest with a device. During shooting, the device synchronously records the video frames together with the acceleration, orientation and scale information of the video. After shooting is finished, the weight of each video frame is computed directly from the acceleration, orientation and scale information, and the desired key frames are finally extracted according to the weights and the desired number of key frames.
Claims (1)
1. A method for extracting video key frames from a video, characterized in that the method comprises the following steps:
(1) Shoot a scene with a video capture device to obtain a video; let the video contain T frames in total, and record the zoom-scale information of the capture device's camera at each shooting instant;
(2) At the same frequency as the video capture, record the linear acceleration of the capture device along the x, y and z axes of a rectangular coordinate system at each shooting instant;
(3) At the same frequency as the video capture, use an orientation sensor to record the orientation of the capture device in the above rectangular coordinate system at each shooting instant;
(4) Extract key frames from the video according to the recorded orientation, linear acceleration and scale information, as follows:
(4-1) Extract the characteristic information of the device at the moment the k-th video frame is shot, comprising: the orientation information of the capture device at that moment, o_k = [o_{x,k}, o_{y,k}, o_{z,k}]^T, where o_{x,k} is the roll angle of the capture device (the angle between the device's short edge and the horizontal plane), o_{y,k} is the pitch angle (the angle between the device's long edge and the horizontal plane), and o_{z,k} is the yaw angle (the angle between the direction in which the top edge of the device points and due north); the acceleration information of the capture device at that moment, a_k = [a_{x,k}, a_{y,k}, a_{z,k}]^T, where a_{x,k}, a_{y,k} and a_{z,k} are the device's accelerations along the x, y and z axes of the rectangular coordinate system; and the scale information s_k, the zoom scale of the camera when the k-th frame is shot;
(4-2) Apply the discrete cosine transform to the video obtained above to extract feature information, obtaining the frame feature f_k of the k-th video frame;
(4-3) Repeat steps (4-1) and (4-2) to obtain, for every frame of the video, the capture device's orientation information, its acceleration information, the camera's zoom scale and the frame feature;
(4-5) Compute the acceleration weight ω_{ak} of each video frame: ω_{ak} = exp(−λ₁‖a_k‖₂), where λ₁ is an acceleration adjustment parameter and ‖a_k‖₂ is the 2-norm of the acceleration vector a_k; the range of λ₁ can be chosen according to the order of magnitude of the acceleration, typically 0.1~1;
(4-6) Compute the scale weight ω_{sk} of each video frame: ω_{sk} = exp(λ₂s_k), where λ₂ is a scale adjustment parameter with range 0.5~1;
(4-7) Compute the total weight ω_k of each video frame: ω_k = ω_{ak}·ω_{sk};
(4-8) Apply the K-means algorithm to cluster the capture-device orientation information of all video frames at their shooting moments, obtaining C cluster centres, where C is a parameter chosen according to information such as the video length, with range 1~T (T being the total number of video frames); assign every video frame to the class of the cluster centre closest to its capture device's orientation information;
(4-9) Establish an optimization objective function as follows:
subject to the constraint:
where k is the index of the video frame, j is the index of the cluster centre, j ∈ [1, C], μ_{kj} is the parameter to be solved, υ_j is the j-th cluster centre, and p is the current iteration number;
(4-10) At initialization, set p = 0 and take the initial value vector of the j-th cluster centre;
(4-11) Compute μ_{kj}:
(4-12) According to the above result, update the value of μ_{kj} and recompute μ_{kj}:
(4-14) Set an iteration stop threshold ε; if the stopping criterion is not yet met, set p = p + 1 and return to step (4-11); otherwise proceed to step (4-15); the range of ε is 0.001~0.01;
(4-15) Obtain an initial key frame set K = {t_1, t_2, …, t_C} by the following formula, where j ∈ [1, C]:
(4-16) Compute the similarity of the frame features of every pair of video frames in the above initial key frame set K, where i, j ∈ [1, C];
(4-17) Set a similarity threshold δ; traverse every pair of frames in the initial key frame set K obtained in step (4-16), compute the similarity of their frame features, and compare it with the similarity threshold: depending on which condition holds, either delete t_j from the initial key frame set K, delete t_i from K, or retain both t_i and t_j in K. Repeat this step; the resulting set K is the set of video key frames. The range of δ is 0.2~0.3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310456215.XA CN103533237B (en) | 2013-09-29 | 2013-09-29 | A kind of method extracting key frame of video from video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103533237A true CN103533237A (en) | 2014-01-22 |
CN103533237B CN103533237B (en) | 2016-08-17 |
Family
ID=49934874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310456215.XA Active CN103533237B (en) | 2013-09-29 | 2013-09-29 | A kind of method extracting key frame of video from video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103533237B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7469010B2 (en) * | 2001-01-08 | 2008-12-23 | Canon Kabushiki Kaisha | Extracting key frames from a video sequence |
CN101398855A (en) * | 2008-10-24 | 2009-04-01 | 清华大学 | Video key frame extracting method and system |
CN101807198A (en) * | 2010-01-08 | 2010-08-18 | 中国科学院软件研究所 | Video abstraction generating method based on sketch |
Non-Patent Citations (2)
Title |
---|
YANG SHUPING,LIN XINGGANG: "Key Frame Extraction Using Unsupervised Clustering Based On a Statistical Model", 《TSINGHUA SCIENCE AND TECHNOLOGY》, vol. 10, no. 2, 30 April 2005 (2005-04-30), pages 169 - 173, XP011375667, DOI: doi:10.1109/TST.2005.6076014 * |
陆伟艳,夏定元,刘毅: "基于内容的视频检索的关键帧提取", 《微计算机信息》, vol. 23, no. 33, 16 April 2008 (2008-04-16), pages 298 - 300 * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104284240A (en) * | 2014-09-17 | 2015-01-14 | 小米科技有限责任公司 | Video browsing method and device |
WO2016041311A1 (en) * | 2014-09-17 | 2016-03-24 | 小米科技有限责任公司 | Video browsing method and device |
US9799376B2 (en) | 2014-09-17 | 2017-10-24 | Xiaomi Inc. | Method and device for video browsing based on keyframe |
CN104284240B (en) * | 2014-09-17 | 2018-02-02 | 小米科技有限责任公司 | Video browsing approach and device |
CN108140032B (en) * | 2015-10-28 | 2022-03-11 | 英特尔公司 | Apparatus and method for automatic video summarization |
CN108140032A (en) * | 2015-10-28 | 2018-06-08 | 英特尔公司 | Automatic video frequency is summarized |
CN106528586A (en) * | 2016-05-13 | 2017-03-22 | 上海理工大学 | Human behavior video identification method |
CN106534949A (en) * | 2016-11-25 | 2017-03-22 | 济南中维世纪科技有限公司 | Method for prolonging video storage time of video monitoring system |
CN107197162B (en) * | 2017-07-07 | 2020-11-13 | 盯盯拍(深圳)技术股份有限公司 | Shooting method, shooting device, video storage equipment and shooting terminal |
CN107197162A (en) * | 2017-07-07 | 2017-09-22 | 盯盯拍(深圳)技术股份有限公司 | Image pickup method, filming apparatus, video storaging equipment and camera terminal |
CN108364338A (en) * | 2018-02-06 | 2018-08-03 | 阿里巴巴集团控股有限公司 | A kind of processing method of image data, device and electronic equipment |
WO2020052272A1 (en) * | 2018-09-11 | 2020-03-19 | Boe Technology Group Co., Ltd. | Methods and systems for generating picture set from video |
CN109920518A (en) * | 2019-03-08 | 2019-06-21 | 腾讯科技(深圳)有限公司 | Medical image analysis method, apparatus, computer equipment and storage medium |
CN109920518B (en) * | 2019-03-08 | 2021-11-16 | 腾讯科技(深圳)有限公司 | Medical image analysis method, medical image analysis device, computer equipment and storage medium |
US11908188B2 (en) | 2019-03-08 | 2024-02-20 | Tencent Technology (Shenzhen) Company Limited | Image analysis method, microscope video stream processing method, and related apparatus |
CN110448870A (en) * | 2019-08-16 | 2019-11-15 | 深圳特蓝图科技有限公司 | A kind of human body attitude training method |
CN112288838A (en) * | 2020-10-27 | 2021-01-29 | 北京爱奇艺科技有限公司 | Data processing method and device |
CN116939197A (en) * | 2023-09-15 | 2023-10-24 | 海看网络科技(山东)股份有限公司 | Live program head broadcasting and replay content consistency monitoring method based on audio and video |
Also Published As
Publication number | Publication date |
---|---|
CN103533237B (en) | 2016-08-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |