CN109215054A - Face tracking method and system - Google Patents

Face tracking method and system

Info

Publication number
CN109215054A
Authority
CN
China
Prior art keywords
face
tracking
optical flow
frame
face frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710516233.0A
Other languages
Chinese (zh)
Inventor
姜楠
邹风山
杨奇峰
刘晓帆
宋健
潘鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Siasun Robot and Automation Co Ltd
Original Assignee
Shenyang Siasun Robot and Automation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Siasun Robot and Automation Co Ltd filed Critical Shenyang Siasun Robot and Automation Co Ltd
Priority to CN201710516233.0A priority Critical patent/CN109215054A/en
Publication of CN109215054A publication Critical patent/CN109215054A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of computer vision tracking, and specifically discloses a face tracking method comprising the following steps: step 1, spreading points uniformly within the face frame at the initial position and initializing the point set to be tracked, forming the initial frame; step 2, performing optical flow tracking on all points in the face frame using an optical flow method to obtain the new position of each tracked point at the next time instant; step 3, estimating the position of the face frame at the next time instant from the new positions, and at the same time estimating the size of the face frame at the next time instant from the positions of the predicted points, thereby obtaining the change in face scale. The invention has the beneficial effects of making the face tracking process more robust and practical, improving the speed and accuracy of face tracking, while placing low demands on resource configuration and reducing cost.

Description

Face tracking method and system
Technical field
The present invention relates to the technical field of computer vision, and in particular to a face tracking method and system.
Background technique
As an important technique in individual identification, face tracking is receiving increasing attention on account of its intuitiveness, and its applications are becoming ever more widespread.
The face tracking process mainly involves two kinds of technology: face analysis and target tracking. The face region must first be located in the image; features of the face are then extracted from that region, and target tracking is performed. Existing face tracking techniques fall broadly into two categories: tracking based on facial key points and tracking based on the face-region image. The first category extracts key points on the face and performs feature-point tracking on them, thereby tracking the face. After locating the face, this approach must further extract the facial key points before tracking can begin; the extra steps make it slow. The second category, tracking based on the face-region image, essentially treats the face as a unified object category and tracks it as such. Such methods are usually based on online learning, completing the tracking by continuously updating an optimized model online. They are more complex than the former approach, and their drawback is a high demand on computing resources.
In view of this, the present invention proposes a fast face tracking method that can run on low-end embedded boards, suitable for long-term tracking of a specific face in numerous fields such as robotics.
Summary of the invention
The present invention aims to overcome the technical shortcomings of existing face tracking systems, namely slow computation and high demands on computing resources, by providing a face tracking method and system.
To achieve the above object, the invention adopts the following technical solution.
The present invention provides a face tracking method comprising the following steps:
Step 1: spread points uniformly within the face frame at the initial position and initialize the point set to be tracked, forming the initial frame;
Step 2: perform optical flow tracking on all points in the face frame using an optical flow method, obtaining the new position of each tracked point at the next time instant;
Step 3: estimate the position of the face frame at the next time instant from the new positions, and at the same time estimate the size of the face frame at the next time instant from the positions of the predicted points, obtaining the change in face scale.
In some embodiments, in step 2 the optical flow tracking includes forward optical flow and backward optical flow, used to reject wrongly tracked optical flow points.
In some embodiments, step 2 includes: obtaining the point set at the new position by forward optical flow tracking of the point set at the initial position; performing backward optical flow tracking on the point set at the new position; and computing the tracking error of the point set statistically.
In some embodiments, the method further comprises the step of performing a nearest-neighbor comparison: cropping the face frame region of the previous frame and the face frame region of the current frame, comparing their feature similarity, and judging whether the tracked face frame region is correct.
In some embodiments, the nearest-neighbor comparison is performed using the pixel mean, variance and EOH feature.
Correspondingly, the present invention also provides a face tracking system comprising the following modules:
a face frame setup module, for spreading points uniformly within the face frame at the initial position and initializing the point set to be tracked, forming the initial frame;
an optical flow tracking module, for performing optical flow tracking on all points in the face frame using an optical flow method and obtaining the new position of each tracked point at the next time instant;
a face frame estimation module, for estimating the position of the face frame at the next time instant from the new positions, and at the same time estimating its size from the positions of the predicted points, obtaining the change in face scale.
In some embodiments, the optical flow tracking module performs forward optical flow tracking and backward optical flow tracking, used to reject wrongly tracked optical flow points.
In some embodiments, the optical flow tracking module is further used to perform forward optical flow tracking on the point set at the initial position to obtain the point set at the new position, to perform backward optical flow tracking on the point set at the new position, and to compute the tracking error of the point set statistically.
In some embodiments, the face tracking system further includes a nearest-neighbor comparison module, for cropping the face frame region of the previous frame and the face frame region of the current frame, comparing their feature similarity, and judging whether the tracked face frame region is correct.
In some embodiments, the nearest-neighbor comparison module performs the nearest-neighbor comparison using the pixel mean, variance and EOH feature.
The beneficial effects of the present invention are as follows: by improving on the existing optical flow algorithm through the addition of backward optical flow and nearest-neighbor comparison, the face tracking method of the invention makes the tracking process more robust and practical, improves the speed and accuracy of face tracking, and at the same time places low demands on resource configuration, reducing cost.
Brief description of the drawings
Fig. 1 is a flowchart of the face tracking method of the present invention;
Fig. 2 is a schematic diagram of spreading points in the face frame in the face tracking method of the present invention;
Fig. 3 is a schematic diagram of forward and backward optical flow in the face tracking method of the present invention;
Fig. 4 is a schematic diagram of EOH feature encoding in the face tracking method of the present invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative work shall fall within the scope of protection of the invention.
The terms "first", "second", "third", "fourth" and the like (if present) in the description, the claims and the drawings above are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described. Moreover, the terms "comprise" and "have" and any variants of them are intended to cover non-exclusive inclusion: a process, method, system, product or device containing a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product or device.
The algorithm of the present invention is based on the LK (Lucas-Kanade) optical flow method, improved on that basis to make it more robust and practical. The principle of the LK optical flow algorithm is introduced below. The LK algorithm rests mainly on the following three assumptions.
Brightness constancy assumption: the gray value of the same pixel is assumed to remain constant across two consecutive frames. The formula is:
f(x, t) ≡ I(x(t), t) = I(x(t+dt), t+dt)
where f(x, t) denotes the gray value at image coordinate x at time t, and I(x(t), t) is the same quantity in a different notation; dt denotes a very short interval of time; x(t+dt) denotes the new position of the pixel x(t) after the interval dt has elapsed; and I(x(t+dt), t+dt) denotes the gray value of the pixel originally at x(t) after the interval dt.
Temporal persistence assumption: the image acquisition rate is assumed to be high enough, and the time between consecutive frames short enough, that the position of a pixel in the next frame lies close to its position in the previous frame. The corresponding formula is:
I_x u + I_y v + I_t = 0
where I_x denotes the partial derivative of image I with respect to the image x-axis, I_y the partial derivative with respect to the image y-axis, u the velocity along the x-axis, v the velocity along the y-axis, and I_t the derivative of the image with respect to time.
Spatial coherence assumption: the other points in a very small neighborhood around a pixel are assumed to share the motion of the current point. Under this assumption, the single-point constraint can be stacked over the n neighborhood points p1, ..., pn into the linear system A [u, v]^T = b, where the rows of A are (I_x(p_i), I_y(p_i)) and b = -(I_t(p1), ..., I_t(pn))^T. This is the matrix form of the equation for the points p1, ..., pn, and it is solved for (u, v) by least squares.
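The least-squares solution of the stacked system above can be sketched in a few lines of NumPy. The function name and the flattened-window interface are illustrative assumptions, not part of the patent; the derivatives would come from the actual image pair.

```python
import numpy as np

def lk_flow_in_window(Ix, Iy, It):
    """Solve the Lucas-Kanade system for one small neighborhood.

    Ix, Iy, It: spatial and temporal image derivatives sampled at the
    n pixels p1..pn of the window. Stacking Ix*u + Iy*v + It = 0 over
    all pixels gives A @ [u, v] = -It, solved by least squares
    (equivalently [u, v] = (A^T A)^-1 A^T b).
    """
    A = np.stack([np.ravel(Ix), np.ravel(Iy)], axis=1)  # n x 2 matrix
    b = -np.ravel(It)                                   # right-hand side
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

In practice the system is only well conditioned where the window contains sufficient texture, which is precisely why the method below checks its tracks instead of trusting every point.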
The optical flow algorithm above already completes a simple object tracking process; the present invention improves on this basis.
Fig. 1 is a flowchart of the face tracking method of the present invention. The method of the invention is realized through the following steps.
Step 1 is executed: spread points uniformly within the face frame at the initial position and initialize the point set to be tracked, forming the initial frame. Ordinary optical flow tracking extracts corner points from the initial frame in order to obtain good points to track for subsequent robust tracking.
Referring to Fig. 2, the present invention instead proposes to dispense with corner extraction and simply spread points uniformly within the face frame at the initial position, then track these points; doing so effectively reduces the amount of computation. Pixels are selected as initial tracking points at a fixed pixel interval within the initial face rectangle.
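Seeding the initial point set at a fixed pixel interval can be sketched as follows; the function name and the default interval of 5 pixels are illustrative assumptions, not values from the patent.

```python
import numpy as np

def seed_points(face_box, step=5):
    """Spread tracking points uniformly inside a face box (x, y, w, h).

    Pixels are sampled on a regular grid at a fixed pixel interval,
    replacing the corner-extraction stage of ordinary optical flow
    tracking.
    """
    x, y, w, h = face_box
    xs = np.arange(x + step // 2, x + w, step)
    ys = np.arange(y + step // 2, y + h, step)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32)
```

Every seeded point falls strictly inside the box, so the whole set can be handed directly to the optical flow tracker of step 2.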
Step 2 is executed: perform optical flow tracking on all points in the face frame using the optical flow method, obtaining the new position of each tracked point at the next time instant. The optical flow tracking includes forward optical flow and backward optical flow, used to reject wrongly tracked optical flow points.
Optical flow tracking computes the optical flow field according to the three assumptions above. To track a given point, a small neighborhood around it is taken; all pixels in the neighborhood are assumed to move by the same length and direction to their corresponding positions at the new time instant, and, following the brightness constancy assumption, the direction and length that minimize the overall error are chosen as the optical flow.
Referring to Fig. 3, because no particularly stable corner points are extracted at the initial frame, wrong tracks arise easily during optical flow tracking. The invention therefore introduces backward optical flow tracking on top of the original forward optical flow tracking: the original initial point set (the points in the left image) is tracked forward to form the point set in the right image; tracking is then run from the right image back toward the left image, yielding the backward-tracked point set. The error of the initial points (the deviation of the point set after the forward-backward round trip from the initial point set) is then obtained statistically. This error measures the precision of the current track and thereby confirms whether the current tracking is valid.
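The forward-backward check can be sketched as follows. The patent only says the error is obtained statistically, so the median-based rejection rule here is an assumption; in practice the two tracking passes themselves would be done with a pyramidal LK implementation such as OpenCV's `calcOpticalFlowPyrLK`, run once forward and once backward.

```python
import numpy as np

def forward_backward_error(initial_pts, roundtrip_pts):
    """Forward-backward error of an optical flow track.

    initial_pts:   (n, 2) points in frame t.
    roundtrip_pts: (n, 2) the same points after being tracked forward
                   to frame t+1 and then backward to frame t again.
    A reliably tracked point returns close to where it started; points
    whose round-trip error exceeds the median are flagged as wrong.
    """
    err = np.linalg.norm(roundtrip_pts - initial_pts, axis=1)
    keep = err <= np.median(err)
    return err, keep
```

Only the points flagged `keep` are passed on to the face frame estimation of step 3.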
Step 3 is executed: estimate the position of the face frame at the next time instant from the new positions, and at the same time estimate its size from the positions of the predicted points, obtaining the change in face scale. All selected tracking points yield their new positions at the current time instant, and the largest region covered by all the points is the face region.
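One way to realize this step is sketched below: the box translation is taken as the median point displacement, and the scale change as the median ratio of pairwise point distances, as in Median-Flow-style trackers. The patent states the principle (position from the new point positions, size from the predicted points) but not the exact estimator, so this choice is an assumption.

```python
import numpy as np

def update_box(box, pts_old, pts_new):
    """Estimate the next-frame face box from the tracked point motion.

    box: (x, y, w, h); pts_old/pts_new: (n, 2) point sets before and
    after tracking. Translation is the median displacement; the scale
    change is the median ratio of pairwise point distances.
    """
    x, y, w, h = box
    dx = np.median(pts_new[:, 0] - pts_old[:, 0])
    dy = np.median(pts_new[:, 1] - pts_old[:, 1])
    # Ratios of pairwise distances capture the change in face scale.
    d_old = np.linalg.norm(pts_old[:, None] - pts_old[None], axis=2)
    d_new = np.linalg.norm(pts_new[:, None] - pts_new[None], axis=2)
    iu = np.triu_indices(len(pts_old), k=1)
    scale = np.median(d_new[iu] / d_old[iu])
    w2, h2 = w * scale, h * scale
    cx, cy = x + w / 2 + dx, y + h / 2 + dy   # translated box center
    return (cx - w2 / 2, cy - h2 / 2, w2, h2)
```

The medians make the estimate robust to the few points that survive the forward-backward check but are still slightly off.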
Step 4 is executed: perform a nearest-neighbor comparison using the pixel mean, variance and EOH feature; crop the face frame region of the previous frame and the face frame region of the current frame, compare their feature similarity, and judge whether the tracked face frame region is correct.
To increase the robustness of the tracking result, the invention employs a nearest-neighbor comparison in addition to optical flow tracking. The target position predicted by the optical flow algorithm is obtained first; the image region at the previous frame's target position and the region at the current frame's target position are then cropped and compared for similarity, principally using features of the region: the pixel mean, the variance and the EOH feature. Only when the feature similarity is sufficiently high is the tracked region finally confirmed as correct. The EOH feature is sketched below.
The EOH feature describes a local region of the image by building a gradient statistic histogram; a schematic of the feature encoding is shown in Fig. 4.
The EOH feature computes the gradient magnitude and gradient direction of every pixel in the image region, yielding a gradient map; the statistic histogram is then divided into 9 bins by gradient direction, and the gradient magnitudes sharing a direction are accumulated, giving the final statistic histogram. The accumulated gradient value in a given direction is therefore:
bin_k = Σ G'(x, y) over all (x, y) whose direction θ(x, y) falls in bin k,
where bin_k denotes the k-th histogram bin, θ(x, y) the gradient direction at image coordinate (x, y), and G'(x, y) the gradient magnitude at image coordinate (x, y).
The gradient values of all pixels in the local region are then accumulated according to the formula above into a 9-dimensional vector, which, together with the mean and variance of the region's pixels, is used for the nearest-neighbor comparison.
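A minimal sketch of the 9-bin EOH descriptor and the combined mean/variance/EOH comparison follows. The normalization, the similarity measure and the equal weighting of the three cues are illustrative assumptions; the patent does not specify how the cues are combined.

```python
import numpy as np

def eoh_descriptor(patch, bins=9):
    """9-bin edge-orientation histogram of a grayscale patch.

    The gradient magnitude G'(x, y) of every pixel is accumulated into
    the histogram bin of its gradient direction theta(x, y), and the
    resulting 9-dimensional vector is normalized.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # direction in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-9)

def region_similarity(a, b):
    """Compare two face regions by pixel mean, variance and EOH feature."""
    fa = np.concatenate([[a.mean() / 255.0, a.var() / 255.0 ** 2], eoh_descriptor(a)])
    fb = np.concatenate([[b.mean() / 255.0, b.var() / 255.0 ** 2], eoh_descriptor(b)])
    return 1.0 / (1.0 + np.linalg.norm(fa - fb))     # 1.0 for identical regions
```

A track would be confirmed when `region_similarity` between the previous and current face crops exceeds a threshold; the threshold value is left open here, as in the patent.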
The present invention proposes a fast face tracking method suitable for long-term tracking of a specific face in numerous fields such as robotics. It can be applied on a service robot, runs on low-end embedded boards, achieves real-time tracking, and is robust to a degree against changes in illumination and face pose, giving it good practical value. In addition, the algorithm of the invention is a fairly general tracking algorithm and can also track general target objects, provided the tracked object carries some textured pattern; otherwise optical flow tracking is prone to tracking errors. It can therefore also be used in fields such as traffic surveillance, tracking target vehicles or specific intruding persons for further processing.
In addition, the present invention correspondingly discloses a fast face tracking system. Specifically, it comprises the following modules:
a face frame setup module, for spreading points uniformly within the face frame at the initial position and initializing the point set to be tracked, forming the initial frame;
an optical flow tracking module, for performing optical flow tracking on all points in the face frame using the optical flow method and obtaining the new position of each tracked point at the next time instant. The optical flow tracking module performs forward optical flow tracking and backward optical flow tracking, used to reject wrongly tracked optical flow points. It is further used to perform forward optical flow tracking on the point set at the initial position to obtain the point set at the new position, to perform backward optical flow tracking on the point set at the new position, and to compute the tracking error of the point set statistically;
a face frame estimation module, for estimating the position of the face frame at the next time instant from the new positions, and at the same time estimating its size from the positions of the predicted points, obtaining the change in face scale.
The face tracking system further includes a nearest-neighbor comparison module, for cropping the face frame region of the previous frame and the face frame region of the current frame, comparing their feature similarity, and judging whether the tracked face frame region is correct. The nearest-neighbor comparison module performs the nearest-neighbor comparison using the pixel mean, variance and EOH feature.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, devices and units described above may be found in the corresponding processes of the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, devices and methods may be realized in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units: they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual need to achieve the purpose of the embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the embodiments above may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and so on.
The face tracking method and system provided by the present invention have been described in detail above. For those of ordinary skill in the art, changes may be made in the specific implementations and the scope of application in accordance with the ideas of the embodiments of the invention. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. A face tracking method, characterized by comprising the following steps:
Step 1: spreading points uniformly within the face frame at the initial position and initializing the point set to be tracked, forming the initial frame;
Step 2: performing optical flow tracking on all points in the face frame using an optical flow method, obtaining the new position of each tracked point at the next time instant;
Step 3: estimating the position of the face frame at the next time instant from the new positions, and at the same time estimating the size of the face frame at the next time instant from the positions of the predicted points, obtaining the change in face scale.
2. The face tracking method of claim 1, characterized in that in step 2 the optical flow tracking includes forward optical flow and backward optical flow, used to reject wrongly tracked optical flow points.
3. The face tracking method of claim 2, characterized in that step 2 includes: obtaining the point set at the new position by forward optical flow tracking of the point set at the initial position; performing backward optical flow tracking on the point set at the new position; and computing the tracking error of the point set statistically.
4. The face tracking method of claim 1, characterized by further comprising the steps of: performing a nearest-neighbor comparison, cropping the face frame region of the previous frame and the face frame region of the current frame, comparing their feature similarity, and judging whether the tracked face frame region is correct.
5. The face tracking method of claim 4, characterized in that the nearest-neighbor comparison is performed using the pixel mean, variance and EOH feature.
6. A face tracking system, characterized by comprising the following modules:
a face frame setup module, for spreading points uniformly within the face frame at the initial position and initializing the point set to be tracked, forming the initial frame;
an optical flow tracking module, for performing optical flow tracking on all points in the face frame using an optical flow method and obtaining the new position of each tracked point at the next time instant;
a face frame estimation module, for estimating the position of the face frame at the next time instant from the new positions, and at the same time estimating the size of the face frame at the next time instant from the positions of the predicted points, obtaining the change in face scale.
7. The face tracking system of claim 6, characterized in that the optical flow tracking module performs forward optical flow tracking and backward optical flow tracking, used to reject wrongly tracked optical flow points.
8. The face tracking system of claim 7, characterized in that the optical flow tracking module is further used to perform forward optical flow tracking on the point set at the initial position to obtain the point set at the new position, to perform backward optical flow tracking on the point set at the new position, and to compute the tracking error of the point set statistically.
9. The face tracking system of claim 6, characterized in that the face tracking system further includes a nearest-neighbor comparison module, for cropping the face frame region of the previous frame and the face frame region of the current frame, comparing their feature similarity, and judging whether the tracked face frame region is correct.
10. The face tracking system of claim 6, characterized in that the nearest-neighbor comparison module performs the nearest-neighbor comparison using the pixel mean, variance and EOH feature.
CN201710516233.0A 2017-06-29 2017-06-29 Face tracking method and system Pending CN109215054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710516233.0A CN109215054A (en) 2017-06-29 2017-06-29 Face tracking method and system

Publications (1)

Publication Number Publication Date
CN109215054A 2019-01-15

Family

ID=64976521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710516233.0A Pending CN109215054A (en) 2017-06-29 2017-06-29 Face tracking method and system

Country Status (1)

Country Link
CN (1) CN109215054A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514441A (en) * 2013-09-21 2014-01-15 南京信息工程大学 Facial feature point locating tracking method based on mobile platform
CN105469056A (en) * 2015-11-26 2016-04-06 小米科技有限责任公司 Face image processing method and device
CN106296742A (en) * 2016-08-19 2017-01-04 华侨大学 A kind of online method for tracking target of combination Feature Points Matching

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KOBI LEVI ET AL: "Learning Object Detection from a Small Number of Examples: the Importance of Good Features", CVPR '04 *
周千昊 et al.: "Pedestrian detection based on improved EOH features", Journal of Image and Graphics (中国图象图形学报) *
李泽泉: "Research and implementation of a face tracking algorithm based on the TLD framework", Wanfang Database (万方数据库) *
许高凤 et al.: "A joint algorithm for precise eye localization", Journal of Chinese Computer Systems (小型微型计算机系统) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079670A (en) * 2019-12-20 2020-04-28 北京百度网讯科技有限公司 Face recognition method, face recognition device, face recognition terminal and face recognition medium
CN111079670B (en) * 2019-12-20 2023-11-03 北京百度网讯科技有限公司 Face recognition method, device, terminal and medium
WO2023088074A1 (en) * 2021-11-18 2023-05-25 北京眼神智能科技有限公司 Face tracking method and apparatus, and storage medium and device
CN115174861A (en) * 2022-07-07 2022-10-11 广州后为科技有限公司 Method and device for automatically tracking moving target by pan-tilt camera
CN115174861B (en) * 2022-07-07 2023-09-22 广州后为科技有限公司 Method and device for automatically tracking moving target by holder camera

Similar Documents

Publication Publication Date Title
Amato et al. Deep learning for decentralized parking lot occupancy detection
Feng et al. Local background enclosure for RGB-D salient object detection
Roa'a et al. Generation of high dynamic range for enhancing the panorama environment
CN106815842B (en) improved super-pixel-based image saliency detection method
JP2008538832A (en) Estimating 3D road layout from video sequences by tracking pedestrians
CN105809716B (en) Foreground extraction method integrating superpixel and three-dimensional self-organizing background subtraction method
US8538079B2 (en) Apparatus capable of detecting location of object contained in image data and detection method thereof
CN107341815B (en) Violent motion detection method based on multi-view stereoscopic vision scene stream
US9406140B2 (en) Method and apparatus for generating depth information
CN110910421A (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN109215054A (en) Face tracking method and system
CN103714556A (en) Moving target tracking method based on pyramid appearance model
CN110599522A (en) Method for detecting and removing dynamic target in video sequence
TW201436552A (en) Method and apparatus for increasing frame rate of an image stream using at least one higher frame rate image stream
Cuevas et al. Moving object detection for real-time augmented reality applications in a GPGPU
CN114072839A (en) Hierarchical motion representation and extraction in monocular still camera video
CN110909617B (en) Living body face detection method and device based on binocular vision
CN109299702B (en) Human behavior recognition method and system based on depth space-time diagram
CN104978558B (en) The recognition methods of target and device
KR20110112143A (en) A method for transforming 2d video to 3d video by using ldi method
CN107274477B (en) Background modeling method based on three-dimensional space surface layer
CN104517292A (en) Multi-camera high-density crowd partitioning method based on planar homography matrix restraint
CN110322479B (en) Dual-core KCF target tracking method based on space-time significance
Yang et al. Contrast limited adaptive histogram equalization for an advanced stereo visual slam system
Xu et al. Moving target detection and tracking in FLIR image sequences based on thermal target modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190115