CN110569785A - Face recognition method based on fusion tracking technology - Google Patents

Face recognition method based on fusion tracking technology

Info

Publication number
CN110569785A
Authority
CN
China
Prior art keywords
face
tracking
frame
result
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910839847.1A
Other languages
Chinese (zh)
Other versions
CN110569785B (en)
Inventor
张智
李思远
於耀耀
刘子瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhiai Time Technology Co ltd
Original Assignee
Hangzhou Lichen Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Lichen Technology Co Ltd
Priority to CN201910839847.1A
Publication of CN110569785A
Application granted
Publication of CN110569785B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a face recognition method and device integrating tracking technology, relating to the technical field of face recognition. The method comprises the following steps. Face capture: run a recognition algorithm on each frame to capture faces and obtain face positions and features, where the position of a face captured in the current frame is used as the initial value for tracking the target face in the next frame. Face comparison: compare the current frame's face tracking result with the current frame's face capture result by computing Euclidean distances over the face positions and features, and judge whether they are the same person. Face tracking: track the captured faces by running a target tracking algorithm. The invention solves the problem of the same person being recognized as different persons, and different persons as the same person, during face feature value comparison, and reduces the influence of face angle and occlusion on face recognition.

Description

Face recognition method based on fusion tracking technology
Technical Field
The invention relates to a face recognition technology, in particular to a face recognition method combining a recognition algorithm and a tracking algorithm.
Background
With the development of artificial intelligence, biometric authentication technology is advancing rapidly, and face recognition has become an important authentication mode.
Most existing face recognition systems recognize faces in static images, mainly using the Adaboost algorithm for training and discrimination. Adaboost, however, is easily disturbed by noise and takes a long time to train. The most prominent traditional face recognition algorithms are based on geometric features, global face feature extraction, and support vector machines (SVM), but their recognition accuracy is not high. Face recognition algorithms based on convolutional neural networks (CNN) improve feature extraction and outperform traditional algorithms in recognition accuracy. However, current face recognition and tracking systems usually run face detection and face tracking independently; human movement and the varying angles at which faces are captured in a video stream then make face feature value comparison inaccurate, so that the same person is recognized as different persons, and different persons as the same person, during face comparison.
Patent application No. 201811178561.5 discloses a face-recognition-based video locating method and device that can quickly and accurately identify the frame images of a video in which a target person's face appears.
The present application was made based on this.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a face recognition method with fused tracking technology, which improves the accuracy of face tracking and recognition when multiple persons are present.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows:
A face recognition method based on fusion tracking technology comprises
Face capture: running a recognition algorithm on each frame to capture faces and obtain face positions and features, where the position of the face captured in the current frame is used as the initial value for tracking the target face in the next frame;
Face comparison: comparing the current frame's face tracking result with the current frame's face capture result by computing the Euclidean distance over face positions and features, and judging whether they are the same person; if the comparison fails or no face is captured in the current frame, using the current frame's tracking result as the initial tracking value for the next frame and continuing tracking;
Face tracking: tracking the captured faces by running a target tracking algorithm.
As a preferred scheme, the face comparison step includes:
Calculating the offset between the face positions of the tracking result and the recognition result, and, if the offset is smaller than a given threshold, keeping the recognition position in an adjacent coordinate set;
Acquiring the recognized face features in the adjacent coordinate set, and matching the face features of the tracking target and the captured target.
As a preferred scheme, in the step of calculating the offset between the face positions of the tracking result and the recognition result, comparing it with a given threshold (determined through extensive experiments) and, if the offset is smaller than the threshold, keeping the recognition position in the adjacent coordinate set, the coordinate vectors of the target positions output by the tracking algorithm and the recognition algorithm are expressed as 4-dimensional vectors of the form (top, left, right, bottom); calculating the offset means computing their Euclidean distance in this 4-dimensional vector space.
As a preferred scheme, in the step of acquiring the recognized face features in the adjacent coordinate set and matching the face features of the tracking target and the captured target, the Euclidean distance between the feature value vector of the target box in the tracking algorithm's output and that in the recognition algorithm's output is calculated and compared with a given threshold, determined through extensive experiments; if the Euclidean distance is smaller than this experimentally validated threshold, the recognized face and the originally captured face are determined to be the same person.
As a preferred scheme, in the face comparison step, the successfully compared face capture result is used as the initial value for tracking in the next frame, and if the comparison fails or no face is captured in the current frame, the current frame's tracking result is used as the initial value for tracking in the next frame. Specifically: if the positions and feature values of the tracking result and a capture result both match, the current state of the target person is updated with the result captured by the recognition algorithm; if a tracking target and a capture target match in position but not in features, the tracking target is considered occluded and tracking continues, with the next frame's tracking initialized from the position tracked in the current frame, until the recognition algorithm captures the face again, after which the next frame's tracking is again initialized from the current frame's capture position; if a tracking result matches none of the capture positions, that person has gone offline and is no longer tracked; the remaining unmatched capture results are added to the database as new persons, or as old targets coming back online.
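As a minimal, non-authoritative sketch, these comparison and update rules could be organized as follows; the dict layout, the helper names, and the position threshold value are assumptions (the embodiment described below only fixes the feature threshold at 0.6):

```python
import numpy as np

POS_THRESHOLD = 50.0   # epsilon: position offset threshold (illustrative value; set experimentally)
FEAT_THRESHOLD = 0.6   # eta: feature distance threshold (value used in the embodiment below)

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors (Equations 1 and 2)."""
    return np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))

def update_tracks(tracked, captures):
    """One face-comparison step between the current frame's tracking and capture results.

    tracked  : list of {'id', 'box', 'feat'} dicts from the tracking algorithm
    captures : list of {'box', 'feat'} dicts from the recognition algorithm
    Returns (next_init, unmatched): the tracking initial values for the next frame,
    and leftover captures to add to the database as new (or returning) persons.
    """
    unmatched = list(captures)
    next_init = []
    for t in tracked:
        # captures within the position threshold form the adjacent coordinate set
        adjacent = [c for c in unmatched if euclidean(t['box'], c['box']) < POS_THRESHOLD]
        hit = next((c for c in adjacent
                    if euclidean(t['feat'], c['feat']) < FEAT_THRESHOLD), None)
        if hit is not None:
            # positions and features both match: update the state from the capture result
            unmatched.remove(hit)
            next_init.append({'id': t['id'], 'box': hit['box'], 'feat': hit['feat']})
        elif adjacent or not captures:
            # occluded (position matches, features do not) or nothing captured this frame:
            # keep tracking from the current frame's tracking result
            next_init.append(t)
        # otherwise no capture position matches at all: the person is offline, drop the track
    return next_init, unmatched
```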
As a preferable scheme, the face tracking step includes:
Initializing a tracker, using the recognized position of the previous frame as the initial value for face tracking in the next frame, and using the previous frame's face number as the face number for the next frame's face tracking; if the face successfully captured in the previous frame can no longer be captured, obtaining the initial value and face number for the next frame from the face result tracked in the current frame;
Running a tracking algorithm to obtain the current tracked face position and extracting the tracked target's face features.
As a preferred scheme, in the face tracking step, starting from the 1st frame in which a face is captured, the face set tracked in the 1st frame is an empty set; the faces captured in the 1st frame are compared with this empty set, so the 1st frame's face capture result is directly used as the initial value of the next frame's tracking algorithm, and actual face comparison begins from the 2nd frame; as long as the real-time or recorded video stream has not ended, the face capture, face comparison, and face tracking steps are repeated.
The working principle of the invention is as follows: the method recognizes faces by combining a target tracking algorithm with a face recognition algorithm. In each frame, the face recognition algorithm captures faces and numbers them, and the same face keeps the same number in subsequent frames. The position of a face successfully captured in the previous frame becomes the initial value of the next frame's tracking algorithm, and the tracking result is reflected in the next frame. In each frame, the feature values of the currently captured faces and the tracked faces are compared; a successfully compared capture result updates the current tracking result and serves as the initial value for the next frame's tracking. If no face was successfully captured in the previous frame, the current face tracking value is used as the initial value for the next frame's tracking. When a target is occluded, it continues to be tracked using the position tracked in the current frame; once the target face reappears and is captured, the next frame's tracking algorithm is again initialized with the captured position.
The invention can realize the following technical effects:
(1) The invention exploits the spatial and temporal correlation of face tracking across two consecutive video frames: the position of a face captured in the current frame is used as the initial value for tracking that face in the next frame, and face detection combined with Euclidean distances over feature values and positions is used to compare faces. This neatly converts face comparison across different frames into face comparison within the same frame and simplifies target tracking to tracking between just two frames. In scenes with faces at different angles, look-alike persons, occlusion, or external interference, the face tracking technology compensates for the inaccuracy of face recognition, reducing the misjudgment rate of face recognition while markedly improving the accuracy of face tracking and effectively solving the problem of inaccurate tracking during multi-frame continuous tracking.
(2) The invention solves the problem of the same person being recognized as different persons, and different persons as the same person, during face feature value comparison, and reduces the influence of face angle (frontal face, profile, head lowered, head raised) and occlusion on face recognition.
(3) By capturing faces in every frame and using the capture result as the initial value for the next frame's tracking, the invention improves target tracking under high-speed motion, complex backgrounds, and occlusion, and greatly reduces the tracking loss rate even under full occlusion.
(4) Through target tracking and the spatial and temporal correlation of two consecutive frames, the method converts the comparison of face feature values across different frames into a comparison within the same frame, which greatly improves the accuracy of face recognition.
Drawings
Fig. 1 is a flowchart of the basic steps of the face recognition method incorporating tracking technology according to this embodiment;
Fig. 2 is a flowchart of the algorithm main loop of the face recognition method incorporating tracking technology according to this embodiment;
Fig. 3 is a schematic diagram of the Euclidean distance comparison between the capture result and the tracking result in the face recognition method based on fusion tracking technology according to this embodiment;
Fig. 4 is a schematic diagram of the first three frames of the face recognition method based on fusion tracking technology according to this embodiment.
Detailed Description
In order to disclose more clearly and completely the technical means of the present invention and the technical effects they achieve, the following embodiments are provided and described in detail with reference to the accompanying drawings:
As shown in fig. 1 and fig. 2, a face recognition method based on a fusion tracking technology in this embodiment includes the following steps:
Step S1, face capture: run a recognition algorithm on each frame to capture faces; the position of the face captured in the current frame is used as the initial value for tracking the target face in the next frame.
Input a video stream frame sequence F = {F₁, F₂, …, Fₘ}. For the n-th frame Fₙ (1 < n < m), call a face recognition algorithm to obtain the face positions and face features in Fₙ, add them to the recognition position set, and number each face.
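Since the embodiment uses the open-source face_recognition library (see the threshold discussion in step S22), the capture step might look like the following sketch; the returned dict layout is an assumption carried over from the comparison sketch above:

```python
import face_recognition

def capture_faces(frame):
    """Run the recognition algorithm on one frame F_n: face positions plus features.

    face_recognition expects an RGB image and reports boxes as
    (top, right, bottom, left); they are re-ordered here into the patent's
    (top, left, right, bottom) vector form.
    """
    boxes = face_recognition.face_locations(frame)          # detected face positions
    feats = face_recognition.face_encodings(frame, boxes)   # 128-d feature vectors
    return [{'box': (t, l, r, b), 'feat': f}
            for (t, r, b, l), f in zip(boxes, feats)]
```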
Step S2, face comparison: compare the current frame's face tracking result with the current frame's face capture result by computing the Euclidean distance over face positions and features, and judge whether they are the same person. If the comparison fails or no face is captured in the current frame, use the current frame's tracking result as the initial tracking value for the next frame and continue tracking.
Step S21: calculate the offset between the face positions of the tracking result and the recognition result, and if the offset is smaller than a given threshold, keep the recognition position in the adjacent coordinate set. The coordinate vectors of the target positions output by the tracking algorithm and the recognition algorithm are expressed in the form (top, left, right, bottom), and calculating the offset means computing the Euclidean distance in this 4-dimensional vector space, as shown in Equation 1:
Equation 1: d = √[(tx₁ − tx₂)² + (ly₁ − ly₂)² + (ry₁ − ry₂)² + (bx₁ − bx₂)²]
where (tx₁, ly₁, ry₁, bx₁) is the coordinate vector of the target position in the current frame's tracking result, (tx₂, ly₂, ry₂, bx₂) is the coordinate vector of the target position in the recognition algorithm's capture result, and d is the Euclidean distance between the two, as illustrated in Fig. 3.
Step S22: acquire the recognized face features in the adjacent coordinate set, and match the face features of the tracking target and the captured target. The feature value vector of the target box in the tracking algorithm's output is expressed as (x₁, x₂, …, xₙ), and the feature value vector of the target box in the recognition algorithm's output is expressed as (y₁, y₂, …, yₙ). The Euclidean distance f is computed in the feature space, and if f is smaller than a given threshold η, the recognized face and the originally captured face are determined to be the same person.
Equation 2: f = √[(x₁ − y₁)² + (x₂ − y₂)² + … + (xₙ − yₙ)²]
The sizes of the thresholds ε and η in step S2 must be determined through extensive experiments for each algorithm. This embodiment uses the open-source face_recognition algorithm, computes the Euclidean distances with the np.linalg.norm function, and sets the thresholds ε and η to 0.6.
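Both equations reduce to np.linalg.norm calls, as in this small illustration (the coordinate values, the random stand-in feature vectors, and the position threshold are made up for the example):

```python
import numpy as np

# Equation 1: position offset between a tracked box and a captured box
track_box = np.array([120.0, 80.0, 200.0, 260.0])  # (top, left, right, bottom), tracking result
cap_box   = np.array([118.0, 84.0, 205.0, 263.0])  # (top, left, right, bottom), capture result
d = np.linalg.norm(track_box - cap_box)
print(d < 50.0)   # epsilon (illustrative value): True keeps the capture in the adjacent set

# Equation 2: feature distance between the tracked and captured face encodings
track_feat = np.random.rand(128)                   # stand-ins for real 128-d encodings
cap_feat   = np.random.rand(128)
f = np.linalg.norm(track_feat - cap_feat)
print(f < 0.6)    # eta = 0.6 in this embodiment: True means the same person
```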
Step S3, face tracking: track the captured faces by running a target tracking algorithm.
Step S31: initialize the tracker. The recognized position in the (n−1)-th frame Fₙ₋₁ is used as the initial value for face tracking in the n-th frame, and the face number in frame Fₙ₋₁ is used as the face number for face tracking in the n-th frame; if the face successfully captured in the previous frame can no longer be captured, the initial value and face number for the next frame are obtained from the face result tracked in the current frame.
Step S32: run the tracking algorithm to obtain the current tracked face position and extract the tracked target's face features.
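The embodiment tracks with the ECO correlation-filter tracker (introduced with Fig. 4 below); as ECO has no single canonical Python API, this sketch substitutes OpenCV's CSRT tracker from opencv-contrib-python, which follows the same init/update pattern:

```python
import cv2

class FaceTracker:
    """Per-face tracker: initialized from a capture box, updated once per frame."""

    def __init__(self, frame, box, face_id):
        top, left, right, bottom = box             # the patent's (top, left, right, bottom) form
        self.face_id = face_id                     # number carried over from the capture step
        self.tracker = cv2.TrackerCSRT_create()    # stand-in for the ECO tracker
        self.tracker.init(frame, (left, top, right - left, bottom - top))  # (x, y, w, h)

    def update(self, frame):
        """Return the tracked box as (top, left, right, bottom), or None if tracking failed."""
        ok, rect = self.tracker.update(frame)
        if not ok:
            return None
        x, y, w, h = rect
        return (y, x, x + w, y + h)
```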
In the above steps, starting from the 1st frame in which a face is captured, the face set tracked in the 1st frame is an empty set, and the faces captured in the 1st frame are compared with this empty set; that is, the 1st frame's face capture result is directly used as the initial value of the next frame's tracking algorithm, and actual face comparison begins from the 2nd frame. As long as the real-time or recorded video stream has not ended, steps S1 to S3 are repeated.
Fig. 2 shows the main loop of the implementation. After the video frame sequence is obtained, the database is connected and globally initialized. While a readable video frame sequence exists, the recognition algorithm is run to capture faces and obtain face feature values and capture positions. If the current frame is the 1st frame, the tracker is initialized with the current face capture positions, the tracking algorithm is run to track the current faces, and face capture continues as long as the video has not ended. If the current frame is not the 1st frame, the current frame has both a face capture result and a face tracking result, and the Euclidean distances of Equation 1 and Equation 2 are computed to compare the positions and feature values of the two results. If a tracking result matches none of the capture positions, that person is offline and its tracking ends; if the positions and feature values of a tracking result and a capture result both match, the current state of the target person is updated with the result captured by the recognition algorithm; if a tracking target and a capture target match in position but not in features, the tracking target is considered occluded, and the next frame's tracking is initialized with the position tracked in the current frame until the recognition algorithm captures the face again, after which the next frame's tracking is again initialized with the current frame's capture position. The remaining unmatched capture results are added to the database as newly appeared targets, or as old targets coming back online.
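Pulling the sketches together, the main loop might read as follows; capture_faces, update_tracks, and FaceTracker are the hypothetical helpers sketched above, and carrying each face's features forward between captures is a simplification of step S32:

```python
import cv2

def main_loop(video_path):
    cap = cv2.VideoCapture(video_path)
    states, trackers, next_id = [], {}, 1      # per-face state, tracker objects, face numbering
    while True:
        ok, frame = cap.read()
        if not ok:
            break                               # real-time or recorded video stream finished
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # face_recognition expects RGB input
        captures = capture_faces(rgb)           # step S1: run the recognition algorithm
        # step S3 for the current frame: advance every tracker
        tracked = []
        for s in states:
            box = trackers[s['id']].update(frame)
            if box is not None:
                tracked.append({'id': s['id'], 'box': box, 'feat': s['feat']})
        # step S2: compare tracking and capture results; in frame 1 the tracked set is empty,
        # so every capture falls through as a new face and directly seeds the next frame
        states, new_faces = update_tracks(tracked, captures)
        for c in new_faces:                     # new person, or an old target back online
            states.append({'id': next_id, 'box': c['box'], 'feat': c['feat']})
            next_id += 1
        # the chosen positions become the tracking initial values for the next frame
        trackers = {s['id']: FaceTracker(frame, s['box'], s['id']) for s in states}
    cap.release()
```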
Fig. 4 shows a schematic diagram of the first three frames of the implementation, illustrating that the method of the invention can track and recognize multiple targets. In this embodiment, the open-source face_recognition algorithm is used to recognize faces, and the ECO (Efficient Convolution Operators) target tracking algorithm, based on correlation filters, is used to track them; the invention is equally applicable to other face recognition and target tracking algorithms. In the figure, a thick solid box represents a tracking result and a thin solid box represents a face captured by the recognition algorithm. In frame 1, faces are captured by the recognition algorithm, the tracking set is an empty set, and the captured face results are used as the initial values for the next frame's tracking. In frame 2, faces are captured by the recognition algorithm to obtain the current face capture result (thin solid boxes), while the current tracking result, tracked from the previous frame's capture result, is shown as thick solid boxes; the current capture result is compared with the current tracking result, and if the comparison succeeds, the current capture result updates the current tracking result and serves as the initial value for the next frame's tracking, whereas if the comparison fails or no face is captured in the current frame, the current frame's tracking result is used as the initial value for the next frame's tracking. In frame 3, target 1 (ID 1) is occluded by target 2 (ID 2): for target 2, the next frame's tracking algorithm is initialized with the current frame's capture result; for target 1, the next frame's tracking is initialized with the position tracked in the current frame until the recognition algorithm captures the face again, after which the next frame's tracking is again initialized with the capture result.
The above description further elaborates the technical solutions in connection with preferred embodiments of the invention, and the specific implementation of the invention should not be considered limited to this description. Those skilled in the art may make various simple deductions or substitutions without departing from the spirit of the invention, and all such alternatives fall within the scope of the invention.

Claims (7)

1. A face recognition method with fusion tracking technology, characterized in that it comprises
Face capture: running a recognition algorithm on each frame to capture faces and obtain face positions and features, where the position of the face captured in the current frame is used as the initial value for tracking the target face in the next frame;
Face comparison: comparing the current frame's face tracking result with the current frame's face capture result by computing the Euclidean distance over face positions and features, and judging whether they are the same person; if the comparison fails or no face is captured in the current frame, using the current frame's tracking result as the initial tracking value for the next frame and continuing tracking;
Face tracking: tracking the captured faces by running a target tracking algorithm.
2. The face recognition method based on the fusion tracking technology as claimed in claim 1, characterized in that: the face comparison step comprises:
Calculating the offset between the face positions of the tracking result and the recognition result, and, if the offset is smaller than a given threshold, keeping the recognition position in an adjacent coordinate set;
Acquiring the recognized face features in the adjacent coordinate set, and matching the face features of the tracking target and the captured target.
3. The face recognition method based on the fusion tracking technology as claimed in claim 2, characterized in that: in the step of calculating the offset between the face positions of the tracking result and the recognition result, comparing it with a given threshold and, if the offset is smaller than the threshold, keeping the recognition position in the adjacent coordinate set, the coordinate vectors of the target positions output by the tracking algorithm and the recognition algorithm are expressed as 4-dimensional vectors of the form (top, left, right, bottom); calculating the offset means computing their Euclidean distance in this 4-dimensional vector space.
4. The face recognition method based on the fusion tracking technology as claimed in claim 2, characterized in that: in the step of acquiring the recognized face features in the adjacent coordinate set and matching the face features of the tracking target and the captured target, the Euclidean distance between the feature value vector of the target box in the tracking algorithm's output and the feature value vector of the target box in the recognition algorithm's output is calculated and compared with a given threshold; if the Euclidean distance is smaller than the threshold, the recognized face and the originally captured face are determined to be the same person.
5. The face recognition method based on the fusion tracking technology as claimed in claim 1, characterized in that: in the face comparison step, the successfully compared face capture result is used as the initial value for tracking in the next frame, and if the comparison fails or no face is captured in the current frame, the current frame's tracking result is used as the initial value for tracking in the next frame, specifically: if the positions and feature values of the tracking result and a capture result both match, the current state of the target person is updated with the result captured by the recognition algorithm; if a tracking target and a capture target match in position but not in features, the tracking target is considered occluded and tracking continues, with the next frame's tracking initialized from the position tracked in the current frame, until the recognition algorithm captures the face again, after which the next frame's tracking is again initialized from the current frame's capture position; if a tracking result matches none of the capture positions, that person has gone offline and is no longer tracked; the remaining unmatched capture results are added to the database as new persons, or as old targets coming back online.
6. The face recognition method based on the fusion tracking technology as claimed in claim 1, characterized in that: the face tracking step comprises the following steps:
Initializing a tracker, using the recognized position of the previous frame as the initial value for face tracking in the next frame, and using the previous frame's face number as the face number for the next frame's face tracking; if the face successfully captured in the previous frame can no longer be captured, obtaining the initial value and face number for the next frame from the face result tracked in the current frame;
Running a tracking algorithm to obtain the current tracked face position and extracting the tracked target's face features.
7. The face recognition method based on the fusion tracking technology as claimed in claim 1, characterized in that: in the face tracking step, starting from the 1st frame in which a face is captured, the face set tracked in the 1st frame is an empty set; the faces captured in the 1st frame are compared with this empty set, so the 1st frame's face capture result is directly used as the initial value of the next frame's tracking algorithm, and actual face comparison begins from the 2nd frame; as long as the real-time or recorded video stream has not ended, the face capture, face comparison, and face tracking steps are repeated.
CN201910839847.1A 2019-09-05 2019-09-05 Face recognition method integrating tracking technology Active CN110569785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910839847.1A CN110569785B (en) 2019-09-05 2019-09-05 Face recognition method integrating tracking technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910839847.1A CN110569785B (en) 2019-09-05 2019-09-05 Face recognition method integrating tracking technology

Publications (2)

Publication Number Publication Date
CN110569785A true CN110569785A (en) 2019-12-13
CN110569785B CN110569785B (en) 2023-07-11

Family

ID=68777948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910839847.1A Active CN110569785B (en) 2019-09-05 2019-09-05 Face recognition method integrating tracking technology

Country Status (1)

Country Link
CN (1) CN110569785B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460884A (en) * 2020-02-09 2020-07-28 天津博宜特科技有限公司 Multi-face recognition method based on human body tracking
CN112232257A (en) * 2020-10-26 2021-01-15 青岛海信网络科技股份有限公司 Traffic anomaly determination method, device, equipment and medium
CN113255608A (en) * 2021-07-01 2021-08-13 杭州智爱时刻科技有限公司 Multi-camera face recognition positioning method based on CNN classification
CN114241586A (en) * 2022-02-21 2022-03-25 飞狐信息技术(天津)有限公司 Face detection method and device, storage medium and electronic equipment
CN115451962A (en) * 2022-08-09 2022-12-09 中国人民解放军63629部队 Target tracking strategy planning method based on five-variable Karnaugh map
WO2023088074A1 (en) * 2021-11-18 2023-05-25 北京眼神智能科技有限公司 Face tracking method and apparatus, and storage medium and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 Method and apparatus for video object tracking
CN107122751A (en) * 2017-05-03 2017-09-01 电子科技大学 Face tracking and face image capture method based on face alignment
CN107516303A (en) * 2017-09-01 2017-12-26 成都通甲优博科技有限责任公司 Multi-object tracking method and system
CN109063593A (en) * 2018-07-13 2018-12-21 北京智芯原动科技有限公司 Face tracking method and device
CN109190444A (en) * 2018-07-02 2019-01-11 南京大学 Implementation method of a video-based toll lane vehicle feature recognition system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 Method and apparatus for video object tracking
CN107122751A (en) * 2017-05-03 2017-09-01 电子科技大学 Face tracking and face image capture method based on face alignment
CN107516303A (en) * 2017-09-01 2017-12-26 成都通甲优博科技有限责任公司 Multi-object tracking method and system
CN109190444A (en) * 2018-07-02 2019-01-11 南京大学 Implementation method of a video-based toll lane vehicle feature recognition system
CN109063593A (en) * 2018-07-13 2018-12-21 北京智芯原动科技有限公司 Face tracking method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460884A (en) * 2020-02-09 2020-07-28 天津博宜特科技有限公司 Multi-face recognition method based on human body tracking
CN112232257A (en) * 2020-10-26 2021-01-15 青岛海信网络科技股份有限公司 Traffic anomaly determination method, device, equipment and medium
CN112232257B (en) * 2020-10-26 2023-08-11 青岛海信网络科技股份有限公司 Traffic anomaly determination method, device, equipment and medium
CN113255608A (en) * 2021-07-01 2021-08-13 杭州智爱时刻科技有限公司 Multi-camera face recognition positioning method based on CNN classification
WO2023088074A1 (en) * 2021-11-18 2023-05-25 北京眼神智能科技有限公司 Face tracking method and apparatus, and storage medium and device
CN114241586A (en) * 2022-02-21 2022-03-25 飞狐信息技术(天津)有限公司 Face detection method and device, storage medium and electronic equipment
CN114241586B (en) * 2022-02-21 2022-05-27 飞狐信息技术(天津)有限公司 Face detection method and device, storage medium and electronic equipment
CN115451962A (en) * 2022-08-09 2022-12-09 中国人民解放军63629部队 Target tracking strategy planning method based on five-variable Karnaugh map
CN115451962B (en) * 2022-08-09 2024-04-30 中国人民解放军63629部队 Target tracking strategy planning method based on five-variable Karnaugh map

Also Published As

Publication number Publication date
CN110569785B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN110569785B (en) Face recognition method integrating tracking technology
Xu et al. Eventcap: Monocular 3d capture of high-speed human motions using an event camera
CN105023278B Moving target tracking method and system based on optical flow
CN103761514B System and method for realizing face recognition based on a wide-angle box camera and multiple dome cameras
CN104217428B Video surveillance multi-object tracking method fusing feature matching and data association
CN101950426A Vehicle relay tracking method in multi-camera scene
CN104883548B Surveillance video face capture processing method and system
CN111798486B (en) Multi-view human motion capture method based on human motion prediction
CN110555867B (en) Multi-target object tracking method integrating object capturing and identifying technology
Harville Stereo person tracking with short and long term plan-view appearance models of shape and color
US11494922B2 (en) Object tracking device, object tracking method, and object tracking program
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
Jean et al. Body tracking in human walk from monocular video sequences
Wang et al. Face tracking using motion-guided dynamic template matching
WO2022134916A1 (en) Identity feature generation method and device, and storage medium
CN107730535A Visible-light and infrared cascaded video tracking method
Wu et al. Robust object tracking with bidirectional corner matching and trajectory smoothness algorithm
Colombari et al. Video objects segmentation by robust background modeling
CN111368608B (en) Face recognition method, device and system
Ben Shitrit et al. Tracking multiple players using a single camera
JP2017151582A (en) Image analyzer, program, and method for tracking person shown in photographed camera image
Mademlis et al. Stereoscopic video description for human action recognition
Canton-Ferrer et al. Particle filtering and sparse sampling for multi-person 3D tracking
Badal et al. Multi-object trajectory coupling using online target specific decision making
Raskin et al. Using gaussian processes for human tracking and action classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201125

Address after: Room 1007, building 3, Fengyuan international building, 430 Fengtan Road, Gongshu District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Zhiai time Technology Co.,Ltd.

Address before: 311300 room 413, building 2, No. 168, Qianwu Road, Qingshanhu street, Lin'an District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU LICHEN TECHNOLOGY Co.,Ltd.

GR01 Patent grant