CN109508575A - Face tracking method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN109508575A
CN109508575A (application CN201710826957.5A)
Authority
CN
China
Prior art keywords
image
human face
characteristic point
face characteristic
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710826957.5A
Other languages
Chinese (zh)
Inventor
唐春益
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SuperD Co Ltd
Shenzhen Super Technology Co Ltd
Original Assignee
Shenzhen Super Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Super Technology Co Ltd filed Critical Shenzhen Super Technology Co Ltd
Priority to CN201710826957.5A
Publication of CN109508575A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention relate to the technical field of image processing, and disclose a face tracking method and device, electronic equipment, and a computer readable storage medium. The face tracking method comprises: determining positions of facial feature points in a first image; determining predicted positions of the facial feature points in a second image based on their positions in the first image, wherein the first image and the second image are consecutive frames in a video image sequence and the second image is the frame following the first image; determining a face region in the second image according to the predicted positions of the facial feature points in the second image; and determining positions of the facial feature points in the second image based on the face region in the second image. With the present invention, facial feature points can still be tracked accurately when the face undergoes a large displacement between two consecutive frames of the video image, and the real-time performance of facial feature point tracking is effectively ensured.

Description

Face tracking method and device, electronic equipment and computer readable storage medium
Technical field
Embodiments of the present invention relate to the technical field of image processing, and in particular to a face tracking method and device, electronic equipment, and a computer readable storage medium.
Background art
Facial feature point tracking is a fundamental and important component of face analysis tasks, and plays an extremely important role in many applications such as face attribute inference, face authentication, face recognition, and continuous face tracking in video. At present, providing the positions of facial feature points in real time is an important measure of the performance of a facial feature point tracking system in continuous video tracking.
The tracking flow of an existing facial feature point tracking system is roughly as follows: (1) a face detector locates the approximate region containing the face in a certain frame of a sequence of consecutive frames; (2) the positions of the facial feature points in that frame are precisely located within the approximate face region; (3) the approximate face region in that frame is enlarged, the enlarged region is taken as the region containing the face in the next frame, and precise facial feature point localization is performed within it. Steps (2) and (3) are repeated in a loop to track the face across consecutive frames.
The inventor has found at least the following problems in the prior art: in the existing tracking flow, the feature points of the next frame are located within a region obtained by enlarging the previously tracked face region by a fixed amount. When the displacement between the two frames is small, the face can be located accurately in this way. However, if the displacement of the face region between the two frames is large, for example when the face moves abruptly or the face image acquisition terminal moves violently, tracking becomes inaccurate or fails altogether. How to track facial feature points accurately is therefore a concern for those skilled in the art.
Summary of the invention
Embodiments of the present invention aim to provide a face tracking method and device, electronic equipment, and a computer readable storage medium, so that facial feature points can still be tracked accurately when the face undergoes a large displacement between two consecutive frames of a video image, while the real-time performance of facial feature point tracking is effectively ensured.
To solve the above technical problem, embodiments of the present invention provide a face tracking method comprising the following steps:
determining positions of facial feature points in a first image;
determining predicted positions of the facial feature points in a second image based on their positions in the first image, wherein the first image and the second image are consecutive frames in a video image sequence, and the second image is the frame following the first image;
determining a face region in the second image according to the predicted positions of the facial feature points in the second image;
determining positions of the facial feature points in the second image based on the face region in the second image.
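As a rough illustration, the four steps above can be chained into a per-frame tracking loop. This is a minimal sketch, not the patent's implementation; the three callables are hypothetical placeholders standing in for the point detector, the prediction algorithm, and the region construction:

```python
def track(frames, detect_points, predict_points, region_from_points):
    """Chains the four claimed steps over a video image sequence.

    detect_points(frame, region) -> feature-point positions (steps 1 and 4)
    predict_points(points)       -> predicted positions in the next frame (step 2)
    region_from_points(points)   -> face region enclosing predicted points (step 3)
    All three callables are illustrative placeholders, not names from the patent.
    """
    points = detect_points(frames[0], region=None)      # step 1: first image
    yield points
    for frame in frames[1:]:
        predicted = predict_points(points)              # step 2: predict in next frame
        region = region_from_points(predicted)          # step 3: region from prediction
        points = detect_points(frame, region=region)    # step 4: precise positions
        yield points
```

The key design point is that step 4 searches only inside the region derived from the prediction, rather than inside a blindly enlarged copy of the previous frame's region.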
Embodiments of the present invention also provide a face tracking device, comprising:
a first determining module, configured to determine positions of facial feature points in a first image;
a prediction module, configured to determine predicted positions of the facial feature points in a second image based on their positions in the first image, wherein the first image and the second image are consecutive frames in a video image sequence, and the second image is the frame following the first image;
a second determining module, configured to determine a face region in the second image according to the predicted positions of the facial feature points in the second image;
a detection module, configured to determine positions of the facial feature points in the second image based on the face region in the second image.
Embodiments of the present invention also provide electronic equipment, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the face tracking method described in the above embodiments.
Embodiments of the present invention also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the face tracking method described in the above embodiments.
Compared with the prior art, embodiments of the present invention determine the predicted positions of the facial feature points in the second image from their positions in the first image, and then locate the feature points in the second image based on those predicted positions. Because the target is located from predicted positions rather than from a blindly enlarged search region, the inaccurate localization or tracking loss that occurs when the face undergoes a large displacement is avoided, so that facial feature points can still be tracked accurately under large displacement, and the real-time performance of facial feature point tracking is ensured.
In addition, determining the predicted positions of the facial feature points in the second image based on their positions in the first image comprises: predicting the positions of the facial feature points in the second image based on their positions in the first image and a predetermined prediction algorithm. In this embodiment, a predetermined prediction algorithm is used to obtain the predicted positions of the facial feature points in the second image, which provides the basis for accurately locating the facial feature points.
In addition, predicting the positions of the facial feature points in the second image based on their positions in the first image and the predetermined prediction algorithm comprises:
for each facial feature point, substituting its position in the first image into the predetermined prediction algorithm to obtain a preliminary predicted position of that feature point in the second image, and calculating the difference between the preliminary predicted position and the position of that feature point in the first image;
calculating the average of the differences between the preliminary predicted positions of the facial feature points and their positions in the first image, and taking this average as the common average increment of all the facial feature points;
determining the predicted positions of the facial feature points in the second image based on the average increment and their positions in the first image.
In this embodiment, each feature point of the face is predicted, which increases the accuracy of the prediction, and determining the predicted positions in the second image from the average increment over all feature points increases the stability of the prediction.
In addition, determining the positions of the facial feature points in the first image comprises: determining a face region in the first image, and determining the positions of the facial feature points in the first image based on that face region. In this embodiment, locating the feature points within the face region of the first image makes it possible to determine their positions in the first image accurately, which provides the basis for feature point prediction and ensures the accuracy of subsequent prediction.
In addition, determining the face region in the first image comprises one of the following: detecting the face region in the first image based on a face detection algorithm; determining the face region of the first image based on a region indication input by a user; determining the face region in the first image based on preset region position information; or determining positions of the facial feature points in a third image, determining predicted positions of the facial feature points in the first image based on their positions in the third image, wherein the first image and the third image are consecutive frames in the video image sequence and the first image is the frame following the third image, and determining the face region in the first image according to those predicted positions.
This embodiment provides multiple ways of determining the face region in the first image, so that the face region can be determined under different scenarios.
In addition, the prediction module is configured to predict the positions of the facial feature points in the second image based on their positions in the first image and a predetermined prediction algorithm.
In addition, the prediction module comprises a first prediction submodule, a calculation submodule, and a second prediction submodule.
The first prediction submodule is configured to, for each facial feature point, substitute its position in the first image into the predetermined prediction algorithm to obtain a preliminary predicted position of that feature point in the second image, and to calculate the difference between the preliminary predicted position and the position of that feature point in the first image.
The calculation submodule is configured to calculate the average of the differences between the preliminary predicted positions of the facial feature points and their positions in the first image, and to take this average as the common average increment of all the facial feature points.
The second prediction submodule is configured to determine the predicted positions of the facial feature points in the second image based on the average increment and their positions in the first image.
In addition, the first determining module is configured to determine a face region in the first image, and to determine the positions of the facial feature points in the first image based on that face region. The first determining module is further configured to perform one of the following: detecting the face region in the first image based on a face detection algorithm; determining the face region of the first image based on a region indication input by a user; determining the face region in the first image based on preset region position information; or determining positions of the facial feature points in a third image, determining predicted positions of the facial feature points in the first image based on their positions in the third image, wherein the first image and the third image are consecutive frames in the video image sequence and the first image is the frame following the third image, and determining the face region in the first image according to those predicted positions.
Brief description of the drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These exemplary illustrations do not constitute a limitation on the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flow chart of the face tracking method in the first embodiment of the present invention;
Fig. 2 is a flow chart of the face tracking method in the second embodiment of the present invention;
Fig. 3 is a structural diagram of the face tracking device in the third embodiment of the present invention;
Fig. 4 is a structural diagram of the face tracking device in the fourth embodiment of the present invention;
Fig. 5 is a structural diagram of the electronic equipment provided by the fifth embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, each embodiment of the present invention is explained in detail below with reference to the drawings. However, those skilled in the art will understand that many technical details are set forth in each embodiment to help the reader better understand this application; the technical solutions claimed in this application can nevertheless be implemented even without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a face tracking method. The detailed flow, as shown in Fig. 1, comprises:
Step 101: determining positions of facial feature points in a first image;
Step 102: determining predicted positions of the facial feature points in a second image based on their positions in the first image, wherein the first image and the second image are consecutive frames in a video image sequence, and the second image is the frame following the first image;
Step 103: determining a face region in the second image according to the predicted positions of the facial feature points in the second image;
Step 104: determining positions of the facial feature points in the second image based on the face region in the second image.
In the face tracking method provided by this embodiment, video images of a target face are continuously acquired; the positions of the facial feature points in the earlier first image of the video image sequence are used to predict their positions in the later second image, and the positions of the facial feature points in the second image are determined based on those predicted positions. Because the target is located from predicted positions, the inaccurate localization or tracking loss caused by blindly enlarging the search region when the face undergoes a large displacement is avoided, so that the facial feature points can still be tracked accurately, and the real-time performance of facial feature point tracking is ensured.
Here, the first image may be the first frame of the video image sequence or an intermediate frame, and the second image is the frame following the first image.
Optionally, in step 101, the positions of the facial feature points in the first image may be determined in any manner, which is not limited by the present invention. Specifically, the process of determining the positions of the facial feature points in the first image comprises: determining a face region in the first image, and determining the positions of the facial feature points in the first image based on that face region.
The face region in the first image can be determined in different ways, including but not limited to the following:
In a first way, the face region in the first image is detected based on a face detection algorithm.
For example, any face detection algorithm commonly known in the art can be used to detect the face region in the first image; as described in the background art, a face detector, such as the face detector of OpenCV, locates the approximate face region. Of course, depending on the specific application scenario, for example when all faces appearing in the video image need to be tracked, the face detection algorithm may be chosen to detect all faces in the first image.
In a second way, the face region of the first image is determined based on a region indication input by a user.
For example, to improve the positioning accuracy of the face region, an interactive function, specifically a face tracking region selection function, can be provided for the user, who indicates the face region in the first image through region indication input. For example, the first image is displayed on a touch display screen, and the user circles the face region in the first image with a touch gesture. Besides touch gestures, the user may also input the region indication information by voice input, motion-sensing input, or input from an external device, for example by outlining the approximate face region or entering its position coordinates.
Of course, depending on the specific application scenario, for example in some special cases where only the face in a certain region of the image needs to be tracked, the region selected by the user is determined and the face in that region is identified.
In a third way, the face region in the first image is determined based on preset region position information.
For example, in some specific application scenarios the initial position of the target face is known, so the position information of the face region in the video image can be preset according to the initial position of the target face, and the face region in the first image is determined based on this preset region position information.
In a fourth way, positions of the facial feature points in a third image are determined; predicted positions of the facial feature points in the first image are determined based on their positions in the third image, wherein the first image and the third image are consecutive frames in the video image sequence and the first image is the frame following the third image; and the face region in the first image is determined according to the predicted positions of the facial feature points in the first image.
That is, the face region in the first image can be determined in the same way as the face region in the second image. To determine the face region in the first image, the positions of the facial feature points in the frame preceding the first image in the video image sequence, namely the third image, are obtained; the predicted positions of the facial feature points in the first image are then computed, and the face region in the first image is determined from those predicted positions.
If the first image is the first frame of the video image sequence, the face region in the first image can be determined by any of the first to third ways; if the first image is not the first frame, the face region can be determined by the fourth way. Of course, the possibility of using the first to third ways for a non-first frame of the video image sequence is not excluded, and the scope of protection of this embodiment is not limited in this respect. For example, the actual first frame of the video may not contain a face; in that case, the first image containing a face can first be determined in the video, for example by performing image detection based on preset region position information. The first image may be the earliest image containing a face in the video, but need not be, and the face region is determined in that first image.
Specifically, in step 102, the detailed process of obtaining the predicted positions of the facial feature points in the second image may be:
predicting the positions of the facial feature points in the second image based on their positions in the first image and a predetermined prediction algorithm.
The prediction algorithm is not limited, provided that it can predict the position of a moving point; for example, the Kalman filtering algorithm or the mean shift (meanshift) algorithm may be used. The Kalman filtering algorithm and the mean shift algorithm are known algorithms and are not described in detail here.
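As one hedged illustration of such a prediction algorithm, the sketch below shows the time-update (prediction) step of a constant-velocity Kalman filter for a single feature point. The state layout and matrices are a standard textbook choice assumed here, not values given in the patent:

```python
import numpy as np

# Constant-velocity model for one feature point; state = [x, y, vx, vy].
F = np.array([[1., 0., 1., 0.],    # x <- x + vx (unit frame interval)
              [0., 1., 0., 1.],    # y <- y + vy
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],    # only (x, y) is observed
              [0., 1., 0., 0.]])

def kalman_predict(state, P, Q):
    """Kalman time update: project state and covariance one frame ahead."""
    state = F @ state
    P = F @ P @ F.T + Q
    return state, P, H @ state    # H @ state is the predicted (x, y)
```

In a full filter the predicted state would then be corrected with the measured point position each frame; only the prediction half is relevant to step 102.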
In a specific implementation, the positions of the facial feature points in the first image are substituted into the predetermined prediction algorithm, the algorithm is run, and the predicted positions of the facial feature points in the second image are obtained.
Preferably, in a specific embodiment, the positions of the facial feature points in the second image are predicted from their positions in the first image and the predetermined prediction algorithm in the following way:
in the case where at least two facial feature points need to be tracked, for each facial feature point, its position in the first image is substituted into the predetermined prediction algorithm to obtain a preliminary predicted position of that feature point in the second image, and the difference between the preliminary predicted position and the position of that feature point in the first image is calculated;
the average of these differences over all facial feature points is calculated, and this average is taken as the common average increment of all the facial feature points;
the predicted positions of the facial feature points in the second image are determined from the average increment and their positions in the first image.
Predicting preliminary positions with the prediction algorithm and then deriving the final predicted positions from them further ensures the accuracy of the resulting predicted positions.
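The preliminary-prediction and averaging scheme just described might be sketched as follows, assuming the feature points are stored as an (N, 2) array and `predict_one` stands in for the predetermined prediction algorithm applied to a single point:

```python
import numpy as np

def average_increment_predict(points, predict_one):
    """points: (N, 2) feature-point positions in the first image.
    predict_one: placeholder for the predetermined prediction algorithm,
    mapping one (x, y) to its preliminary predicted position."""
    prelim = np.array([predict_one(p) for p in points])   # preliminary predictions
    diffs = prelim - points                               # per-point differences
    avg = diffs.mean(axis=0)                              # common average increment
    return points + avg                                   # final predicted positions
```

Averaging the per-point differences smooths out noisy individual predictions, which is the stability benefit the embodiment claims.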
After the predicted positions are obtained in step 102, in step 103 the face region in the second image is determined according to the predicted positions of the facial feature points in the second image.
The face region has a predetermined positional relationship with the predicted positions of the facial feature points; in step 103, the face region in the second image is determined from the predicted positions according to these predetermined positional relationships.
The face region determined in this step is a region, such as a square region, that encloses the facial feature points. For example, the left boundary of the face region can be determined from the predicted position of the leftmost facial feature point, the right boundary from the predicted position of the rightmost facial feature point, the upper boundary from the predicted position of the topmost facial feature point, and the lower boundary from the predicted position of the bottommost facial feature point, thereby delimiting a square region as the face region.
Preferably, a region somewhat larger than that spanned by the predicted positions is taken as the face region. That is, the square region corresponding to the face region is obtained by enlarging the minimum square region determined by the topmost, bottommost, leftmost, and rightmost facial feature points, generally to 1.5 times the minimum square region. In this way, the predicted positions in the second image are determined from the feature point positions in the first image, a region enlarged from those predicted positions is used as the face region in the second image, and accurate tracking of the facial feature points in the second image is further ensured.
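Under the stated convention, constructing the enlarged square face region from the predicted positions might look like the sketch below. The text does not specify whether the 1.5 factor applies to the side length or the area of the minimum square; the sketch assumes side length:

```python
import numpy as np

def face_region(predicted, factor=1.5):
    """Face region as the minimal axis-aligned square covering the predicted
    points, enlarged by `factor` (applied to side length by assumption).
    Returns (x_min, y_min, x_max, y_max)."""
    x0, y0 = predicted.min(axis=0)           # leftmost / topmost boundaries
    x1, y1 = predicted.max(axis=0)           # rightmost / bottommost boundaries
    side = max(x1 - x0, y1 - y0) * factor    # enlarged square side
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return (cx - side / 2, cy - side / 2, cx + side / 2, cy + side / 2)
```

In practice the returned box would also be clipped to the image bounds before the step-104 detector is run inside it.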
In step 104, feature point detection can be performed within the face region of the second image, thereby determining the precise positions of the facial feature points in the second image.
Specifically, the detailed process of determining the positions of the facial feature points in the second image based on the face region in the second image may comprise:
performing detection within the face region of the second image using the supervised gradient descent SDM (Supervised Descent Method and Its Applications to Face Alignment) algorithm or a deep learning algorithm, and determining the positions of the facial feature points in the second image.
The supervised gradient descent SDM algorithm and deep learning algorithms are known in the art and are not described again here.
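For orientation only: SDM refines an initial shape with a cascade of linear regressors learned offline, roughly x_{k+1} = x_k + R_k φ(x_k) + b_k. The sketch below shows that update rule with placeholder regressors and a placeholder feature extractor; it is not the patent's detector:

```python
import numpy as np

def sdm_refine(shape, stages, extract):
    """One pass of the SDM cascade. `shape` is the flattened (2N,) landmark
    vector; each stage (R, b) is a linear regressor learned offline; `extract`
    maps the current shape to a feature vector (SIFT/HOG patches around each
    landmark in a real system; here an arbitrary placeholder)."""
    for R, b in stages:
        phi = extract(shape)           # features around the current landmarks
        shape = shape + R @ phi + b    # supervised descent update
    return shape
```

The initial `shape` in step 104 would come from placing a mean shape inside the face region determined in step 103.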
It can be understood that in one specific implementation, the predicted positions are only used to determine the face region in the second image. Accordingly, the number of face tracking points whose predicted positions are used to determine the face region may be smaller than the number of facial feature points to be tracked. Suppose 10 facial feature points need to be tracked, and determining the face region requires the topmost, bottommost, leftmost, and rightmost 4 feature points. Then in step 101, the positions of these 10 (or just these 4) feature points can be determined; in step 102, only the predicted positions of these 4 feature points need be determined; in step 103, the face region is determined using the predicted positions of these 4 feature points; and in step 104, all 10 facial feature points are located within that face region.
As described in the background art above, the prior art usually initializes the next frame with the face region result of the current frame, but when the face moves too fast between consecutive frames, the face displacement between the current frame and the next frame is excessive, and the initial position the current frame supplies to the next frame may be inaccurate. For this problem, compared with the prior art, the present embodiment locates the target based on predicted positions and avoids the inaccurate localization or tracking loss caused by blindly enlarging the search region when the face undergoes a large displacement. Thus, even when the face moves abruptly or the face image acquisition terminal moves violently, the facial feature points can still be tracked accurately, and the real-time performance of facial feature point tracking is ensured.
The second embodiment of the present invention relates to a face tracking method. The second embodiment is substantially the same as the first embodiment; the main difference is that this embodiment specifically illustrates the detailed process of predicting the predicted positions of the face feature points in the second image based on their positions in the first image and a predetermined prediction algorithm, as shown in Fig. 2. In addition, those skilled in the art will understand that the specific implementation of determining the predicted positions in the second image is not limited to the manner discussed below; other manners of predicting face feature point positions with a predetermined prediction algorithm can also be applied to this embodiment, and the following is merely an example. This embodiment includes:
Step 201: determining positions of the face feature points in the first image;
Step 202: for each face feature point, substituting the position of the face feature point in the first image into a predetermined prediction algorithm to obtain a preliminary predicted position of the face feature point in the second image, and calculating the difference between the preliminary predicted position of the face feature point and its position in the first image;
Step 203: based on the differences between the preliminary predicted positions of the face feature points and their positions in the first image, calculating the average of the differences, and using the average as the average increment shared by all face feature points;
Step 204: determining the predicted position of each face feature point in the second image based on the average increment and its position in the first image;
Step 205: determining the face region in the second image according to the predicted positions of the face feature points in the second image;
Step 206: determining the positions of the face feature points in the second image based on the face region in the second image.
Specifically, for each face feature point, the position of the face feature point in the first image is substituted into the predetermined prediction algorithm, the difference between the preliminary predicted position of the face feature point and its position in the first image is calculated, and the average increment of the face feature points is calculated as the mean of these differences, as expressed in Formula 1:
d = ∑(Sk2 - Sk1) / n  (1)
In the above Formula 1, d denotes the average increment, n denotes the number of face feature points, Sk2 denotes the preliminary predicted position of the k-th face feature point in the second image, and Sk1 denotes the position of the k-th face feature point in the first image, where k ranges from 1 to n inclusive.
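Formula 1 can be sketched in code as follows. This is an illustrative sketch only; representing each position as an (x, y) tuple and the function name are assumptions of this example:

```python
def average_increment(prelim_pred, prev_pos):
    """Formula 1: d = sum(Sk2 - Sk1) / n, computed per coordinate.

    prelim_pred: preliminary predicted positions Sk2 in the second image.
    prev_pos:    positions Sk1 of the same points in the first image.
    Returns the average increment (dx, dy) shared by all feature points.
    """
    n = len(prev_pos)
    dx = sum(p2[0] - p1[0] for p2, p1 in zip(prelim_pred, prev_pos)) / n
    dy = sum(p2[1] - p1[1] for p2, p1 in zip(prelim_pred, prev_pos)) / n
    return dx, dy

# three feature points whose per-point displacements are averaged
d = average_increment([(12, 11), (22, 19), (30, 32)],
                      [(10, 10), (20, 20), (30, 30)])
```

Averaging the per-point displacements is what damps the error of any individual preliminary prediction, as discussed below.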
In one concrete implementation, n face feature points need to be tracked, i.e., the first image contains n face feature points. If the position of each face feature point in the video image were obtained and tracked independently, the error could be relatively large. Calculating the average increment reduces this error, and at the same time the positions of the face feature points in the second image, obtained from their positions in the first image, become smoother, reducing position jumps and jitter. This makes face feature point tracking in the second image more stable and also avoids the face deformation that occurs when individual points are tracked with error.
For example, suppose a face is characterized by 49 face feature points. The respective positions of the 49 face feature points in the first image are determined, the preliminary predicted positions of the 49 face feature points in the second image are obtained, and then the average increment of the 49 face feature points is calculated. It should be noted that 49 face feature points are used here only as an example; it is not required that exactly 49 face feature points be used, and the number of face feature points used in practice is not limited, as long as the number used is sufficient to determine and distinguish a face.
Specifically, the specific position of each face feature point in the second image is calculated from the position increment obtained by the above Formula 1, as expressed in the following Formula 2:
Skp = Sk1 + d  (2)
In the above Formula 2, Skp denotes the predicted position of the k-th face feature point in the second image, where k ranges from 1 to n inclusive, and Sk1 has the same meaning as in Formula 1. According to the average increment, the predicted position in the second image of each face feature point in the first image is calculated.
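Formula 2 then shifts every point of the first image by the same average increment. A self-contained sketch (coordinates and names are assumptions of this example):

```python
def predict_positions(prev_pos, d):
    """Formula 2: Skp = Sk1 + d, applied per coordinate to every point.

    prev_pos: positions Sk1 in the first image, as (x, y) tuples.
    d:        average increment (dx, dy) from Formula 1.
    """
    dx, dy = d
    return [(x + dx, y + dy) for x, y in prev_pos]

# every point moves by the same smoothed displacement
predicted = predict_positions([(10, 10), (20, 20)], (1.5, -0.5))
```

Because every point receives the same increment, the relative geometry of the feature points is preserved between frames, which is what keeps the tracked face shape from deforming.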
In this embodiment, in order to ensure that the face feature points change smoothly in the second image and that no face feature is lost, each feature point obtains its corresponding predicted position in the second image after the above calculation.
It can be understood that the position information may be represented by horizontal and vertical coordinates, in which case the horizontal and vertical coordinates are each calculated separately through Formula 1 and Formula 2.
In one specific implementation, the predetermined prediction algorithm includes a Kalman filtering algorithm or a mean shift algorithm. This is a preferred algorithm implementation; any algorithm that can realize position prediction of a moving point in the specific tracking algorithm may be used, and the choice is not limited to Kalman filtering or mean shift. Other prediction algorithms capable of performing position prediction while guaranteeing real-time performance can also be applied to this embodiment.
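As one way to realize the predetermined prediction algorithm, a per-coordinate constant-velocity Kalman filter can be sketched as follows. This is a minimal illustration, not the patent's implementation; the state model, noise values and class name are all assumptions of this example:

```python
class KalmanCV1D:
    """Minimal 1-D constant-velocity Kalman filter for one coordinate of
    one feature point (dt = 1 frame; noise parameters are assumed)."""

    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0            # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self):
        # x' = x + v; covariance propagated through F = [[1, 1], [0, 1]]
        self.x += self.v
        P = self.P
        self.P = [[P[0][0] + 2 * P[0][1] + P[1][1] + self.q,
                   P[0][1] + P[1][1]],
                  [P[1][0] + P[1][1],
                   P[1][1] + self.q]]
        return self.x

    def update(self, z):
        # correct the state with the measured coordinate z (H = [1, 0])
        s = self.P[0][0] + self.r
        k0 = self.P[0][0] / s               # Kalman gain, position
        k1 = self.P[1][0] / s               # Kalman gain, velocity
        err = z - self.x
        self.x += k0 * err
        self.v += k1 * err
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

kf = KalmanCV1D(100.0)
for z in (102.0, 104.0, 106.0):   # a coordinate moving about 2 px per frame
    kf.predict()
    kf.update(z)
prediction = kf.predict()          # predicted coordinate in the next frame
```

In a full tracker one such filter (or a 2-D variant) would be kept per tracked coordinate, with `predict()` supplying the preliminary predicted position for each new frame and `update()` absorbing the detected position afterwards.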
In one specific implementation based on the above first or second embodiment, determining the positions of the face feature points in the second image based on the face region in the second image may specifically be: detecting within the face region in the second image using the supervised descent method (SDM) algorithm or a deep learning algorithm, to determine the positions of the face feature points in the second image. The algorithms involved in this implementation are a preferred embodiment; other algorithms may also be selected on the premise of achieving the same result.
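The final detection step constrains the landmark search to the predicted face region. A sketch of this region-constrained detection, with the SDM or deep-learning detector replaced by a hypothetical `detect_landmarks_in_crop` callable, could look like this (the stub and all names are illustrative assumptions):

```python
def detect_in_region(image_w, image_h, region, detect_landmarks_in_crop):
    """Clamp the predicted region to the image, run a landmark detector on
    the crop, and map crop-relative coordinates back to full-image space."""
    x0, y0, x1, y1 = region
    x0, y0 = max(0, int(x0)), max(0, int(y0))
    x1, y1 = min(image_w, int(x1)), min(image_h, int(y1))
    # hypothetical detector: returns landmarks relative to the crop origin
    crop_points = detect_landmarks_in_crop(x0, y0, x1, y1)
    return [(x + x0, y + y0) for x, y in crop_points]

# dummy stand-in for an SDM / deep-learning landmark detector
fake_detector = lambda x0, y0, x1, y1: [(5, 5), (10, 8)]
points = detect_in_region(640, 480, (-10, 20, 700, 200), fake_detector)
```

Clamping matters because a predicted region for a fast-moving face can extend past the image border; the coordinate mapping back to full-image space is what makes the detected points usable for predicting the next frame.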
The division of the above methods into steps is merely for clarity of description. In implementation, steps may be combined into one step, or a certain step may be split into multiple steps; as long as the same logical relationship is included, they are all within the protection scope of this patent. Adding an insignificant modification to an algorithm or a process, or introducing an insignificant design, without changing the core design of the algorithm and process, is also within the protection scope of this patent.
The third embodiment of the present invention relates to a face tracking device, as shown in Fig. 3, comprising: a first determining module 301, a prediction module 302, a second determining module 303, and a detection module 304.
The first determining module 301 is configured to determine positions of face feature points in a first image;
The prediction module 302 is configured to determine predicted positions of the face feature points in a second image based on their positions in the first image, wherein the first image and the second image are two consecutive frames in a video image sequence, and the second image is the next frame after the first image;
The second determining module 303 is configured to determine the face region in the second image according to the predicted positions of the face feature points in the second image;
The detection module 304 is configured to determine the positions of the face feature points in the second image based on the face region in the second image.
Specifically, the prediction module 302 is configured to predict the predicted positions of the face feature points in the second image based on their positions in the first image and a predetermined prediction algorithm.
Specifically, the first determining module 301 is configured to determine the face region in the first image, and to determine the positions of the face feature points in the first image based on the face region in the first image.
Specifically, the first determining module 301 is configured to: detect the face region in the first image based on a face detection algorithm; or determine the face region of the first image based on a region indicated by user input; or determine the face region in the first image based on preset region position information; or determine positions of the face feature points in a third image, determine predicted positions of the face feature points in the first image based on their positions in the third image, wherein the first image and the third image are two consecutive frames in the video image sequence and the first image is the next frame after the third image, and determine the face region in the first image according to the predicted positions of the face feature points in the first image.
It is not difficult to see that this embodiment is a device embodiment corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment. The relevant technical details mentioned in the first embodiment are still valid in this embodiment and, to reduce repetition, are not described again here. Correspondingly, the relevant technical details mentioned in this embodiment are also applicable to the first embodiment.
It is worth noting that each module involved in this embodiment is a logical module. In practical applications, a logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
The fourth embodiment of the present invention relates to a face tracking device. The fourth embodiment is substantially the same as the third embodiment; the main difference is that in the third embodiment the prediction module 302 is configured to determine the predicted positions in the second image, whereas the fourth embodiment of the present invention specifically illustrates the structure of the prediction module 302, as shown in Fig. 4. The prediction module 302 includes: a first prediction submodule 3021, a computation submodule 3022, and a second prediction submodule 3023.
The first prediction submodule 3021 is configured to, for each face feature point, substitute the position of the face feature point in the first image into the predetermined prediction algorithm to obtain a preliminary predicted position of the face feature point in the second image, and to calculate the difference between the preliminary predicted position of the face feature point and its position in the first image;
The computation submodule 3022 is configured to calculate, based on the differences between the preliminary predicted positions of the face feature points and their positions in the first image, the average of the differences, and to use the average as the average increment shared by all face feature points;
The second prediction submodule 3023 is configured to determine the predicted position of each face feature point in the second image based on the average increment and its position in the first image.
Since the second embodiment corresponds to this embodiment, this embodiment can be implemented in cooperation with the second embodiment. The relevant technical details mentioned in the second embodiment are still valid in this embodiment, and the technical effects achievable in the second embodiment can likewise be achieved in this embodiment; to reduce repetition, they are not described again here. Correspondingly, the relevant technical details mentioned in this embodiment are also applicable to the second embodiment.
The fifth embodiment of the present invention relates to an electronic device, as shown in Fig. 5, comprising: a memory 501 and at least one processor 502. As the structure in Fig. 5 shows, the memory 501 and the at least one processor 502 are communicatively connected.
The memory 501 is configured to store instructions executable by the at least one processor;
The processor 502 is configured to execute the instructions stored in the memory.
The processor 502 is further configured to perform the steps of the face tracking method in the first and second embodiments.
Specifically, the processor 502 is configured to: determine positions of face feature points in a first image; determine predicted positions of the face feature points in a second image based on their positions in the first image, wherein the first image and the second image are two consecutive frames in a video image sequence, and the second image is the next frame after the first image; determine the face region in the second image according to the predicted positions of the face feature points in the second image; and determine the positions of the face feature points in the second image based on the face region in the second image.
Specifically, the processor 502 is configured to predict the predicted positions of the face feature points in the second image based on their positions in the first image and a predetermined prediction algorithm.
Specifically, the processor 502 is configured to determine the face region in the first image, and to determine the positions of the face feature points in the first image based on the face region in the first image.
The memory and the processor are connected by a bus. The bus may include any number of interconnected buses and bridges, linking together the various circuits of one or more processors and the memory. The bus may also link together various other circuits, such as peripheral devices, voltage regulators and power management circuits, all of which are well known in the art and therefore not further described herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor is transmitted over a wireless medium via an antenna; further, the antenna also receives data and transfers the data to the processor.
The processor is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management and other control functions. The memory may be used to store data used by the processor when performing operations.
The sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the face tracking method mentioned in the first or second embodiment.
Those skilled in the art will appreciate that all or part of the steps for implementing the methods of the above embodiments can be completed by a program instructing relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
Those skilled in the art will understand that the above embodiments are specific embodiments for implementing the present invention, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present invention.

Claims (14)

1. A face tracking method, characterized in that the method comprises:
determining positions of face feature points in a first image;
determining predicted positions of the face feature points in a second image based on the positions of the face feature points in the first image, wherein the first image and the second image are two consecutive frames in a video image sequence, and the second image is the next frame after the first image;
determining a face region in the second image according to the predicted positions of the face feature points in the second image;
determining positions of the face feature points in the second image based on the face region in the second image.
2. The face tracking method according to claim 1, characterized in that determining the predicted positions of the face feature points in the second image based on the positions of the face feature points in the first image comprises:
predicting the predicted positions of the face feature points in the second image based on the positions of the face feature points in the first image and a predetermined prediction algorithm.
3. The face tracking method according to claim 2, characterized in that predicting the predicted positions of the face feature points in the second image based on the positions of the face feature points in the first image and the predetermined prediction algorithm comprises:
for each face feature point, substituting the position of the face feature point in the first image into the predetermined prediction algorithm to obtain a preliminary predicted position of the face feature point in the second image, and calculating a difference between the preliminary predicted position of the face feature point and the position of the face feature point in the first image;
calculating, based on the differences between the preliminary predicted positions of the face feature points and the positions of the face feature points in the first image, an average of the differences, and using the average as an average increment shared by all of the face feature points;
determining the predicted position of each face feature point in the second image based on the average increment and the position of the face feature point in the first image.
4. The face tracking method according to claim 1, characterized in that determining the positions of the face feature points in the first image comprises:
determining a face region in the first image, and determining the positions of the face feature points in the first image based on the face region in the first image.
5. The face tracking method according to claim 4, characterized in that determining the face region in the first image comprises:
detecting the face region in the first image based on a face detection algorithm;
or,
determining the face region of the first image based on a region indicated by user input;
or,
determining the face region in the first image based on preset region position information;
or,
determining positions of the face feature points in a third image, determining predicted positions of the face feature points in the first image based on the positions of the face feature points in the third image, wherein the first image and the third image are two consecutive frames in the video image sequence and the first image is the next frame after the third image, and determining the face region in the first image according to the predicted positions of the face feature points in the first image.
6. The face tracking method according to claim 1, characterized in that determining the positions of the face feature points in the second image based on the face region in the second image comprises:
detecting within the face region in the second image using a supervised descent method (SDM) algorithm or a deep learning algorithm, to determine the positions of the face feature points in the second image.
7. The face tracking method according to claim 2 or 3, characterized in that the predetermined prediction algorithm comprises a Kalman filtering algorithm or a mean shift algorithm.
8. A face tracking device, characterized in that the device comprises:
a first determining module, configured to determine positions of face feature points in a first image;
a prediction module, configured to determine predicted positions of the face feature points in a second image based on the positions of the face feature points in the first image, wherein the first image and the second image are two consecutive frames in a video image sequence, and the second image is the next frame after the first image;
a second determining module, configured to determine a face region in the second image according to the predicted positions of the face feature points in the second image;
a detection module, configured to determine positions of the face feature points in the second image based on the face region in the second image.
9. The face tracking device according to claim 8, characterized in that the prediction module is configured to:
predict the predicted positions of the face feature points in the second image based on the positions of the face feature points in the first image and a predetermined prediction algorithm.
10. The face tracking device according to claim 9, characterized in that the prediction module comprises a first prediction submodule, a computation submodule and a second prediction submodule;
the first prediction submodule is configured to, for each face feature point, substitute the position of the face feature point in the first image into the predetermined prediction algorithm to obtain a preliminary predicted position of the face feature point in the second image, and to calculate a difference between the preliminary predicted position of the face feature point and the position of the face feature point in the first image;
the computation submodule is configured to calculate, based on the differences between the preliminary predicted positions of the face feature points and the positions of the face feature points in the first image, an average of the differences, and to use the average as an average increment shared by all of the face feature points;
the second prediction submodule is configured to determine the predicted position of each face feature point in the second image based on the average increment and the position of the face feature point in the first image.
11. The face tracking device according to claim 8, characterized in that the first determining module is configured to:
determine a face region in the first image, and determine the positions of the face feature points in the first image based on the face region in the first image.
12. The face tracking device according to claim 11, characterized in that the first determining module is configured to:
detect the face region in the first image based on a face detection algorithm;
or,
determine the face region of the first image based on a region indicated by user input;
or,
determine the face region in the first image based on preset region position information;
or,
determine positions of the face feature points in a third image, determine predicted positions of the face feature points in the first image based on the positions of the face feature points in the third image, wherein the first image and the third image are two consecutive frames in the video image sequence and the first image is the next frame after the third image, and determine the face region in the first image according to the predicted positions of the face feature points in the first image.
13. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the face tracking method according to any one of claims 1 to 7.
14. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the face tracking method according to any one of claims 1 to 7.
CN201710826957.5A 2017-09-14 2017-09-14 Face tracking method and device, electronic equipment and computer readable storage medium Withdrawn CN109508575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710826957.5A CN109508575A (en) 2017-09-14 2017-09-14 Face tracking method and device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN109508575A true CN109508575A (en) 2019-03-22

Family

ID=65744401


Country Status (1)

Country Link
CN (1) CN109508575A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592138A (en) * 2011-12-30 2012-07-18 上海电力学院 Object tracking method for intensive scene based on multi-module sparse projection
CN102831382A (en) * 2011-06-15 2012-12-19 北京三星通信技术研究有限公司 Face tracking apparatus and method
CN106682582A (en) * 2016-11-30 2017-05-17 吴怀宇 Compressed sensing appearance model-based face tracking method and system
CN106709932A (en) * 2015-11-12 2017-05-24 阿里巴巴集团控股有限公司 Face position tracking method and device and electronic equipment
CN106919918A (en) * 2017-02-27 2017-07-04 腾讯科技(上海)有限公司 A kind of face tracking method and device
CN107122751A (en) * 2017-05-03 2017-09-01 电子科技大学 A kind of face tracking and facial image catching method alignd based on face


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097586A (en) * 2019-04-30 2019-08-06 青岛海信网络科技股份有限公司 A kind of Face datection method for tracing and device
CN110097586B (en) * 2019-04-30 2023-05-30 青岛海信网络科技股份有限公司 Face detection tracking method and device
CN110298327A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 A kind of visual effect processing method and processing device, storage medium and terminal
CN112446229A (en) * 2019-08-27 2021-03-05 北京地平线机器人技术研发有限公司 Method and device for acquiring pixel coordinates of marker post
WO2023088074A1 (en) * 2021-11-18 2023-05-25 北京眼神智能科技有限公司 Face tracking method and apparatus, and storage medium and device

Similar Documents

Publication Publication Date Title
Angah et al. Tracking multiple construction workers through deep learning and the gradient based method with re-matching based on multi-object tracking accuracy
CN109584276A (en) Critical point detection method, apparatus, equipment and readable medium
CN109344806B (en) The method and system detected using multitask target detection model performance objective
CN111428607B (en) Tracking method and device and computer equipment
CN109508575A (en) Face tracking method and device, electronic equipment and computer readable storage medium
JP6065427B2 (en) Object tracking method and object tracking apparatus
CN108198201A (en) A kind of multi-object tracking method, terminal device and storage medium
CN104794733A (en) Object tracking method and device
CN107408303A (en) System and method for Object tracking
CN109934065A (en) A kind of method and apparatus for gesture identification
CN109598744A (en) A kind of method, apparatus of video tracking, equipment and storage medium
CN109034095A (en) A kind of face alignment detection method, apparatus and storage medium
CN110807410B (en) Key point positioning method and device, electronic equipment and storage medium
CN107784671A (en) A kind of method and system positioned immediately for vision with building figure
CN102915545A (en) OpenCV(open source computer vision library)-based video target tracking algorithm
AU2020300067B2 (en) Layered motion representation and extraction in monocular still camera videos
Hannuksela et al. Vision-based motion estimation for interaction with mobile devices
CN108734735A (en) Object shapes tracks of device and method and image processing system
CN109523573A (en) The tracking and device of target object
CN109800678A (en) The attribute determining method and device of object in a kind of video
Bazo et al. Baptizo: A sensor fusion based model for tracking the identity of human poses
Hannuksela et al. A vision-based approach for controlling user interfaces of mobile devices
Dominguez et al. Robust finger tracking for wearable computer interfacing
WO2021239000A1 (en) Method and apparatus for identifying motion blur image, and electronic device and payment device
CN107665495A (en) Method for tracing object and Object tracking device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190322