CN112653844A - Adaptive tracking and adjustment method for camera pan-tilt steering - Google Patents

Adaptive tracking and adjustment method for camera pan-tilt steering

Info

Publication number
CN112653844A
CN112653844A (application CN202011579326.6A)
Authority
CN
China
Prior art keywords
face
camera
frame
center
tilt
Prior art date
Legal status
Pending
Application number
CN202011579326.6A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee
Zhuhai Eeasy Electronic Tech Co ltd
Original Assignee
Zhuhai Eeasy Electronic Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Eeasy Electronic Tech Co ltd filed Critical Zhuhai Eeasy Electronic Tech Co ltd
Priority to CN202011579326.6A
Publication of CN112653844A
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an adaptive tracking and adjustment method for camera pan-tilt steering, relating to the technical field of image recognition. The method comprises: recognizing the face frame of a face in each continuously acquired input frame of the video stream, and taking the difference between the coordinates of the face-frame center and the coordinates of the center of the image captured by the camera to obtain the offset vector between the face and the camera image center; smoothing the offset vector, performing Gaussian fusion of the face-detection result and the per-frame prediction, and extracting the most likely position of the face; and, when the offset vector exceeds a set protection area of the camera picture, instructing the camera pan-tilt to rotate so that the face is positioned at the center of the picture. The invention uses an adaptive control method to make the camera pan-tilt track the face and rotate automatically, keeping the picture center aimed at the face at all times.

Description

Adaptive tracking and adjustment method for camera pan-tilt steering
Technical Field
The invention relates to the technical field of image recognition, and in particular to an adaptive tracking and adjustment method for camera pan-tilt steering.
Background
Adjusting the orientation of a pan-tilt according to the position of a face in the camera picture is an emerging market demand of recent years. With the development of the Internet of Things and artificial-intelligence technology, face detection and recognition have become increasingly mature, and advances in intelligent hardware provide the computing power needed to give a camera pan-tilt a degree of intelligence. Manually adjusting the camera position during live streaming or recording can no longer meet market demand: many live-stream hosts need a camera that actively tracks them and rotates accordingly. In particular, many amateur live-commerce hosts currently have limited professional skill and little familiarity with equipment such as cameras and pan-tilts, so it is difficult to keep the camera focused on a host who walks around during a broadcast. Many people engaged in online direct marketing, live commerce, online classes, variety live streams and self-shooting need a camera that actively tracks their motion, achieving the effect of a fully automatic live-broadcast room.
In the prior art, the camera in a live-broadcast room is adjusted either by manually remote-controlling the pan-tilt to rotate, or by manually repositioning the camera to correct its picture, so that the face sits at the center of the picture. This is time-consuming and laborious, and requires the assistance of a highly skilled photographer.
Disclosure of Invention
To address these deficiencies in the prior art, the invention provides an adaptive tracking and adjustment method for camera pan-tilt steering, which uses an adaptive control method to make the camera pan-tilt track a face and rotate automatically, so that the center of the picture is always aimed at the face.
In order to achieve the purpose, the technical scheme of the invention is as follows:
the self-adaptive tracking and adjusting method for the steering of the camera holder is used for a server, a control signal of the server is connected with the camera holder, a control signal of the camera holder is connected with a camera, and the method comprises the following steps:
acquiring video stream data;
acquiring the first frame of image data of the video stream, recognizing a face in the first frame, and matching the face against the stored face templates in a face image library; if the match succeeds, continuing to acquire video stream data;
recognizing the face frame of the face in each continuously acquired input frame of the video stream, and taking the difference between the coordinates of the face-frame center and the coordinates of the center of the image captured by the camera to obtain the offset vector between the face and the camera image center;
smoothing the offset vector, performing Gaussian fusion of the face-detection result and the per-frame prediction, and extracting the most likely position of the face;
and, when the offset vector exceeds the set protection area of the camera picture, instructing the camera pan-tilt to rotate so that the face is positioned at the center of the camera picture.
The method for adaptive tracking and adjustment of camera pan-tilt steering as described above further provides that the face templates of the face image library comprise frontal and profile views of the face, and that a reference structure template is generated from the facial-feature information.
The method for adaptively tracking and adjusting the steering of the camera holder as described above further comprises the steps of,
when no face appears in the first frame of image data:
if no face is recognized in the first frame of image data, instructing the camera pan-tilt to rotate in the horizontal plane, wherein,
each time the camera pan-tilt rotates by a set angle, one frame of image data is captured and face recognition is performed on it;
during one full revolution:
if a face is recognized, matching it against the face templates stored in the face image library;
and if no face is recognized, entering a standby state in which input frames of the video stream are continuously acquired and face recognition is performed on each frame of image data until a face is successfully recognized.
According to the method for adaptive tracking and adjustment of camera pan-tilt steering, further, when the face is occluded by an obstacle:
input frames of the video stream are continuously acquired, and a face search is performed on each input frame of image data;
if no face can be found, no control signal is sent to the camera pan-tilt;
and if a face is found and successfully recognized, a corresponding motion vector is generated to adjust the camera pan-tilt so that the center of the camera picture is aimed at the face.
According to the method for adaptive tracking and adjustment of camera pan-tilt steering, further, when the face is not facing the camera:
input frames of the video stream are continuously acquired, and face skin-color detection is performed on each input frame of image data;
facial-feature information is detected within the face skin-color region and matched against the corresponding reference structure template of the registered face in the face image library;
if the match succeeds, the center coordinates of the face skin-color region are used as the face-frame center coordinates to generate the corresponding motion vector; if the facial-feature information cannot be detected or the match fails, no action is taken.
The method for adaptive tracking and adjustment of camera pan-tilt steering as described above further provides that
the rotation range of the camera pan-tilt is within a set range, wherein
the first control instruction comprises a PID control algorithm for the camera pan-tilt. The PID control algorithm keeps the face at the center of the picture captured by the camera: once the face is tracked, the orientation of the pan-tilt is adjusted so that the picture center is always aimed at the face, and the face search area is reduced for the next input frame. If the width of the video image is W and its height is H, the width w and height h of the next frame's search area are:
w=W/5
h=H/5
the center of the search area is located at the center of the camera picture.
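As an illustration, the reduced search area described above can be sketched in Python (the frame size in the example and the integer arithmetic are illustrative assumptions, not from the patent):

```python
def next_search_region(W, H):
    """Reduced face-search window for the next input frame: w = W/5,
    h = H/5, centered on the camera picture center, as in the text."""
    w, h = W // 5, H // 5
    cx, cy = W // 2, H // 2
    # (left, top, right, bottom) of the search rectangle
    return (cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)
```

For a 1920x1080 frame this yields a 384x216 window centered at (960, 540), which is the region scanned before falling back to a full-frame search.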
The method for adaptive tracking and adjustment of camera pan-tilt steering as described above further provides that
the angular velocity of the camera pan-tilt's rotation is within a set range, wherein
a nonlinear coordinate mapping between the movement of the face frame and the rotation of the camera pan-tilt is used to prevent the pan-tilt from subsequently jittering back and forth.
The camera pan-tilt steering adaptive tracking adjustment method further includes the following specific step: the offset vector of the face frame is processed with a Kalman filter to generate a smooth motion trajectory, wherein
the Kalman filter uses a first-order linear model, i.e. the predicted value is the value output by the filter at the previous instant;
the motion trajectory of the face is modeled according to state-space theory, with the state equation and observation equation of the system being, respectively:
x(k+1)=Fx(k)+v(k)
y(k+1)=Hx(k)+w(k)
where F is the state-transition matrix, H is the observation matrix, x(k) and y(k) are respectively the state vector and the observation vector, and v(k) and w(k) are random disturbances (process and observation noise). Because the invention uses a first-order motion model with no external input, i.e. the motion of the face is approximated as uniform linear motion, and the observation equation involves no unit conversion, the transition matrix F and the observation matrix H are both identity matrices:
F = [1 0; 0 1]
H = [1 0; 0 1]
the observation steps of the Kalman filter are as follows:
x(k+1)=Fx(k)+v(k)
the covariance matrix is:
P(k+1)=FP(k)FT+Q
updating Kalman gain:
K=HP(k)HT(HP(k)HT+R)-1
and (3) updating the covariance matrix:
Figure BDA0002865493040000041
updating the output value of the filter:
Figure BDA0002865493040000042
according to the self-adaptive tracking and adjusting method for the steering of the camera holder, the position of the face is further detected by adopting a Gaussian mixture skin color model.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention obtains an offset vector from the center coordinates of the picture captured by the camera and the center coordinates of the face frame, and adaptively adjusts the camera according to this offset vector, so that the camera always follows the movement of the face.
2. The invention smooths the offset vector by filtering, so that the camera pan-tilt rotates smoothly according to the position of the face.
3. The invention provides two handling methods for face-tracking failure.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a camera pan-tilt steering adaptive tracking adjustment method.
Fig. 2 is a flow chart of a method according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example (b):
it should be noted that the terms "comprises" and "comprising," and any variations thereof, of embodiments of the present invention are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
An adaptive tracking and adjustment method for camera pan-tilt steering uses adaptive control to make the camera pan-tilt track a face and rotate automatically, so that the center of the picture is always aimed at the face.
To better describe an application scenario of the embodiment of the present application, fig. 1 discloses a schematic flow chart of a camera pan-tilt steering adaptive tracking adjustment method. As shown in fig. 1, the method may include the steps of:
S101: acquiring video stream data;
S102: acquiring the first frame of image data of the video stream, recognizing a face in the first frame, and matching the face against the stored face templates in a face image library; if the match succeeds, continuing to acquire video stream data;
S103: recognizing the face frame of the face in each continuously acquired input frame of the video stream, and taking the difference between the coordinates of the face-frame center and the coordinates of the center of the image captured by the camera to obtain the offset vector between the face and the camera image center;
S104: smoothing the offset vector, performing Gaussian fusion of the face-detection result and the per-frame prediction, and extracting the most likely position of the face;
S105: when the offset vector exceeds the set protection area of the camera picture, instructing the camera pan-tilt to rotate so that the face is positioned at the center of the camera picture.
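The offset-vector and protection-area logic of these steps can be sketched as follows (a minimal Python illustration; the guard-area dimensions are assumed values, not specified by the patent):

```python
def pan_tilt_offset(face_cx, face_cy, W, H, guard_w=200, guard_h=200):
    """Offset vector of the face-frame center from the picture center.
    Returns None while the face stays inside the protection (guard)
    area, so no rotate command is issued; guard sizes are assumptions."""
    dx = face_cx - W / 2
    dy = face_cy - H / 2
    if abs(dx) <= guard_w / 2 and abs(dy) <= guard_h / 2:
        return None          # inside the protection area: no command
    return (dx, dy)          # offset handed to the pan-tilt controller
```

A face sitting at the picture center produces no command, while a face near the frame edge yields an offset vector that the controller converts into a rotation.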
As an alternative implementation, in some embodiments, the face templates of the face image library comprise frontal and profile views of the face, and a reference structure template is also generated from the facial-feature information. Specifically, the face to be tracked by the camera is registered in a database and the corresponding face structure is extracted, so that the subsequent tracking algorithm and pan-tilt control algorithm can operate on it.
As an alternative implementation, in some embodiments, when no face appears in the first frame of image data:
if no face is recognized in the first frame, the camera pan-tilt is instructed to rotate in the horizontal plane; each time it rotates by a set angle, one frame of image data is captured and face recognition is performed on it. During one full revolution, if a face is recognized it is matched against the face templates stored in the face image library; if no face is recognized, a standby state is entered in which input frames of the video stream are continuously acquired and face recognition is performed on each frame until it succeeds. Specifically, the system is initialized with the first frame of the input video stream, which is checked for a registered face; if one is present, the subsequent steps proceed. If not, the controller sends a horizontal rotation instruction to the pan-tilt, and face detection and recognition are performed once every 10 degrees of rotation. If the registered face still cannot be detected and recognized after a full revolution, the pan-tilt enters a standby state and does not rotate until the registered face appears in the current picture; if the face is detected and recognized during rotation, the subsequent steps proceed.
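The initialization sweep described above can be sketched as follows (Python; `rotate`, `capture` and `recognize` are hypothetical callbacks standing in for the pan-tilt servo, the camera, and the detector plus template matcher):

```python
def scan_for_face(rotate, capture, recognize, step_deg=10):
    """Sweep the pan-tilt through one horizontal revolution, checking one
    frame every step_deg degrees. Returns True as soon as a registered
    face is recognized, or None after a full circle to signal standby."""
    for _ in range(360 // step_deg):
        rotate(step_deg)                 # turn by the set angle
        if recognize(capture()):         # detection + template matching
            return True
    return None                          # full circle with no face: standby
```

In the standby state the controller would then stop rotating and keep running recognition on incoming frames until the registered face reappears.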
As an alternative implementation, in some embodiments, when the face is occluded by an obstacle: input frames of the video stream are continuously acquired and a face search is performed on each input frame of image data. If no face can be found, no control signal is sent to the camera pan-tilt; if a face is found and successfully recognized, a corresponding motion vector is generated to adjust the pan-tilt so that the center of the camera picture is aimed at the face.
As an alternative implementation, in some embodiments, when the face is not facing the camera: input frames of the video stream are continuously acquired and face skin-color detection is performed on each input frame of image data. Facial-feature information is detected within the skin-color region and matched against the corresponding reference structure template of the registered face in the face image library. If the match succeeds, the center coordinates of the skin-color region are used as the face-frame center coordinates to generate the corresponding motion vector; if the facial features cannot be detected or the match fails, no action is taken.
Specifically, a face-tracking failure falls into one of two cases. In the first case, the face is not facing the camera. Because the displacement of the face between two adjacent frames is small, a small region is selected around the face-frame center coordinates of the previous frame and face skin-color detection is run within it. If skin color is found, facial structures in the skin-color region (ears, nose, and so on) are extracted and matched against the corresponding structure template of the face registered in the database. If the detected structures match the template, the center coordinates of the skin-color region are taken as the face-frame center, the corresponding motion vector is computed and filtered, and the main controller sends a control command to the pan-tilt to perform the corresponding rotation.
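The small search region used by this fallback can be sketched as follows (Python; the margin size is an illustrative assumption, not a value from the patent):

```python
def fallback_search_window(prev_cx, prev_cy, W, H, margin=60):
    """Small region around the previous frame's face-frame center, in
    which skin-color detection is run when tracking fails because the
    face has turned away. The margin value is assumed for illustration."""
    return (max(0, prev_cx - margin), max(0, prev_cy - margin),
            min(W, prev_cx + margin), min(H, prev_cy + margin))
```

Clamping to the frame bounds matters near the image edges, where the previous face center may sit closer to the border than the margin.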
As an optional implementation, in some embodiments the rotation range of the camera pan-tilt is within a set range, where the first control instruction comprises a PID control algorithm for the camera pan-tilt. The PID control algorithm keeps the face at the center of the picture captured by the camera: once the face is tracked, the orientation of the pan-tilt is adjusted so that the picture center is always aimed at the face, and the face search area is narrowed for the next input frame. If the width of the video image is W and its height is H, the width w and height h of the next frame's search area are:
w=W/5
h=H/5
The center of the search area is located at the center of the camera picture. Specifically, if the offset vector lies within the preset protection area, the controller sends no control instruction to the pan-tilt; if it exceeds the protection area, the controller sends a control instruction that drives the camera pan-tilt to rotate until the face falls on the center of the picture. In addition, to aim the pan-tilt at the face as quickly as possible, its rotation speed can be determined by a fuzzy-control, table-lookup method: when the motion vector is large, the output of the pan-tilt servo is increased accordingly, and when it is small, the output is reduced accordingly.
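The table-lookup speed mapping mentioned above might look like this (a Python sketch; the breakpoints and output levels are illustrative assumptions, not values from the patent):

```python
# Table-lookup ("fuzzy") mapping from offset magnitude (pixels) to a
# normalized servo output: larger offsets drive the gimbal faster.
SPEED_TABLE = [(50, 0.0), (150, 0.2), (300, 0.5), (600, 0.8)]

def servo_output(offset_mag):
    """Piecewise-constant speed lookup; the first bin acts as the
    protection area where no motion is commanded."""
    for threshold, speed in SPEED_TABLE:
        if offset_mag <= threshold:
            return speed
    return 1.0  # saturate at full output for very large offsets
```

A coarse table like this approximates fuzzy control cheaply on embedded hardware: the controller only compares and indexes rather than evaluating membership functions at runtime.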
As an optional implementation, in some embodiments the angular velocity of the camera pan-tilt's rotation is within a set range, where a nonlinear coordinate mapping between the movement of the face frame and the rotation of the pan-tilt prevents the pan-tilt from subsequently jittering back and forth. Specifically, a protection area is set manually near the center of the camera picture; after the offset vector between the face and the picture is obtained, the pan-tilt is controlled only when the offset vector exceeds this preset protection area. A control instruction is then sent to the pan-tilt's servo to rotate it. The pan-tilt can only rotate left-right and up-down about its own axes, and the nonlinear mapping between face-frame movement and pan-tilt rotation constrains the angular velocity to a certain range, preventing subsequent back-and-forth jitter.
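A minimal sketch of the angular-velocity constraint (Python; the dead-zone and rate limits are assumed values, not from the patent):

```python
def clamp_angular_velocity(raw, max_rate=30.0, dead_zone=0.5):
    """Constrain a requested rotation rate (deg/s): a small dead zone
    suppresses back-and-forth jitter near the picture center, and the
    rate is clamped to a set range. Both limits are assumptions."""
    if abs(raw) < dead_zone:
        return 0.0
    return max(-max_rate, min(max_rate, raw))
```

The dead zone and the saturation limit together implement the "set range" constraint: tiny corrections are ignored entirely, and large ones cannot command a rotation faster than the servo should move.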
As an optional implementation, in some embodiments, the Kalman filtering is specifically as follows: the offset vector of the face frame is processed with a Kalman filter to generate a smooth motion trajectory, wherein
the Kalman filter uses a first-order linear model, i.e. the predicted value is the value output by the filter at the previous instant;
the motion trajectory of the face is modeled according to state-space theory, with the state equation and observation equation of the system being, respectively:
x(k+1)=Fx(k)+v(k)
y(k+1)=Hx(k)+w(k)
where F is the state-transition matrix, H is the observation matrix, x(k) and y(k) are respectively the state vector and the observation vector, and v(k) and w(k) are random disturbances (process and observation noise). Because the invention uses a first-order motion model with no external input, i.e. the motion of the face is approximated as uniform linear motion, and the observation equation involves no unit conversion, the transition matrix F and the observation matrix H are both identity matrices:
F = [1 0; 0 1]
H = [1 0; 0 1]
The Kalman filter then proceeds as follows. Prediction step:
x̂(k+1|k) = F x̂(k)
Predicted covariance:
P(k+1|k) = F P(k) F^T + Q
Kalman gain update:
K = P(k+1|k) H^T (H P(k+1|k) H^T + R)^(-1)
Covariance update:
P(k+1) = (I - K H) P(k+1|k)
Filter output update:
x̂(k+1) = x̂(k+1|k) + K (y(k+1) - H x̂(k+1|k))
as an alternative implementation, in some embodiments, the location of the face is detected using a gaussian mixture skin color model.
Referring to fig. 2, the method may be implemented by:
step 1: firstly, registering a face to be tracked by a camera in a database, and extracting a corresponding face structure.
Step 2: initializing the system by using the first frame of the input video stream, detecting and identifying whether the first frame has a registered face, and if so, carrying out the subsequent steps. If not, the controller sends a horizontal rotation instruction to the holder, the face detection and identification are carried out once every 10 degrees of rotation, if the holder still cannot detect and identify the registered face after rotating for a circle, the holder enters a standby state, and no rotation is carried out until the registered face appears on the current shooting picture; and if the face is detected and recognized in the rotating process, performing the subsequent steps.
In step 3, the next input frame of the video stream is acquired.
In step 4, face detection and recognition are performed on the acquired input frame; if they succeed, skip to step 6, and if they fail, jump to the face-tracking exception-handling module for subsequent processing.
In step 5, a face-tracking failure falls into one of two cases. In the first case, the face is not facing the camera. Because the displacement of the face between two adjacent frames is small, a small region is selected around the face-frame center coordinates of the previous frame and face skin-color detection is run within it. If skin color is found, facial structures in the skin-color region (ears, nose, and so on) are extracted and matched against the corresponding structure template of the face registered in the database. If the detected structures match the template, the center coordinates of the skin-color region are taken as the face-frame center, the corresponding motion vector is computed and filtered, and the main controller sends a control command to the pan-tilt to perform the corresponding rotation.
In step 6, because face detection and recognition succeeded in step 4, the offset vector of the face relative to the center of the camera picture is computed as the difference between the face-frame center coordinates and the picture center coordinates, and the offset vector is smoothed with a Kalman filter.
The Kalman filter uses a first-order linear uniform-motion model, i.e. the predicted value is the value output by the filter at the previous instant.
The motion trajectory of the face is modeled according to state-space theory, with the state equation and observation equation of the system being, respectively:
x(k+1)=Fx(k)+v(k)
y(k+1)=Hx(k)+w(k)
where F is the state-transition matrix, H is the observation matrix, x(k) and y(k) are respectively the state vector and the observation vector, and v(k) and w(k) are random disturbances (process and observation noise). Because the invention uses a first-order motion model with no external input, i.e. the motion of the face is approximated as uniform linear motion, and the observation equation involves no unit conversion, the transition matrix F and the observation matrix H are both identity matrices:
F = [1 0; 0 1]
H = [1 0; 0 1]
The Kalman filter then proceeds as follows. Prediction step:
x̂(k+1|k) = F x̂(k)
Predicted covariance:
P(k+1|k) = F P(k) F^T + Q
Kalman gain update:
K = P(k+1|k) H^T (H P(k+1|k) H^T + R)^(-1)
Covariance update:
P(k+1) = (I - K H) P(k+1|k)
Filter output update:
x̂(k+1) = x̂(k+1|k) + K (y(k+1) - H x̂(k+1|k))
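The first-order filter defined by these equations can be sketched per axis as follows (Python, with F = H = 1 in the scalar case; the Q and R values are illustrative tuning assumptions, not from the patent):

```python
class FirstOrderKalman:
    """First-order (constant-position) Kalman filter for one axis of the
    face-frame offset, following the equations above with F = H = 1."""

    def __init__(self, q=0.01, r=1.0):
        self.x = 0.0   # filtered offset estimate
        self.p = 1.0   # estimate covariance
        self.q = q     # process-noise covariance Q (assumed tuning)
        self.r = r     # observation-noise covariance R (assumed tuning)

    def update(self, y):
        # Prediction: x(k+1|k) = F x(k); P(k+1|k) = F P F^T + Q, F = 1
        x_pred = self.x
        p_pred = self.p + self.q
        # Gain: K = P H^T (H P H^T + R)^-1, H = 1
        k = p_pred / (p_pred + self.r)
        # Output and covariance updates
        self.x = x_pred + k * (y - x_pred)
        self.p = (1.0 - k) * p_pred
        return self.x
```

One filter instance per axis (horizontal and vertical offset) is enough for this model; feeding the raw per-frame offsets through `update` yields the smoothed trajectory that the pan-tilt controller acts on.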
In this step, if the offset vector lies within the preset protection area, the controller sends no control instruction to the pan-tilt; if it exceeds the protection area, the controller sends a control instruction that drives the camera pan-tilt to rotate until the face falls on the center of the picture. In addition, to aim the pan-tilt at the face as quickly as possible, its rotation speed can be determined by a fuzzy-control, table-lookup method: when the motion vector is large, the output of the pan-tilt servo is increased accordingly, and when it is small, the output is reduced accordingly.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above embodiments are only for illustrating the technical concept and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention accordingly, and not to limit the protection scope of the present invention accordingly. All equivalent changes or modifications made in accordance with the spirit of the present disclosure are intended to be covered by the scope of the present disclosure.

Claims (9)

1. An adaptive tracking and adjustment method for camera pan-tilt steering, for use on a server, a control signal of the server being connected to a camera pan-tilt and a control signal of the camera pan-tilt being connected to a camera, characterized in that the method comprises:
acquiring video stream data;
acquiring first-frame image data of the video stream data, recognizing a face in the first-frame image data, and matching the face against the face templates stored in a face image library; if the matching succeeds, continuing to acquire the video stream data;
recognizing the face frame of the face in each subsequently acquired input frame of the video stream data, and subtracting the coordinates of the center of the face frame from the coordinates of the center of the image acquired by the camera to obtain an offset vector between the face and the camera image center;
performing smoothing filtering on the offset vector, performing Gaussian fusion of the detection result of the face detection algorithm and the prediction result for the frame, and extracting the position where the face is most likely to appear;
and when the offset vector exceeds the set protection zone of the camera picture, instructing the camera pan-tilt to rotate so that the face is positioned at the center of the camera picture.
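The offset-vector and protection-zone steps of claim 1 can be sketched as follows. This is an illustrative sketch only: the function names, the box format, the sign convention of the offset, and the zone size of one fifth of each frame dimension are all assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claim-1 offset-vector step: the face-frame
# centre is compared with the image centre, and the pan-tilt is commanded
# only when the offset leaves a central "protection" zone.

def offset_vector(face_box, frame_w, frame_h):
    """Offset of the face from the picture centre (image centre minus face-frame centre)."""
    x, y, w, h = face_box                      # top-left corner plus width/height
    face_cx, face_cy = x + w / 2, y + h / 2
    return frame_w / 2 - face_cx, frame_h / 2 - face_cy

def needs_adjustment(offset, frame_w, frame_h, zone_frac=0.2):
    """True when the offset leaves the central protection zone (assumed 1/5 of frame)."""
    dx, dy = offset
    return abs(dx) > frame_w * zone_frac / 2 or abs(dy) > frame_h * zone_frac / 2
```

A face box centred horizontally but 40 px below the image centre would yield an offset of (0, -40), which stays inside the assumed zone, so no pan-tilt command would be issued.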
2. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, wherein the face templates in the face image library comprise frontal-face and profile-face templates, and a reference structure template is additionally generated from the facial-feature information.
3. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, wherein,
when no face appears in the first frame of image data,
the camera pan-tilt is instructed to rotate in the horizontal plane, wherein,
each time the camera pan-tilt rotates by a set angle, one frame of image data is acquired and face recognition is performed on it;
during one full rotation:
if a face is recognized, it is matched against the face templates stored in the face image library;
and if no face is recognized, a standby state is entered; in the standby state, input frames of the video stream data continue to be acquired and face recognition is performed on each frame of image data until a face is successfully recognized.
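The claim-3 scan behaviour (step the pan-tilt through a full horizontal circle, grabbing one frame per step) can be sketched with the hardware and detector stubbed out. The function names, the callback interface, and the 30-degree step are illustrative assumptions.

```python
# Illustrative sketch of the claim-3 scan loop: rotate one full circle in
# fixed steps, acquiring one frame per step and running face detection on it.

def scan_for_face(grab_frame, detect_face, rotate_pan, step_deg=30):
    """Rotate one full circle; return the heading (deg) where a face was found, else None."""
    for angle in range(0, 360, step_deg):
        rotate_pan(step_deg)          # advance the pan-tilt by one step
        if detect_face(grab_frame()):
            return angle + step_deg   # face found at this heading
    return None                       # full circle done: caller enters standby
```

Returning `None` maps onto the claim's standby state, where per-frame recognition continues without further pan commands.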
4. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, wherein, when the face is occluded by an obstacle:
input frames of the video stream data continue to be acquired, and a face search is performed on each input frame of image data;
if the face cannot be found, no control signal is sent to the camera pan-tilt;
and if the face is found and successfully recognized, a corresponding motion vector is generated for adjusting the camera pan-tilt so that the center of the camera picture is aligned with the face.
5. The camera pan-tilt steering adaptive tracking adjustment method according to claim 2, wherein, when the face is not directly facing the camera:
input frames of the video stream data continue to be acquired, and face skin-color detection is performed on each input frame of image data;
facial-feature information is detected within the face skin-color region and matched against the corresponding reference structure template of the face registered in the face image library;
if the matching succeeds, the center coordinates of the face skin-color region are used as the center coordinates of the face frame to generate the corresponding motion vector; if the facial-feature information cannot be detected or the matching fails, no processing is performed.
6. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, wherein
the rotation range of the camera pan-tilt is within a set range, and wherein
the first control instruction comprises a PID control algorithm for the camera pan-tilt, the PID control algorithm being used to keep the face at the center of the picture acquired by the camera. After the face is tracked, the orientation of the camera pan-tilt is adjusted so that the center of the camera picture is always aligned with the face, and the face search area for the next input frame is reduced. With the video image width W and height H, the width w and height h of the search area for the next frame are:
w=W/5
h=H/5
the center of the search area is located at the center of the camera picture.
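The claim-6 search-window reduction translates directly into code. The function name, the (left, top, width, height) return convention, and the use of integer division are illustrative choices, not specified by the patent.

```python
# Sketch of the claim-6 reduced search area: a (W/5) x (H/5) window
# centred on the camera picture, searched instead of the full frame
# once the face is being tracked.

def next_search_area(W, H):
    """Return (left, top, width, height) of the reduced search window."""
    w, h = W // 5, H // 5       # one fifth of each frame dimension, per the claim
    left = (W - w) // 2         # centre the window horizontally...
    top = (H - h) // 2          # ...and vertically, as the claim requires
    return left, top, w, h
```

For a 1920x1080 frame this gives a 384x216 window, cutting the per-frame detection area to 1/25 of the full image.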
7. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, wherein
the angular velocity of rotation of the camera pan-tilt is within a set range, and wherein
a nonlinear coordinate mapping between the movement of the face frame and the rotation of the camera pan-tilt is used to prevent the camera pan-tilt from subsequently jittering back and forth in reciprocating rotation.
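Claim 7 only states that the mapping is nonlinear and suppresses reciprocating jitter; it does not give the curve. One common choice for this kind of anti-hunting control is a dead zone plus saturation, sketched below. The specific function, dead-zone width, gain, and rate limit are all illustrative assumptions.

```python
# Hypothetical nonlinear offset-to-rate mapping for claim 7: offsets inside
# a dead zone produce no motion (so the pan-tilt cannot oscillate around
# centre), and the commanded angular rate is clamped to a maximum.

def offset_to_rate(offset_px, dead_zone=20, max_rate=30.0, gain=0.25):
    """Map a pixel offset to a pan angular rate (deg/s) with dead zone and clamp."""
    if abs(offset_px) <= dead_zone:
        return 0.0                      # small offsets: no motion, no jitter
    rate = gain * (abs(offset_px) - dead_zone)
    rate = min(rate, max_rate)          # keep the angular speed in its set range
    return rate if offset_px > 0 else -rate
```

The dead zone plays the same role as the "protection zone" of claim 1, while the clamp enforces the set angular-velocity range of claim 7.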
8. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, wherein the Kalman filtering is specifically: processing the offset vector of the face frame with a Kalman filter to generate a smooth motion trajectory, wherein
the Kalman filter adopts a first-order linear model, i.e., the predicted value is the value output by the filter at the previous moment;
the motion trajectory of the face is modeled according to state-space theory, the state equation and the observation equation of the system being respectively:
x(k+1)=Fx(k)+v(k)
y(k)=Hx(k)+w(k)
where F is the state transition matrix, H is the observation matrix, x(k) and y(k) are the state vector and the observation vector respectively, and v(k) and w(k) are random disturbances; since the invention adopts a first-order motion model there is no external input (i.e., the motion of the face is approximated as uniform linear motion), and the observation equation involves no unit conversion, so the transition matrix F and the observation matrix H are:
F = H = I (the identity matrix)
the observation steps of the Kalman filter are as follows:
x(k+1)=Fx(k)+v(k)
the covariance matrix is:
P(k+1)=FP(k)FT+Q
updating Kalman gain:
K=HP(k)HT(HP(k)HT+R)-1
and (3) updating the covariance matrix:
Figure FDA0002865493030000033
updating the output value of the filter:
Figure FDA0002865493030000034
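With F = H = I, the claim-8 filter decouples into an identical scalar recursion per axis of the offset vector. The sketch below shows that scalar form; the class name and the noise covariances q and r are illustrative tuning assumptions, not values from the patent.

```python
# Scalar sketch of the claim-8 Kalman filter with F = H = 1: the prediction
# is simply the previous filter output (first-order linear model), so each
# update blends the new measured offset with the running estimate.

class ScalarKalman:
    def __init__(self, q=1e-2, r=1.0):
        self.x = 0.0              # filtered offset estimate
        self.p = 1.0              # estimate covariance
        self.q, self.r = q, r     # process / measurement noise (assumed tuning)

    def update(self, z):
        # Predict: x(k+1|k) = x(k),  P(k+1|k) = P(k) + Q   (F = 1)
        p_pred = self.p + self.q
        # Kalman gain: K = P H^T (H P H^T + R)^-1 with H = 1
        k = p_pred / (p_pred + self.r)
        # Update output and covariance: P = (1 - K H) P
        self.x = self.x + k * (z - self.x)
        self.p = (1.0 - k) * p_pred
        return self.x
```

Run on each component of the face-frame offset, the filter converges toward a steady measurement gradually, which is what yields the smooth motion trajectory the claim describes.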
9. The camera pan-tilt steering adaptive tracking adjustment method according to claim 1, characterized in that the position of the face is detected using a Gaussian mixture skin-color model.
CN202011579326.6A 2020-12-28 2020-12-28 Camera holder steering self-adaptive tracking adjustment method Pending CN112653844A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011579326.6A CN112653844A (en) 2020-12-28 2020-12-28 Camera holder steering self-adaptive tracking adjustment method

Publications (1)

Publication Number Publication Date
CN112653844A true CN112653844A (en) 2021-04-13

Family

ID=75363314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011579326.6A Pending CN112653844A (en) 2020-12-28 2020-12-28 Camera holder steering self-adaptive tracking adjustment method

Country Status (1)

Country Link
CN (1) CN112653844A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722698A (en) * 2012-05-17 2012-10-10 上海中原电子技术工程有限公司 Method and system for detecting and tracking multi-pose face
JP2013148754A (en) * 2012-01-20 2013-08-01 Fujitsu General Ltd Lens fixing mechanism and camera device including the same
CN104013414A (en) * 2014-04-30 2014-09-03 南京车锐信息科技有限公司 Driver fatigue detecting system based on smart mobile phone
CN105898136A (en) * 2015-11-17 2016-08-24 乐视致新电子科技(天津)有限公司 Camera angle adjustment method, system and television
CN109829436A (en) * 2019-02-02 2019-05-31 福州大学 Multi-face tracking method based on depth appearance characteristics and self-adaptive aggregation network
CN109968351A (en) * 2017-12-28 2019-07-05 深圳市优必选科技有限公司 Robot, control method thereof and device with storage function
CN110099254A (en) * 2019-05-21 2019-08-06 浙江师范大学 A kind of driver's face tracking device and method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112668A (en) * 2021-04-15 2021-07-13 新疆爱华盈通信息技术有限公司 Face recognition-based holder tracking method, holder and entrance guard recognition machine
CN113364985B (en) * 2021-06-11 2022-07-29 广州逅艺文化科技有限公司 Live broadcast lens tracking method, device and medium
CN113364985A (en) * 2021-06-11 2021-09-07 广州逅艺文化科技有限公司 Live broadcast lens tracking method, device and medium
CN113658354A (en) * 2021-08-27 2021-11-16 深圳市奥赛克科技有限公司 Take control system's vehicle event data recorder
CN114157802B (en) * 2021-10-22 2023-07-18 北京注色影视科技有限公司 Camera supporting device and moving object tracking method thereof
CN114157802A (en) * 2021-10-22 2022-03-08 北京注色影视科技有限公司 Camera supporting device and moving target tracking method thereof
CN114157813A (en) * 2022-02-07 2022-03-08 深圳市慧为智能科技股份有限公司 Electronic scale camera motion control method and device, control terminal and storage medium
CN114567726A (en) * 2022-02-25 2022-05-31 苏州安智汽车零部件有限公司 Human-eye-like self-adaptive shake-eliminating front-view camera
CN114845054A (en) * 2022-04-25 2022-08-02 南京奥拓电子科技有限公司 Micro-motion camera control method and device
CN114845054B (en) * 2022-04-25 2024-04-19 南京奥拓电子科技有限公司 Micro camera control method and device
CN115100676A (en) * 2022-05-27 2022-09-23 中国科学院半导体研究所 Writing posture tracking method and device, electronic equipment and storage medium
CN116896658A (en) * 2023-09-11 2023-10-17 厦门视诚科技有限公司 Camera picture switching method in live broadcast
CN116896658B (en) * 2023-09-11 2023-12-12 厦门视诚科技有限公司 Camera picture switching method in live broadcast
CN117241134A (en) * 2023-11-15 2023-12-15 杭州海康威视数字技术股份有限公司 Shooting mode switching method for camera
CN117241134B (en) * 2023-11-15 2024-03-08 杭州海康威视数字技术股份有限公司 Shooting mode switching method for camera

Similar Documents

Publication Publication Date Title
CN112653844A (en) Camera holder steering self-adaptive tracking adjustment method
CN112164015B (en) Monocular vision autonomous inspection image acquisition method and device and power inspection unmanned aerial vehicle
CN109922250B (en) Target object snapshot method and device and video monitoring equipment
CN111862296B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
US9396399B1 (en) Unusual event detection in wide-angle video (based on moving object trajectories)
US8396262B2 (en) Apparatus and method for face recognition and computer program
WO2017020856A1 (en) Photographing device and method using drone to automatically track and photograph moving object
CN111272148A (en) Unmanned aerial vehicle autonomous inspection self-adaptive imaging quality optimization method for power transmission line
CN105979147A (en) Intelligent shooting method of unmanned aerial vehicle
CN106910206B (en) Target tracking method and device
CN106973221B (en) Unmanned aerial vehicle camera shooting method and system based on aesthetic evaluation
WO2003098922A1 (en) An imaging system and method for tracking the motion of an object
JP2017076288A (en) Information processor, information processing method and program
CN112640419B (en) Following method, movable platform, device and storage medium
WO2022041014A1 (en) Gimbal and control method and device therefor, photographing apparatus, system, and storage medium thereof
JP4578864B2 (en) Automatic tracking device and automatic tracking method
WO2020257999A1 (en) Method, apparatus and platform for image processing, and storage medium
Dinh et al. Real time tracking using an active pan-tilt-zoom network camera
CN114500839B (en) Visual cradle head control method and system based on attention tracking mechanism
CN111414012A (en) Region retrieval and holder correction method for inspection robot
CN115019241B (en) Pedestrian identification and tracking method and device, readable storage medium and equipment
WO2023159611A1 (en) Image photographing method and device, and movable platform
CN115988309A (en) Photographing method and device, robot and readable storage medium
CN114594770B (en) Inspection method for inspection robot without stopping
CN113438399B (en) Target guidance system, method for unmanned aerial vehicle, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210413