CN114650453B - Target tracking method, device, equipment and medium applied to classroom recording and broadcasting - Google Patents

Target tracking method, device, equipment and medium applied to classroom recording and broadcasting

Info

Publication number
CN114650453B
CN114650453B (application CN202210344137.3A)
Authority
CN
China
Prior art keywords
matting
image
area
teacher
student
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210344137.3A
Other languages
Chinese (zh)
Other versions
CN114650453A (en)
Inventor
张亚娟
李胜怀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zonekey Modern Technology Co ltd
Original Assignee
Beijing Zonekey Modern Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zonekey Modern Technology Co ltd filed Critical Beijing Zonekey Modern Technology Co ltd
Priority to CN202210344137.3A priority Critical patent/CN114650453B/en
Publication of CN114650453A publication Critical patent/CN114650453A/en
Application granted granted Critical
Publication of CN114650453B publication Critical patent/CN114650453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application relates to a target tracking method, device, equipment and medium applied to classroom recording and broadcasting, in the technical field of modern teaching systems. The method comprises: acquiring a teacher area panoramic video and a student area panoramic video; performing uniform-speed tracking matting on the moving target in the teacher area panoramic video according to the coordinates of the moving target, to obtain a teacher area close-up video; and performing variable-speed tracking matting on the moving target in the student area panoramic video according to the coordinates of the moving target, to obtain a student area close-up video. The application has the effect that the recorded teacher close-up video picture and student close-up video picture are more stable.

Description

Target tracking method, device, equipment and medium applied to classroom recording and broadcasting
Technical Field
The application relates to the technical field of modern teaching systems, in particular to a target tracking method, a target tracking device, target tracking equipment and target tracking media applied to classroom recording and broadcasting.
Background
At present, classroom recording and broadcasting is widely applied in modern teaching systems and has become necessary equipment for evaluating the teaching level of teachers and reinforcing the learning content of students. In order to monitor and record teaching activities from multiple angles in an all-round way, traditional classroom recording and broadcasting devices mostly adopt multi-camera shooting: a single classroom usually needs to be provided with several cameras, such as a teacher panorama camera, a teacher close-up camera, a student panorama camera and a student close-up camera.
A newer classroom recording and broadcasting method has appeared, in which two cameras respectively shoot a teacher panorama and a student panorama; the teacher panorama is then matted according to the position of the moving target to obtain teacher close-up pictures, and the matted teacher close-up pictures are fused into a teacher close-up video; similarly, the student panorama is matted according to the position of the moving target to obtain student close-up pictures, and the matted student close-up pictures are fused into a student close-up video. However, because the moving target is moving, its position basically changes in every frame of the close-up video, so the teacher close-up video picture and the student close-up video picture are unstable, which affects the user's viewing.
Disclosure of Invention
In order to record teacher close-up videos and student close-up videos with stable pictures, the application provides a target tracking method, device, equipment and medium applied to classroom recording and broadcasting.
In a first aspect, the present application provides a target tracking method applied to classroom recording and broadcasting, which adopts the following technical scheme:
a target tracking method applied to classroom recording and broadcasting includes:
acquiring a teacher area panoramic video and a student area panoramic video;
according to the coordinates of the moving targets in the panoramic video of the teacher area, carrying out uniform-speed tracking and matting on the moving targets in the panoramic video of the teacher area to obtain a close-up video of the teacher area;
and carrying out variable speed tracking matting on the moving target in the panoramic video of the student zone according to the coordinates of the moving target in the panoramic video of the student zone to obtain a close-up video of the student zone.
By adopting the above technical scheme, since a teacher's movement is close to uniform speed, uniform-speed tracking matting is performed on the moving target in the teacher area panoramic video, so that successive matted image frames have continuity and the teacher area close-up video picture is more stable. The application performs variable-speed tracking matting on the moving target in the student area panoramic video, so that the matting position moves along with the moving target: the matting position changes quickly when the moving target moves quickly, and changes slowly when the moving target moves slowly. The matting position thus stays as stable as possible while following the moving target, and the student area close-up video picture is more stable.
Preferably, the coordinates of the moving object in the panoramic video of the teacher area are first center coordinates; according to the coordinates of the moving object in the teacher area panoramic video, the moving object in the teacher area panoramic video is tracked at a constant speed to obtain a close-up video of the teacher area, which comprises the following steps:
judging whether a moving target exists in the teacher area image; the teacher area image is an image frame in the teacher area panoramic video;
if yes, acquiring the first center coordinate, and carrying out uniform-speed tracking and matting on the teacher area image according to the first center coordinate to obtain a first image;
if not, the teacher area image is scratched by a first preset method to obtain a second image;
and fusing the first image and the second image into a teacher area close-up video according to the time sequence of the teacher area panoramic video.
By adopting the above technical scheme, the moving target is matted when it exists, and the first preset method is used for matting when it does not, and the first images and second images obtained by matting are fused into the teacher area close-up video, so that the total duration of the teacher area close-up video is consistent with that of the teacher area panoramic video, both being the duration of one class. A user who only wants to watch the teacher teaching can therefore watch only the teacher area close-up video.
Preferably, the obtaining the first center coordinate, according to the first center coordinate, performs uniform velocity tracking matting on the teacher area image to obtain a first image, including:
acquiring a plurality of first center coordinates, and selecting a first current coordinate and a first source coordinate from the plurality of first center coordinates;
calculating the number of times of change of the matting position based on the first current coordinate and the first source coordinate;
taking the first source coordinate as an initial position of the matting and the first current coordinate as a final position of the matting, uniformly transforming the matting position from the initial position to the final position a plurality of times, wherein the number of transformations is the number of matting-position changes;
and according to the matting position, matting the teacher area image to obtain a first image.
By adopting the above technical scheme, in the process of the matting position following the moving target, the number of matting-position changes is reduced as much as possible according to the characteristic that a teacher moves at an approximately uniform speed, further improving the picture stability of the teacher area close-up video.
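The uniform-speed stepping described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the helper names and the step sizes `w1`, `h1` are assumptions, and the x and y step counts are assumed to agree.

```python
def change_count(org: float, new: float, step: float) -> int:
    """Solve n from the formula new = org + n * step."""
    return round((new - org) / step)

def uniform_track(src, cur, w1, h1):
    """List of matting positions from the first source coordinate `src`
    to the first current coordinate `cur`, moving a fixed (w1, h1) step
    at each change so the crop pans at constant speed."""
    n = change_count(src[0], cur[0], w1)
    positions = [(src[0] + i * w1, src[1] + i * h1) for i in range(n)]
    positions.append(cur)  # the final position is the current coordinate
    return positions
```

With src = (0, 0), cur = (128, 72) and steps (32, 18), n is 4 and the crop passes through four positions before landing on the current coordinate.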
Preferably, the coordinates of the moving object in the panoramic video of the student area are second center coordinates; and carrying out variable speed tracking matting on the moving target in the panoramic video of the student zone according to the coordinates of the moving target in the panoramic video of the student zone to obtain a close-up video of the student zone, wherein the variable speed tracking matting comprises the following steps:
judging whether a moving object exists in the student area image; the student area image is an image frame in the student area panoramic video;
if yes, acquiring the second center coordinates, and carrying out variable speed tracking matting on the student region image according to the second center coordinates to obtain a third image;
if not, the student region image is scratched by a second preset method to obtain a fourth image;
and fusing the third image and the fourth image into a student area close-up video according to the time sequence of the student area panoramic video.
By adopting the above technical scheme, the moving target is matted when it exists, and the second preset method is used for matting when it does not, and the third images and fourth images obtained by matting are fused into the student area close-up video, so that the total duration of the student area close-up video is consistent with that of the student area panoramic video, both being the duration of one class. A user who only wants to watch the students listening and speaking can therefore watch only the student area close-up video.
Preferably, the obtaining the second center coordinate, according to the second center coordinate, performs variable speed tracking matting on the student area image to obtain a third image, including:
acquiring a plurality of second center coordinates, and selecting a second current coordinate and a second source coordinate from the plurality of second center coordinates;
calculating a matting position change distance based on the second current coordinate and the second source coordinate;
the second source coordinate is used as the initial position of the matting and the second current coordinate as the final position of the matting; the matting position is transformed a preset number of times from the initial position to the final position, and the distance of each transformation is the matting-position change distance;
and according to the matting positions, matting the student region images to obtain a third image.
By adopting the above technical scheme, in the process of the matting position following the moving target, the number of matting-position changes is reduced as much as possible according to the characteristic that students move at variable speed, further improving the picture stability of the student area close-up video.
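The variable-speed counterpart can be sketched in the same style: the number of changes is preset and the per-change distance varies with how far the target moved. The function name and default step count are illustrative assumptions.

```python
def variable_track(src, cur, n_steps=4):
    """List of matting positions from `src` to `cur` using a preset
    number of changes; the per-change distance (the matting-position
    change distance) grows with how far the target moved, so a fast
    target yields a fast-panning crop and a slow target a slow pan."""
    dx = (cur[0] - src[0]) / n_steps
    dy = (cur[1] - src[1]) / n_steps
    return [(src[0] + i * dx, src[1] + i * dy) for i in range(n_steps + 1)]
```

A target that moved (100, 40) over the interval yields steps of (25, 10), while one that moved (8, 4) yields steps of (2, 1): same step count, different speed.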
In a second aspect, the present application provides a target tracking device applied to classroom recording and broadcasting, which adopts the following technical scheme:
a target tracking device applied to classroom recording and broadcasting comprises,
the video acquisition module is used for acquiring a teacher area panoramic video and a student area panoramic video;
the uniform-speed tracking and matting module is used for uniformly tracking and matting the moving target in the panoramic video of the teacher area according to the coordinates of the moving target in the panoramic video of the teacher area to obtain a close-up video of the teacher area; the method comprises the steps of,
and the variable speed tracking matting module is used for carrying out variable speed tracking matting on the moving target in the panoramic video of the student zone according to the coordinates of the moving target in the panoramic video of the student zone to obtain a close-up video of the student zone.
In a third aspect, the present application provides a computer device, which adopts the following technical scheme:
a computer device comprising a memory and a processor, the memory having stored thereon a computer program capable of being loaded by the processor and executing the object tracking method of any of the first aspects for use in classroom video recording.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer readable storage medium storing a computer program capable of being loaded by a processor and executing the object tracking method applied to class recording according to any one of the first aspects.
Drawings
Fig. 1 is a flow chart of a target tracking method applied to classroom recording and broadcasting according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a first detection region and a first detection shielding region provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a first frame image matting by using a first edge as a matting side according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a second detection region and a second detection shielding region provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of a conventional classroom recording and broadcasting system according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a classroom recording and broadcasting system according to an embodiment of the present application.
Fig. 7 is a block diagram of a target tracking device applied to classroom recording according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
The present embodiment provides a target tracking method applied to classroom recording and broadcasting, as shown in fig. 1, the main flow of the method is described as follows (steps S101 to S104):
step S101: and acquiring the panoramic video of the teacher area and the panoramic video of the student area.
The teacher area panoramic video is shot by a teacher machine (the camera shooting the teacher area panoramic video), which is generally installed at the back of the classroom with its lens aimed at the platform area; the student area panoramic video is shot by a student machine (the camera shooting the student area panoramic video), which is generally installed beside the blackboard at the front of the classroom with its lens aimed at the student area of the classroom.
Step S102: and carrying out uniform-speed tracking and matting on the moving target in the panoramic video of the teacher area according to the coordinates of the moving target in the panoramic video of the teacher area to obtain a close-up video of the teacher area.
In this embodiment, the frame difference method in the HiSilicon VDA algorithm is adopted to judge whether a moving target exists in the teacher area image, the teacher area image being an image frame in the teacher area panoramic video. If a moving target exists, the first center coordinate is acquired, and uniform-speed tracking matting is performed on the teacher area image according to the first center coordinate to obtain a first image; if not, the teacher area image is matted by a first preset method to obtain a second image. The first images and the second images are fused into the teacher area close-up video according to the time sequence of the teacher area panoramic video.
Judging whether a moving target exists in the teacher area image by the frame difference method in the HiSilicon VDA algorithm comprises the following steps:
A first coordinate set and a second coordinate set of the teacher area panoramic video are obtained; a first detection region is divided in the teacher area panoramic video based on the first coordinate set, and a first detection shielding region is divided based on the second coordinate set, both regions being set in the teacher area image. Motion detection is then performed, by the frame difference method in the HiSilicon VDA algorithm, on the part of the first detection region that does not coincide with the first detection shielding region.
For example, referring to fig. 2, the first detection region and the first detection shielding region are both rectangular, and the first coordinate set and the second coordinate set each consist of four corner coordinates, which are set manually. If the teacher moves, the moving state is presented in the first detection region in fig. 2. When detecting the first detection region, a certain area often needs to be shielded from detection: for example, if a slide show appears in the picture shot by the teacher machine, slide changes would interfere with the detection of the moving target and the subsequent calculation of the first center coordinate, so the first detection shielding region is defined to reduce this interference. The first center coordinate is the center coordinate of the moving target in the teacher area image.
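A minimal frame-difference check of this kind can be sketched with NumPy. The HiSilicon VDA algorithm itself runs in camera firmware, so the thresholds, rectangle format and function name here are illustrative assumptions, not its API.

```python
import numpy as np

def has_motion(prev, cur, detect_rect, shield_rect,
               diff_thresh=25, min_pixels=50):
    """Frame-difference motion test on two grayscale frames: pixels are
    counted only inside the detection rectangle and outside the shielding
    rectangle (e.g. a projected slide area), mirroring the first detection
    region and first detection shielding region described above."""
    diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
    mask = np.zeros(cur.shape, dtype=bool)
    x0, y0, x1, y1 = detect_rect
    mask[y0:y1, x0:x1] = True          # detection region
    sx0, sy0, sx1, sy1 = shield_rect
    mask[sy0:sy1, sx0:sx1] = False     # shielded (ignored) region
    return int(((diff > diff_thresh) & mask).sum()) >= min_pixels
```

A change inside the shielded rectangle (such as a slide flip) produces no detection, while the same change elsewhere in the detection region does.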
Obtaining the first center coordinate and performing uniform-speed tracking matting on the teacher area image according to the first center coordinate to obtain a first image comprises the following steps:
(1) And acquiring a plurality of first center coordinates, and selecting a first current coordinate and a first source coordinate from the plurality of first center coordinates.
Specifically, two first center coordinates separated by a fixed interval are selected as the first source coordinate and the first current coordinate respectively. For example, with an interval of 100 frames: if the same single moving target appears in all of the 100th to 200th frames, the first center coordinate of the moving target in the 100th frame image is taken as the first source coordinate, and its first center coordinate in the 200th frame image is taken as the first current coordinate. If exactly two moving targets appear throughout the 200th to 300th frames, then for the 200th frame image the average of the first center coordinates of the two moving targets is taken as the first source coordinate, and for the 300th frame image the average is likewise calculated and taken as the first current coordinate. Similarly, if more than two moving targets appear, the average of the first center coordinates of all moving targets is used as the first current coordinate or the first source coordinate. It should be noted that first current coordinates and first source coordinates correspond one to one: the first source coordinate calculated from the 100th frame image corresponds to the first current coordinate calculated from the 200th frame image, and the first source coordinate calculated from the 200th frame image corresponds to the first current coordinate calculated from the 300th frame image.
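The selection rule above reduces to taking the mean of all targets' center coordinates in the chosen frame. A sketch (the function name is assumed):

```python
def frame_center(centers):
    """First current / first source coordinate for one frame: the single
    target's center coordinate, or the average of all targets' center
    coordinates when several moving targets are present."""
    xs = [c[0] for c in centers]
    ys = [c[1] for c in centers]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```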
(2) And calculating the number of times of change of the matting position based on the first current coordinate and the first source coordinate.
The number of matting-position changes is calculated from the formula:

x_new = x_org + n * w_1

wherein n is the number of matting-position changes; x_org is the abscissa of the first source coordinate; x_new is the abscissa of the first current coordinate; and w_1 is the preset fixed horizontal step length of each matting-position change.

The corresponding formula for the ordinate is:

y_new = y_org + n * h_1

wherein y_org is the ordinate of the first source coordinate; y_new is the ordinate of the first current coordinate; and h_1 is the preset fixed vertical step width of each matting-position change.
(3) The first source coordinate is used as the initial position of the matting and the first current coordinate as the final position of the matting; the matting position is uniformly transformed a plurality of times from the initial position to the final position, the number of transformations being the number of matting-position changes.
Specifically, from the initial position to the final position, the matting position changes by w_1 in length and h_1 in width at each change, and changes n times in total; wherein w_1 and h_1 are fixed values and n is a variable, so the matting of the teacher area panoramic video is uniform-speed tracking matting.
The uniform-speed tracking matting of the teacher area panoramic video is illustrated as follows. Suppose the first source coordinate is (x_org, y_org), the first current coordinate is (x_new, y_new), and n is 4. For each frame image from the one whose coordinate is (x_org, y_org) to the one whose coordinate is (x_org + w_1, y_org + h_1) (including the image with coordinate (x_org, y_org), excluding the image with coordinate (x_org + w_1, y_org + h_1)), the matting position is (x_org, y_org). For each frame image from (x_org + w_1, y_org + h_1) to (x_org + 2*w_1, y_org + 2*h_1) (including the former, excluding the latter), the matting position is (x_org + w_1, y_org + h_1). For each frame image from (x_org + 2*w_1, y_org + 2*h_1) to (x_org + 3*w_1, y_org + 3*h_1) (including the former, excluding the latter), the matting position is (x_org + 2*w_1, y_org + 2*h_1). For each frame image from (x_org + 3*w_1, y_org + 3*h_1) to (x_new, y_new) (including the former, excluding the latter), the matting position is (x_org + 3*w_1, y_org + 3*h_1). For the image whose coordinate is (x_new, y_new), the matting position is (x_new, y_new).
It can be seen that w_1 is the length of each matting-position change, h_1 is the width of each matting-position change, and the matting position changes 4 times in total.
(4) And according to the matting position, matting the teacher area image to obtain a first image.
Matting the teacher area image by the first preset method to obtain a second image comprises the following steps:
When the class starts, if no moving target exists in the teacher area panoramic video, a first preset region in the teacher area image is matted. If a moving target has appeared in the teacher area panoramic video and then disappears, the matting position is kept at its value at the moment of disappearance; when a new moving target appears, the matting position changes according to the coordinates of the new moving target.
Further, the matting position is adjusted based on the first current coordinate, the first source coordinate and a preset first tracking sensitivity.
Specifically, based on the motion detection principle, it is judged whether a target to be detected in the teacher area panoramic video is a static target or a moving target. The method of judging a static target is to judge whether the target to be detected is stationary; if so, the stationary duration is recorded, and when the stationary duration exceeds a preset time, the target to be detected is judged to be a static target.
If it is a static target, the static target is tracked to judge whether it moves, and the moving distance (X, Y) is acquired; the moving distance of the static target is the difference between the first current coordinate and the first source coordinate of the static target. It is judged whether X is greater than a first threshold or Y is greater than a second threshold; if so, the matting position is transformed from the first source coordinate before the movement to the first current coordinate after the movement. For example, if the first source coordinate of the static target is (x_1, y_1) and the first current coordinate after the movement is (x_2, y_2), then X = x_2 - x_1 and Y = y_2 - y_1; if X is greater than the first threshold or Y is greater than the second threshold, the matting position is transformed from (x_1, y_1) to (x_2, y_2); if neither X is greater than the first threshold nor Y is greater than the second threshold, the matting position is kept unchanged at (x_1, y_1).
The first tracking sensitivity includes a third threshold and a fourth threshold. If the target is a moving target, its moving distance (X', Y') is calculated; the moving distance of the moving target is the difference between the first current coordinate and the first source coordinate of the moving target. It is judged whether X' is greater than the third threshold or Y' is greater than the fourth threshold; if so, the matting position is transformed from the first source coordinate before the movement to the first current coordinate after the movement; if not, the matting position is kept unchanged at the first source coordinate. The first threshold is greater than the third threshold and the second threshold is greater than the fourth threshold, so that even if a static target moves within a relatively large range after standing still, the tracking picture remains stable.
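The per-target-type threshold test above amounts to one small decision: move the crop only when the drift exceeds per-axis thresholds. A sketch under that reading (the function name is an assumption); callers would pass the larger first/second thresholds for static targets and the smaller third/fourth thresholds for moving targets.

```python
def next_matting_position(src, cur, x_thresh, y_thresh):
    """Transform the matting position to the current coordinate only when
    the movement exceeds a per-axis threshold; otherwise keep the source
    coordinate so the close-up picture does not jitter."""
    moved_x = abs(cur[0] - src[0])
    moved_y = abs(cur[1] - src[1])
    return cur if (moved_x > x_thresh or moved_y > y_thresh) else src
```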
In this embodiment, for the teacher area image, the matting position is a first center point. Whether a moving target exists in the teacher area image is judged: if so, the first center point is the mean value of the first center coordinates of all moving targets in the teacher area image; if not, the first center point is a preset fixed value, namely the center coordinate of the first preset region.
Further, a first quantity value of a moving target in the teacher area image is obtained through motion detection of the teacher area image, and a first range for matting the teacher area image is determined based on the first quantity value; and carrying out matting on the teacher area image according to the first center point and the first range, and obtaining a first image obtained by matting.
Specifically, the method of determining the first range for matting the teacher area image based on the first quantity value is: the larger the first quantity value, the larger the first range. For example, if the first quantity value is 1, the first range is one quarter of the teacher area image range; if the first quantity value is 2, the first range is one third of the teacher area image range; if the first quantity value is 3 or more, the first range is one half of the teacher area image range.
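The count-to-range rule enumerated above can be written directly; the fractions come from the example in the text, and the function name is an assumption.

```python
def first_range_fraction(target_count):
    """Fraction of the teacher area image used as the matting range;
    it grows with the number of detected moving targets."""
    if target_count <= 1:
        return 1 / 4
    if target_count == 2:
        return 1 / 3
    return 1 / 2  # three or more targets
```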
Further, whether the matting range exceeds the edge of the teacher area image is judged according to the first center point and the first range; if yes, the image is matted along the edge of the teacher area image. For example, the teacher area image is rectangular with four corner coordinates (0, 0), (P, Q), (P, 0) and (0, Q); the first edge of the teacher area image is the line segment from (0, 0) to (0, Q), the second edge is the line segment from (P, Q) to (P, 0), the third edge is the line segment from (0, 0) to (P, 0), and the fourth edge is the line segment from (0, Q) to (P, Q). The first center point has coordinates (x, y), and the first range spans (x ± x', y ± y'). Referring to fig. 3, if x minus x' is less than or equal to 0, the teacher area image is matted with the first edge as the side of the matting; if the sum of x and x' is greater than or equal to P, the second edge is taken as the side of the matting; if y minus y' is less than or equal to 0, the third edge is taken as the side of the matting; and if the sum of y and y' is greater than or equal to Q, the fourth edge is taken as the side of the matting.
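One way to realize "matting along the edge" is to clamp the matting window to the image rectangle, which covers the same four edge cases; this clamp-based formulation and the helper name are assumptions of the sketch, not the patent's wording:

```python
def clamp_matting_window(center, half_size, image_size):
    """Clamp a window of size (2*x', 2*y') centred on (x, y) so it never
    crosses the edges of a (P, Q) image; returns (left, top, right, bottom)."""
    (x, y), (hx, hy), (p, q) = center, half_size, image_size
    left = max(0, min(x - hx, p - 2 * hx))
    top = max(0, min(y - hy, q - 2 * hy))
    return (left, top, left + 2 * hx, top + 2 * hy)
```

When the center sits near a corner, both axes clamp at once, so the window slides flush against two edges instead of shrinking.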
Further, in order to ensure that every frame of the recorded teacher area close-up video has a consistent resolution, the resolutions of the first image and the second image are unified to a fixed resolution. For example, suppose the fixed resolution is 1080P, the shooting resolution of the teacher machine is 4K (so the resolution of the teacher area image is also 4K), and the range of the first preset area is one fourth of the teacher area image range; then the resolution of the second image is 1080P and no scaling is needed. Likewise, if the first range is one fourth of the teacher area image range, the resolution of the first image is 1080P and no scaling is needed; if the first range is larger than one fourth of the teacher area image range, the first image is scaled down so that its resolution becomes the fixed resolution.
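The resolution-unification step reduces to computing a scale factor; taking "1080P" as 1920×1080 pixels is an assumption of this sketch:

```python
FIXED_RESOLUTION = (1920, 1080)  # assumed pixel dimensions of "1080P"

def unify_scale(width, height):
    """Scale factor that brings a matted image down to the fixed output
    resolution; 1.0 means no scaling is needed."""
    if (width, height) == FIXED_RESOLUTION:
        return 1.0
    # scale uniformly so the result fits inside the fixed resolution
    return min(FIXED_RESOLUTION[0] / width, FIXED_RESOLUTION[1] / height)
```

A quarter of a 4K (3840×2160) frame is exactly 1920×1080, which is why that case needs no scaling.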
Step S103: and carrying out variable speed tracking matting on the moving target in the panoramic video of the student zone according to the coordinates of the moving target in the panoramic video of the student zone, so as to obtain a close-up video of the student zone.
In this embodiment, a frame difference method in the HiSilicon VDA algorithm is adopted to judge whether a moving object exists in the student area image; the student area image is an image frame in the student area panoramic video. If yes, the second center coordinate is acquired, and variable-speed tracking matting is performed on the student area image according to the second center coordinate to obtain a third image; if not, the student area image is matted using a second preset method to obtain a fourth image. The third image and the fourth image are then fused into the student area close-up video according to the time sequence of the student area panoramic video.
The method of judging whether a moving object exists in the student area image using a frame difference method in the HiSilicon VDA algorithm is as follows:
referring to fig. 4, the second detection area and the second detection shielding area are set in the same way as the first detection area and the first detection shielding area. A student usually stands up to answer a question, and while standing up the student is determined to be a moving target. To eliminate the movement interference of seated students, a third coordinate set of the student area panoramic video is acquired, and a second detection area is divided in the student area panoramic video based on the third coordinate set. When motion detection is performed on the second detection area, it is often necessary to shield a certain area from detection, for example because of light interference from a pendant lamp; therefore a fourth coordinate set of the student area panoramic video is acquired, and a second detection shielding area is divided in the student area panoramic video based on the fourth coordinate set, thereby reducing the light interference of the pendant lamp.
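A toy frame-difference check with a detection area and a shielding area might look like the following; the real HiSilicon VDA algorithm is a hardware-assisted detector, so the plain per-pixel difference and all names here are only stand-ins:

```python
def has_motion(prev_frame, curr_frame, detect_mask, shield_mask, threshold=25):
    """Frame-difference motion test on 2-D brightness grids (lists of lists).
    A pixel counts as motion only if it lies inside the detection area,
    outside the shielding area, and changed by more than `threshold`."""
    for r, row in enumerate(curr_frame):
        for c, value in enumerate(row):
            if detect_mask[r][c] and not shield_mask[r][c]:
                if abs(value - prev_frame[r][c]) > threshold:
                    return True
    return False
```

The shielding mask is what lets a flickering pendant lamp change brightness without being reported as a moving target.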
The method of acquiring the second center coordinate and performing variable-speed tracking matting on the student area image according to the second center coordinate to obtain the third image comprises the following steps:
(1) And acquiring a plurality of second center coordinates, and selecting a second current coordinate and a second source coordinate from the plurality of second center coordinates.
The specific method of selecting two second center coordinates at intervals as the second current coordinates and the second source coordinates is consistent with the principle of selecting the first current coordinates and the first source coordinates in the above description, and is not repeated here.
(2) And calculating the change distance of the matting position based on the second current coordinate and the second source coordinate.
The keying position change distance comprises a keying position change length and a keying position change width.
The matting position change length is calculated as follows:

X_new = X_org + N * w2

where w2 is the matting position change length; X_org is the abscissa of the second source coordinate; X_new is the abscissa of the second current coordinate; and N is the preset number of matting position transformations.
The matting position change width is calculated as follows:

Y_new = Y_org + N * h2

where h2 is the matting position change width; Y_org is the ordinate of the second source coordinate; and Y_new is the ordinate of the second current coordinate. The matting position change distance is then

D = (w2, h2)

where D is the matting position change distance.
(3) And taking the second current coordinate as an initial position of the matting, taking the second source coordinate as a final position of the matting, and transforming the matting position from the initial position to the final position for preset times, wherein the transformed distance is the matting position change distance.
Specifically, from the initial position to the final position, the matting position is transformed N times; the length of each transformation is the calculated w2 and the width of each transformation is the calculated h2, where N is a fixed value. When the second source coordinate and the second current coordinate are updated, the calculated w2 and h2 are also updated; therefore w2 and h2 are variables. The matting of the student area panoramic video is thus variable-speed tracking matting: if the difference between the second source coordinate and the second current coordinate is large, the calculated w2 and h2 are also large, indicating that the moving target moves fast, which is regarded as fast tracking; if the difference is small, the calculated w2 and h2 are also small, indicating that the moving target moves slowly, which is regarded as slow tracking.
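Per the formulas above, w2 and h2 are the displacement divided by the fixed step count N, so stepping the matting position can be sketched as follows (function and variable names are illustrative):

```python
def variable_speed_steps(source, current, n):
    """Split the move between the source and current matting positions into
    N equal steps; the per-step (w2, h2) scales with the displacement, so a
    fast-moving target yields larger steps (faster tracking)."""
    w2 = (current[0] - source[0]) / n
    h2 = (current[1] - source[1]) / n
    positions = [(source[0] + i * w2, source[1] + i * h2)
                 for i in range(1, n + 1)]
    return (w2, h2), positions
```

Because N is fixed while the displacement varies, the apparent pan speed adapts to the target automatically.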
The illustration of variable-speed tracking matting of the student area panoramic video is consistent in principle with the illustration of uniform-speed tracking matting of the teacher area panoramic video in step S102, and is not repeated here.
(4) And according to the matting positions, matting the student region images to obtain a third image.
Further, the matting position is adjusted based on the second current coordinate, the second source coordinate and a preset second tracking sensitivity; the specific method is consistent in principle with adjusting the matting position based on the first current coordinate, the first source coordinate and the preset first tracking sensitivity in step S102, and is not described again here. The second tracking sensitivity includes a fifth threshold, which may be equal to the third threshold, and a sixth threshold, which may be equal to the fourth threshold.
In this embodiment, for the student area image, the matting position is the second center point. Whether a moving target exists in the student area image is judged: if yes, the second center point is the mean value of the second center coordinates of all moving targets in the student area image; if not, the second center point is a preset fixed value, namely the center coordinate of the second preset area.
Further, the second quantity value of moving targets in the student area image is obtained through motion detection on the student area image, and the second range for matting the student area image is determined based on the second quantity value; the student area image is then matted according to the second center point and the second range to obtain the third image.
The principle of determining the second range is consistent with the principle of determining the first range above, and the principle of matting the student area image using the second preset method to obtain the fourth image is consistent with the principle of matting the teacher area image using the first preset method to obtain the second image, so neither is described again here.
Further, whether the matting range exceeds the edge of the student area image is judged according to the second center point and the second range, and the specific method principle is consistent with the method principle that whether the matting range exceeds the edge of the teacher area image according to the first center point and the first range in the above description, and is not repeated here.
Further, in order to ensure that the resolution of each frame of the recorded student region close-up video is consistent, the resolutions of the third image and the fourth image are unified to be fixed resolution.
It should be noted that if the teacher moves into the student area, the teacher is captured in the student area panoramic video and undergoes motion detection there; similarly, if a student moves into the teacher area, the student is captured in the teacher area panoramic video and undergoes motion detection there.
In this embodiment, a switching policy may be preset according to the occurrence position of the moving target, and the teacher area panoramic video, the teacher area close-up video, the student area panoramic video, and the student area close-up video may be switched and recorded according to the switching policy, so as to obtain a classroom video.
Specifically, when recording starts, namely when the class begins, whether a moving target appears in both the teacher area panoramic video and the student area panoramic video at the same time is judged. If yes, the system switches to the student area close-up video and records it for a first duration. If not, whether a moving target appears in the teacher area panoramic video is judged: if yes, the system switches to the teacher area close-up video and records it for a second duration; if not, whether a moving target appears in the student area panoramic video is judged. If yes, the system switches to the student area close-up video and records it for a third duration; if not, the system switches to the teacher area panoramic video and records it for a fourth duration, and after that recording is finished, switches to the student area panoramic video and records it for a fifth duration.
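The switching priority at the start of recording can be summarized as a small decision function; the return strings and the omission of recording durations are simplifications of this sketch:

```python
def initial_switch(teacher_has_motion, student_has_motion):
    """Choose the stream to record first, following the stated priority:
    both moving -> student close-up; teacher only -> teacher close-up;
    student only -> student close-up; neither -> panoramas in turn."""
    if teacher_has_motion and student_has_motion:
        return "student close-up"
    if teacher_has_motion:
        return "teacher close-up"
    if student_has_motion:
        return "student close-up"
    return "teacher panorama, then student panorama"
```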
And when the recording is finished, namely when the class is finished, switching recording of the teacher area panoramic video, the teacher area close-up video, the student area panoramic video and the student area close-up video is finished, and a class video is obtained.
Referring to fig. 5, besides multi-camera shooting, many current classroom recording and broadcasting systems also require an automatic tracking function: a detector must be configured to detect the position of a moving object in the classroom, and a tracker performs close-up tracking of that object, which increases the configuration cost. Referring to fig. 6, in the present application, motion detection is performed directly on the teacher area panoramic video and the student area panoramic video to locate the moving target, and matting tracking is performed on it. Position detection and close-up tracking of the moving target can therefore be completed by the teacher machine and the student machine alone, and the teacher panorama, teacher close-up, student panorama and student close-up can be realized with only three devices: the teacher machine, the student machine and the close-up recorder. This not only simplifies the devices and wiring in the recording and broadcasting classroom and greatly reduces the configuration cost of the classroom recording and broadcasting system, but also meets the requirement for device integration in classroom recording and broadcasting.
In order to better implement the above method, the embodiment of the application also provides a target tracking device applied to classroom recording and broadcasting, which can be integrated in computer equipment, such as a terminal or a server, and the terminal can include, but is not limited to, mobile phones, tablet computers or desktop computers.
Fig. 7 is a block diagram of a target tracking device applied to classroom recording and broadcasting according to an embodiment of the present application, and as shown in fig. 7, the device mainly includes:
the video acquisition module 201 is used for acquiring a teacher area panoramic video and a student area panoramic video;
the uniform tracking matting module 202 is configured to perform uniform-speed tracking matting on the moving target in the teacher area panoramic video according to the coordinates of the moving target in the teacher area panoramic video, so as to obtain a teacher area close-up video; and
and the variable speed tracking matting module 203 is configured to perform variable speed tracking matting on the moving target in the panoramic video of the student area according to the coordinates of the moving target in the panoramic video of the student area, so as to obtain a close-up video of the student area.
Specifically, the uniform-speed tracking matting module further comprises:
the coordinates of a moving target in the panoramic video of the teacher area are first center coordinates; according to the coordinates of the moving targets in the panoramic video of the teacher area, carrying out uniform-speed tracking and matting on the moving targets in the panoramic video of the teacher area to obtain a close-up video of the teacher area;
the judging module is used for judging whether a moving object exists in the teacher area image; if so, acquiring a first center coordinate, and performing uniform-speed tracking matting on the teacher area image according to the first center coordinate to obtain a first image; if not, matting the teacher area image using a first preset method to obtain a second image; the teacher area image is an image frame in the teacher area panoramic video;
the fusion module is used for fusing the first image and the second image into a close-up video of the teacher area according to the time sequence of the panoramic video of the teacher area;
specifically, the judging module further includes:
the acquisition selection module is used for acquiring a plurality of first center coordinates and selecting a first current coordinate and a first source coordinate from the plurality of first center coordinates;
the calculation module is used for calculating the number of times of change of the matting position based on the first current coordinate and the first source coordinate; taking the first current coordinate as an initial position of the matting, taking the first source coordinate as a final position of the matting, uniformly transforming the matting position from the initial position to the final position for a plurality of times, wherein the transformation times are the matting position change times;
and the teacher area image matting module is used for matting the teacher area image according to the matting position to obtain a first image.
The various modifications and specific examples of the method provided in the foregoing embodiments are also applicable to the target tracking device applied to classroom recording and broadcasting in this embodiment. From the foregoing detailed description of the target tracking method applied to classroom recording and broadcasting, those skilled in the art can clearly understand how the target tracking device in this embodiment is implemented, so for brevity it is not described in detail here.
In order to better execute the program of the above method, the embodiment of the present application further provides a computer device, as shown in fig. 8, where the computer device 300 includes a memory 301 and a processor 302.
The computer device 300 may be implemented in a variety of forms including a cell phone, tablet computer, palmtop computer, notebook computer, desktop computer, and the like.
Wherein the memory 301 may be used to store instructions, programs, code sets, or instruction sets. The memory 301 may include a stored program area and a stored data area, where the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as motion detection on a teacher area panoramic video and motion detection on a student area panoramic video, etc.), and instructions for implementing the object tracking method applied to class recording and playing provided in the above embodiments, etc.; the data storage area may store the data and the like related to the target tracking method applied to the classroom recording and playing provided by the above embodiment.
Processor 302 may include one or more processing cores. The processor 302 performs the various functions of the present application and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 301 and invoking the data stored in the memory 301. The processor 302 may be at least one of an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field-programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronics implementing the functions of the processor 302 may differ for different devices, and embodiments of the present application are not particularly limited in this respect.
Embodiments of the present application provide a computer-readable storage medium, including, for example: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media capable of storing program code. The computer-readable storage medium stores a computer program that can be loaded by a processor to execute the target tracking method applied to classroom recording and broadcasting of the above embodiments.
The present application is not limited to the specific embodiments above. Those skilled in the art, having read this specification, may modify the embodiments as necessary without creative contribution, and such modifications remain protected by patent law within the scope of the claims of the present application.

Claims (6)

1. The target tracking method applied to classroom recording and broadcasting is characterized by comprising the following steps of:
acquiring a teacher area panoramic video and a student area panoramic video;
according to the coordinates of the moving targets in the panoramic video of the teacher area, carrying out uniform-speed tracking and matting on the moving targets in the panoramic video of the teacher area to obtain a close-up video of the teacher area;
according to the coordinates of the moving targets in the panoramic video of the student zone, carrying out variable speed tracking matting on the moving targets in the panoramic video of the student zone to obtain a close-up video of the student zone;
the coordinates of a moving target in the panoramic video of the teacher area are first center coordinates; according to the coordinates of the moving object in the teacher area panoramic video, the moving object in the teacher area panoramic video is tracked at a constant speed to obtain a close-up video of the teacher area, which comprises the following steps:
judging whether a moving target exists in the teacher area image; the teacher area image is an image frame in the teacher area panoramic video;
if yes, acquiring the first center coordinate, and carrying out uniform-speed tracking and matting on the teacher area image according to the first center coordinate to obtain a first image;
if not, the teacher area image is matted using a first preset method to obtain a second image;
according to the time sequence of the panoramic video of the teacher area, the first image and the second image are fused into a close-up video of the teacher area;
the obtaining the first center coordinate, according to the first center coordinate, the teacher area image is tracked at a constant speed to obtain a first image, including:
acquiring a plurality of first center coordinates, and selecting a first current coordinate and a first source coordinate from the plurality of first center coordinates;
calculating the number of times of change of the matting position based on the first current coordinate and the first source coordinate;
taking the first current coordinate as an initial position of the matting, taking the first source coordinate as a final position of the matting, uniformly transforming the matting position from the initial position to the final position for a plurality of times, wherein the transformation times are the matting position change times;
and according to the matting position, matting the teacher area image to obtain a first image.
2. The method of claim 1, wherein the coordinates of the moving object in the student zone panoramic video are second center coordinates; and carrying out variable speed tracking matting on the moving target in the panoramic video of the student zone according to the coordinates of the moving target in the panoramic video of the student zone to obtain a close-up video of the student zone, wherein the variable speed tracking matting comprises the following steps:
judging whether a moving object exists in the student area image; the student area image is an image frame in the student area panoramic video;
if yes, acquiring the second center coordinates, and carrying out variable speed tracking matting on the student region image according to the second center coordinates to obtain a third image;
if not, the student area image is matted using a second preset method to obtain a fourth image;
and fusing the third image and the fourth image into a student area close-up video according to the time sequence of the student area panoramic video.
3. The method of claim 2, wherein the obtaining the second center coordinate, according to the second center coordinate, performs variable speed tracking matting on the student zone image to obtain a third image, includes:
acquiring a plurality of second center coordinates, and selecting a second current coordinate and a second source coordinate from the plurality of second center coordinates;
calculating a matting position change distance based on the second current coordinate and the second source coordinate;
the second current coordinate is used as an initial position of the matting, the second source coordinate is used as a final position of the matting, the matting position is transformed for preset times from the initial position to the final position, and the transformed distance is the matting position change distance;
and according to the matting positions, matting the student region images to obtain a third image.
4. A target tracking device applied to classroom recording and broadcasting is characterized by comprising,
the video acquisition module is used for acquiring a teacher area panoramic video and a student area panoramic video;
the uniform-speed tracking and matting module is used for performing uniform-speed tracking matting on the moving target in the teacher area panoramic video according to the coordinates of the moving target in the teacher area panoramic video, to obtain a teacher area close-up video; and
the variable speed tracking matting module is used for carrying out variable speed tracking matting on the moving target in the panoramic video of the student zone according to the coordinates of the moving target in the panoramic video of the student zone to obtain a close-up video of the student zone;
specifically, the constant-speed tracking and matting module further comprises:
the coordinates of a moving target in the panoramic video of the teacher area are first center coordinates; according to the coordinates of the moving targets in the panoramic video of the teacher area, carrying out uniform-speed tracking and matting on the moving targets in the panoramic video of the teacher area to obtain a close-up video of the teacher area;
the judging module is used for judging whether a moving object exists in the teacher area image; if yes, acquiring the first center coordinate, and performing uniform-speed tracking matting on the teacher area image according to the first center coordinate to obtain a first image; if not, matting the teacher area image using a first preset method to obtain a second image; the teacher area image is an image frame in the teacher area panoramic video;
the fusion module is used for fusing the first image and the second image into a teacher area close-up video according to the time sequence of the teacher area panoramic video;
specifically, the judging module further includes:
the acquisition selection module is used for acquiring a plurality of first center coordinates and selecting a first current coordinate and a first source coordinate from the plurality of first center coordinates;
the calculation module is used for calculating the number of times of change of the matting position based on the first current coordinate and the first source coordinate; taking the first current coordinate as an initial position of the matting, taking the first source coordinate as a final position of the matting, uniformly transforming the matting position from the initial position to the final position for a plurality of times, wherein the transformation times are the matting position change times;
and the teacher area image matting module is used for matting the teacher area image according to the matting position to obtain a first image.
5. A computer device comprising a memory and a processor, the memory having stored thereon a computer program capable of being loaded by the processor and performing the method according to any of claims 1 to 3.
6. A computer readable storage medium, characterized in that a computer program is stored which can be loaded by a processor and which performs the method according to any of claims 1 to 3.
CN202210344137.3A 2022-04-02 2022-04-02 Target tracking method, device, equipment and medium applied to classroom recording and broadcasting Active CN114650453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344137.3A CN114650453B (en) 2022-04-02 2022-04-02 Target tracking method, device, equipment and medium applied to classroom recording and broadcasting


Publications (2)

Publication Number Publication Date
CN114650453A CN114650453A (en) 2022-06-21
CN114650453B true CN114650453B (en) 2023-08-15

Family

ID=81996461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210344137.3A Active CN114650453B (en) 2022-04-02 2022-04-02 Target tracking method, device, equipment and medium applied to classroom recording and broadcasting

Country Status (1)

Country Link
CN (1) CN114650453B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101208721A (en) * 2005-08-02 2008-06-25 卡西欧计算机株式会社 Image processing apparatus and image processing program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6012199B2 (en) * 2012-02-24 2016-10-25 キヤノン株式会社 Drive device and camera system

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101208721A (en) * 2005-08-02 2008-06-25 卡西欧计算机株式会社 Image processing apparatus and image processing program
CN103747174A (en) * 2013-12-20 2014-04-23 河北汉光重工有限责任公司 Multi-target free focusing method and application apparatus thereof
CN103905734A (en) * 2014-04-17 2014-07-02 苏州科达科技股份有限公司 Method and device for intelligent tracking and photographing
JP2016054491A (en) * 2015-10-20 2016-04-14 ルネサスエレクトロニクス株式会社 Image processing apparatus, image processing method, and program
CN106101847A (en) * 2016-07-12 2016-11-09 三星电子(中国)研发中心 Method and system for panoramic video alternating transmission
CN106657893A (en) * 2016-11-10 2017-05-10 浙江蓝鸽科技有限公司 Recording and broadcasting method and system with intelligent switching function
CN108810455A (en) * 2017-05-02 2018-11-13 南京理工大学 An intelligent video surveillance system capable of face recognition
CN109215055A (en) * 2017-06-30 2019-01-15 杭州海康威视数字技术股份有限公司 Target feature extraction method, apparatus, and application system
CN107452018A (en) * 2017-08-02 2017-12-08 北京翰博尔信息技术股份有限公司 Speaker tracking method and system
CN110751674A (en) * 2018-07-24 2020-02-04 北京深鉴智能科技有限公司 Multi-target tracking method and corresponding video analysis system
CN110086992A (en) * 2019-04-29 2019-08-02 努比亚技术有限公司 Shooting control method for a mobile terminal, mobile terminal, and computer storage medium
CN110517288A (en) * 2019-07-23 2019-11-29 南京莱斯电子设备有限公司 Real-time target detection and tracking method based on panoramic multi-channel 4K video images
JP2021033060A (en) * 2019-08-23 2021-03-01 キヤノン株式会社 Lens control device and control method therefor
CN110570448A (en) * 2019-09-07 2019-12-13 深圳岚锋创视网络科技有限公司 Target tracking method and device of panoramic video and portable terminal
WO2021043295A1 (en) * 2019-09-07 2021-03-11 影石创新科技股份有限公司 Target tracking method and apparatus for panoramic video, and portable terminal
CN111083557A (en) * 2019-12-20 2020-04-28 浙江大华技术股份有限公司 Video recording and playing control method and device
CN111225145A (en) * 2020-01-13 2020-06-02 北京中庆现代技术股份有限公司 Real-time image detection analysis and tracking method
CN111385474A (en) * 2020-03-09 2020-07-07 浙江大华技术股份有限公司 Target object tracking method and device, storage medium and electronic device
CN112183235A (en) * 2020-09-07 2021-01-05 根尖体育科技(北京)有限公司 Automatic control method for video acquisition aiming at sport places
CN112040137A (en) * 2020-11-03 2020-12-04 深圳点猫科技有限公司 Method, device and equipment for automatically tracking and shooting teachers in recording and broadcasting
CN112541484A (en) * 2020-12-28 2021-03-23 平安银行股份有限公司 Face matting method, system, electronic device and storage medium
CN113052868A (en) * 2021-03-11 2021-06-29 奥比中光科技集团股份有限公司 Matting model training and image matting method and device
CN114125267A (en) * 2021-10-19 2022-03-01 上海赛连信息科技有限公司 Method and device for intelligently tracking camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈世文 (Chen Shiwen). Comparative study on tracking technologies of fully automatic intelligent recording-and-broadcasting systems. 中国教育技术装备 (China Educational Technology & Equipment), 2013, No. 23, pp. 59-62. *

Also Published As

Publication number Publication date
CN114650453A (en) 2022-06-21

Similar Documents

Publication Publication Date Title
US11663733B2 (en) Depth determination for images captured with a moving camera and representing moving features
CN108377342B (en) Double-camera shooting method and device, storage medium and terminal
US20170094196A1 (en) Automatic composition of composite images or video with stereo foreground objects
US11102413B2 (en) Camera area locking
CN105635588B (en) Digital image stabilization method and device
CN101640788B (en) Method and device for controlling monitoring and monitoring system
KR20050065298A (en) Motion compensated frame rate conversion
CN107809563B (en) Blackboard writing detection system, method and device
CN111225145A (en) Real-time image detection analysis and tracking method
Wu et al. Global motion estimation with iterative optimization-based independent univariate model for action recognition
Yokoi et al. Virtual camerawork for generating lecture video from high resolution images
CN105430269A (en) Shooting method and apparatus applied to mobile terminal
CN114650453B (en) Target tracking method, device, equipment and medium applied to classroom recording and broadcasting
CN113596544A (en) Video generation method and device, electronic equipment and storage medium
CN104123716A (en) Image stability detection method, device and terminal
JPH04213973A (en) Image shake corrector
CN111988520B (en) Picture switching method and device, electronic equipment and storage medium
CN113301324B (en) Virtual focus detection method, device, equipment and medium based on camera device
CN111325674A (en) Image processing method, device and equipment
CN114222065A (en) Image processing method, image processing apparatus, electronic device, storage medium, and program product
TWI690207B (en) Method for object tracking
CN108156512B (en) Video playing control method and device
CN114786054A (en) Classroom recording and broadcasting method, device, equipment and storage medium
US11991448B2 (en) Digital zoom
CN113364985B (en) Live broadcast lens tracking method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant