CN109597431B - Target tracking method and device - Google Patents

Target tracking method and device

Info

Publication number
CN109597431B
CN109597431B (application CN201811308505.9A)
Authority
CN
China
Prior art keywords
target
tracking
video stream
developer terminal
embedded platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811308505.9A
Other languages
Chinese (zh)
Other versions
CN109597431A (en)
Inventor
孙洋
秦元河
覃才俊
韩杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Qiantang Shilian Information Technology Co.,Ltd.
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201811308505.9A
Publication of CN109597431A
Application granted
Publication of CN109597431B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12 Target-seeking control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embedded platform developer terminal acquires a video stream captured by the camera device and sends it to the client. The client receives a tracking target selected by a user and returns it to the embedded platform developer terminal, which calls an OpenTLD process to determine tracking parameters and target feature information for the tracking target. After determining that the tracking target is lost, the embedded platform developer terminal identifies whether the feature information of a shooting target in the video stream matches the stored target feature information; if so, it continues to call the OpenTLD process to determine the tracking parameters of the tracking target and, according to those parameters, controls the camera device to zoom and/or adjusts the angle of the electric pan-tilt so as to keep tracking the target.

Description

Target tracking method and device
Technical Field
The present invention relates to the field of target tracking, and in particular, to a method and an apparatus for target tracking.
Background
Traditional target tracking systems include ultrasonic and microwave tracking, infrared tracking, and the like. Ultrasonic and microwave tracking has been abandoned because of its poor interference immunity; infrared tracking is widely used, but it requires the tracked target to carry an infrared positioning device, which is inconvenient.
In recent years the field of artificial intelligence has developed rapidly. Visual tracking, an important direction within it, has advanced especially quickly, and automatic target tracking can be realized by combining visual tracking technology with an electric pan-tilt, an intelligent system, and so on.
Although existing automatic target tracking systems can track a target automatically, they cannot continue tracking after the target is lost: even if the target reappears in the camera's field of view, it cannot be re-identified, so tracking cannot resume.
Disclosure of Invention
In view of the above problems, the present invention provides a target tracking method and apparatus, which solve the problem that, after the tracking target is lost, an existing automatic target tracking system cannot re-identify the target and cannot continue to track it even if it reappears in the field of view of the camera device.
In order to solve the above technical problem, an embodiment of the present invention provides a target tracking method applied to a tracking system, where the tracking system includes a camera device, an embedded platform developer terminal, an electric pan-tilt, and a client, the camera device, electric pan-tilt, and client being externally connected to the embedded platform developer terminal. The method includes the following steps:
the embedded platform developer terminal acquires a video stream captured by the camera device, where the video stream includes a plurality of shooting targets, and sends the video stream to the client;
the client receives a tracking target selected by a user from among the plurality of shooting targets in the video stream, and sends the tracking target to the embedded platform developer terminal;
the embedded platform developer terminal calls an OpenTLD process to determine tracking parameters of the tracking target, controls the camera device to zoom and/or adjusts the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target, sends the video stream to the client, and determines target feature information of the tracking target;
and after determining that the tracking target is lost, the embedded platform developer terminal identifies whether the feature information of a shooting target in the video stream matches the target feature information of the tracking target; if so, it continues to call the OpenTLD process to determine the tracking parameters of the tracking target, and controls the camera device to zoom and/or adjusts the angle of the electric pan-tilt according to the tracking parameters, so as to track the tracking target and send the video stream to the client.
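The claimed control flow (track while visible, detect loss, re-identify, resume) can be sketched as a small state machine. This is an illustrative reading of the steps above, not the patent's implementation; all names are hypothetical.

```python
# Hypothetical sketch of the claimed flow: stay in TRACKING while the target
# is detected, fall to LOST on loss, and return to TRACKING only when a
# detected target's features match the stored tracking-target features.
TRACKING, LOST = "tracking", "lost"

def step(state, detection, matches_stored_features):
    """One frame of the loop. `detection` is a bounding box or None;
    `matches_stored_features` says whether the detection's features match
    the stored tracking-target feature information."""
    if state == TRACKING:
        return TRACKING if detection is not None else LOST
    # state == LOST: only resume when a detection matches the feature base
    if detection is not None and matches_stored_features:
        return TRACKING
    return LOST

# Frame sequence: target visible, then lost, then a non-matching target
# appears, then the original target reappears and matches.
states = [TRACKING]
for det, match in [((1, 2), True), (None, False), ((9, 9), False), ((1, 3), True)]:
    states.append(step(states[-1], det, match))
```

In a real system the camera zoom and pan-tilt commands would be issued on every TRACKING step; they are omitted here to keep the state logic visible.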
Optionally, after determining that the tracking target is lost, the embedded platform developer terminal identifying whether the feature information of a shooting target in the video stream matches the target feature information includes:
after the embedded platform developer terminal determines that the tracking target is lost, it sends the video stream in which the tracking target was lost to the client;
the client receives a region delineated by the user in the video stream in which the tracking target was lost, and sends the delineated region to the embedded platform developer terminal;
the embedded platform developer terminal controls the camera device to capture the video stream of the delineated region and calls the OpenTLD process to identify the video stream of the delineated region;
and when the tracking target reappears in the delineated region, the OpenTLD process identifies whether the feature information of a shooting target in the video stream after the reappearance matches the tracking target's feature information.
Optionally, after the embedded platform developer terminal determines that the tracking target is lost and identifies whether the feature information of a shooting target in the video stream matches the target feature information, the method further includes:
if the feature information of the shooting target in the video stream does not match the feature information of the tracking target, the embedded platform developer terminal stops the OpenTLD process and ends tracking of the tracking target.
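One plausible way to realize the match-or-stop decision above is to compare feature vectors by cosine similarity against a threshold; the patent does not specify the matching metric, so this sketch and its threshold are purely illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (assumed non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def reidentify(candidates, target_features, threshold=0.9):
    """Return the index of the first candidate whose features match the
    stored tracking-target features; return None if no candidate matches,
    in which case tracking ends as described above."""
    for i, feats in enumerate(candidates):
        if cosine_similarity(feats, target_features) >= threshold:
            return i
    return None
```

In practice the feature vectors would come from the OpenTLD process's appearance model; here they are plain lists of floats.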
Optionally, the embedded platform developer terminal calling the OpenTLD process to determine tracking parameters of the tracking target includes:
the tracking parameters comprise a first tracking parameter and a second tracking parameter;
the OpenTLD process learns the motion parameters of the tracking target to obtain the first tracking parameter, where the motion parameters include position, speed, acceleration, and motion trajectory characteristics;
and the OpenTLD process detects the motion posture of the tracking target to obtain the second tracking parameter, where the motion posture includes various motion appearance posture characteristics.
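The text does not say how the motion parameters are learned; a minimal illustrative estimator, shown here purely as an assumption, derives speed and acceleration from successive target positions by finite differences.

```python
def motion_parameters(positions, dt=1.0):
    """Estimate velocity and acceleration from successive (x, y) positions
    by finite differences over a fixed frame interval `dt`. This is one
    plausible reading of 'learning the motion parameters'; the patent does
    not specify the estimator."""
    vx = [(p2[0] - p1[0]) / dt for p1, p2 in zip(positions, positions[1:])]
    vy = [(p2[1] - p1[1]) / dt for p1, p2 in zip(positions, positions[1:])]
    ax = [(v2 - v1) / dt for v1, v2 in zip(vx, vx[1:])]
    ay = [(v2 - v1) / dt for v1, v2 in zip(vy, vy[1:])]
    return {"velocity": list(zip(vx, vy)), "acceleration": list(zip(ax, ay))}
```

The position sequence itself would come from the OpenTLD tracker's per-frame bounding boxes; the trajectory characteristic could then be summarized from the same sequence.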
Optionally, the embedded platform developer terminal calling the OpenTLD process to determine target feature information of the tracking target includes:
the OpenTLD process obtains the static appearance characteristics of the tracking target from each frame image of the video stream containing it, and establishes a tracking-target feature information base in the embedded platform developer terminal from the static appearance characteristics and the motion posture characteristics.
Optionally, the electric pan-tilt includes a pan-tilt support, a pan-tilt controller, and a motor, where the pan-tilt support is rigidly linked with the camera device, and the embedded platform developer terminal calling the OpenTLD process to determine tracking parameters of the tracking target and controlling the camera device to zoom and/or adjusting the angle of the electric pan-tilt according to the tracking parameters includes:
the embedded platform developer terminal sends the tracking parameters to the pan-tilt controller;
the pan-tilt controller calculates mechanical angle adjustment parameters for the pan-tilt support from the tracking parameters;
and the pan-tilt controller, according to the mechanical angle adjustment parameters, controls the motor to rotate and adjusts the mechanical angle of the pan-tilt support; the pan-tilt support drives the camera device to rotate, realizing tracking of the tracking target.
An embodiment of the present invention also provides a target tracking apparatus applied to a tracking system, where the tracking system includes a camera device, an embedded platform developer terminal, an electric pan-tilt, and a client, the embedded platform developer terminal being externally connected to the camera device, the electric pan-tilt, and the client and comprising an OpenTLD process. The apparatus includes:
an acquisition and sending module, used for the embedded platform developer terminal to acquire a video stream captured by the camera device, where the video stream includes a plurality of shooting targets and images of the area surrounding the targets, and to send the video stream to the client; the client receives the video stream, selects a tracking target from among the plurality of shooting targets, and sends the tracking target to the calling control module;
a calling control module, used for the embedded platform developer terminal to call the OpenTLD process to determine tracking parameters of the tracking target, and to control the camera device to zoom and/or adjust the angle of the electric pan-tilt according to the tracking parameters, so as to track the tracking target, send the video stream to the client, and determine target feature information of the tracking target;
and an identification sending module, used for identifying, after the embedded platform developer terminal determines that the tracking target is lost, whether the feature information of a shooting target in the video stream matches the feature information of the tracking target; if so, continuing to call the OpenTLD process to determine the tracking parameters of the tracking target, and controlling the camera device to zoom and/or adjusting the angle of the electric pan-tilt according to the tracking parameters, so as to track the tracking target and send the video stream to the client.
Optionally, the identification sending module includes:
a determining and sending submodule, used for the embedded platform developer terminal, after determining that the tracking target is lost, to send the video stream in which the tracking target was lost to the client; the client receives the region delineated by the user in that video stream and sends the delineated region to the embedded platform developer terminal;
a control calling submodule, used for the embedded platform developer terminal to control the camera device to capture the video stream of the delineated region and to call the OpenTLD process to identify that video stream;
and an identification submodule, used for the OpenTLD process to identify, after the tracking target reappears in the delineated region, whether the feature information of a shooting target in the video stream matches the tracking target's feature information.
Optionally, the calling control module includes:
a calling learning submodule, used for the OpenTLD process to learn the motion parameters of the tracking target to obtain the first tracking parameter, where the motion parameters include position, speed, acceleration, and motion trajectory characteristics;
a calling detection submodule, used for the OpenTLD process to detect the motion posture of the tracking target to obtain the second tracking parameter, where the motion posture includes various motion appearance posture characteristics;
and a calling building submodule, used for the OpenTLD process to obtain the static appearance characteristics of the tracking target from each frame image of the video stream containing it, and to establish a tracking-target feature information base in the embedded platform developer terminal from the static appearance characteristics and the motion posture characteristics.
Optionally, the apparatus further comprises:
and an identification disabling module, used for identifying, after the tracking target is determined to be lost, whether the feature information of a shooting target in the video stream matches the feature information of the tracking target, and, if not, stopping the OpenTLD process and ending tracking of the tracking target.
Compared with the prior art, the target tracking method and apparatus provided by the present invention establish a database of the tracking target's feature information, continue to acquire the video stream after the tracking target is lost, re-identify the target when it reappears in the field of view of the camera device, and control the rotation of the electric pan-tilt according to the tracking parameters to adjust the orientation of the camera device, thereby simply and effectively re-tracking the lost target.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive labor.
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
fig. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a flow chart of a method of target tracking in accordance with an embodiment of the present invention;
FIG. 6 is a detailed flow chart of one step of a method for target tracking according to an embodiment of the present invention;
FIG. 7 is another detailed flow chart of one step of a method for target tracking according to an embodiment of the present invention;
FIG. 8 is a flowchart of a method for adjusting the angle of a motorized pan and tilt head according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a system for target tracking in accordance with an embodiment of the present invention;
FIG. 10 is a block diagram of an apparatus for target tracking in accordance with an embodiment of the present invention;
fig. 11 is a detailed block diagram of an apparatus for target tracking according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing numerous Internet applications toward high definition and face-to-face HD interaction.
Video networking adopts real-time high-definition video switching technology and can integrate on one network platform dozens of required services such as video, voice, pictures, text, communication, and data, for example high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, and information distribution, and realizes high-definition-quality video playback through a television or a computer.
To better understand the embodiments of the present invention, video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in video networking improves on traditional Ethernet to face the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, video networking technology adopts Packet Switching to meet streaming requirements. Video networking technology has the flexibility, simplicity, and low price of packet switching together with the quality and security guarantees of circuit switching, realizing seamless whole-network switched virtual circuits and a unified data format.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It provides end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere in the network. Video networking is a higher-level form of Ethernet and a real-time exchange platform; it can realize whole-network, large-scale, real-time high-definition video transmission that the existing Internet cannot, pushing numerous network video applications toward high definition and unification.
Server Technology (Server Technology)
Server technology on the video networking and unified video platform differs from traditional server technology: its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of flow and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than data processing, and efficiency is improved by more than a hundredfold over a traditional server.
Storage Technology (Storage Technology)
To handle media content of very large capacity and very large flow, the ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system. Program information in a server instruction is mapped to specific hard disk space, and media content no longer passes through the server but is sent directly and instantly to the user terminal; the user's typical waiting time is under 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek motion of the hard disk head; resource consumption is only 20% of that of an IP Internet system of the same grade, yet concurrent flow three times that of a traditional hard disk array is produced, and overall efficiency is improved more than tenfold.
Network Security Technology (Network Security Technology)
The structural design of the video network structurally eliminates the network security problems that trouble the Internet, through mechanisms such as independent permission control for each service and complete isolation of devices and user data. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private-network user, or a network aggregate, connection is automatic and made only once. User terminals, set-top boxes, or PCs connect directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform adopts a menu-style configuration table in place of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
A packet (uplink data) arriving from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the destination address (DA), source address (SA), packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise it discards the packet. A packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303, as does a packet from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 is going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue is nearly full, it is discarded.
The switching engine module 303 polls all packet buffer queues, in two cases in this embodiment of the present invention:
if the queue is from the downlink network interface to the uplink network interface, forwarding requires that: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero; 3) a token generated by the rate control module has been obtained;
if the queue is not from the downlink network interface to the uplink network interface, forwarding requires that: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and generates tokens at programmable intervals for all packet buffer queues going from downlink network interfaces to uplink network interfaces, so as to control the rate of upstream forwarding.
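Combining the three upstream-forwarding conditions with token generation gives a simple token-gated queue. In this sketch token granting is reduced to a counter and the programmable interval is abstracted away; it illustrates the stated rules, not the switch hardware.

```python
class UpstreamQueue:
    """Sketch of an access switch's downlink-to-uplink packet buffer queue:
    forwarding requires a non-full port send buffer, a non-empty queue,
    and a token from the rate control module."""

    def __init__(self):
        self.packets = []
        self.tokens = 0

    def grant_token(self):
        # Stands in for the rate control module generating a token
        # at its programmed interval.
        self.tokens += 1

    def try_forward(self, port_buffer_full):
        """Return the next packet if all three conditions hold, else None."""
        if port_buffer_full or not self.packets or self.tokens == 0:
            return None
        self.tokens -= 1
        return self.packets.pop(0)
```

Because tokens are consumed one per forwarded packet, the token grant rate directly bounds the upstream forwarding rate, which is the mechanism the paragraph above describes.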
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol gateway:
as shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet arriving from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the packet's Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video-network destination address (DA), video-network source address (SA), video-network packet type, and packet length meet the requirements. If so, it allocates a corresponding stream identifier (stream-id); the MAC deletion module 410 then strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receive buffer. Otherwise, the packet is discarded.
The downlink network interface module 401 monitors the send buffer of its port. If a packet is present, the module obtains the Ethernet MAC DA of the corresponding terminal from the packet's destination address (DA), prepends the terminal's Ethernet MAC DA, the Ethernet protocol gateway's MAC SA, and the Ethernet length or frame type, and sends the packet.
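The gateway's header handling amounts to stripping a 14-byte Ethernet header on ingress and prepending one on egress. A minimal sketch (the EtherType value and the DA-to-MAC lookup, done by the caller here, are assumptions not specified by the patent):

```python
ETH_HDR = 14  # MAC DA (6) + MAC SA (6) + length/frame type (2)

def strip_mac(frame: bytes) -> bytes:
    """Ingress: remove the Ethernet header, leaving the video-network packet."""
    return frame[ETH_HDR:]

def add_mac(packet: bytes, terminal_mac: bytes, gateway_mac: bytes,
            ethertype: bytes = b"\x08\x00") -> bytes:
    """Egress: prepend the terminal's MAC DA, the gateway's MAC SA,
    and the Ethernet length/frame type field."""
    return terminal_mac + gateway_mac + ethertype + packet
```

The two functions are exact inverses, so a video-network packet survives a round trip through the gateway unchanged.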
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
A terminal mainly comprises a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly comprises a network interface module, a video/audio codec engine module, and a CPU module; an encoding board mainly comprises a network interface module, a video/audio encoding engine module, and a CPU module; and a storage device mainly comprises a network interface module, a CPU module, and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into three types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table:

DA (8 bytes) | SA (8 bytes) | Reserved (2 bytes) | Payload (PDU) | CRC (4 bytes)
wherein:
the destination address (DA) consists of 8 bytes: the first byte indicates the packet type (for example, protocol packet, multicast data packet, or unicast data packet), allowing at most 256 possibilities; the second through sixth bytes form the metropolitan area network address; and the seventh and eighth bytes form the access network address;
the source address (SA) also consists of 8 bytes, with the same definition as the destination address (DA);
the reserved field consists of 2 bytes;
the length of the payload (PDU) depends on the datagram type: 64 bytes for protocol packets of various kinds, and 32 + 1024 = 1056 bytes for unicast packets; the length is, of course, not limited to these two cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
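The layout above can be illustrated with a small pack/parse pair. This is a sketch under stated assumptions: the patent does not fix the byte order of the CRC field (big-endian is assumed here), and `zlib.crc32` is used because it implements the standard Ethernet CRC-32 polynomial:

```python
import struct
import zlib

def build_packet(pkt_type: int, metro: bytes, access: bytes,
                 src: bytes, payload: bytes) -> bytes:
    """Assemble an access-network packet: DA(8) SA(8) reserved(2) PDU CRC(4)."""
    assert len(metro) == 5 and len(access) == 2 and len(src) == 8
    da = bytes([pkt_type]) + metro + access   # byte 1: type; 2-6: metro; 7-8: access
    body = da + src + b"\x00\x00" + payload   # 2 reserved bytes, then the PDU
    crc = zlib.crc32(body) & 0xFFFFFFFF       # standard Ethernet CRC-32
    return body + struct.pack(">I", crc)

def parse_packet(raw: bytes) -> dict:
    body, crc = raw[:-4], struct.unpack(">I", raw[-4:])[0]
    if zlib.crc32(body) & 0xFFFFFFFF != crc:
        raise ValueError("CRC mismatch")
    return {"type": body[0], "da": body[:8], "sa": body[8:16],
            "payload": body[18:]}
```

The first DA byte carrying the packet type is what allows the access switch's packet detection module to classify a packet without parsing its payload.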
Based on the characteristics of the video network described above, one of the core concepts of the embodiments of the invention is proposed: following the video-network protocol, the embedded platform developer terminal, the camera device, the electric pan-tilt, and the client are all terminals in the video network; the client sends signaling, and data is transmitted between terminals through a video-network server. Owing to these characteristics, the video network greatly shortens the transmission delay of audio and video streams compared with the Internet and improves audio and video quality, which benefits tracking a moving target, re-confirming and re-identifying a tracking target after it is lost, and providing higher audio and video quality and speed during tracking.
FIG. 5 shows a flowchart of a target tracking method in an embodiment of the invention. The method is applied to a tracking system comprising a camera device, an embedded platform developer terminal, an electric pan-tilt, and a client, where the camera device, the electric pan-tilt, and the client are externally connected to the embedded platform developer terminal. The target tracking method comprises the following steps:
step 101: the embedded platform developer terminal obtains a video stream collected by the camera equipment, wherein the video stream comprises a plurality of shooting targets, and sends the video stream to the client.
Referring to fig. 9, in the embodiment of the invention, the embedded platform developer terminal s30 is a dedicated computer system, based on computer technology, that can adapt to strict application requirements on function, reliability, cost, volume, and power consumption. It generally comprises four parts, namely an embedded microprocessor, peripheral hardware devices, an embedded operating system, and application programs, and is used for controlling, monitoring, or managing other devices. The terminal is connected to the camera device s20, the electric pan-tilt s10, and the PC client s50. The PC client s50 allows a user to view and select video stream pictures and to send control instructions. When a moving target needs to be tracked, the camera device s20 shoots the tracked target to obtain a video stream containing the tracked target and images of its surrounding environment. Since the camera device s20 covers a relatively large range, a captured image may contain many moving targets; the embedded platform developer terminal s30 therefore actively acquires the video stream of the multiple moving targets, including the tracking target, captured by the camera device s20 and sends it to the PC client s50. It can be understood that any device capable of achieving the above functions falls within the protection scope of the invention.
Step 102: the client receives a tracking target selected by a user from a plurality of shooting targets of the video stream, and sends the target to be tracked to the embedded platform developer terminal.
Referring to fig. 9, in the embodiment of the invention, a user views the video stream of the multiple moving targets, including the tracking target, through the PC client s50 and selects one or more targets to be tracked from among them; the selected targets are then sent to the embedded platform developer terminal s30.
Step 103: the embedded platform developer terminal calls an openTLD process to determine tracking parameters of the tracked target, controls the camera device to zoom and/or adjusts the angle of the electric pan-tilt according to the tracking parameters so as to track the tracked target and send the video stream to the client, and determines target feature information of the tracked target.
Referring to fig. 9, an openTLD program is installed in the embedded platform developer terminal s30. openTLD is a visual tracking algorithm that continuously learns a locked target to obtain its latest appearance features while detecting, extracting, identifying, and tracking the moving target in an image sequence to obtain its motion parameters, such as position, speed, acceleration, and motion trajectory, for further processing and analysis. The embedded platform developer terminal s30 calls the openTLD process to obtain the tracking parameters and target feature information of the tracking target. According to the tracking parameters, the terminal s30 controls the camera device s20 to zoom so that the tracking target achieves the best imaging effect in the obtained video stream, and simultaneously sends the tracking parameters to the electric pan-tilt s10, which controls the camera device s20 to track the tracking target.
Optionally, referring to fig. 6, step 103 may further include the following steps:
Step 103a: the openTLD process learns the motion parameters of the tracked target to obtain first tracking parameters of the tracked target, where the motion parameters include position, speed, acceleration, and motion trajectory features.
In the embodiment of the invention, while tracking the target, openTLD obtains the motion parameters of the tracking target from its movements, such as walking, running, and jumping, and from its position relative to the surrounding environment. The motion parameters include features such as the position, speed, acceleration, and motion trajectory of the tracking target and are included in its tracking parameters.
Step 103b: the openTLD process detects the motion posture of the tracked target to obtain second tracking parameters of the tracked target, where the motion posture includes various motion appearance posture features.
In the embodiment of the invention, while tracking the target, openTLD detects the motion appearance postures of the tracking target, such as walking, running, jumping, squatting, and rolling, and obtains the corresponding motion appearance posture features, which are included in the tracking parameters of the tracking target.
Step 103c: the openTLD process obtains the static appearance features of the target to be tracked from each frame of the target's video stream, and establishes a tracking target feature information base in the embedded platform developer terminal from the static appearance features and the motion posture features.
In the embodiment of the invention, openTLD also needs to obtain the static appearance features of the target to be tracked, such as its hair, eyes, nose, and body contour, that is, feature types by which the target can be clearly distinguished from other, similar targets. openTLD performs appearance analysis on each frame of the target's video stream to obtain these static appearance features and, by integrating the static appearance features with the motion posture features, establishes a target tracking feature information base in the embedded platform developer terminal s30.
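A minimal sketch of such a feature information base follows. It is illustrative only: the patent does not define a data layout, and a real openTLD implementation stores image patches and descriptors rather than the named scalar features used here for readability:

```python
from dataclasses import dataclass, field

@dataclass
class TargetFeatureBase:
    """Per-target store: static appearance plus motion posture features."""
    static_appearance: dict = field(default_factory=dict)  # e.g. contour, hair
    motion_postures: dict = field(default_factory=dict)    # e.g. gait amplitude

    def learn_frame(self, static_feats: dict, posture_feats: dict):
        # Continuous learning: each frame refreshes the stored features,
        # so the base always holds the target's latest appearance.
        self.static_appearance.update(static_feats)
        self.motion_postures.update(posture_feats)
```

Because every frame folds its observations into the base, the stored appearance tracks the target as it changes, which is what later makes re-identification after a loss possible.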
Step 104: after determining that the tracking target is lost, the embedded platform developer terminal identifies whether the feature information of a shooting target in the video stream matches the feature information of the tracking target. If so, the terminal continues to call the openTLD process to determine the tracking parameters of the tracking target, and controls the camera device to zoom and/or adjusts the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target and send the video stream to the client.
Referring to fig. 9, in the embodiment of the invention, when the tracked target is blocked by an obstruction, or leaves the shooting range of the camera device for other reasons, the embedded platform developer terminal s30 determines that the tracking target is lost. The camera device s20 then stays at the last position before the loss and continues to acquire the video stream at that position. openTLD keeps identifying the video stream after the loss, determining whether a moving target matching the target feature information base appears. When the tracking target reappears within the shooting range of the camera device s20, openTLD identifies that a moving target matching the target feature information base has appeared in the video stream.
It should be noted that openTLD recognizes a moving target in the video stream that matches the target feature information base in two stages. It first checks whether the static appearance features of the moving target match those in the feature base, for example the shape and color of its hair, the proportion of head to body, the size of the eyes, the shape of its appearance, and whether the body contour fits (for instance, whether the original tracked target had a tail that the moving target lacks, or whether the limbs of both are intact). It then checks whether the motion appearance posture of the moving target matches the motion postures stored in the feature base; the motion appearance posture refers to the target's posture during dynamic movement, such as its posture when walking or the swing amplitude and frequency of its limbs when running. Once a match is confirmed, the embedded platform developer terminal s30 continues to call the openTLD process to determine the tracking parameters of the tracking target, and controls the camera device s20 to zoom and/or adjusts the angle of the electric pan-tilt s10 according to the tracking parameters to resume tracking.
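The two-stage check can be sketched as follows. This is an illustrative simplification under stated assumptions: features are reduced to named scalars, and the tolerance value is arbitrary rather than taken from the patent:

```python
def features_match(stored: dict, candidate: dict, tol: float = 0.15) -> bool:
    """Every stored feature must be present in the candidate and within tol."""
    return all(k in candidate and abs(stored[k] - candidate[k]) <= tol
               for k in stored)

def is_same_target(base_static: dict, base_posture: dict,
                   cand_static: dict, cand_posture: dict) -> bool:
    # Stage 1: static appearance (hair, eyes, body contour, ...).
    if not features_match(base_static, cand_static):
        return False
    # Stage 2: motion posture (gait swing amplitude and frequency, ...).
    return features_match(base_posture, cand_posture)
```

Ordering the cheap static check first means most non-matching candidates are rejected without ever evaluating their motion postures.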
Optionally, referring to fig. 7, step 104 may further include the following steps:
step 104 a: and after the embedded platform developer terminal determines that the tracking target is lost, the embedded platform developer terminal sends the video stream with the lost tracking target to the client.
Step 104b: the client receives a region delineated by the user in the video stream recorded after the tracking target was lost, and sends the delineated region to the embedded platform developer terminal.
Step 104c: the embedded platform developer terminal controls the camera device to collect the video stream of the delineated region according to the delineated region, and calls the openTLD process to identify the video stream of the delineated region.
Step 104d: after the tracking target reappears in the delineated region, the openTLD process identifies whether the feature information of a shooting target in the video stream after the reappearance matches the feature information of the tracking target.
Referring to fig. 9, in the embodiment of the invention, when the tracked target is blocked by an obstruction, or leaves the shooting range of the camera device s20 for other reasons, the embedded platform developer terminal s30 determines that the tracking target is lost. The camera device s20 then stays at the last position before the loss and continues to collect the video stream at that position, which the terminal s30 sends to the PC client s50. The user calls up the video pictures from before and after the loss and, based on them, delineates regions where the tracking target may appear, including the region where it moved before the loss and regions where it is likely to move after the loss; these regions are sent to the terminal s30. According to the delineated regions, the terminal s30 controls the camera device s20 to collect images of those regions, and openTLD continues to identify the resulting video stream, judging whether a moving target matching the target feature information base appears. When a match is found, the tracking target is considered re-identified, and the terminal continues to call the openTLD process to track it; if no match is found, the openTLD process is disabled and the tracking of the tracking target ends.
Optionally, fig. 8 shows a flowchart of a method for adjusting the angle of the electric pan-tilt according to an embodiment of the invention. The electric pan-tilt comprises a pan-tilt support, a pan-tilt controller, and a motor. The method by which the embedded platform developer terminal adjusts the angle of the electric pan-tilt according to the tracking parameters comprises the following steps:
step 201: and the embedded platform developer terminal sends the tracking parameters of the tracking target to the holder controller.
Step 202: the pan-tilt controller calculates the mechanical angle adjustment parameters of the pan-tilt support according to the tracking parameters.
Step 203: the pan-tilt controller controls the motor to rotate according to the mechanical angle adjusting parameters, adjusts the mechanical angle of the pan-tilt support, and the pan-tilt support drives the camera shooting equipment to rotate, so that the tracking of the tracking target is realized.
Referring to fig. 9, in the embodiment of the invention, the electric pan-tilt s10 comprises a pan-tilt support s101, a pan-tilt controller s102, and a motor s104, and must be connected to an external power supply s103. The pan-tilt support s101 and the camera device s20 are joined by a rigid link to ensure that the camera device s20 remains stable and does not shake during shooting. The pan-tilt controller s102 and the embedded platform developer terminal s30 can exchange data wirelessly or over a communication line, both having communication interfaces. The pan-tilt controller s102 receives the tracking parameters, calculates the mechanical angle adjustment parameters of the pan-tilt support s101 from data such as the position, speed, and acceleration of the tracking target and features such as its motion trajectory, and controls the motor s104 to rotate according to those parameters, driving the pan-tilt support s101 through transmission components such as gears or belts. It can be understood that the rotation of the pan-tilt support s101 can be horizontal or vertical. Because of the rigid link, the pan-tilt support s101 drives the camera device s20 to rotate, thereby tracking the tracked target.
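The patent does not specify how the mechanical angle adjustment parameters of step 202 are computed. A minimal sketch, assuming a pinhole camera model with known horizontal and vertical fields of view (the FOV values below are illustrative), could convert the tracked target's pixel offset from the frame centre into pan and tilt corrections:

```python
import math

def angle_adjustment(dx_px: float, dy_px: float, frame_w: int, frame_h: int,
                     hfov_deg: float = 60.0, vfov_deg: float = 34.0):
    """Map a pixel offset from frame centre to (pan, tilt) corrections in
    degrees, using the pinhole relation tan(angle) scales linearly with
    normalized image-plane offset."""
    pan = math.degrees(math.atan(math.tan(math.radians(hfov_deg / 2))
                                 * (2 * dx_px / frame_w)))
    tilt = math.degrees(math.atan(math.tan(math.radians(vfov_deg / 2))
                                  * (2 * dy_px / frame_h)))
    return pan, tilt
```

A controller would feed these corrections (possibly scaled by target speed and acceleration from the tracking parameters) to the motor to recentre the target in the frame.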
For example, referring to fig. 9, a system capable of tracking a moving target includes an embedded platform developer terminal s30, a camera device s20, an electric pan-tilt s10, a network server s40, and a PC client s50. The embedded platform developer terminal s30 is used to control, monitor, or manage the other devices and has an openTLD program installed. openTLD is a visual tracking algorithm that continuously learns a locked target to obtain its latest appearance features while detecting, extracting, identifying, and tracking the moving target in an image sequence to obtain its motion parameters, such as position, speed, acceleration, and motion trajectory, for further processing and analysis. The embedded platform developer terminal s30 is connected to the camera device s20, the electric pan-tilt s10, and the PC client s50 through communication lines; if the devices all support wireless communication, data can instead be transmitted wirelessly through the network server s40.
When a moving target needs to be tracked, the camera device s20 first shoots the tracked target to obtain a video stream containing the tracked target and images of its surrounding environment. Since the camera device s20 covers a relatively large range, the captured image may contain many moving targets, so the embedded platform developer terminal s30 actively obtains the video stream of the multiple moving targets, including the tracked target, and sends it to the PC client s50. The user views this video stream through the PC client s50; if the tracking target's image in the video stream is not clear enough, the user sends an instruction to the embedded platform developer terminal s30 to control the focusing of the camera device s20, and the terminal controls the camera device s20 to focus so as to obtain a clear video stream of the tracking target.
After viewing the video stream of the multiple moving targets through the PC client s50, the user selects one or more targets to be tracked and sends the selection to the embedded platform developer terminal s30. The terminal s30 receives the selected targets and calls the openTLD process. openTLD obtains the motion parameters of the tracked target, including its position, speed, acceleration, and motion trajectory, from movements such as walking, running, and jumping and from the target's position relative to its surroundings; it also detects the target's motion appearance postures during movement, such as walking, running, jumping, squatting, and rolling, to obtain its motion appearance posture features. Based on this information, the embedded platform developer terminal s30 controls the camera device s20 to zoom so that the tracked target achieves the best imaging effect in the video stream, and sends the tracking parameters to the electric pan-tilt s10. The pan-tilt controller s102 calculates the mechanical angle adjustment parameters of the pan-tilt support s101 and controls the motor s104 to rotate, so that the pan-tilt support drives the camera device s20 to track the tracked target.
After the tracked target is blocked by an obstruction or leaves the shooting range of the camera device s20 for other reasons, the embedded platform developer terminal s30 determines that the tracked target is lost. The camera device s20 stays at the last position before the loss and continues to collect the video stream at that position, which the terminal s30 sends to the PC client s50. The user calls up the video pictures from before and after the loss and, based on them, delineates regions where the tracked target may appear, including the region where it moved before the loss and regions where it is likely to move after the loss; these regions are sent to the terminal s30. The terminal s30 controls the camera device s20 to collect images of the delineated regions, and openTLD continues to identify the resulting video stream, judging whether a moving target matching the target feature information base appears. If such a target appears, tracking resumes: the terminal continues to call the openTLD process to determine the tracking parameters and controls the camera device to zoom and/or adjusts the angle of the electric pan-tilt accordingly. If no matching target appears, the openTLD process is disabled and the tracking of the tracked target ends.
Alternatively, referring to the block diagram of a target tracking apparatus shown in fig. 10, the apparatus is applied to a tracking system comprising a camera device, an embedded platform developer terminal, an electric pan-tilt, and a client, where the camera device, the electric pan-tilt, and the client are externally connected to the embedded platform developer terminal. The apparatus includes:
an acquisition and sending module 310, used by the embedded platform developer terminal to acquire a video stream collected by the camera device, the video stream containing a plurality of shooting targets and images of their surrounding areas, and to send the video stream to the client; the client receives the video stream, selects a tracking target from the plurality of shooting targets, and sends the tracking target to the calling control module;
a calling control module 320, used by the embedded platform developer terminal to call an openTLD process to determine the tracking parameters of the tracked target, to control the camera device to zoom and/or adjust the angle of the electric pan-tilt according to the tracking parameters so as to track the tracked target and send the video stream to the client, and to determine the target feature information of the tracked target;
and an identification sending module 330, configured to identify, after the embedded platform developer terminal determines that the tracking target is lost, whether the feature information of a shooting target in the video stream matches the feature information of the tracking target; if so, to continue calling the openTLD process to determine the tracking parameters of the tracking target, and to control the camera device to zoom and/or adjust the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target and send the video stream to the client.
Optionally, referring to fig. 11, on the basis of fig. 10, the apparatus may further include:
the call control module 320 includes:
a calling learning submodule 3201, used by the openTLD process to learn the motion parameters of the tracked target to obtain first tracking parameters of the tracked target, where the motion parameters include position, speed, acceleration, and motion trajectory features;
a calling detection submodule 3202, used by the openTLD process to detect the motion posture of the tracked target to obtain second tracking parameters of the tracked target, where the motion posture includes various motion appearance posture features;
and a calling establishment submodule 3203, used by the openTLD process to obtain the static appearance features of the target to be tracked from each frame of the target's video stream, and to establish a tracking target feature information base in the embedded platform developer terminal from the static appearance features and the motion posture features.
The identification transmission module 330 includes:
a determining and sending submodule 3301, configured to send, after the embedded platform developer terminal determines that the tracking target is lost, the video stream recorded after the loss to the client; the client receives a region delineated by the user in that video stream and sends the delineated region to the embedded platform developer terminal.
A control calling submodule 3302 is used by the embedded platform developer terminal to control the camera device to collect the video stream of the delineated region and to call the openTLD process to identify that video stream.
An identifying submodule 3303 is configured to identify, via the openTLD process, after the tracking target reappears in the delineated region, whether the feature information of a shooting target in the video stream after the reappearance matches the feature information of the tracking target.
An identification disabling module 340 is used, after it is determined that the tracking target is lost, to identify whether the feature information of a shooting target in the video stream matches the feature information of the tracking target and, if not, to disable the openTLD process and end the tracking of the tracking target.
Through the above embodiments, automatic identification and automatic tracking of a target are achieved when a lost tracking target reappears, solving the problem in the prior art that a tracking target cannot be automatically identified and tracked after it is lost and reappears.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and the device for tracking the target provided by the invention are described in detail, and the principle and the implementation mode of the invention are explained by applying a specific example, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for tracking a target, the method being applied to a tracking system, the tracking system comprising: camera equipment, an embedded platform developer terminal, an electric pan-tilt, and a client, wherein the camera equipment, the electric pan-tilt, and the client are externally connected to the embedded platform developer terminal, and the method comprises the following steps:
the embedded platform developer terminal acquires a video stream collected by the camera equipment, wherein the video stream comprises a plurality of shooting targets, and sends the video stream to the client;
the client receives a tracking target selected by a user from a plurality of shooting targets of the video stream, and sends the tracking target to the embedded platform developer terminal;
the embedded platform developer terminal calls an openTLD process to determine tracking parameters of the tracking target, controls the camera equipment to zoom and/or adjusts the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target and send the video stream to the client, determines target feature information of the tracking target, and establishes a target feature information base of the tracking target in the embedded platform developer terminal;
and after determining that the tracking target is lost, the embedded platform developer terminal identifies whether the feature information of the shooting targets in the video stream matches the feature information of the tracking target; if so, the embedded platform developer terminal continues to call the openTLD process to determine the tracking parameters of the tracking target, and controls the camera equipment to zoom and/or adjusts the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target and send the video stream to the client.
2. The method according to claim 1, wherein the embedded platform developer terminal identifying, after determining that the tracking target is lost, whether the feature information of the shooting targets in the video stream matches the target feature information comprises:
after determining that the tracking target is lost, the embedded platform developer terminal sends the video stream in which the tracking target is lost to the client;
the client receives a region delineated by a user in the video stream in which the tracking target is lost, and sends the delineated region to the embedded platform developer terminal;
the embedded platform developer terminal controls the camera equipment to collect the video stream of the delineated region according to the delineated region, and calls the openTLD process to identify the video stream of the delineated region;
and when the tracking target reappears in the delineated region, the openTLD process identifies whether the feature information of the shooting targets in the video stream after the tracking target reappears matches the feature information of the tracking target.
3. The method according to claim 1, wherein after the embedded platform developer terminal determines that the tracking target is lost and identifies whether the feature information of the shooting targets in the video stream matches the target feature information, the method further comprises:
and if the feature information of the shooting targets in the video stream does not match the feature information of the tracking target, the embedded platform developer terminal stops the openTLD process and ends the tracking of the tracking target.
4. The method according to claim 1, wherein the embedded platform developer terminal calling the openTLD process to determine the tracking parameters of the tracking target comprises:
the tracking parameters comprise a first tracking parameter and a second tracking parameter;
the openTLD process learns motion parameters of the tracking target to obtain the first tracking parameter of the tracking target, wherein the motion parameters comprise position, speed, acceleration, and motion trajectory features;
and the openTLD process detects the motion posture of the tracking target to obtain the second tracking parameter of the tracking target, wherein the motion posture comprises various motion appearance posture features.
5. The method according to claim 4, wherein the embedded platform developer terminal calling the openTLD process to determine the target feature information of the tracking target and establishing the target feature information base of the tracking target in the embedded platform developer terminal comprises:
the openTLD process obtains static appearance features of the tracking target from each frame image of the video stream of the tracking target, and establishes the tracking target feature information base in the embedded platform developer terminal according to the static appearance features and the motion posture.
6. The method according to claim 1, wherein the electric pan-tilt comprises a pan-tilt support, a pan-tilt controller, and a motor, a rigid connection is adopted between the pan-tilt support and the camera equipment, and the embedded platform developer terminal calling the openTLD process to determine the tracking parameters of the tracking target and controlling the camera equipment to zoom and/or adjusting the angle of the electric pan-tilt according to the tracking parameters comprises:
the embedded platform developer terminal sends the tracking parameters to the pan-tilt controller;
the pan-tilt controller calculates mechanical angle adjustment parameters of the pan-tilt support according to the tracking parameters;
and the pan-tilt controller controls the motor to rotate according to the mechanical angle adjustment parameters so as to adjust the mechanical angle of the pan-tilt support, and the pan-tilt support drives the camera equipment to rotate, thereby realizing the tracking of the tracking target.
7. An apparatus for tracking a target, the apparatus being applied to a tracking system, the tracking system comprising: camera equipment, an embedded platform developer terminal, an electric pan-tilt, and a client, wherein the camera equipment, the electric pan-tilt, and the client are externally connected to the embedded platform developer terminal, and the apparatus comprises:
an acquisition and sending module, used for the embedded platform developer terminal to acquire a video stream collected by the camera equipment, wherein the video stream comprises a plurality of shooting targets and images of the area surrounding the targets, and to send the video stream to the client; the client receives the video stream, selects a tracking target from the plurality of shooting targets, and sends the tracking target to the calling control module;
a calling control module, used for the embedded platform developer terminal to call an openTLD process to determine tracking parameters of the tracking target, control the camera equipment to zoom and/or adjust the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target and send the video stream to the client, determine target feature information of the tracking target, and establish a target feature information base of the tracking target in the embedded platform developer terminal;
and an identification sending module, used for identifying, after the embedded platform developer terminal determines that the tracking target is lost, whether the feature information of the shooting targets in the video stream matches the feature information of the tracking target, and if so, continuing to call the openTLD process to determine the tracking parameters of the tracking target and controlling the camera equipment to zoom and/or adjusting the angle of the electric pan-tilt according to the tracking parameters so as to track the tracking target and send the video stream to the client.
8. The apparatus of claim 7, wherein the identification transmission module comprises:
a determining and sending submodule, used for the embedded platform developer terminal to send, after determining that the tracking target is lost, the video stream in which the tracking target is lost to the client; the client receives a region delineated by a user in the video stream in which the tracking target is lost, and sends the delineated region to the embedded platform developer terminal;
a control calling submodule, used for the embedded platform developer terminal to control the camera equipment to collect the video stream of the delineated region according to the delineated region, and to call the openTLD process to identify the video stream of the delineated region;
and an identifying submodule, used for the openTLD process to identify, after the tracking target reappears in the delineated region, whether the feature information of the shooting targets in the video stream after the tracking target reappears matches the feature information of the tracking target.
9. The apparatus of claim 7, wherein the call control module comprises:
a learning submodule, used for the openTLD process to learn motion parameters of the tracking target to obtain a first tracking parameter of the tracking target, wherein the motion parameters comprise position, speed, acceleration, and motion trajectory features;
a detection submodule, used for the openTLD process to detect the motion posture of the tracking target to obtain a second tracking parameter of the tracking target, wherein the motion posture comprises various motion appearance posture features;
and an establishing submodule, used for the openTLD process to obtain static appearance features of the tracking target from each frame image of the video stream of the tracking target, and to establish a tracking target feature information base in the embedded platform developer terminal according to the static appearance features and the motion posture.
10. The apparatus of claim 7, further comprising:
and an identification disabling module, used for identifying, after the tracking target is determined to be lost, whether the feature information of the shooting targets in the video stream matches the feature information of the tracking target, and if not, stopping the openTLD process and ending the tracking of the tracking target.
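Taken together, claims 1 through 3 describe a loss-and-recovery flow: track, detect loss, attempt re-identification among the shooting targets, then either resume tracking or stop the process. The sketch below renders that flow as a small state machine; the state names and the `matcher` callback are assumptions introduced for illustration, and the actual openTLD invocation and pan-tilt control are abstracted away.

```python
class TargetTracker:
    """Illustrative state machine for the loss/re-identification flow of
    claims 1-3. States 'tracking', 'lost', 'stopped' are assumed names."""

    def __init__(self, matcher):
        self.state = "tracking"
        self.matcher = matcher  # callable(frame) -> bool; stands in for openTLD matching

    def on_frame(self, frame, target_visible):
        """Advance the state machine for one video frame.

        target_visible: whether any shooting target is present in the frame."""
        if self.state == "stopped":
            return self.state
        if self.state == "tracking":
            if not target_visible:
                self.state = "lost"       # tracker reports loss of the target
        elif self.state == "lost":
            if self.matcher(frame):
                self.state = "tracking"   # claim 1: feature match -> resume tracking
            elif target_visible:
                self.state = "stopped"    # claim 3: targets present, none match -> stop
        return self.state
```

For example, with a matcher that only recognizes the frame `"target"`, the tracker goes to `"lost"` when nothing is visible, back to `"tracking"` when the target reappears, and to `"stopped"` when only non-matching targets are visible.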
CN201811308505.9A 2018-11-05 2018-11-05 Target tracking method and device Active CN109597431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811308505.9A CN109597431B (en) 2018-11-05 2018-11-05 Target tracking method and device


Publications (2)

Publication Number Publication Date
CN109597431A CN109597431A (en) 2019-04-09
CN109597431B true CN109597431B (en) 2020-08-04

Family

ID=65957573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811308505.9A Active CN109597431B (en) 2018-11-05 2018-11-05 Target tracking method and device

Country Status (1)

Country Link
CN (1) CN109597431B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112204943B (en) * 2019-07-16 2022-05-20 深圳市大疆创新科技有限公司 Photographing method, apparatus, system, and computer-readable storage medium
WO2021026804A1 (en) * 2019-08-14 2021-02-18 深圳市大疆创新科技有限公司 Cradle head-based target following method and apparatus, cradle head and computer storage medium
CN111198561B (en) * 2019-12-05 2021-10-22 浙江大华技术股份有限公司 Motion control method and device for target tracking, computer equipment and storage medium
CN113763416A (en) * 2020-06-02 2021-12-07 璞洛泰珂(上海)智能科技有限公司 Automatic labeling and tracking method, device, equipment and medium based on target detection
CN111932579A (en) * 2020-08-12 2020-11-13 广东技术师范大学 Method and device for adjusting equipment angle based on motion trail of tracked target
CN112291480B (en) * 2020-12-03 2022-06-21 维沃移动通信有限公司 Tracking focusing method, tracking focusing device, electronic device and readable storage medium
CN112714253B (en) * 2020-12-28 2022-08-26 维沃移动通信有限公司 Video recording method and device, electronic equipment and readable storage medium
CN113606456A (en) * 2021-07-02 2021-11-05 国网江苏省电力有限公司电力科学研究院 High-precision numerical control holder capable of automatically tracking and aiming
CN115623145A (en) * 2021-07-12 2023-01-17 华为技术有限公司 Video shooting method and device, electronic equipment and storage medium
CN114356077A (en) * 2021-12-15 2022-04-15 歌尔光学科技有限公司 Data processing method and device, handle and head-mounted display system
CN114979611A (en) * 2022-05-19 2022-08-30 国网智能科技股份有限公司 Binocular sensing system and method
CN115225815B (en) * 2022-06-20 2023-07-25 南方科技大学 Intelligent target tracking shooting method, server, shooting system, equipment and medium
CN114900672B (en) * 2022-07-14 2022-10-28 杭州舜立光电科技有限公司 Zooming tracking method
CN115690924B (en) * 2022-12-30 2023-04-18 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) Potential user identification method and device for unmanned vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102638675A (en) * 2012-04-01 2012-08-15 安科智慧城市技术(中国)有限公司 Method and system for target tracking by using multi-view videos
CN105654512A (en) * 2015-12-29 2016-06-08 深圳羚羊微服机器人科技有限公司 Target tracking method and device
CN105989367A (en) * 2015-02-04 2016-10-05 阿里巴巴集团控股有限公司 Target acquisition method and equipment
CN107197199A (en) * 2017-05-22 2017-09-22 哈尔滨工程大学 A kind of intelligent monitoring and controlling device and method for tracking target
CN107832683A (en) * 2017-10-24 2018-03-23 亮风台(上海)信息科技有限公司 A kind of method for tracking target and system
CN108062115A (en) * 2018-02-07 2018-05-22 成都新舟锐视科技有限公司 A kind of continuous tracking system of multiple target based on cradle head control technology and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8866929B2 (en) * 2010-10-12 2014-10-21 Ability Enterprise Co., Ltd. Method of producing a still image



Similar Documents

Publication Publication Date Title
CN109597431B (en) Target tracking method and device
CN110166728B (en) Video networking conference opening method and device
CN110022307B (en) Control method of monitoring equipment and monitoring access server
CN110636257B (en) Monitoring video processing method and device, electronic equipment and storage medium
CN108881948B (en) Method and system for video inspection network polling monitoring video
CN109218306B (en) Audio and video data stream processing method and system
CN110719425A (en) Video data playing method and device
CN110769310A (en) Video processing method and device based on video network
CN108810457B (en) Method and system for controlling video network monitoring camera
CN108630215B (en) Echo suppression method and device based on video networking
CN108574816B (en) Video networking terminal and communication method and device based on video networking terminal
CN109905616B (en) Method and device for switching video pictures
CN111210462A (en) Alarm method and device
CN110913162A (en) Audio and video stream data processing method and system
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110557612A (en) control method of monitoring equipment and video networking system
CN109963123B (en) Camera control method and device
CN111447396A (en) Audio and video transmission method and device, electronic equipment and storage medium
CN110049069B (en) Data acquisition method and device
CN110719429B (en) High-speed shooting instrument processing method and device based on video network
CN109640194B (en) Method and device for acquiring terminal permission through two-dimensional code based on video network
CN110896461B (en) Unmanned aerial vehicle shooting video display method, device and system
CN110401633B (en) Monitoring and inspection data synchronization method and system
CN109618125B (en) Monitoring method and device based on video network
CN108882049B (en) Data display method and video networking terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201230

Address after: 570105 room 1201, Central International Plaza, 77 Binhai street, Longhua District, Haikou City, Hainan Province

Patentee after: Hainan Qiantang Shilian Information Technology Co.,Ltd.

Address before: 100000 Beijing Dongcheng District Qinglong Hutong 1 Song Hua Building A1103-1113

Patentee before: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd.
