CN107105159B - Embedded moving target real-time detection tracking system and method based on SoC - Google Patents

Embedded moving target real-time detection tracking system and method based on SoC

Info

Publication number
CN107105159B
CN107105159B
Authority
CN
China
Prior art keywords
image
target
tracking
classifier
tracking target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710240232.8A
Other languages
Chinese (zh)
Other versions
CN107105159A (en)
Inventor
邵鹏
张燕
赵伟龙
朱春健
王光
张国栋
张镇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Wanteng Digital Technology Co.,Ltd.
Original Assignee
Shandong Wanteng Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Wanteng Electronic Technology Co Ltd filed Critical Shandong Wanteng Electronic Technology Co Ltd
Priority to CN201710240232.8A priority Critical patent/CN107105159B/en
Publication of CN107105159A publication Critical patent/CN107105159A/en
Application granted granted Critical
Publication of CN107105159B publication Critical patent/CN107105159B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source
    • H04N7/185: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an SoC-based embedded system and method for real-time detection and tracking of moving targets. The system comprises an image sensor, an acquisition card, an image storage unit, an image processing unit, a control processing unit, a pan-tilt head and an image display unit. The image sensor, driven by the pan-tilt head, performs 360-degree rotary shooting within the monitored range. The image processing unit processes the transmitted images and sends an instruction to the control processing unit according to the processing result; the control processing unit issues a pan-tilt control instruction, and the pan-tilt head rotates the image sensor to the expected angle according to the instruction. Meanwhile, the image processing unit sends the processed image to the image display unit, which displays the image processing result.

Description

Embedded moving target real-time detection tracking system and method based on SoC
Technical Field
The invention relates to an embedded moving target real-time detection and tracking system and method based on SoC (System on Chip).
Background
With the development of society and the continuous advance of science and technology, a large number of new technologies have emerged in the 21st century, changing human society and everyday life. Intelligent technology and multimedia technology have become leading technologies of the information era, and fields such as computer vision, machine vision and video image processing have become frontier research hotspots. The task of moving target detection and tracking is to find the tracked target in the current video and to track it automatically; it has wide application prospects and long-term economic value in industrial manufacturing, scene monitoring, social life, medical treatment, military affairs, traffic and road monitoring and other fields. However, conventional video monitoring systems suffer from large size, low efficiency and poor real-time performance, so a modern technology for monitoring places or objects is urgently needed.
Target tracking follows the motion trajectory of a moving target using its characteristic information. Popular target tracking methods mainly include template matching, Kalman filtering, particle filtering and adaptive mean filtering algorithms, but their computational load is very large, and their real-time performance and accuracy cannot be guaranteed for high-definition video image processing.
Disclosure of Invention
The invention aims to solve these problems and provides an SoC (System on Chip) based embedded system and method for real-time detection and tracking of moving targets. It overcomes the poor real-time performance and low efficiency caused by human factors in conventional video monitoring systems, as well as the resource waste caused by their large size, and it guarantees the real-time performance and accuracy required when detecting and tracking a moving target.
In order to achieve this purpose, the invention adopts the following technical scheme:
The SoC-based embedded moving target real-time detection and tracking system comprises: an image sensor, an acquisition card, an image storage unit, an image processing unit, a control processing unit, a pan-tilt head and an image display unit;
the image sensor, driven by the pan-tilt head, performs 360-degree rotary shooting within the monitored range. The image processing unit processes the transmitted images and sends an instruction to the control processing unit according to the processing result; the control processing unit issues a pan-tilt control instruction, and the pan-tilt head rotates the image sensor to the expected angle according to the instruction. Meanwhile, the image processing unit sends the processed image to the image display unit, which displays the image processing result.
The image processing unit comprises an ARM processor and a DSP processor connected with each other. After the image processing unit receives an image transmitted by the image storage unit, the OpenCV-based target detection and tracking algorithm stored in the ARM processor starts to process it. During processing, when the algorithm calls a library function in the OpenCV library to process the image, OpenCL automatically invokes the DSP processor and distributes the part requiring parallel acceleration to the DSP processor over a high-speed bus. After the DSP processor finishes, it feeds the processing result back to the ARM processor over the high-speed bus, the ARM processor continues the subsequent processing according to that result, and finally the ARM processor sends the processing result to the image display unit for display.
The image storage unit refers to a DDR/SDRAM storage module; the image processing unit consists of an ARM processor and a DSP processor connected with each other through a bus; the control processing unit refers to a pan-tilt control module; the image display unit refers to a VGA display module; the acquisition card refers to a CMOS data acquisition module.
The DDR/SDRAM storage module, the CMOS data acquisition module, the VGA display module and the pan-tilt control module are all connected with the ARM processor through buses.
The DDR/SDRAM storage module, the CMOS data acquisition module, the VGA display module and the pan-tilt control module are all connected with the DSP processor through buses.
OpenCV stands for Open Source Computer Vision Library; it is a cross-platform computer vision library distributed under the BSD (open-source) license that runs on the Linux, Windows, Android and Mac OS operating systems.
OpenCL stands for Open Computing Language.
The part requiring parallel acceleration refers to processing multiple rows of pixels of one image frame in parallel within one clock cycle.
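As a concrete illustration of this row-parallel offload, the following is a minimal sketch, assuming the SoC vendor's OpenCL runtime exposes the DSP as an accelerator-class compute device and using the pyopencl bindings; the kernel, the thresholding operation and the one-work-item-per-row mapping are illustrative assumptions, not the patent's actual kernels.

```python
import numpy as np
import pyopencl as cl

# Stand-in for one grayscale video frame.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

platform = cl.get_platforms()[0]
devices = platform.get_devices()
# Prefer an accelerator-class device (the DSP on such an SoC); otherwise use any device.
accels = [d for d in devices if d.type & cl.device_type.ACCELERATOR]
device = (accels or devices)[0]
ctx = cl.Context([device])
queue = cl.CommandQueue(ctx)

# One work-item per image row: many rows of the same frame are processed in parallel.
kernel_src = """
__kernel void threshold_rows(__global const uchar *in,
                             __global uchar *out,
                             const int width)
{
    int row = get_global_id(0);
    for (int col = 0; col < width; ++col) {
        int idx = row * width + col;
        out[idx] = (in[idx] > 128) ? (uchar)255 : (uchar)0;
    }
}
"""
program = cl.Program(ctx, kernel_src).build()

mf = cl.mem_flags
d_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame)
d_out = cl.Buffer(ctx, mf.WRITE_ONLY, frame.nbytes)

# Launch one work-item per row; the OpenCL runtime schedules them on the chosen device.
program.threshold_rows(queue, (frame.shape[0],), None,
                       d_in, d_out, np.int32(frame.shape[1]))

result = np.empty_like(frame)
cl.enqueue_copy(queue, result, d_out)
```

On a board without such an accelerator the same sketch simply falls back to whatever OpenCL device is available, which mirrors the automatic device retrieval described above.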
The SoC-based embedded moving target real-time detection and tracking method comprises the following steps:
Step (1): the image sensor performs video acquisition on the target to be detected and tracked, obtaining a video stream of the target;
Step (2): the acquisition card performs format conversion on the acquired video stream, converting the analog voltage signal into digital image data;
Step (3): the format-converted video stream is transmitted to the image storage unit, which stores the subsequently arriving video images and forwards them to the image processing unit for processing;
Step (4): the target to be tracked is selected through a moving target detection and tracking algorithm based on the OpenCV library, and the subsequently transmitted video images are processed. During processing, when a library function in the OpenCV library is called to process an image, OpenCL automatically invokes the DSP processor and distributes the part requiring parallel acceleration to the DSP processor over a high-speed bus; after the DSP processor finishes, the processing result is fed back to the ARM processor over the high-speed bus, and the ARM processor continues the subsequent processing according to the result fed back by the DSP processor. Finally, the ARM processor detects and tracks the selected target, sends the position of the target detected in the current frame to the control processing unit, and sends the image processing result to the image display unit;
Step (5): the control processing unit receives the information sent by the image processing unit and controls the pan-tilt head, which rotates the image sensor to the expected position according to the control instruction;
Step (6): the image display unit displays the processed image transmitted from the image processing unit and shows the processing result.
The moving target detection and tracking algorithm in step (4) comprises the following steps:
Step (4-1): identifying the 1st frame of the tracking process in the video image and selecting a target area image as the tracking target P;
Step (4-2): according to the top-left corner coordinates of the tracking target P and its length and width, creating sliding windows at different scaling scales and moving step lengths, and storing, for every sliding window, its top-left corner coordinates, its size, its scaling scale and its overlap rate with the window of the tracking target P;
Step (4-3): searching all sliding windows for the 10 windows with the minimum distance to the tracking target P, storing the found windows in a good_box container and generating positive samples through affine transformation, while windows whose overlap rate is lower than 0.2 are placed in a bad_box container as negative samples;
Step (4-4): preparing a classifier with three parts, a variance classifier, a set classifier and a nearest neighbor classifier, connected in sequence;
Step (4-5): sampling N characteristic points uniformly inside the tracking target P, tracking the N points with the pyramid optical flow method to predict a forward trajectory, tracking back to generate a backward trajectory, calling library functions in the OpenCV library to compute the forward-backward trajectory error, and judging from an error threshold whether the tracking target is present: if the forward-backward error is larger than the error threshold, the tracking target is present and the method proceeds to step (4-6); if the forward-backward error is smaller than the error threshold, no target is tracked and the method moves on to detection in the next frame image;
Step (4-6): the sliding windows that pass the layer-by-layer screening of the three classifiers, namely the variance classifier, the set classifier and the nearest neighbor classifier, are used as the final candidate target windows;
Step (4-7): from the finally screened candidate target windows, the window closest to the tracking target P is selected as the tracking target to be detected in the next frame.
Step (4-6) comprises the following sub-steps:
Step (4-6-1): the variance classifier calculates the variance of every sliding window using the integral image; a sliding window whose variance is larger than a first set threshold is considered to contain the tracking target P, and the screened sliding windows containing the tracking target P are sent to the set classifier;
Step (4-6-2): the set classifier first normalizes the sliding window to obtain an image block, then obtains the characteristic value of the image block and calculates the accumulated posterior probability corresponding to that characteristic value; if the posterior probability is larger than a second set threshold, the window is considered to contain the tracking target P, and the screened sliding windows containing the tracking target P are sent to the nearest neighbor classifier;
Step (4-6-3): the nearest neighbor classifier calculates the correlation similarity between the image block and the tracking target P; if the correlation similarity is larger than a third set threshold, the window is considered to contain the tracking target P, and the sliding windows screened layer by layer through the three classifiers serve as the final candidate target windows.
The overlap ratio is the intersection of each sliding window and the tracking target P window divided by the union of each sliding window and the tracking target P window.
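For concreteness, the following is a minimal sketch of the multi-scale sliding-window grid of step (4-2) together with the overlap rate just defined; the 1.2 scale factor, the 10% step size, the minimum window size and the helper names build_grid and overlap_rate are illustrative assumptions rather than values fixed by the patent.

```python
def overlap_rate(box_a, box_b):
    """Intersection area divided by union area of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def build_grid(img_w, img_h, target, step_ratio=0.1, min_size=20):
    """Multi-scale sliding-window grid around the tracking target P = (x, y, w, h)."""
    tx, ty, tw, th = target
    scales = [1.2 ** i for i in range(-10, 11)]   # zoom out and in around P
    windows = []   # (x, y, w, h, scale, overlap with P), as stored in step (4-2)
    for s in scales:
        w, h = int(round(tw * s)), int(round(th * s))
        if w < min_size or h < min_size or w > img_w or h > img_h:
            continue
        dx, dy = max(1, int(w * step_ratio)), max(1, int(h * step_ratio))
        for y in range(0, img_h - h + 1, dy):
            for x in range(0, img_w - w + 1, dx):
                windows.append((x, y, w, h, s, overlap_rate((x, y, w, h), target)))
    return windows

# The 10 windows closest to P (here: highest overlap) feed the good_box container,
# and windows with overlap below 0.2 feed the bad_box container (step (4-3)).
grid = build_grid(640, 480, (200, 150, 80, 60))
good_box = sorted(grid, key=lambda win: win[5], reverse=True)[:10]
bad_box = [win for win in grid if win[5] < 0.2]
```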
The image block is a vector of size 15 x 15 obtained by normalizing the sliding window;
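The three-stage screening of steps (4-6-1) to (4-6-3) can be sketched as follows, operating on such 15 x 15 normalized image blocks; the threshold values are illustrative assumptions, and fern_posterior stands in for the set classifier's averaged posterior (one possible form of which is sketched after the set-classifier description below).

```python
import cv2
import numpy as np

def window_variance(integ, integ_sq, x, y, w, h):
    """Variance of a window computed from the integral image and squared-integral image."""
    def box_sum(ii):
        return float(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])
    n = w * h
    mean = box_sum(integ) / n
    return box_sum(integ_sq) / n - mean * mean

def cascade(frame_gray, windows, target_patch, fern_posterior,
            var_thresh, post_thresh=0.5, ncc_thresh=0.6):
    """Return the candidate windows that survive all three classifiers."""
    integ, integ_sq = cv2.integral2(frame_gray)
    candidates = []
    for (x, y, w, h) in windows:
        # Step (4-6-1): variance classifier.
        if window_variance(integ, integ_sq, x, y, w, h) <= var_thresh:
            continue
        # Normalize the window to a 15x15 image block.
        patch = cv2.resize(frame_gray[y:y + h, x:x + w], (15, 15))
        # Step (4-6-2): set classifier, averaged posterior over the base classifiers.
        if fern_posterior(patch) <= post_thresh:
            continue
        # Step (4-6-3): nearest neighbor classifier, normalized cross-correlation
        # between the image block and the (15x15) model of the tracking target P.
        ncc = cv2.matchTemplate(patch.astype(np.float32),
                                target_patch.astype(np.float32),
                                cv2.TM_CCOEFF_NORMED)[0, 0]
        if ncc > ncc_thresh:
            candidates.append((x, y, w, h))
    return candidates
```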
the set classifier (random forest) is composed of 10 basic classifiers (10 trees), each tree is composed of 13 judgment nodes, each image block is compared with each judgment node to generate 0 or 1, then the 13 0 or 1 are composed into 13-bit binary codes x (2^13 possibilities), the 13-bit binary codes x are characteristic values, and each binary code x corresponds to a posterior probability p (y/x) ═ p+/(p++n-),p+Representing the number of positive samples, n-Representing the number of negative samples, one set classifier has 10 posterior probabilities, and if the average value is larger than a threshold value, the tracking target P is contained.
And N is 100.
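Step (4-5) with these N = 100 points can be sketched with OpenCV's pyramidal Lucas-Kanade optical flow as follows; the 10 x 10 sampling grid, the window size, the pyramid depth, the error threshold and the median-based rejection of unreliable points are illustrative assumptions rather than values fixed by the patent.

```python
import cv2
import numpy as np

def forward_backward_track(prev_gray, next_gray, bbox, err_thresh=10.0):
    """Track the box (x, y, w, h) from the previous frame to the next one, step (4-5) style."""
    x, y, w, h = bbox
    # N = 100 characteristic points sampled on a uniform 10x10 grid inside the target box.
    xs = np.linspace(x, x + w - 1, 10)
    ys = np.linspace(y, y + h - 1, 10)
    p0 = np.array([[px, py] for py in ys for px in xs], dtype=np.float32).reshape(-1, 1, 2)

    lk = dict(winSize=(15, 15), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
    # Forward trajectory: previous frame -> current frame (pyramid Lucas-Kanade).
    p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None, **lk)
    # Backward trajectory: track the predicted points back to the previous frame.
    p0_back, st2, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, p1, None, **lk)

    fb_err = np.linalg.norm(p0 - p0_back, axis=2).ravel()   # forward-backward error per point
    valid = (st1.ravel() == 1) & (st2.ravel() == 1)
    if not np.any(valid) or np.median(fb_err[valid]) > err_thresh:
        return None   # treated as "no reliable track": move on to the next frame
    # Keep the points whose forward-backward error is below the median, and shift the
    # box by their median displacement to get the prediction for the next frame.
    good = valid & (fb_err <= np.median(fb_err[valid]))
    shift = np.median((p1 - p0).reshape(-1, 2)[good], axis=0)
    return (x + float(shift[0]), y + float(shift[1]), w, h)
```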
The invention has the beneficial effects that:
1. The moving target is tracked accurately and in real time;
2. The viewing angle of the camera is not fixed but rotates continuously following the movement of the target;
3. The monitoring equipment is small and does not require excessive manual intervention or control.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a diagram of the system hardware architecture of the present invention;
FIG. 3 is a flow chart of the operation of the system of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
Fig. 1 is a schematic structural diagram of the present invention, and as shown in fig. 1, the embedded real-time moving target detection and tracking system based on SoC provided in the present invention includes an image sensor, an acquisition card, an image storage unit, an image processing unit, a control processing unit, a pan-tilt and an image display unit.
The image sensor is used for shooting video images.
The acquisition card is connected with the image sensor and converts the analog voltage signal of the image shot by the image sensor into digital image data.
The image storage unit is connected with the acquisition card and stores the image data transmitted by the acquisition card.
The image processing unit is connected with the image storage unit and comprises an ARM (Advanced RISC Machine) chip and a DSP (Digital Signal Processor) chip; it selects the tracked target in the first image frame and detects and tracks the moving target in the subsequently input image frames.
The ARM chip mainly stores the moving target detection and tracking algorithm, which processes the input images using the image processing functions provided by the OpenCV (Open Source Computer Vision) library, while the DSP chip is mainly used to accelerate the computation.
Specifically, fig. 2 is a schematic diagram of the system hardware structure of the present invention. As shown in the figure, the hardware of the SoC-based embedded real-time moving target detection and tracking system comprises an ARM processor, a DSP processor, a DDR/SDRAM storage module, a CMOS data acquisition module, a VGA display module and a pan-tilt control module. First, the CMOS data acquisition module acquires image information and transfers the acquired images to the DDR/SDRAM storage module for buffering; the images are then transmitted over the bus to the ARM processor, where the stored target detection and tracking algorithm starts processing them. Because the algorithm is developed on top of the OpenCV (Open Source Computer Vision) library, when a library function in OpenCV is called to process an image, OpenCL automatically retrieves the available devices (such as the CPU or the DSP) and distributes the part requiring parallel acceleration over a high-speed bus (such as AHB or PCIE) to the retrieved device (the DSP in the present invention). The DSP returns the processed result to the ARM processor over the high-speed bus, the ARM processor continues the subsequent processing according to the returned result, and finally the ARM processor sends the processed result to the VGA display and the control instruction to the pan-tilt control module.
The control processing unit receives the instruction sent by the image processing unit and controls the rotation of the pan-tilt head.
The pan-tilt head is connected with the image sensor and drives its rotation, so that the moving target can be captured in real time.
The image display unit is connected with the image processing unit and displays the processing result of the image.
Fig. 3 is a flow chart of the operation of the system. As shown in the figure, the image sensor, driven by the pan-tilt head, performs 360-degree rotary shooting within the monitored range. The image processing unit processes the transmitted images and sends an instruction to the control processing unit according to the processing result; the control processing unit issues a servo (pan-tilt) control instruction, and the pan-tilt head rotates the image sensor to the expected angle according to the instruction. Meanwhile, the image processing unit sends the processed image to the image display unit, which displays the image processing result. The specific implementation steps are as follows:
Step 1: video of the target to be detected and tracked is acquired, obtaining a video stream of the target;
Step 2: the acquisition card performs format conversion on the acquired video stream;
Step 3: the format-converted video stream is transmitted to the image storage unit, which stores the subsequently arriving video images and forwards them to the image processing unit for processing;
Step 4: the target to be tracked is selected through the moving target detection and tracking algorithm, the subsequently transmitted video images are processed, the selected target is detected and tracked, the position of the target detected in the currently processed frame is sent to the control processing unit, and the image processing result is sent to the image display unit;
Step 5: the control processing unit receives the information sent by the image processing unit and controls the pan-tilt head, which rotates the image sensor to the expected position according to the control instruction;
Step 6: the image display unit displays the processed image transmitted from the image processing unit and shows the processing result.
The moving target detection and tracking algorithm of step 4 comprises the following steps:
Step (4-1): identifying the 1st frame of the tracking process in the video image and selecting a target area image as the tracking target P;
Step (4-2): creating sliding windows according to the coordinates and size of the tracking target P, and storing the coordinates, size and scale of every sliding window together with its overlap rate with the tracking target P;
Step (4-3): searching all sliding windows for the 10 windows with the shortest distance to the tracking target P, i.e. the most similar windows, storing them in a good_box container and generating positive samples through affine transformation, while windows whose overlap rate is lower than 0.2 are placed in a bad_box container as negative samples;
Step (4-4): preparing a classifier with three parts, a variance classifier, a set classifier and a nearest neighbor classifier, the three classifiers being cascaded;
Step (4-5): sampling 100 characteristic points uniformly inside the tracking target P, tracking them with the pyramid optical flow method to predict a forward trajectory, tracking back to generate a backward trajectory, computing the forward-backward trajectory error, and judging from an error threshold whether the tracking target is present;
Step (4-6): the variance classifier calculates the variance of each window to be detected using the integral image; if the variance is greater than a threshold, the window is considered to contain a foreground target and passes on to the set classifier;
Step (4-7): the set classifier first obtains the characteristic value of the image block and then calculates the accumulated posterior probability corresponding to that characteristic value; if the posterior probability is greater than a threshold, the image block is considered to contain a foreground target and passes on to the nearest neighbor classifier;
Step (4-8): the nearest neighbor classifier calculates the correlation similarity and the conservative similarity between the image block and the online model; if the similarity is greater than a threshold, the image block is considered to contain a foreground target, and the windows passing all three classifiers serve as the final candidate target windows;
Step (4-9): from the finally screened candidate target windows, the window with the shortest distance to the tracking target P, i.e. the most similar window, is selected as the tracking target to be detected in the next frame.
During image acquisition, the analog voltage signal is converted into digital image data by a video decoding chip.
The target detection algorithm adopts a detection-tracking method.
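To make this detection-tracking workflow concrete, the following is a minimal sketch of the acquisition-processing-control loop of fig. 3, assuming a camera reachable through OpenCV's VideoCapture, an off-the-shelf OpenCV tracker (requiring OpenCV's tracking module) as a stand-in for the patent's own detector-tracker cascade, and a hypothetical send_pan_tilt_command() helper in place of the pan-tilt control unit.

```python
import cv2

def send_pan_tilt_command(dx, dy):
    # Placeholder for the pan-tilt control unit: the real system would issue the
    # rotation command over its control bus; here the desired correction is printed.
    print(f"pan-tilt correction: dx={dx}, dy={dy}")

cap = cv2.VideoCapture(0)                         # step 1: acquire the video stream
ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame)      # step (4-1): choose the tracking target P
tracker = cv2.TrackerKCF_create()                 # stand-in tracker, not the patent's own cascade
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()                        # steps 2-3: frames arrive already digitised
    if not ok:
        break
    found, bbox = tracker.update(frame)           # step 4: detect/track the target in this frame
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Step 5: tell the pan-tilt head how far the target is from the image centre.
        send_pan_tilt_command(x + w // 2 - frame.shape[1] // 2,
                              y + h // 2 - frame.shape[0] // 2)
    cv2.imshow("tracking", frame)                 # step 6: display the processing result
    if cv2.waitKey(1) & 0xFF == 27:               # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```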
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that various modifications and variations made on the basis of the technical solution of the present invention without inventive effort still fall within its scope of protection.

Claims (8)

1. An embedded moving target real-time detection and tracking system based on SoC, characterized by comprising: an image sensor, an acquisition card, an image storage unit, an image processing unit, a control processing unit, a pan-tilt head and an image display unit;
the image sensor, driven by the pan-tilt head, performs 360-degree rotary shooting within the monitored range; the image processing unit processes the transmitted images and sends an instruction to the control processing unit according to the processing result, the control processing unit issues a pan-tilt control instruction, and the pan-tilt head rotates the image sensor to the expected angle according to the instruction; meanwhile, the image processing unit sends the processed image to the image display unit, and the image display unit displays the image processing result;
the image processing unit comprises an ARM processor and a DSP processor connected with each other; after the image processing unit receives an image transmitted by the image storage unit, the OpenCV-based target detection and tracking algorithm stored in the ARM processor starts to process it; during processing, when the algorithm calls a library function in the OpenCV library to process the image, OpenCL automatically invokes the DSP processor and distributes the part requiring parallel acceleration to the DSP processor over a high-speed bus; after the DSP processor finishes, the processing result is fed back to the ARM processor over the high-speed bus, the ARM processor continues the subsequent processing according to the result fed back by the DSP processor, and finally the ARM processor sends the processing result to the image display unit for display;
the target detection and tracking algorithm is as follows:
identifying the 1st frame of the tracking process in the video image and selecting a target area image as the tracking target P;
according to the top-left corner coordinates of the tracking target P and its length and width, creating sliding windows at different scaling scales and moving step lengths, and storing, for every sliding window, its top-left corner coordinates, its size, its scaling scale and its overlap rate with the window of the tracking target P;
searching all sliding windows for the 10 windows with the minimum distance to the tracking target P, storing the found windows in a good_box container and generating positive samples through affine transformation, while windows whose overlap rate is lower than 0.2 are placed in a bad_box container as negative samples;
preparing a classifier with three parts, a variance classifier, a set classifier and a nearest neighbor classifier, connected in sequence;
sampling N characteristic points inside the tracking target P, tracking the N points with the pyramid optical flow method to predict a forward trajectory, tracking back to generate a backward trajectory, calling library functions in the OpenCV library to compute the forward-backward trajectory error, and judging from an error threshold whether the tracking target is present; if the forward-backward error is larger than the error threshold, the tracking target is present, and the sliding windows screened layer by layer through the three classifiers, namely the variance classifier, the set classifier and the nearest neighbor classifier, are used as the final candidate target windows;
from the finally screened candidate target windows, the window closest to the tracking target P is selected as the tracking target to be detected in the next frame;
if the forward-backward error is smaller than the error threshold, no target is tracked, and the method proceeds to detection of the next frame image;
the layer-by-layer screening of the sliding windows through the three classifiers, namely the variance classifier, the set classifier and the nearest neighbor classifier, as the final candidate target windows is as follows:
the variance classifier calculates the variance of every sliding window using the integral image; a sliding window whose variance is larger than a first set threshold is considered to contain the tracking target P, and the screened sliding windows containing the tracking target P are sent to the set classifier;
the set classifier first normalizes the sliding window to obtain an image block, then obtains the characteristic value of the image block and calculates the accumulated posterior probability corresponding to that characteristic value; if the posterior probability is larger than a second set threshold, the window is considered to contain the tracking target P, and the screened sliding windows containing the tracking target P are sent to the nearest neighbor classifier;
the nearest neighbor classifier calculates the correlation similarity between the image block and the tracking target P; if the correlation similarity is larger than a third set threshold, the window is considered to contain the tracking target P, and the sliding windows screened layer by layer through the three classifiers serve as the final candidate target windows.
2. The SoC-based embedded real-time detection and tracking system for the moving target as claimed in claim 1, wherein the image storage unit refers to a DDR/SDRAM storage module; the image processing unit consists of an ARM processor and a DSP processor connected with each other through a bus; the control processing unit refers to a pan-tilt control module; the image display unit refers to a VGA display module; and the acquisition card refers to a CMOS data acquisition module.
3. The SoC-based embedded real-time detection and tracking system for the moving target as claimed in claim 2, wherein the DDR/SDRAM storage module, the CMOS data acquisition module, the VGA display module and the pan-tilt control module are all connected to the ARM processor through a bus.
4. The SoC-based embedded real-time detection and tracking system for the moving target as claimed in claim 2, wherein the DDR/SDRAM storage module, the CMOS data acquisition module, the VGA display module and the pan-tilt control module are all connected to the DSP processor through a bus.
5. The embedded real-time detection and tracking system for the moving target based on SoC as claimed in claim 1, wherein
the part requiring parallel acceleration refers to processing multiple rows of pixels of one image frame in parallel within one clock cycle.
6. The embedded moving target real-time detection and tracking method based on the SoC, characterized by comprising the following steps:
step (1): an image sensor performs video acquisition on the target to be detected and tracked, obtaining a video stream of the target;
step (2): an acquisition card performs format conversion on the acquired video stream, converting the analog voltage signal into digital image data;
step (3): the format-converted video stream is transmitted to an image storage unit, which stores the subsequently arriving video images and forwards them to an image processing unit for processing;
step (4): the target to be detected and tracked is selected through a moving target detection and tracking algorithm based on the OpenCV library, and the subsequently transmitted video images are processed;
during processing, when a library function in the OpenCV library is called to process an image, OpenCL automatically invokes the DSP processor and distributes the part requiring parallel acceleration to the DSP processor over a high-speed bus; after the DSP processor finishes, the processing result is fed back to the ARM processor over the high-speed bus, and the ARM processor continues the subsequent processing according to the result fed back by the DSP processor; finally, the ARM processor detects and tracks the selected target, sends the position of the target detected in the current frame to a control processing unit, and sends the image processing result to an image display unit;
step (5): the control processing unit receives the information sent by the image processing unit and controls the pan-tilt head, which rotates the image sensor to the expected position according to the control instruction;
step (6): the image display unit displays the processed image sent by the image processing unit and shows the processing result;
the moving target detection and tracking algorithm in step (4) comprises the following steps:
step (4-1): identifying the 1st frame of the tracking process in the video image and selecting a target area image as the tracking target P;
step (4-2): according to the top-left corner coordinates of the tracking target P and its length and width, creating sliding windows at different scaling scales and moving step lengths, and storing, for every sliding window, its top-left corner coordinates, its size, its scaling scale and its overlap rate with the window of the tracking target P;
step (4-3): searching all sliding windows for the 10 windows with the minimum distance to the tracking target P, storing the found windows in a good_box container and generating positive samples through affine transformation, while windows whose overlap rate is lower than 0.2 are placed in a bad_box container as negative samples;
step (4-4): preparing a classifier with three parts, a variance classifier, a set classifier and a nearest neighbor classifier, connected in sequence;
step (4-5): sampling N characteristic points uniformly inside the tracking target P, tracking the N points with the pyramid optical flow method to predict a forward trajectory, tracking back to generate a backward trajectory, calling library functions in the OpenCV library to compute the forward-backward trajectory error, and judging from an error threshold whether the tracking target is present: if the forward-backward error is larger than the error threshold, the tracking target is present and the method proceeds to step (4-6); if the forward-backward error is smaller than the error threshold, no target is tracked and the method moves on to detection in the next frame image;
step (4-6): the sliding windows that pass the layer-by-layer screening of the three classifiers, namely the variance classifier, the set classifier and the nearest neighbor classifier, are used as the final candidate target windows;
step (4-7): from the finally screened candidate target windows, the window closest to the tracking target P is selected as the tracking target to be detected in the next frame;
step (4-6) comprises the following sub-steps:
step (4-6-1): the variance classifier calculates the variance of every sliding window using the integral image; a sliding window whose variance is larger than a first set threshold is considered to contain the tracking target P, and the screened sliding windows containing the tracking target P are sent to the set classifier;
step (4-6-2): the set classifier first normalizes the sliding window to obtain an image block, then obtains the characteristic value of the image block and calculates the accumulated posterior probability corresponding to that characteristic value; if the posterior probability is larger than a second set threshold, the window is considered to contain the tracking target P, and the screened sliding windows containing the tracking target P are sent to the nearest neighbor classifier;
step (4-6-3): the nearest neighbor classifier calculates the correlation similarity between the image block and the tracking target P; if the correlation similarity is larger than a third set threshold, the window is considered to contain the tracking target P, and the sliding windows screened layer by layer through the three classifiers serve as the final candidate target windows.
7. The method as set forth in claim 6, wherein,
the overlap ratio is the intersection of each sliding window and the tracking target P window divided by the union of each sliding window and the tracking target P window.
8. The method as set forth in claim 6, wherein
the set classifier consists of 10 basic classifiers (trees), each tree consisting of 13 judgment nodes; comparing the image block at each judgment node yields a 0 or a 1, and the 13 resulting bits form a 13-bit binary code x, which is the characteristic value; each binary code x corresponds to a posterior probability p(y|x) = p+ / (p+ + n-), where p+ is the number of positive samples and n- is the number of negative samples; one set classifier has 10 posterior probabilities, and if their average is larger than a threshold, the tracking target P is considered to be contained.
CN201710240232.8A 2017-04-13 2017-04-13 Embedded moving target real-time detection tracking system and method based on SoC Active CN107105159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710240232.8A CN107105159B (en) 2017-04-13 2017-04-13 Embedded moving target real-time detection tracking system and method based on SoC

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710240232.8A CN107105159B (en) 2017-04-13 2017-04-13 Embedded moving target real-time detection tracking system and method based on SoC

Publications (2)

Publication Number Publication Date
CN107105159A CN107105159A (en) 2017-08-29
CN107105159B true CN107105159B (en) 2020-01-07

Family

ID=59676069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710240232.8A Active CN107105159B (en) 2017-04-13 2017-04-13 Embedded moving target real-time detection tracking system and method based on SoC

Country Status (1)

Country Link
CN (1) CN107105159B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543496B (en) * 2017-09-22 2020-11-27 杭州海康威视数字技术股份有限公司 Image acquisition method and device, electronic equipment and system
CN107909024B (en) * 2017-11-13 2021-11-05 哈尔滨理工大学 Vehicle tracking system and method based on image recognition and infrared obstacle avoidance and vehicle
CN108009554A (en) * 2017-12-01 2018-05-08 国信优易数据有限公司 A kind of image processing method and device
CN109635749B (en) * 2018-12-14 2021-03-16 网易(杭州)网络有限公司 Image processing method and device based on video stream
CN109753036A (en) * 2018-12-27 2019-05-14 四川艾格瑞特模具科技股份有限公司 A kind of precision machinery processing Schedule tracking method
CN111861291B (en) * 2019-04-25 2024-05-17 北京京东振世信息技术有限公司 Warehouse-in method and warehouse-in system
CN110211153A (en) * 2019-05-28 2019-09-06 浙江大华技术股份有限公司 Method for tracking target, target tracker and computer storage medium
CN111008994A (en) * 2019-11-14 2020-04-14 山东万腾电子科技有限公司 Moving target real-time detection and tracking system and method based on MPSoC
CN111832797B (en) * 2020-04-10 2024-06-04 北京嘀嘀无限科技发展有限公司 Data processing method, data processing device, storage medium and electronic equipment
CN113392777A (en) * 2021-06-17 2021-09-14 西安应用光学研究所 Real-time target detection method based on online learning strategy
CN113724324B (en) * 2021-08-30 2023-12-19 杭州华橙软件技术有限公司 Control method and device of cradle head, storage medium and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103826105A (en) * 2014-03-14 2014-05-28 贵州大学 Video tracking system and realizing method based on machine vision technology
CN104517125A (en) * 2014-12-26 2015-04-15 湖南天冠电子信息技术有限公司 Real-time image tracking method and system for high-speed article
CN105631798A (en) * 2016-03-04 2016-06-01 北京理工大学 Low-power consumption portable real-time image target detecting and tracking system and method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102438118B (en) * 2011-11-30 2013-09-25 哈尔滨工业大学 High-speed vision capture apparatus of moving object characteristic
US20140173203A1 (en) * 2012-12-18 2014-06-19 Andrew T. Forsyth Block Memory Engine

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103826105A (en) * 2014-03-14 2014-05-28 贵州大学 Video tracking system and realizing method based on machine vision technology
CN104517125A (en) * 2014-12-26 2015-04-15 湖南天冠电子信息技术有限公司 Real-time image tracking method and system for high-speed article
CN105631798A (en) * 2016-03-04 2016-06-01 北京理工大学 Low-power consumption portable real-time image target detecting and tracking system and method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于ARM和DSP空中运动目标实时跟踪***研究 (Research on real-time tracking *** of aerial moving targets based on ARM and DSP); Wang Yong, Xiong Xianming, Li Xiaoyong; 《电视技术》 (Video Engineering); 2015-11-30; see Section 1 on page 68 and Fig. 1 *

Also Published As

Publication number Publication date
CN107105159A (en) 2017-08-29

Similar Documents

Publication Publication Date Title
CN107105159B (en) Embedded moving target real-time detection tracking system and method based on SoC
WO2019101220A1 (en) Deep learning network and average drift-based automatic vessel tracking method and system
CN109325456B (en) Target identification method, target identification device, target identification equipment and storage medium
WO2019129255A1 (en) Target tracking method and device
CN110781964A (en) Human body target detection method and system based on video image
CN101022505A (en) Method and device for automatically detecting moving target under complex background
CN116309781B (en) Cross-modal fusion-based underwater visual target ranging method and device
CN110175528B (en) Human body tracking method and device, computer equipment and readable medium
CN110827320B (en) Target tracking method and device based on time sequence prediction
CN111008994A (en) Moving target real-time detection and tracking system and method based on MPSoC
CN108710879B (en) Pedestrian candidate region generation method based on grid clustering algorithm
CN101320477B (en) Human body tracing method and equipment thereof
CN114897762B (en) Automatic positioning method and device for coal mining machine on coal mine working face
CN111767826A (en) Timing fixed-point scene abnormity detection method
CN107358621B (en) Object tracking method and device
CN107909024B (en) Vehicle tracking system and method based on image recognition and infrared obstacle avoidance and vehicle
CN113112479A (en) Progressive target detection method and device based on key block extraction
CN111382606A (en) Tumble detection method, tumble detection device and electronic equipment
CN111695404A (en) Pedestrian falling detection method and device, electronic equipment and storage medium
CN111275733A (en) Method for realizing rapid tracking processing of multiple ships based on deep learning target detection technology
CN115512263A (en) Dynamic visual monitoring method and device for falling object
Huang et al. Motion characteristics estimation of animals in video surveillance
CN114494355A (en) Trajectory analysis method and device based on artificial intelligence, terminal equipment and medium
Singh et al. Visual Monitoring of Many Objects in Real Time Using Embedded GPU
Chen et al. A vehicle trajectory extraction method for traffic simulating modeling

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Shao Peng

Inventor after: Zhang Yan

Inventor after: Zhao Weilong

Inventor after: Zhu Chunjian

Inventor after: Wang Guang

Inventor after: Zhang Zhen

Inventor after: Zhang Guodong

Inventor before: Zhang Yan

Inventor before: Zhao Weilong

Inventor before: Zhu Chunjian

Inventor before: Wang Guang

Inventor before: Zhang Zhen

Inventor before: Zhang Guodong

CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Zhang Guodong

Inventor after: Zhao Weilong

Inventor after: Zhang Yan

Inventor after: Zhu Chunjian

Inventor after: Shao Peng

Inventor after: Wang Guang

Inventor after: Zhang Zhen

Inventor before: Shao Peng

Inventor before: Zhang Yan

Inventor before: Zhao Weilong

Inventor before: Zhu Chunjian

Inventor before: Wang Guang

Inventor before: Zhang Zhen

Inventor before: Zhang Guodong

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 250103 room 1-101, office building, 2269 development road, high tech Zone, Ji'nan, Shandong

Patentee after: Shandong Wanteng Digital Technology Co.,Ltd.

Address before: 250103 room 1-101, office building, 2269 development road, high tech Zone, Ji'nan, Shandong

Patentee before: SHANDONG WANTENG ELECTRONIC TECHNOLOGY CO.,LTD.

CP01 Change in the name or title of a patent holder