CN108648210A - Fast multi-target detection method and device under a static complex scene - Google Patents

Fast multi-target detection method and device under a static complex scene Download PDF

Info

Publication number
CN108648210A
CN108648210A (application CN201810437311.2A)
Authority
CN
China
Prior art keywords
pixel
background
code book
target
symbol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810437311.2A
Other languages
Chinese (zh)
Other versions
CN108648210B (en)
Inventor
胡锦龙
韩天剑
李涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd
Original Assignee
XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd filed Critical XIAN TIANHE DEFENCE TECHNOLOGY Co Ltd
Priority to CN201810437311.2A priority Critical patent/CN108648210B/en
Publication of CN108648210A publication Critical patent/CN108648210A/en
Application granted granted Critical
Publication of CN108648210B publication Critical patent/CN108648210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30236 - Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a fast multi-target detection method and device under a static complex scene, relating to the field of information monitoring technology, and in particular to a multi-target detection method for a photoelectric detection system and a device that detects targets using this method. The method is as follows: acquire the images in the input video and establish a background model; match each pixel of the current frame image against the background model and label it as background or foreground; process the binary foreground/background image containing candidate targets and remove false alarms. By simplifying and optimizing the classical codebook model in a programmable logic device, combined with target filtering, sorting and numbering in a DSP module, the present invention overcomes the high false-alarm rate, low detection rate and high false-detection rate of conventional methods, achieves fast and effective detection of multiple targets under static complex scenes, and obtains good results in practical scene applications.

Description

Fast multi-target detection method and device under a static complex scene
Technical field
The present invention relates to the field of information monitoring technology, and in particular to a multi-target detection method and device for a photoelectric detection system.
Background technology
For a stationary camera, existing techniques can accurately detect foreground moving objects against a simple background. For a complex dynamic background, however, the scene contains many kinds of dynamic disturbance, such as swaying leaves, fountains and illumination changes, so the background in the video keeps changing to a greater or lesser degree. This greatly reduces the detection accuracy of the prior art and introduces a large number of false alarms, so targets cannot be detected accurately and quickly.
For the dynamically changing complex backgrounds shot by a stationary camera, there are many methods for extracting foreground moving objects, including inter-frame difference, background difference and optical flow. These methods, however, suffer from low detection accuracy and incomplete detected targets, and even produce "hole" phenomena in slowly moving targets; some are also computationally complex and demand more than existing hardware platforms can provide, and they are easily affected by noise, illumination changes and other factors, so they cannot meet the needs of practical applications.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a fast multi-target detection method under a static complex scene, and a device that performs detection using this method.
The fast multi-target detection method under a static complex scene proposed by the present invention is as follows:
Acquire the images in the input video and establish a background model;
Match each pixel of the current frame image against the background model; if the match succeeds, label the pixel as background, otherwise label it as foreground;
Process the binary foreground/background image containing candidate targets, remove false alarms, and obtain the number and labels of the candidate target regions;
Screen and sort the candidate target regions, and output the number and indices of the valid target regions;
Fast multi-target detection under the static complex scene is thus completed.
Further, the specific implementation is as follows:
Acquire and save at least one frame image of the input video data, establish a background model using a background modeling algorithm, and store it;
When the current frame arrives, match each pixel of the image against the established background model; if the match succeeds, label the pixel as background, otherwise label it as foreground;
Obtain the binary foreground/background image containing candidate targets, perform cluster analysis on it, and obtain the number and labels of the candidate target regions;
Screen and sort the candidate target region labels, remove false alarms, and output the number and indices of the real target regions;
Fast multi-target detection under the static complex scene is thus completed.
Further, the method for establishing background model is as follows:
The image of modeling is obtained from inputting video data;
Several frame images are taken to be modeled, wherein it is preferred that 50-200 frame images;
Code book modeling is carried out to each pixel in each frame image, obtains code book model;
Code book model is simplified, all symbols in check image in each pixel code book;If some symbol in code book Longest not renewal time λ > 50, then be set as 0 by the corresponding flag bit of the symbol;
Complete background modeling.
Further, the method for the code book modeling is as follows:
Initialization code book model is set as sky, gray processing processing is carried out to input picture, is built for each pixel of input picture A code book is found, the space of each code book of the present invention is 12 symbols, effective marker position is initialized as 0, the study of symbol Range determines that each code element structure is C according to the reserved number of symbol spacei={ Ymax, Ymin, Ylow, Yhigh, tlast, λ },
Wherein Ymax, YminThe respectively maximum and minimum value of current pixel gray value, Ylow, YhighFor symbol study lower limit and The upper limit, initial value are the gray value of current pixel;tlastIt is renewal time, initial value 0 for symbol last time matching;λ For symbol longest not renewal time, initial value 0, the value adds 1 when not updating;
It will be divided into four pieces per frame image, carry out concurrent operation, when a new frame image arrives, the code book time adds 1, by pixel Gray value is limited to the value range of current grayvalue [ ± 15, ± 25 ], and the present invention is preferably ± 20, therefore under the study of code word It is limited to Ylow- 20, upper limit Yhigh+ 20, wherein code book timing definition is current modeling frame number, and initial value 0, per frame, modeling adds 1;
If the gray value Y of pixel learns in symbol between the upper limit and lower limit, i.e. Ylow≤Y≤Yhigh, then successful match, when code book Between plus 1, as the following formula the maximin of more new symbol, update the study bound of the symbol:
Ct={ max (Ymax, Y), min (Ymin, Y), Y-20, Y+20, t, λ };
If the gray value of pixelLearn between the upper limit and lower limit beyond symbol, then it is assumed that pixel does not find matched code word, then New symbol C is created as the following formulaL, code book series adds 1:
CL={ Ymax, Ymin, Ylow, Yhigh, 0,0 };
Then the study bound of the symbol is updated, and its effective marker position is set as 1;
By other symbol longests of the code book, renewal time λ does not add 1;
Check that all flag bits are 1 effective code element, if the longest of some symbol not renewal time λ > 50, effective marker position It is set as 0, modeling terminates.
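The per-pixel training loop described above can be sketched as follows. This is a minimal single-pixel illustration in Python, assuming the preferred ±20 learning range, the 12-codeword space and the λ > 50 pruning threshold from the text; the class and function names are illustrative, not taken from the patent's FPGA implementation.

```python
RANGE = 20          # preferred learning half-range around the gray value
MAX_CODEWORDS = 12  # reserved codeword space per pixel
STALE_LAMBDA = 50   # longest allowed non-update time before pruning

class Codeword:
    def __init__(self, y, t):
        # Ci = {Ymax, Ymin, Ylow, Yhigh, tlast, lam}
        self.y_max = y
        self.y_min = y
        self.y_low = y - RANGE
        self.y_high = y + RANGE
        self.t_last = t
        self.lam = 0
        self.valid = True

def train_pixel(gray_values):
    """Build the codebook for one pixel from its gray value per frame."""
    book = []
    for t, y in enumerate(gray_values, start=1):
        matched = None
        for cw in book:
            if cw.y_low <= y <= cw.y_high:
                matched = cw
                break
        if matched is not None:
            # Ct = {max(Ymax,Y), min(Ymin,Y), Y-20, Y+20, t, lam}
            matched.y_max = max(matched.y_max, y)
            matched.y_min = min(matched.y_min, y)
            matched.y_low = y - RANGE
            matched.y_high = y + RANGE
            matched.t_last = t
        elif len(book) < MAX_CODEWORDS:
            matched = Codeword(y, t)   # CL = {Y, Y, Y-20, Y+20, 0, 0}
            book.append(matched)
        # every other codeword's longest non-update time grows by 1
        for cw in book:
            if cw is not matched:
                cw.lam += 1
    # simplification step: invalidate stale codewords (lam > 50)
    for cw in book:
        if cw.lam > STALE_LAMBDA:
            cw.valid = False
    return [cw for cw in book if cw.valid]
```

A pixel that is mostly stable (e.g. gray value 100) with one transient outlier keeps a single codeword after simplification, which is exactly the pruning behavior the text describes.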
Further, each pixel of the current frame is matched against the codebook model of the pixel at the corresponding position in the background model; if the pixel's value in the current frame lies between the codebook's lower and upper bounds, it is marked as background, otherwise as foreground;
Matching formula:
Ylow ≤ Y ≤ Yhigh,
where Ylow and Yhigh are the codebook lower and upper bounds of the pixel obtained after background model training, and Y is the gray value of the pixel in the current frame.
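The per-pixel labeling above is a single interval test against each valid codeword. A hedged Python sketch, using plain lists of lists for the image and illustrative names:

```python
def classify_frame(frame, bounds):
    """Label each pixel: 0 = background if Ylow <= Y <= Yhigh for some
    valid codeword of that pixel, else 1 = foreground.
    `bounds` holds, per pixel, a list of (y_low, y_high) intervals
    taken from that pixel's trained codebook."""
    mask = []
    for row_f, row_b in zip(frame, bounds):
        mask.append([0 if any(lo <= y <= hi for lo, hi in ivs) else 1
                     for y, ivs in zip(row_f, row_b)])
    return mask
```

The result is the binary candidate-target image that the text sends on for cluster analysis.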
Further, the candidate target regions containing background are sorted after cluster screening and, after false alarms are removed, stored in the form of a queue. When a new frame arrives, the candidate target regions obtained are put into the queue and matched against all target regions in the current queue. If a match succeeds, the corresponding target region in the queue is replaced by the current target region; if the match fails, the target region is appended to the end of the queue. Meanwhile, the duration of each successfully matched target region in the queue is incremented by 1; the disappearance countdown of target regions in the queue that fail to match is decremented by 1, and when a target region's disappearance countdown reaches 0, it is deleted from the queue.
Further, a candidate target region matches an existing target region in the current queue only if it simultaneously meets the following requirements:
the difference of each corner coordinate of the target region is < 20%;
the difference of the pixel counts of the target regions is < 20%.
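The queue update described above can be sketched in Python as follows. The 20% thresholds come from the text; the region field names and the initial value of the disappearance countdown are assumptions made for illustration only.

```python
COUNTDOWN_START = 3  # assumed initial disappearance countdown (not stated)

class Region:
    def __init__(self, corners, pixel_count):
        self.corners = corners          # [(x, y), ...] bounding-box corners
        self.pixel_count = pixel_count
        self.duration = 0               # frames this region has persisted
        self.countdown = COUNTDOWN_START

def rel_diff(a, b):
    return abs(a - b) / max(abs(a), abs(b), 1)

def regions_match(r1, r2):
    """Both criteria from the text: every corner coordinate and the
    pixel count must each differ by less than 20%."""
    corners_ok = all(rel_diff(c1, c2) < 0.20
                     for p1, p2 in zip(r1.corners, r2.corners)
                     for c1, c2 in zip(p1, p2))
    return corners_ok and rel_diff(r1.pixel_count, r2.pixel_count) < 0.20

def update_queue(queue, candidates):
    """Replace matched regions, append unmatched candidates, age out
    tracked regions that keep failing to match."""
    fresh = set()  # ids of regions placed in the queue this frame
    for cand in candidates:
        for i, tracked in enumerate(queue):
            if id(tracked) not in fresh and regions_match(cand, tracked):
                cand.duration = tracked.duration + 1   # match: duration + 1
                queue[i] = cand                        # replace in place
                fresh.add(id(cand))
                break
        else:
            queue.append(cand)                         # no match: append
            fresh.add(id(cand))
    for r in queue:                # unmatched tracked regions lose a tick
        if id(r) not in fresh:
            r.countdown -= 1
    queue[:] = [r for r in queue if r.countdown > 0]   # countdown 0: delete
    return queue
```

Calling `update_queue` once per frame reproduces the replace/append/age-out behavior of the text; the exact countdown length is a tunable assumption.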
Further, the method of sorting the target regions after cluster screening is as follows:
When the duration of a target region in the target queue exceeds 5, a number 0-9 is bound to the target region. A bound number cannot be unbound automatically before the bound target region is removed from the queue, each number can be bound to only one target region at a time, and at most 10 target regions can be numbered at any time.
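This numbering rule can be sketched as follows; the `TrackedRegion` helper and the `bindings` dictionary are illustrative scaffolding, since the patent only fixes the duration threshold of 5 and the 0-9 number pool.

```python
DURATION_THRESHOLD = 5  # a region is numbered once duration exceeds this

class TrackedRegion:
    """Minimal stand-in for a queued target region."""
    def __init__(self, duration=0):
        self.duration = duration

def assign_numbers(queue, bindings):
    """Update `bindings` (region id -> number 0-9) in place.
    A number is freed only when its region has left the queue, and at
    most 10 regions carry numbers at once."""
    live = {id(r) for r in queue}
    for rid in [rid for rid in bindings if rid not in live]:
        del bindings[rid]          # region left the queue: number freed
    in_use = set(bindings.values())
    for r in queue:
        if id(r) not in bindings and r.duration > DURATION_THRESHOLD:
            free = sorted(set(range(10)) - in_use)
            if free:               # pool exhausted -> region stays unnumbered
                bindings[id(r)] = free[0]
                in_use.add(free[0])
    return bindings
```

Numbers stay stable across frames because a binding survives until its region is dropped from the queue, which is what gives each detected target a persistent identity in the output.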
Compared with the prior art, the method of the present invention has the following beneficial effects: through background modeling, the present invention matches the current image against the background image and obtains the binary image of candidate target regions by analyzing and processing the matching result; by analyzing, screening and sequentially numbering the binary image, it uses the uniqueness of the numbers to achieve fast and accurate multi-target detection against complex ground backgrounds. The method of the invention is simple, effective and easy to implement; it overcomes the high false-alarm rate, low detection rate and high false-detection rate of conventional methods and achieves fast and effective detection of multiple targets under static complex scenes.
The present invention also proposes a fast multi-target detection device under a static complex scene, comprising an interconnected processor module and a programmable logic device. The programmable logic device is provided with a background modeling algorithm and a codebook comparison algorithm; it acquires the images in the input video, establishes the background model, performs codebook comparison of each pixel of the current frame image against the background model to obtain the binary background/foreground image, and outputs the processor results;
The processor module is provided with a target filtering algorithm and a numbering and sorting algorithm; it removes false alarms from the binary foreground/background image containing candidate targets, obtains the number and labels of the candidate target regions, screens and sorts the candidate target regions, obtains the number and indices of the valid target regions, and outputs them to the programmable logic device.
The processor module of the present invention is a DSP module, and the programmable logic device is an FPGA module.
By simplifying and optimizing the classical codebook model in the programmable logic device, combined with target filtering, sorting and numbering in the DSP module, the present invention overcomes the high false-alarm rate, low detection rate and high false-detection rate of conventional methods, achieves fast and effective detection of multiple targets under static complex scenes, and obtains good results in practical scene applications.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the connection structure of the device of the present invention.
Fig. 3 is the flow chart of the background modeling method of the present invention.
Fig. 4 is the flow chart of the matching process of claim 6 of the method of the present invention.
Fig. 5 shows the vehicle detection results of the present invention for targets more than 3 kilometers away under a static complex scene.
Fig. 6 shows the vehicle detection results of the present invention for multiple moving objects in hazy weather under a static complex scene.
Fig. 7 shows the vehicle detection results of the present invention at short range with target occlusion under a static complex scene.
Detailed description of the embodiments
The embodiments of the present invention are elaborated below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementation and specific operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.
The present invention is based on a fast multi-target detection embedded device under a static complex scene; taking vehicle detection as an example, the input is an image sequence containing vehicle targets under a static complex ground scene.
The present invention provides a fast multi-target detection embedded device under a static complex scene, in which the FPGA module shown in Fig. 2 and the DSP module connected to it implement the detection. As shown in Fig. 1, the method is as follows:
(1) The FPGA module acquires and saves 50-200 frame images of the input video data, and the background model is established using the background modeling algorithm and stored; the number of frames is chosen according to the variability of the scene and the actual application requirements;
(2) When the current frame arrives, each pixel of the image is matched against the established background model in the FPGA module; if the match succeeds, the pixel is labeled as background, otherwise as foreground, yielding the binary foreground/background image containing candidate targets, which the FPGA module sends to the DSP module;
(3) The DSP module performs cluster analysis on the binary foreground/background image to obtain the number and labels of the candidate target regions;
(4) The candidate target region labels are screened and sorted, false alarms are removed, and the number and indices of the true target regions are obtained; the DSP module sends the numbered target region information to the FPGA module, which outputs it.
The present invention makes use of the available resources of the FPGA module and its realizability in hardware, comprehensively evaluating the performance and real-time capability of the algorithm. The present invention reduces the size of the original input image to 1/4 of the original by interpolation; in this way, the required storage is reduced by 3/4 of the original, and the computation is likewise reduced to 1/4 of the original.
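The 1/4-size reduction can be illustrated with a simple 2×2 box average: halving each dimension cuts the pixel count, and therefore the per-pixel codebook storage, to 1/4. The patent says only "interpolation" without specifying the kernel, so box averaging is an assumption here.

```python
def downscale_half(gray):
    """Average non-overlapping 2x2 blocks of a gray image given as a
    list of rows; assumes even width and height."""
    h, w = len(gray), len(gray[0])
    return [[(gray[i][j] + gray[i][j + 1]
              + gray[i + 1][j] + gray[i + 1][j + 1]) // 4
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]
```

An H×W frame becomes (H/2)×(W/2), i.e. a quarter of the original pixel count, matching the storage figure claimed in the text.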
On the other hand, since the classical codebook model is designed for color video images and requires the R, G and B three-channel information of each pixel, in order to further reduce storage and computation, the present invention converts the input RGB image into a YUV image and extracts only the Y-channel data for modeling. Compared with the traditional RGB space, the codeword description is simpler, and storage and computation are greatly reduced.
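The Y-channel extraction can be sketched as follows. The BT.601 luma weights are the common convention and are assumed here, since the patent does not state which RGB-to-YUV variant is used.

```python
def rgb_to_y(r, g, b):
    """BT.601 luma for one pixel, rounded to an 8-bit gray value
    (weights are an assumption; the patent does not specify them)."""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

def gray_frame(rgb_frame):
    """Convert a frame given as rows of (R, G, B) tuples to a gray image."""
    return [[rgb_to_y(*px) for px in row] for row in rgb_frame]
```

After this step each pixel carries one value instead of three, which is what lets a codeword be described by a single gray interval.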
The flow of establishing the background model using the simplified and optimized codebook method is shown in Fig. 3:
(1) In the FPGA module, the codebook model is initialized to empty and the input image is converted to grayscale. A codebook is established for each pixel of the input image; the FPGA module reserves space for 12 codewords per pixel codebook and initializes the valid flag bits to 0. The learning range of a codeword is determined by the reserved size of the codeword space, and each codeword has the structure:
Ci = {Ymax, Ymin, Ylow, Yhigh, tlast, λ},
where Ymax and Ymin are respectively the maximum and minimum gray values of the current pixel, and Ylow and Yhigh are the lower and upper learning bounds of the codeword, with initial values equal to the gray value of the current pixel; tlast is the time of the codeword's most recent matching update, with initial value 0; λ is the codeword's longest non-update time, with initial value 0, incremented by 1 whenever the codeword is not updated;
(2) During modeling, the FPGA divides each frame image into four blocks processed in parallel. When a new frame of image data arrives, the codebook time is incremented by 1 and the learning range around the pixel's gray value is limited to the current gray value ± [15, 25], preferably ± 20 in the present invention, so that the learning lower bound of the codeword is Ylow - 20 and the upper bound is Yhigh + 20; the codebook time is defined as the current modeling frame count, with initial value 0, incremented by 1 per modeled frame;
(3) During model training, if the gray value Y of some pixel lies between the lower and upper learning bounds of a codeword, i.e. Ylow ≤ Y ≤ Yhigh, the pixel is considered to have found a matching codeword; the codebook time is incremented by 1, and the maximum/minimum and learning bounds of the codeword are updated as follows:
Ct = {max(Ymax, Y), min(Ymin, Y), Y - 20, Y + 20, t, λ};
If the codebook is empty or no matching codeword exists, a new codeword CL is created as follows:
CL = {Ymax, Ymin, Ylow, Yhigh, 0, 0};
The learning bounds of the codeword are then updated, and its valid flag bit is set to 1;
The longest non-update time λ of all other codewords in the codebook is incremented by 1;
At the end of modeling, all valid codewords whose flag bit is 1 are checked; if the longest non-update time of some codeword satisfies λ > 50, its valid flag bit is set to 0.
After background modeling is complete, each pixel of the current frame is matched against the codebook model of the pixel at the corresponding position in the background model, with the matching rule:
Ylow ≤ Y ≤ Yhigh,
where Ylow and Yhigh are the codebook bounds of the pixel obtained in the background model training stage, and Y is the gray value of the pixel in the current frame.
If the above formula is satisfied, the match is considered successful and the pixel is labeled as background, recorded as 0; otherwise it is labeled as foreground, recorded as 1. This yields the binary image containing candidate targets, which the FPGA module sends to the DSP module.
In the present invention, since the number of frames used for background modeling is 100, the 100 frame images are modeled in turn; after modeling is complete, the codebook model is simplified.
As shown in Fig. 4, after the model is established, when the current frame arrives, the candidate target regions obtained in the above steps are sorted after cluster screening and, after false alarms are removed, stored in the form of a queue. When a new frame arrives, the obtained candidate target regions are put into the queue and matched against all target regions in the current queue. The matching rule is as follows: a candidate target region matches an existing target region in the current queue only if it simultaneously meets the following requirements: the difference of each corner coordinate of the target region is < 20%, and the difference of the pixel counts of the target regions is < 20%; otherwise the match fails.
If the match succeeds, the corresponding (matching) target region in the queue is replaced by the current target region; if the match fails, the target region is appended to the end of the queue. Meanwhile, the duration of each successfully matched target region in the queue is incremented by 1; the disappearance countdown of target regions in the queue that fail to match is decremented by 1, and when a target region's disappearance countdown reaches 0, it is deleted from the queue.
The target regions after cluster screening are sorted according to the following rule: when the duration of a target region in the target queue exceeds 5, a number 0-9 is bound to the target region. A bound number cannot be unbound automatically before the bound target region is removed from the queue, each number can be bound to only one target region at a time, and at most 10 target regions can be numbered at any time.
In order to verify the effectiveness of the method of the present invention for multi-target detection in different real complex scenes, tests were carried out with actual scene data; the targets in the scenes are vehicles.
Fig. 5 shows a scene, shot by a 1080p dome camera on a steel tower, of vehicles passing on a road more than 3 kilometers away under a static complex scene. For the prior art, the row of cars parked by the roadside greatly interferes with the detection of actually moving vehicles; because of the distance, the target images are very small, while background clutter plus the interference of similar objects all pose a great challenge to practical target detection. The method of the present invention accurately excludes the interference of these pseudo-targets and finally obtains accurate detection results, clearly shown in the red rectangles in the figure.
Fig. 6 shows an expressway scene 3 kilometers away in hazy weather, shot by a 1080p dome camera on a steel tower under a static complex scene. The influence of haze, the interference of the dynamic background, and the interleaving and occlusion of multiple targets greatly increase the difficulty of the actual detection task. After excluding the dynamic background interference, the method of the present invention clearly marks the real targets, i.e. the vehicles moving on the expressway, with red rectangles in the figure.
Fig. 7 shows an urban road crossing scene at 500 m, shot by a network dome camera under a static complex scene, where the targets are easily occluded by trees and buildings. As the results show, the method of the present invention can still detect targets quickly and accurately under these challenges, clearly shown in the red rectangles in the figure, largely solving the problems encountered in practical applications.
In addition, in order to verify the advantages of the method of the present invention over the prior art, two mainstream static-scene target detection methods, GMM and VIBE, were compared with the method of the present invention, using detection rate, false-alarm rate and computational complexity as evaluation criteria. Actual complex-scene data were tested on the embedded platform (DSP+FPGA), with the results shown in Table 1 below. Since GMM is too time-consuming, it was only simulated on a PC, and its implementation on the embedded platform was not considered. As can be seen from the table, compared with the prior art, the method of the present invention obtains a higher detection rate and a lower false-alarm rate under complex scenes, and the optimized algorithm runs in real time on the embedded platform and is easy to implement.
1 the method for the present invention of table and prior art Contrast on effect:
By simplifying and optimizing the classical codebook model in the programmable logic device, combined with target filtering, sorting and numbering in the DSP module, the present invention overcomes the high false-alarm rate, low detection rate and high false-detection rate of traditional multi-target detection methods, achieves fast and effective detection of multiple targets under static complex scenes, and obtains good results in practical scene applications.
Parts of the present invention not described in detail belong to the well-known techniques of those skilled in the art.
Those of ordinary skill in the art should appreciate that the above embodiments are intended merely to illustrate the present invention and are not to be taken as limitations of the invention; as long as they are within the spirit of the present invention, changes and modifications to the embodiments described above will all fall within the scope of the claims of the present invention.

Claims (10)

1. A fast multi-target detection method under a static complex scene, characterized in that the method is as follows:
acquiring the images in the input video and establishing a background model;
matching each pixel of the current frame image against the background model and labeling it as background or foreground;
processing the binary foreground/background image containing candidate targets and removing false alarms;
screening and sorting the candidate target regions, and outputting the number and indices of the valid target regions;
thereby completing fast multi-target detection under the static complex scene.
2. The fast multi-target detection method under a static complex scene according to claim 1, characterized in that the method is as follows:
acquiring and saving at least one frame image of the input video data, establishing a background model using a background modeling algorithm, and storing it;
when the current frame arrives, matching each pixel of the image against the established background model; if the match succeeds, labeling the pixel as background, otherwise labeling it as foreground;
obtaining the binary foreground/background image containing candidate targets, performing cluster analysis on it, and obtaining the number and labels of the candidate target regions;
screening and sorting the candidate target region labels, removing false alarms, and outputting the number and indices of the real target regions;
thereby completing fast multi-target detection under the static complex scene.
3. The fast multi-target detection method under a static complex scene according to claim 2, characterized in that the method of establishing the background model is as follows:
obtaining the images for modeling from the input video data;
taking several frame images for modeling;
performing codebook modeling on each pixel of each frame image to obtain the codebook model;
simplifying the codebook model by checking all codewords in the codebook of each pixel in the image; if the longest non-update time of some codeword satisfies λ > 50, setting the flag bit of that codeword to 0;
thereby completing the background modeling.
4. The fast multi-target detection method under a static complex scene according to claim 3, characterized in that the method of the codebook modeling is as follows:
initializing the codebook model to empty, converting the input image to grayscale, and establishing a codebook for each pixel of the input image, the valid flag bits being initialized to 0 and the learning range of a codeword being determined by the reserved size of the codeword space, each codeword having the structure Ci = {Ymax, Ymin, Ylow, Yhigh, tlast, λ},
where Ymax and Ymin are respectively the maximum and minimum gray values of the current pixel, and Ylow and Yhigh are the lower and upper learning bounds of the codeword, with initial values equal to the gray value of the current pixel; tlast is the time of the codeword's most recent matching update, with initial value 0; λ is the codeword's longest non-update time, with initial value 0, incremented by 1 whenever the codeword is not updated;
dividing each frame image into four blocks processed in parallel; when a new frame image arrives, incrementing the codebook time by 1 and limiting the learning range around the pixel's gray value to the current gray value ± 20, so that the learning lower bound of the codeword is Ylow - 20 and the upper bound is Yhigh + 20, the codebook time being defined as the current modeling frame count, with initial value 0, incremented by 1 per modeled frame;
if the gray value Y of the pixel lies between the lower and upper learning bounds of a codeword, i.e. Ylow ≤ Y ≤ Yhigh, the match succeeds; the codebook time is incremented by 1, and the maximum/minimum and learning bounds of the codeword are updated as follows:
Ct = {max(Ymax, Y), min(Ymin, Y), Y - 20, Y + 20, t, λ};
if the gray value Y of the pixel lies outside the learning bounds of every codeword, the pixel is considered to have no matching codeword, and a new codeword CL is created as follows, the codeword count being incremented by 1:
CL = {Ymax, Ymin, Ylow, Yhigh, 0, 0};
the learning bounds of the codeword are then updated, and its valid flag bit is set to 1;
the longest non-update time λ of all other codewords in the codebook is incremented by 1;
all valid codewords whose flag bit is 1 are checked; if the longest non-update time of some codeword satisfies λ > 50, its valid flag bit is set to 0, and modeling ends.
5. The fast multi-target detection method under a static complex scene according to claim 4, characterized in that each pixel of the current frame is matched against the codebook model of the pixel at the corresponding position in the background model; if the pixel's value in the current frame lies between the codebook's lower and upper bounds, it is marked as background, otherwise as foreground;
the matching formula being: Ylow ≤ Y ≤ Yhigh,
where Ylow and Yhigh are the codebook lower and upper bounds of the pixel obtained after background model training, and Y is the gray value of the pixel in the current frame.
6. The rapid multi-target detection method under a static complex scene according to claim 5, characterized in that the candidate target regions containing background are sorted after cluster screening and, after false alarms are removed, stored in the form of a queue; when a new frame arrives, each candidate target region obtained is matched against all target regions in the current queue; if the match succeeds, the current target region replaces the corresponding target region in the queue, and if the match fails, the target region is appended to the tail of the queue; meanwhile, the duration of each successfully matched target region in the queue is incremented by 1, the disappearance countdown of each target region that fails to match is decremented by 1, and any target region whose disappearance countdown reaches 0 is deleted from the queue.
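The queue bookkeeping of claim 6 can be sketched as below. The initial disappearance countdown is not specified in the claim, so `COUNTDOWN = 5` is an assumption; the `Target` record and the matching predicate (passed in as `match`) are likewise illustrative.

```python
from dataclasses import dataclass

COUNTDOWN = 5   # assumed initial disappearance countdown (not given in the claim)

@dataclass
class Target:
    box: tuple            # region corners/extent
    pixels: int           # pixel count of the region
    duration: int = 0     # frames this target has persisted
    countdown: int = COUNTDOWN

def update_queue(queue, candidates, match):
    """One frame of claim-6 bookkeeping: candidates vs. the current queue."""
    n0 = len(queue)
    matched = [False] * n0
    for cand in candidates:
        for i in range(n0):
            if not matched[i] and match(cand, queue[i]):
                # success: current region replaces the queued one, duration +1
                cand.duration = queue[i].duration + 1
                cand.countdown = COUNTDOWN
                queue[i] = cand
                matched[i] = True
                break
        else:
            queue.append(cand)   # failure: candidate joins the tail of the queue
    # queued targets that matched nothing count down toward removal
    keep = []
    for i, tgt in enumerate(queue):
        if i < n0 and not matched[i]:
            tgt.countdown -= 1
            if tgt.countdown <= 0:
                continue         # disappearance countdown hit 0: drop it
        keep.append(tgt)
    queue[:] = keep
```

A predicate implementing the two claim-7 thresholds would be supplied as `match`.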
7. The rapid multi-target detection method under a static complex scene according to claim 6, characterized in that a candidate target region matching an existing target region in the current queue must simultaneously satisfy the following requirements:
the difference of each corner coordinate of the target regions is less than 20%;
the difference of the pixel counts of the target regions is less than 20%.
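The two criteria above reduce to relative-difference checks. A sketch, assuming rectangular regions given by four corner points and taking the queued target as the reference for the percentage (the claim does not say which region is the reference):

```python
def within_20pct(a, b):
    """|a - b| < 20% of the reference b (exact equality required when b == 0)."""
    return abs(a - b) < 0.2 * abs(b) if b else a == b

def regions_match(corners_a, pixels_a, corners_b, pixels_b):
    """Claim-7 test: every corner coordinate and the pixel count differ by < 20%."""
    corners_ok = all(within_20pct(xa, xb) and within_20pct(ya, yb)
                     for (xa, ya), (xb, yb) in zip(corners_a, corners_b))
    return corners_ok and within_20pct(pixels_a, pixels_b)
```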
8. The rapid multi-target detection method under a static complex scene according to claim 7, characterized in that the method for sorting the cluster-screened target regions is as follows:
when the duration of a target region in the target queue exceeds 5, the target region is bound to one of the numbers 0-9; a bound number is not released until the bound target region is removed from the queue, each number can be bound to only one target region at a time, and at most 10 target regions are numbered at any time.
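The claim-8 numbering rule can be sketched as a small digit pool; `assign_numbers` and the dict-based target representation are illustrative, not the patented implementation:

```python
MIN_DURATION = 5   # a target is numbered once its duration exceeds 5 (claim 8)
DIGITS = 10        # numbers 0-9, so at most 10 numbered targets at once

def assign_numbers(queue, bindings):
    """Update the digit -> target-id bindings in place for the current queue."""
    live = {id(t) for t in queue}
    # a digit is released only when its bound target has left the queue
    for d in [d for d in bindings if bindings[d] not in live]:
        del bindings[d]
    free = sorted(set(range(DIGITS)) - set(bindings))
    bound = set(bindings.values())
    for t in queue:
        if t["duration"] > MIN_DURATION and id(t) not in bound and free:
            bindings[free.pop(0)] = id(t)   # each digit binds exactly one target
            bound.add(id(t))
```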
9. A device for performing detection using the rapid multi-target detection method under a static complex scene according to any one of claims 1-8, characterized by comprising an interconnected processor module and programmable logic device;
wherein the programmable logic device is provided with the background modeling algorithm and the codebook comparison algorithm, and is configured to acquire the input video, establish the background model from the images in the input video, perform the codebook comparison of each pixel of the current frame image against the background model, and output the background/foreground binary image and the processor results;
the processor module is provided with the target filtering algorithm and the number sorting algorithm, and is configured to remove false alarms from the foreground/background binary image containing candidate targets, obtain the candidate target regions and their labels, screen and sort the candidate target regions, obtain the numbers and count of the valid target regions, and output them to the programmable logic gate module.
10. The rapid multi-target detection device under a static complex scene according to claim 9, characterized in that the processor module is a DSP module and the programmable logic device is an FPGA module.
CN201810437311.2A 2018-05-09 2018-05-09 Rapid multi-target detection method and device under static complex scene Active CN108648210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810437311.2A CN108648210B (en) 2018-05-09 2018-05-09 Rapid multi-target detection method and device under static complex scene

Publications (2)

Publication Number Publication Date
CN108648210A true CN108648210A (en) 2018-10-12
CN108648210B CN108648210B (en) 2022-06-14

Family

ID=63753937

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325259A (en) * 2013-07-09 2013-09-25 西安电子科技大学 Illegal parking detection method based on multi-core synchronization
US20140369552A1 (en) * 2013-06-14 2014-12-18 National Yunlin University Of Science And Technology Method of Establishing Adjustable-Block Background Model for Detecting Real-Time Image Object
CN104835163A (en) * 2015-05-11 2015-08-12 华中科技大学 Embedded real-time high-speed binocular vision system for moving target detection
CN105389831A (en) * 2015-11-11 2016-03-09 南京邮电大学 Multi-target detection method based on YUV color space

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785356A (en) * 2018-12-18 2019-05-21 北京中科晶上超媒体信息技术有限公司 A kind of background modeling method of video image
CN109785356B (en) * 2018-12-18 2021-02-05 北京中科晶上超媒体信息技术有限公司 Background modeling method for video image
CN112101135A (en) * 2020-08-25 2020-12-18 普联国际有限公司 Moving target detection method and device and terminal equipment
CN114694092A (en) * 2022-03-15 2022-07-01 华南理工大学 Expressway monitoring video object-throwing detection method based on mixed background model
CN116228544A (en) * 2023-03-15 2023-06-06 阿里巴巴(中国)有限公司 Image processing method and device and computer equipment
CN116228544B (en) * 2023-03-15 2024-04-26 阿里巴巴(中国)有限公司 Image processing method and device and computer equipment

Similar Documents

Publication Title
CN108648210A Rapid multi-target detection method and device under a static complex scene
CN109903312A Football player running-distance statistics method based on video multi-target tracking
CN110136154A Remote sensing image semantic segmentation method based on fully convolutional networks and morphological processing
CN111784685A Power transmission line defect image recognition method based on cloud-edge cooperative detection
CN108288075A Lightweight small-target detection method based on an improved SSD
CN105975929A Fast pedestrian detection method based on aggregated channel features
CN111985499B High-precision bridge apparent disease recognition method based on computer vision
CN105574550A Vehicle identification method and device
CN107273832B License plate recognition method and system based on integral channel features and a convolutional neural network
CN113658192B Multi-target pedestrian trajectory acquisition method, system, device and medium
CN106778734A Insulator string-drop defect detection method based on sparse representation
CN113128335B Method, system and application for detecting, classifying and discovering microfossil images
CN106157323A Insulator segmentation and extraction method combining a dynamic segmentation threshold and block search
CN108664939A Remote sensing image aircraft recognition method based on HOG features and deep learning
CN110222604A Target recognition method and device based on a shared convolutional neural network
CN111967313A Unmanned aerial vehicle image annotation method assisted by a deep-learning target detection algorithm
CN113435407B Small-target recognition method and device for a power transmission system
CN110059539A Natural scene text position detection method based on image segmentation
CN112489055B Satellite video dynamic vehicle target extraction method fusing brightness and time-series characteristics
CN106503638A Image processing for color recognition, and vehicle color recognition method and system
CN109815798A Unmanned aerial vehicle image processing method and system
Zhang et al. Application research of YOLO v2 combined with color identification
CN109785288A Power transmission facility defect detection method and system based on deep learning
CN110516707A Image labeling method and device, and storage medium
CN110135248A Natural scene text detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant