CN101286239A - Aerial shooting traffic video frequency vehicle rapid checking method - Google Patents
- Publication number
- CN101286239A (application CN200810104681A)
- Authority
- CN
- China
- Prior art keywords
- execution
- zone
- image
- frame
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
A rapid vehicle detection method for aerial traffic video comprises the following steps. Step 100: the space-based coding part applies global motion estimation to determine the global motion vector of the background. Step 200: residual values are computed from the global motion vector to divide the frame into background regions and motion regions. Step 300: each region is tested; if a frame consists entirely of background regions, the method advances to the next frame and step 200 is executed; otherwise, step 400 is executed on the motion regions of the image. Step 400: the ground part first determines an adaptive gradient threshold that is slightly low but still separates the objects correctly and performs preliminary marker extraction; two parameters, region area and catchment-basin depth, are then introduced to further screen the extracted markers and determine the final marker points; the marker points are taken as regional minima for Vincent-Soille (VS) watershed segmentation; finally, regions are merged according to their texture information. Step 500: shadows are detected in the HSV color space to filter out false targets, yielding the final vehicle detections. The method of the invention solves the problem that decompressing aerial images and detecting moving targets involves a large computational load, making it difficult to satisfy real-time and robustness requirements simultaneously.
Description
Technical field
The present invention relates to motion estimation and detection methods for video images, and in particular to a method for detecting moving vehicles in traffic video images. It belongs to the fields of traffic monitoring and video processing.
Background technology
Over the past ten years, traffic congestion, traffic accidents and environmental pollution have significantly affected socio-economic development and daily life, and intelligent transportation systems have become the main means of addressing these problems. Most existing traffic monitoring systems fix the acquisition devices on the road surface; because fixed surveillance equipment is constrained by road conditions and lacks flexibility, methods that collect traffic video from space-based platforms have emerged in recent years. The present invention is proposed for images collected by such a space-based platform.
Moving object detection is an important component of digital image processing. It is a focus and a difficulty of research fields such as computer vision, pattern recognition, target recognition and tracking, moving image coding and security monitoring, with wide applications in military, defense and industrial fields. Motion analysis of image sequences has received widespread attention because of its great practical value. Its basic task is to detect motion information from an image sequence, simplify the image processing, and obtain the required motion vectors so that objects can be recognized and tracked.
Current moving target detection methods mainly include the temporal difference method, the background subtraction method and the optical flow method.
In the temporal difference method, two or three consecutive frames are subtracted; if the difference exceeds a certain threshold, a moving target is judged to be present and an alarm is raised.
The temporal difference method adapts well to dynamic environments and is fairly robust, but it generally cannot extract all of the relevant feature pixels, so holes easily appear inside moving entities. It has the lowest computational load of the three foreground extraction approaches: only a difference operation and a threshold decision on several consecutive frames are required. Although it extracts motion regions efficiently, in image sequences containing moving vehicles it is often disturbed by camera shake, weather changes, day/night alternation and flickering light, and it cannot reliably distinguish moving vehicles from other moving objects, which greatly interferes with correct vehicle extraction.
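A minimal sketch of the two-frame temporal difference described above (NumPy; the threshold of 25 and the toy frames are illustrative, not from the patent):

```python
import numpy as np

def temporal_difference(prev, curr, thresh=25):
    """Two-frame temporal difference: mark pixels whose absolute
    intensity change exceeds `thresh` as moving."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return diff > thresh  # boolean motion mask

# Toy example: a bright 2x2 'object' moves one pixel to the right.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = np.zeros((6, 6), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr[2:4, 2:4] = 200
mask = temporal_difference(prev, curr)
# The overlapping column cancels out, illustrating the 'hole'
# effect the text describes inside moving entities.
```

Note that the mask fires on the trailing and leading edges but not on the overlap, which is exactly the hole phenomenon mentioned above.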
Background subtraction is a commonly used moving target detection method. Its basic idea is to subtract a background image, stored in advance or obtained in real time, from each current frame; if the pixel difference exceeds a certain threshold, the pixel is judged to lie on a moving target, and the thresholded difference directly provides information such as the position, size and shape of the target.
This method is very sensitive to illumination changes, tends to generate false alarms, and updates slowly when the background changes. Moving object detection with background subtraction usually encounters the following problems:
- Background acquisition: the simplest way to obtain a background image is to capture the scene with no moving targets present, but this requirement cannot be met in practical applications;
- Background disturbance: the background may contain slightly disturbed objects, such as swaying branches and leaves; the disturbed parts should not be counted as foreground moving targets;
- Changes in external illumination: light and weather vary across different periods of the day and affect the detection result;
- Movement of fixed objects: a fixed object in the background may move, for example a parked car in the scene driving away; the vacated region may be mistaken for a moving target for a while, although it should never be counted as a foreground moving target;
- Background updating: movement of fixed objects and changes in external illumination alter the background image, so the background model must be updated in time to adapt;
- Shadows: the shadow of a foreground target is usually detected as part of the moving target, which affects further processing and analysis and introduces errors into later tracking and recognition.
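The subtraction-and-update cycle can be sketched with a simple running-average background model, one common way to address the updating problem above (the function names, alpha = 0.05 and threshold = 30 are illustrative assumptions):

```python
import numpy as np

def subtract_background(bg, frame, thresh=30.0):
    """Pixels differing from the background by more than `thresh`
    are marked as foreground."""
    return np.abs(frame - bg) > thresh

def update_background(bg, frame, alpha=0.05):
    """Running-average update: the model slowly absorbs gradual
    changes (illumination drift, vacated parking spots)."""
    return (1.0 - alpha) * bg + alpha * frame

bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1, 1] = 220.0                    # a bright 'vehicle' pixel
fg = subtract_background(bg, frame)    # only (1, 1) is foreground
bg = update_background(bg, frame)      # bg[1, 1] drifts toward 220
```

A small alpha makes the model robust to transient objects but slow to absorb genuine scene changes, which is the trade-off the text describes.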
Optical flow computation methods generally fall into four classes: (1) gradient-based methods; (2) matching-based methods; (3) energy-based methods; (4) phase-based methods. Gradient-based methods are the most widely studied.
Optical flow analysis can detect moving targets even when the camera is moving, by analyzing the optical flow of the screen image over time, but it is computationally complex and has poor real-time performance. Motion detection based on optical flow uses the time-varying optical flow characteristics of moving targets and can therefore extract and track them effectively. The advantage of this method is that it can detect independently moving targets even under camera motion. However, most optical flow computation methods are quite complex and sensitive to noise, and cannot be applied to real-time processing of full-frame video streams without special hardware.
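As an illustration of the gradient-based class, a single flow vector for a small window can be estimated by solving the brightness-constancy constraint in the least-squares sense (Lucas-Kanade style); the function name and the ramp test pattern are my own illustration, not from the patent:

```python
import numpy as np

def lucas_kanade_flow(I1, I2):
    """Single-window gradient-based flow: solve Ix*u + Iy*v + It = 0
    over the whole window in the least-squares sense."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Iy, Ix = np.gradient(I1)           # spatial gradients
    It = I2 - I1                       # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# A horizontal ramp shifted right by one pixel: expected flow (1, 0).
I1 = np.tile(np.arange(8.0), (8, 1))
I2 = I1 - 1.0                          # I2(x) = I1(x - 1)
u, v = lucas_kanade_flow(I1, I2)
```

Even this tiny solver needs a full least-squares solve per window, which hints at why dense optical flow is too expensive for real-time full-frame processing without hardware support.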
Current vehicle detection methods for aerial traffic video all transmit the video collected in the air back to the ground after compression coding; the ground processing platform then decompresses the images and detects vehicles. Existing methods do not exploit the motion vectors and motion-region information already present in the compressed stream. Because both decompression and motion detection are computationally expensive, it is difficult to satisfy real-time, stability and accuracy requirements at the same time.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a rapid vehicle detection method for aerial traffic video. The method combines global motion estimation from the compression coding stage with watershed segmentation, and solves the problem that decompressing aerial images and detecting moving targets involves a large computational load, making it difficult to satisfy real-time and robustness requirements simultaneously.
Technical solution of the present invention: a rapid vehicle detection method for aerial traffic video, with the following steps:
The most common modes of a space-based traffic monitoring platform are hovering over a target area (a stadium, a traffic intersection) for fixed-point monitoring, or tracking along a road; its motion can be approximated as translation. An aerial video sequence can be divided into background and foreground. The motion of any pixel in the image can be decomposed into "global motion" caused by camera motion and "local motion" caused by object motion. During encoding, global motion estimation and moving object detection techniques are usually used to separate foreground (motion) regions from background regions, and motion vectors are estimated for both. Current traffic monitoring systems generally compress the video sequence in hardware on the airborne platform and transmit it to the ground, where vehicles are detected after decompression using motion estimation and compensation, moving object detection and pattern recognition. The motion estimation and motion detection results produced during encoding are thus not reused, leading to repeated computation; moreover, the ground computation is heavy, so real-time and accuracy requirements are hard to satisfy simultaneously.
The present invention's advantage compared with prior art is:
(1) In the method of the invention, motion-region segmentation is completed by the compression coding part on the airborne video acquisition platform. Two consecutive images are differenced, and the SAD is computed for each 4 × 4 block; if the SAD exceeds a threshold, the block is judged to be a motion region, otherwise a background region. To avoid inaccurate or even wrong segmentation caused by global motion of the image sequence, global motion estimation is performed before the frame difference; the video sequence is thus segmented into motion regions by global motion estimation. The ground part applies watershed segmentation directly to the received motion regions, omitting the computationally expensive step of motion segmentation on the decompressed images and improving the real-time performance of vehicle detection;
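The airborne block labeling described in (1) might be sketched as follows, assuming a translation-only global motion vector and an illustrative SAD threshold of 64 (a mean difference of 4 per pixel in a 4 × 4 block); none of the names or values are the patent's:

```python
import numpy as np

def classify_blocks(prev, curr, gmv=(0, 0), block=4, thresh=64):
    """After compensating the global motion vector `gmv` (rows, cols),
    compute the SAD of each `block` x `block` region of the frame
    difference and label it motion (True) or background (False)."""
    gy, gx = gmv
    # globally motion-compensated previous frame (translation only;
    # np.roll wraps at the borders, which a real coder would crop)
    comp = np.roll(np.roll(prev, gy, axis=0), gx, axis=1)
    H, W = curr.shape
    labels = np.zeros((H // block, W // block), dtype=bool)
    for by in range(H // block):
        for bx in range(W // block):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            sad = np.abs(curr[sl].astype(int) - comp[sl].astype(int)).sum()
            labels[by, bx] = sad > thresh  # True = motion region
    return labels

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[0:4, 0:4] = 10          # change confined to the top-left block
labels = classify_blocks(prev, curr)
```

Only the one block containing the change is labeled as motion, which is what lets the ground part skip whole-frame motion segmentation.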
(2) In the ground video processing part, watershed segmentation is applied to the motion regions with an improved marker extraction scheme over traditional watershed segmentation. When extracting markers, an adaptive gradient threshold is first determined that is slightly low but still separates each object correctly, and preliminary markers are extracted; two parameters, region area and catchment-basin depth, are then introduced to further screen the extracted markers and determine the final marker points; finally, the marker points are taken as regional minima for watershed segmentation, and regions are merged according to their texture information. This effectively suppresses over-segmentation and improves segmentation accuracy. Shadow detection is then performed on the segmented regions to remove false targets, making the vehicle detection result more accurate.
Experimental results show that the method overcomes the large computational load of traditional methods, satisfies the real-time requirement of vehicle detection in aerial traffic video, accurately detects vehicles in the monitored scene, is insensitive to illumination changes and background interference, and has good robustness.
Description of drawings
Fig. 1 is the overall flowchart of the present invention;
Fig. 2 is the flowchart of the global motion estimation in Fig. 1;
Fig. 3 is the flowchart of segmenting the background and motion regions in Fig. 1;
Fig. 4 is the flowchart of marker extraction and watershed segmentation in Fig. 1;
Fig. 5 is the flowchart of region merging in Fig. 1;
Fig. 6 is the flowchart of the shadow detection in Fig. 1.
Embodiment
Fig. 1 is the overall flowchart of the present invention.
Fig. 2 is the flowchart of the global motion estimation of the present invention; in the technical scheme shown in Fig. 1, described step 100 uses the two-parameter translation model:
x′ = Gx + x
y′ = Gy + y
where x, y and x′, y′ are respectively the coordinates of corresponding points in the current frame and the previous frame, and Gx, Gy are the components of the global motion vector.
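Under the model above, mapping a current-frame coordinate to the previous frame is a pure translation; a trivial sketch (the function name is my own):

```python
def map_to_previous(x, y, gx, gy):
    """Two-parameter translation model: a point (x, y) in the current
    frame corresponds to (x', y') = (x + Gx, y + Gy) in the previous
    frame."""
    return x + gx, y + gy

# With a global motion vector (Gx, Gy) = (3, -2):
prev_pt = map_to_previous(10, 20, 3, -2)   # (13, 18)
```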
Fig. 3 is the flowchart of segmenting the motion and background regions of the present invention; in the technical scheme shown in Fig. 1, described step 200 is detailed in claim 3.
Fig. 4 is the flowchart of marker extraction and watershed segmentation of the present invention; in the technical scheme shown in Fig. 1, described step 400 is specifically:
Step 410: apply the morphological gradient grad(f) = (f ⊕ b) − (f Θ b), where ⊕ denotes the dilation of the image, Θ its erosion, and b(x, y) is a disc-shaped structuring element; then median-filter the image;
Step 420: compute the adaptive gradient threshold of the median-filtered image;
Step 430: judge whether the gradient at each point is less than the adaptive gradient threshold; if so, execute step 440, otherwise advance to the next frame and execute step 410;
Step 440: let S_x denote the area of a region and t_1 a threshold; from the marker points obtained in step 430, form connected regions according to their coordinate positions, and for the connected region S containing any marker point q, judge whether S_x ≥ t_1; if so, execute step 450, otherwise advance to the next frame and execute step 410;
Step 450: let D_x denote the catchment-basin depth of region S and t_2 a threshold; for the connected region containing any marker point p selected by steps 430 and 440, judge whether D_x ≥ t_2; if so, execute step 460, otherwise advance to the next frame and execute step 410;
Step 460: perform VS watershed segmentation with the marker points obtained in step 450 as regional minima.
Step 470: merge regions using texture information.
From the motion-region segmentation principle above, when a car spans several different 4 × 4 blocks, the segmented motion region may contain substantial background information; likewise, when two or more cars are close together, they may be assigned to the same motion region. The motion region therefore needs to be further segmented to extract the individual vehicles. The marker extraction and segmentation method of the present invention: first determine an adaptive gradient threshold that is slightly low but still separates each object correctly, and extract preliminary markers; then introduce two parameters, region area and catchment-basin depth, to further screen the extracted markers and determine the final marker points; finally take the marker points as regional minima, perform VS watershed segmentation, and merge regions according to texture information, effectively suppressing over-segmentation and improving segmentation accuracy.
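The marker extraction and screening stage (steps 420-450) might be sketched as below. This is only the screening, not the VS watershed flooding itself; the basin depth is approximated here as the minimum boundary gradient minus the minimum interior gradient, and the fixed gradient threshold and all parameter values are illustrative assumptions:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling of a boolean mask by BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        count += 1
        labels[sy, sx] = count
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def screen_markers(grad, t_grad, t_area, t_depth):
    """Preliminary markers where the gradient is below t_grad, then
    keep only components with area >= t_area and basin depth
    (min boundary gradient minus min interior gradient) >= t_depth."""
    markers = grad < t_grad
    labels, count = label_components(markers)
    keep = np.zeros_like(markers)
    for k in range(1, count + 1):
        comp = labels == k
        if comp.sum() < t_area:            # area screening (step 440)
            continue
        grown = np.zeros_like(comp)        # 4-neighbour ring around comp
        grown[:-1] |= comp[1:]; grown[1:] |= comp[:-1]
        grown[:, :-1] |= comp[:, 1:]; grown[:, 1:] |= comp[:, :-1]
        boundary = grown & ~comp
        if not boundary.any():
            continue
        if grad[boundary].min() - grad[comp].min() >= t_depth:  # step 450
            keep |= comp
    return keep

# A flat gradient image with one deep 3x3 basin and one 1-pixel basin:
grad = np.full((7, 7), 10.0)
grad[1:4, 1:4] = 1.0   # kept: area 9, depth 9
grad[5, 5] = 1.0       # rejected: area 1 below t_area
kept = screen_markers(grad, t_grad=5.0, t_area=4, t_depth=5.0)
```

The surviving markers would then seed the watershed as regional minima; discarding small or shallow basins is what suppresses over-segmentation.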
Fig. 5 is the flowchart of region merging of the present invention; in the technical scheme shown in Fig. 4, described step 470 is detailed in claim 5.
Fig. 6 is the flowchart of the shadow detection of the present invention; in the technical scheme shown in Fig. 1, described step 500 is specifically:
Step 510: in the merged regions, the gray-level values of the single frame are tallied; since road occupies most of an aerial video image, the pixels at the peak of the gray-level statistic are taken as road.
After watershed segmentation and area filtering, the regions of the same vehicle have been merged and background has been filtered out, but a vehicle's shadow is sometimes falsely detected as a vehicle because its shape is similar. The present invention notes that the shadow pixels of a moving target, like the motion-region pixels, differ significantly from the background, while at the same time the shadow region has features that distinguish it from the rest of the motion region. The invention uses the following three features to handle shadows. (1) The ratio of a point in the shadow region to the corresponding point in the background is fairly strictly linear; analysis of shadows produced under various lighting conditions shows that this ratio lies between 1.0 and 2.5. (2) The brightness of the shadow region is reduced relative to the background, but its color shows no marked change. (3) The saturation of the shadow region is reduced relative to the background. For aerial traffic video segmented into motion regions, the background is road; studies show that treating each HSV component of the road as constant within a short period has little effect on shadow detection. Through shadow detection, false targets are filtered out and detection accuracy is improved.
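The three features can be combined into a per-pixel test of the kind claim 6 formalizes. The exact comparison forms (for instance using an absolute saturation difference) and all parameter values below are illustrative assumptions, not the patent's:

```python
def is_shadow(h, s, v, bh, bs, bv,
              alpha=0.4, beta=0.9, tau_s=0.2, tau_h=40.0):
    """Per-pixel shadow test in HSV against the road background
    (bh, bs, bv): darker but within a linear brightness ratio,
    similar saturation, and a small circular hue distance."""
    dh = min(abs(h - bh), 360.0 - abs(h - bh))   # circular hue distance
    return (alpha <= v / bv <= beta
            and abs(s - bs) <= tau_s
            and dh <= tau_h)

# A darkened pixel with near-background hue/saturation reads as shadow;
# a green pixel at the same brightness does not.
shadow = is_shadow(h=350.0, s=0.30, v=0.45, bh=10.0, bs=0.35, bv=0.80)
vehicle = is_shadow(h=120.0, s=0.30, v=0.45, bh=10.0, bs=0.35, bv=0.80)
```

The circular hue distance is needed because hue wraps at 360 degrees: 350 and 10 are only 20 degrees apart.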
Claims (6)
1. A rapid vehicle detection method for aerial traffic video, characterized in that the steps are as follows:
Step 100: in the space-based coding part, apply global motion estimation to the series of acquired image frames to determine the global motion vector of the background;
Step 200: compute residual values from the global motion vector and segment the frame into background regions and motion regions;
Step 300: judge whether each region is a background region; for an image consisting entirely of background regions, advance to the next frame and execute step 100; otherwise, execute step 400 on the motion regions of the image;
Step 400: in the ground part, for the motion regions, first determine an adaptive gradient threshold that is slightly low but still separates each object correctly, and extract preliminary markers; then introduce two parameters, region area and catchment-basin depth, to further screen the extracted markers and determine the final marker points; then take the marker points as regional minima and perform VS watershed segmentation; finally, merge regions according to their texture information;
Step 500: for the merged regions, detect shadows in the HSV color space and filter out false targets, finally detecting vehicles.
2. The rapid vehicle detection method for aerial traffic video according to claim 1, characterized in that described step 100 is specifically:
Step 110: judge whether the frame is the first frame; if so, execute step 120, otherwise execute step 130;
Step 120: encode the frame as an I-frame and do not perform region segmentation;
Step 130: judge whether the motion vector of the previous frame exists; if so, execute step 140, otherwise execute step 150;
Step 140: use the motion vector of the previous frame as the search starting point;
Step 150: when that motion vector does not exist, use the zero vector as the search starting point;
Step 160: with the search starting point obtained in step 140 or step 150 as the center, search with the diamond template to obtain the point (x, y) minimizing
SAD(x, y) = Σ (i = 0..W−1) Σ (j = 0..H−1) |I_N(i + x, j + y) − I_{N−1}(i, j)|,
where (x, y) is the candidate global motion vector, I_N and I_{N−1} are the brightness values of corresponding pixels in frame N and frame N−1, and W, H are the width and height of the sub-block;
Step 170: judge whether this point is the minimum at the diamond template center, or has reached the search window border; if so, execute step 190, otherwise execute step 180;
Step 180: use this point as the new search starting point and return to step 160;
Step 190: take (x, y) as the global motion vector of the background and perform global motion estimation; the two-parameter translation model used for global motion estimation is:
x′ = Gx + x
y′ = Gy + y
where x, y and x′, y′ are respectively the coordinates of corresponding points in the current frame and the previous frame, and Gx, Gy are the components of the global motion vector.
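Steps 160-190 can be sketched as a standard diamond search over whole-frame SAD. The garbled "melee template" of the original translation is read here as the diamond search pattern, and the circular shift and the synthetic bump image are simplifications for illustration only:

```python
import numpy as np

def sad(curr, prev, x, y):
    """SAD between the current frame and the previous frame shifted by
    the candidate global motion vector (x, y). A circular shift keeps
    the sketch simple; a real coder would crop the borders instead."""
    shifted = np.roll(np.roll(prev, y, axis=0), x, axis=1)
    return np.abs(curr - shifted).sum()

# Large and small diamond search patterns.
LDSP = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def diamond_search(curr, prev, start=(0, 0), max_iter=50):
    """Iterate the large diamond around the best point until the SAD
    minimum stays at the center (or the iteration cap is hit), then
    refine once with the small diamond; the result is taken as the
    global motion vector (step 190)."""
    cx, cy = start
    for _ in range(max_iter):
        dx, dy = min(LDSP, key=lambda d: sad(curr, prev, cx + d[0], cy + d[1]))
        if (dx, dy) == (0, 0):
            break
        cx, cy = cx + dx, cy + dy
    dx, dy = min(SDSP, key=lambda d: sad(curr, prev, cx + d[0], cy + d[1]))
    return cx + dx, cy + dy

# Synthetic pair: a smooth bump shifted by (3, -2) between frames.
ii, jj = np.mgrid[0:16, 0:16]
prev = np.exp(-((ii - 8.0) ** 2 + (jj - 8.0) ** 2) / 18.0)
curr = np.roll(np.roll(prev, -2, axis=0), 3, axis=1)
gmv = diamond_search(curr, prev)
```

Seeding `start` with the previous frame's motion vector, as steps 130-150 describe, shortens the search when camera motion is steady.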
3. The rapid vehicle detection method for aerial traffic video according to claim 1, characterized in that described step 200 is specifically:
Step 210: judge whether the final SAD obtained by the global motion estimation is greater than a certain threshold T_1; if so, execute step 220, otherwise execute step 230;
Step 220: conclude that a scene change has occurred; encode this frame as an I-frame and do not perform region segmentation;
Step 230: use the global motion vector to compute the residual S_N(x, y) of the luminance component of each 4 × 4 block;
Step 240: judge whether S_N(x, y) is less than a certain threshold T_2; if so, execute step 250, otherwise execute step 260;
Step 250: judge the block to be a background region;
Step 260: judge the block to be a motion region.
4. The rapid vehicle detection method for aerial traffic video according to claim 1, characterized in that described step 400 is specifically:
Step 410: apply the morphological gradient grad(f) = (f ⊕ b) − (f Θ b), where ⊕ denotes the dilation of the image, Θ its erosion, and b(x, y) is a disc-shaped structuring element; then median-filter the image;
Step 420: compute the adaptive gradient threshold of the median-filtered image;
Step 430: judge whether the gradient at each point is less than the adaptive gradient threshold; if so, execute step 440, otherwise advance to the next frame and execute step 410;
Step 440: let S_x denote the area of a region and t_1 a threshold; from the marker points obtained in step 430, form connected regions according to their coordinate positions, and for the connected region S containing any marker point q, judge whether S_x ≥ t_1; if so, execute step 450, otherwise advance to the next frame and execute step 410;
Step 450: let D_x denote the catchment-basin depth of region S and t_2 a threshold; for the connected region containing any marker point p selected by steps 430 and 440, judge whether D_x ≥ t_2; if so, execute step 460, otherwise advance to the next frame and execute step 410;
Step 460: perform VS watershed segmentation with the marker points obtained in step 450 as regional minima;
Step 470: merge regions using texture information.
5. The rapid vehicle detection method for aerial traffic video according to claim 4, characterized in that described step 470 is specifically:
Step 471: merge regions according to the statistical moments of the regional gray-level histogram, i.e. texture;
Step 472: compare the merged region area with the lower and upper adaptive bounds, where the adaptive bounds are obtained from the flight parameters transmitted back by the space-based platform and empirical values of the physical size of vehicles; if the area is below the lower bound, execute step 473; if above the upper bound, execute step 474; if between the bounds, execute step 475;
Step 473: filter the region out as background or noise;
Step 474: filter the region out as background;
Step 475: treat the region as a suspected moving target.
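Steps 472-475 reduce to a two-threshold area test; the bounds below are illustrative stand-ins for the values that would be derived from flight parameters and typical vehicle sizes:

```python
def classify_region(area, lower, upper):
    """Steps 473-475: regions smaller than the lower bound are noise,
    regions larger than the upper bound are background, and regions
    in between are suspected moving targets (candidate vehicles)."""
    if area < lower:
        return "noise"
    if area > upper:
        return "background"
    return "candidate_vehicle"

# Illustrative bounds: a 12-pixel speck, a 150-pixel car-sized blob,
# and a 2000-pixel road patch.
labels = [classify_region(a, lower=30, upper=400) for a in (12, 150, 2000)]
```

Because altitude fixes the pixels-per-meter scale, the bounds can be recomputed per frame from the platform's flight parameters, which is what makes them adaptive.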
6. The rapid vehicle detection method for aerial traffic video according to claim 1, characterized in that described step 500 is specifically:
Step 510: in the merged regions, tally the gray-level values of the single frame; since road occupies most of an aerial video image, take the pixels at the peak of the gray-level statistic as road;
Step 520: extract the H, S and V components of these points and average them statistically to obtain the background hue B_H, saturation B_S and brightness B_V;
Step 530: judge whether α ≤ I_V(x, y)/B_V ≤ β, I_S(x, y) − B_S ≤ τ_S and D_H(x, y) ≤ τ_H, where I(x, y) is the current input image, B is the current background image, α and β are the lower and upper bounds of the brightness ratio between a shadow point and the corresponding background point, τ_H and τ_S are respectively the hue and saturation thresholds, and the hue difference between a shadow point and the corresponding background point is D_H(x, y) = min(|I_H(x, y) − B_H|, 360 − |I_H(x, y) − B_H|); if the conditions hold, execute step 540, otherwise execute step 550;
Step 540: filter the point out as shadow;
Step 550: obtain the final vehicle detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2008101046810A CN100545867C (en) | 2008-04-22 | 2008-04-22 | Aerial shooting traffic video frequency vehicle rapid checking method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101286239A true CN101286239A (en) | 2008-10-15 |
CN100545867C CN100545867C (en) | 2009-09-30 |
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2008101046810A Expired - Fee Related CN100545867C (en) | 2008-04-22 | 2008-04-22 | Aerial shooting traffic video frequency vehicle rapid checking method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100545867C (en) |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101577052B (en) * | 2009-05-14 | 2011-06-08 | 中国科学技术大学 | Device and method for detecting vehicles by overlooking |
CN101635852B (en) * | 2009-08-26 | 2011-08-31 | 北京航空航天大学 | Method for detecting real-time moving object based on adaptive background modeling |
CN101739553B (en) * | 2009-12-10 | 2012-01-11 | 青岛海信网络科技股份有限公司 | Method for identifying target in parallax image |
CN102208016A (en) * | 2010-03-30 | 2011-10-05 | 索尼公司 | Image processing apparatus and method, and program |
CN102446346A (en) * | 2010-09-30 | 2012-05-09 | 北京中电兴发科技有限公司 | Method for quickly removing motion image shadow |
CN102194109B (en) * | 2011-05-25 | 2013-09-11 | 浙江工业大学 | Vehicle segmentation method in traffic monitoring scene |
CN102194109A (en) * | 2011-05-25 | 2011-09-21 | 浙江工业大学 | Vehicle segmentation method in traffic monitoring scene |
CN102622895A (en) * | 2012-03-23 | 2012-08-01 | 长安大学 | Video-based vehicle speed detecting method |
CN102622895B (en) * | 2012-03-23 | 2014-04-30 | 长安大学 | Video-based vehicle speed detecting method |
CN102917217B (en) * | 2012-10-18 | 2015-01-28 | 北京航空航天大学 | Movable background video object extraction method based on pentagonal search and three-frame background alignment |
CN102917222A (en) * | 2012-10-18 | 2013-02-06 | 北京航空航天大学 | Mobile background video object extraction method based on self-adaptive hexagonal search and five-frame background alignment |
CN103051893A (en) * | 2012-10-18 | 2013-04-17 | 北京航空航天大学 | Dynamic background video object extraction based on pentagonal search and five-frame background alignment |
CN102917217A (en) * | 2012-10-18 | 2013-02-06 | 北京航空航天大学 | Movable background video object extraction method based on pentagonal search and three-frame background alignment |
CN102917220A (en) * | 2012-10-18 | 2013-02-06 | 北京航空航天大学 | Dynamic background video object extraction based on hexagon search and three-frame background alignment |
CN103051893B (en) * | 2012-10-18 | 2015-05-13 | 北京航空航天大学 | Dynamic background video object extraction based on pentagonal search and five-frame background alignment |
CN102917218B (en) * | 2012-10-18 | 2015-05-13 | 北京航空航天大学 | Movable background video object extraction method based on self-adaptive hexagonal search and three-frame background alignment |
CN102917220B (en) * | 2012-10-18 | 2015-03-11 | 北京航空航天大学 | Dynamic background video object extraction based on hexagon search and three-frame background alignment |
CN102917222B (en) * | 2012-10-18 | 2015-03-11 | 北京航空航天大学 | Mobile background video object extraction method based on self-adaptive hexagonal search and five-frame background alignment |
CN102917218A (en) * | 2012-10-18 | 2013-02-06 | 北京航空航天大学 | Movable background video object extraction method based on self-adaptive hexagonal search and three-frame background alignment |
CN103903254B (en) * | 2012-12-31 | 2017-08-11 | 中国科学院深圳先进技术研究院 | X-ray image processing method, system and equipment |
CN103903254A (en) * | 2012-12-31 | 2014-07-02 | 中国科学院深圳先进技术研究院 | X-ray image processing method and system and X-ray image processing equipment |
CN103997630B (en) * | 2014-06-13 | 2018-11-27 | ***通信集团广东有限公司 | Primary and secondary code stream intelligent switch method and system based on TD-LTE network |
CN103997630A (en) * | 2014-06-13 | 2014-08-20 | ***通信集团广东有限公司 | Intelligent primary code stream and secondary code stream switching method and system based on TD-LTE network |
CN104125471A (en) * | 2014-08-07 | 2014-10-29 | 成都瑞博慧窗信息技术有限公司 | Video image compression method |
CN104125470B (en) * | 2014-08-07 | 2017-06-06 | 成都瑞博慧窗信息技术有限公司 | A kind of method of transmitting video data |
CN104125470A (en) * | 2014-08-07 | 2014-10-29 | 成都瑞博慧窗信息技术有限公司 | Video data transmission method |
CN104751138B (en) * | 2015-03-27 | 2018-02-23 | 东华大学 | Vehicle-mounted infrared image colorization driver assistance system |
CN104751138A (en) * | 2015-03-27 | 2015-07-01 | 东华大学 | Vehicle mounted infrared image colorizing assistant driving system |
CN105069411B (en) * | 2015-07-24 | 2019-03-29 | 深圳市佳信捷技术股份有限公司 | Road recognition method and device |
CN105069411A (en) * | 2015-07-24 | 2015-11-18 | 深圳市佳信捷技术股份有限公司 | Road recognition method and device |
WO2017114168A1 (en) * | 2015-12-29 | 2017-07-06 | Sengled Co., Ltd. | Method and device for target detection |
CN105791825B (en) * | 2016-03-11 | 2018-10-26 | 武汉大学 | Screen image coding method based on H.264 and HSV color quantization |
CN105791825A (en) * | 2016-03-11 | 2016-07-20 | 武汉大学 | Screen image coding method based on H.264 and HSV color quantization |
CN106952474A (en) * | 2017-04-12 | 2017-07-14 | 湖南源信光电科技股份有限公司 | Traffic flow statistics method based on moving vehicle detection |
CN109472742A (en) * | 2018-10-09 | 2019-03-15 | 江苏裕兰信息科技有限公司 | Algorithm for automatically adjusting fusion area and implementation method thereof |
CN109472742B (en) * | 2018-10-09 | 2023-05-23 | 珠海大轩信息科技有限公司 | Algorithm for automatically adjusting fusion area and implementation method thereof |
CN112967511A (en) * | 2021-02-26 | 2021-06-15 | 安徽达尔智能控制***股份有限公司 | Intelligent road network command method and system based on video traffic flow |
Also Published As
Publication number | Publication date |
---|---|
CN100545867C (en) | 2009-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100545867C (en) | Aerial shooting traffic video frequency vehicle rapid checking method | |
Chiu et al. | A robust object segmentation system using a probability-based background extraction algorithm | |
JP5325899B2 (en) | Intrusion alarm video processor | |
KR100459476B1 (en) | Apparatus and method for measuring vehicle queue length |
CN105894701B (en) | Identification and alarm method for preventing external damage to power transmission lines by large construction vehicles |
CN104978567B (en) | Vehicle checking method based on scene classification | |
CN103208185A (en) | Method and system for nighttime vehicle detection on basis of vehicle light identification | |
JP2015514278A (en) | Methods, systems, products, and computer programs for multi-queue object detection and analysis (multi-queue object detection and analysis) | |
CN108229256B (en) | Road construction detection method and device | |
CN106951898B (en) | Vehicle candidate area recommendation method and system and electronic equipment | |
CN103093198A (en) | Crowd density monitoring method and device | |
CN108765453B (en) | Expressway agglomerate fog identification method based on video stream data | |
CN112489055B (en) | Satellite video dynamic vehicle target extraction method fusing brightness-time sequence characteristics | |
Indrabayu et al. | Blob modification in counting vehicles using gaussian mixture models under heavy traffic | |
Xia et al. | Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach | |
Muniruzzaman et al. | Deterministic algorithm for traffic detection in free-flow and congestion using video sensor | |
Yuan et al. | Day and night vehicle detection and counting in complex environment | |
CN111339824A (en) | Road surface sprinkled object detection method based on machine vision | |
Srilekha et al. | A novel approach for detection and tracking of vehicles using Kalman filter | |
Kaur et al. | An Efficient Method of Number Plate Extraction from Indian Vehicles Image | |
Ha et al. | Improved Optical Flow Estimation In Wrong Way Vehicle Detection. | |
CN115376106A (en) | Vehicle type identification method, device, equipment and medium based on radar map | |
Wibowo et al. | Implementation of Background Subtraction for Counting Vehicle Using Mixture of Gaussians with ROI Optimization | |
Oh et al. | Development of an integrated system based vehicle tracking algorithm with shadow removal and occlusion handling methods | |
CN109308809A (en) | Tunnel vehicle monitoring device based on dynamic image feature processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20090930; Termination date: 20160422