CN102930242A - Bus type identifying method - Google Patents


Info

Publication number
CN102930242A
Authority
CN
China
Prior art keywords
bus
vehicle
model
judge
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103371150A
Other languages
Chinese (zh)
Other versions
CN102930242B (en)
Inventor
杨华
马文琪
董莉莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201210337115.0A priority Critical patent/CN102930242B/en
Publication of CN102930242A publication Critical patent/CN102930242A/en
Application granted granted Critical
Publication of CN102930242B publication Critical patent/CN102930242B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a bus type recognition method, belonging to the technical field of computer video processing. The method comprises the following steps: performing mixed-Gaussian modeling on a surveillance video and carrying out a preliminary bus identification on each vehicle; establishing a corresponding 3D model according to the position information of the vehicle and the shape characteristics of a bus; extracting feature line segments with the LSD (Line Segment Detector) algorithm; and finally matching the 3D model of the vehicle against the line segment features with a combined algorithm of template matching and shortest-distance matching. Because the 3D model is built from the vehicle position information and the bus shape characteristics, no vehicle model database needs to be prepared in advance; because the bus shape characteristics are used to pre-screen the vehicle to be judged, 3D modeling and matching need not be carried out on every vehicle in the scene, which reduces the amount of computation; and the combined matching algorithm improves the computation accuracy while further reducing the amount of computation.

Description

Bus type recognition method
Technical field
The invention belongs to the field of computer video processing and specifically relates to a bus type recognition method, in particular a bus type recognition method suitable for public security checkpoint monitoring.
Background art
At present, vehicle recognition technology plays an increasingly important role in public security monitoring, and the bus, as an important means of urban transportation, is a key object of such monitoring.
Feature point matching and 3D model matching are two common vehicle detection approaches. The feature point matching approach usually needs to match all extracted 2D features (such as edge line segments and edge pixels) against the 2D features of a model (see: Grimson, W., "The combinatorics of heuristic search termination for object recognition in cluttered environments," IEEE Trans. PAMI, vol. 13, no. 9, pp. 920-935, 1991); its computation load is therefore large and its real-time performance relatively poor, so it cannot be applied directly to a real-time checkpoint monitoring system. The recognition accuracy of the traditional 3D matching approach (see: Tan, T.N., Sullivan, G.D., Baker, K.D., "Model-based localization and recognition of road vehicles," Int. J. Comput. Vis., vol. 27, no. 1, pp. 5-25, 1998) depends on the completeness of the 3D model library; however, most deployed equipment does not have a complete 3D vehicle model database, and the complexity of the method grows linearly with the number of models, so it is also difficult to apply directly to checkpoint monitoring devices.
A search of the prior art found the Chinese invention patent with publication No. 101783076A, which discloses a method for quick vehicle type recognition under a video monitoring mode, implemented as follows: a road monitoring device is set up, and vehicles are divided into cars, taxis marked by a particular color, minibuses, medium vehicles, buses and heavy goods vehicles; in step 1, the video monitoring device is initialized and trained; in step 2, the area of the vehicle target region and the length and width of its bounding rectangle are extracted, the corresponding features are constructed, and the vehicle targets are roughly classified into small, medium and large vehicles; in step 3, the dominant body hue of each small-vehicle target is extracted to identify taxis, and the relative window position parameters of small vehicles are then extracted to further distinguish minibuses from cars; in step 4, roof brightness parameters and roof texture parameters are extracted to determine whether a large vehicle is a bus. When judging whether a vehicle is a bus, that patent adopts roof brightness and roof texture as the principal features and is therefore affected by illumination changes; the present invention instead adopts the bus shape and LSD feature line segments as the principal features, so illumination changes have less influence on the method, the robustness is better, and the recognition accuracy is improved.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by proposing a new bus type recognition method. The method is based on 3D model matching and the LSD feature-line-segment extraction algorithm; it avoids both the preparation of a vehicle model database and the need to perform 3D modeling and matching on every vehicle in the scene, thereby reducing the amount of computation while improving the computation accuracy.
To achieve the above object, the technical solution adopted by the present invention is: first, mixed-Gaussian modeling is performed on the surveillance video to obtain the vehicle foreground image to be processed; the vehicle is then subjected to a preliminary bus identification; next, a corresponding 3D model is established from the position information of the vehicle and the shape characteristics of a bus; at the same time, the LSD line segment extraction algorithm is used to extract the feature line segments of the vehicle; finally, a combined algorithm of template matching and shortest-distance matching is adopted to match the 3D model of the vehicle against the line segment features. Experiments confirm that the present invention recognizes buses with high accuracy and that its real-time performance meets the requirements of checkpoint monitoring devices.
The method of the invention specifically comprises the following steps:
Step 1: perform mixed-Gaussian background modeling on the surveillance video to obtain the vehicle foreground image to be processed.
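The following is a minimal sketch of this background-modeling step, assuming OpenCV's MOG2 background subtractor as the mixture-of-Gaussians implementation; the video path and the subtractor parameters are illustrative placeholders, not values prescribed by the invention.

```python
import cv2

# Mixture-of-Gaussians background modeling on a surveillance video (sketch).
# "checkpoint.avi" and the MOG2 parameters are placeholder values.
cap = cv2.VideoCapture("checkpoint.avi")
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

foreground_masks = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame)                                     # 0 = background, 127 = shadow, 255 = foreground
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    foreground_masks.append(mask)                               # vehicle foreground image for the later steps
cap.release()
```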
Step 2: in the world coordinate system, perform a preliminary bus identification on the detected vehicle.
Concrete steps are:
1. Extract contours from the vehicle foreground image obtained in Step 1 to obtain N connected regions Ω_k, k = 1, 2, ..., N. For each connected region Ω_k, obtain the minimum-area rectangular region R_k, k = 1, 2, ..., N, that contains Ω_k.
2. In the world coordinate system, construct the judging point p_m from the two bottom points p_1, p_2 of rectangle R_k, specifically:
p_m.x = (p_1.x + p_2.x) / 2
p_m.y = (p_1.y + p_2.y) / 2 - l
p_m.z = p_1.z
where l denotes the bus body length, R_k is the minimum rectangular frame enclosing the complete contour of the vehicle to be identified, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
3. In the image coordinate system, preliminarily judge whether the vehicle is a bus according to the positional relationship between p_m and R_k: if p_m ∈ R_k, the vehicle is preliminarily judged to be a bus and Step 3 is carried out; if p_m ∉ R_k, the vehicle is judged to be a non-bus and the judgment ends.
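As a rough illustration of this preliminary screening, the sketch below constructs the judging point p_m from the two bottom points of R_k in world coordinates and tests whether its image projection falls inside R_k. The projection function world_to_image and the bus length l are assumptions supplied by the caller, since they depend on the camera calibration of the particular checkpoint.

```python
import numpy as np

def preliminary_bus_check(p1, p2, rect_img, world_to_image, l=10.0):
    """Preliminary bus test of Step 2 (sketch).

    p1, p2         -- world coordinates (x, y, z) of the two bottom points of R_k
    rect_img       -- (x_min, y_min, x_max, y_max) of R_k in image coordinates
    world_to_image -- calibration function mapping a world point to an image point (u, v)
    l              -- assumed bus body length in world units
    """
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    # Judging point p_m: midpoint of (p1, p2), shifted back along y by the bus length l.
    pm = np.array([(p1[0] + p2[0]) / 2.0,
                   (p1[1] + p2[1]) / 2.0 - l,
                   p1[2]])
    u, v = world_to_image(pm)
    x_min, y_min, x_max, y_max = rect_img
    # A long vehicle such as a bus keeps p_m inside its own bounding rectangle.
    return x_min <= u <= x_max and y_min <= v <= y_max
```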
Step 3: in the world coordinate system, construct a 3D model of the current vehicle according to the bus shape characteristics and the position information of R_k.
Concrete steps are:
1. Take the two bottom points p_1, p_2 of R_k as the two front bottom points of the 3D model.
2. Construct the two front top points p_3, p_4 of the bus 3D model from p_1, p_2, specifically:
p_3.x = p_1.x, p_3.y = p_1.y, p_3.z = p_1.z + h
p_4.x = p_2.x, p_4.y = p_2.y, p_4.z = p_2.z + h
where h denotes the bus height and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
3. Construct the two rear bottom points p_5, p_6 of the 3D model according to the projection of the bus onto the xOy plane, specifically:
θ = arctan((p_1.y - p_2.y) / (p_2.x - p_1.x))
p_j.x = p_i.x - l·sin θ
p_j.y = p_i.y - l·cos θ
where (i, j) ∈ {(1, 5), (2, 6)}, θ denotes the angle between segment (p_1, p_2) and the y axis in the projection, l denotes the bus body length, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
4. Construct the two rear top points p_7, p_8 of the 3D model from p_5, p_6, specifically:
p_7.x = p_5.x, p_7.y = p_5.y, p_7.z = p_5.z + h
p_8.x = p_6.x, p_8.y = p_6.y, p_8.z = p_6.z + h
where h denotes the bus height and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
5. The 8 endpoints of the 3D model yield a set of 12 line segments, among which the visible ones can be determined from the viewing angle of the checkpoint camera and the positions of the endpoints: segments (p_1, p_2), (p_1, p_3), (p_2, p_4), (p_3, p_4), (p_3, p_7) and (p_4, p_8) are visible to the camera; segment (p_5, p_6) is invisible to the camera; segments (p_2, p_6) and (p_6, p_8) must be judged from the position of p_6, specifically: in the image coordinate system, if p_6 ∉ R(p_1234), then (p_2, p_6) and (p_6, p_8) are visible to the camera; if p_6 ∈ R(p_1234), they are invisible. R(p_1234) denotes the rectangle formed by points p_1, p_2, p_3, p_4. Segments (p_1, p_5) and (p_5, p_7) are judged in the same way. The camera-visible segments are selected as the line segment set of the 3D model, which completes the 3D model of the vehicle.
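The eight endpoints of this cuboid model follow directly from the formulas above; the sketch below computes them from p_1, p_2 and assumed bus dimensions l and h. Which of the twelve edges are visible still has to be decided per camera as described in the text.

```python
import math
import numpy as np

def build_bus_3d_endpoints(p1, p2, l=10.0, h=2.5):
    """Eight endpoints of the cuboid bus model in world coordinates (Step 3, sketch).

    p1, p2 -- front bottom points taken from the bottom edge of R_k
    l, h   -- assumed bus length and height in world units
    """
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    up = np.array([0.0, 0.0, h])                 # top points sit h above the bottom points
    p3, p4 = p1 + up, p2 + up
    # theta: angle between segment (p1, p2) and the y axis in the xOy projection.
    theta = math.atan2(p1[1] - p2[1], p2[0] - p1[0])
    back = np.array([l * math.sin(theta), l * math.cos(theta), 0.0])
    p5, p6 = p1 - back, p2 - back                # rear bottom points, bus length l behind the front
    p7, p8 = p5 + up, p6 + up
    return [p1, p2, p3, p4, p5, p6, p7, p8]

# The 12 cuboid edges as index pairs into the endpoint list above (0-based).
CUBOID_EDGES = [(0, 1), (0, 2), (1, 3), (2, 3),      # front face
                (4, 5), (4, 6), (5, 7), (6, 7),      # rear face
                (0, 4), (1, 5), (2, 6), (3, 7)]      # connecting edges
```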
Step 4: apply the LSD (Line Segment Detector) line segment extraction algorithm to extract feature line segments from the vehicle obtained in Step 2.
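A minimal sketch of the feature-line-segment extraction, assuming OpenCV's cv2.createLineSegmentDetector as the LSD implementation (the detector was absent from some OpenCV 3.4/4.x releases for licensing reasons, so availability depends on the installed build); the image path is a placeholder.

```python
import cv2
import numpy as np

# Extract LSD feature line segments from the grayscale vehicle region and
# rasterize them into a feature-line-segment gray map for the matching step.
gray = cv2.imread("vehicle_roi.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

lsd = cv2.createLineSegmentDetector()        # availability depends on the OpenCV build
lines, _, _, _ = lsd.detect(gray)            # lines: N x 1 x 4 array of (x1, y1, x2, y2)

feature_map = np.zeros_like(gray)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(feature_map, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)
```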
Step 5: match the 3D model obtained in Step 3 against the feature line segments obtained in Step 4 using a combination of the template matching method and the shortest-distance method, and obtain the recognition result.
Concrete steps are:
1. Apply the template matching method to compute the matching coefficient η_1, which measures how large a proportion of the 3D-model gray map is covered by the overlap region between the 3D-model gray map and the dilated, thresholded LSD feature-line-segment gray map. Here ψ is the 3D-model gray map, whose pixel values are 0 or 1; the LSD feature-line-segment gray map is morphologically dilated before the overlap is taken; ∑I denotes the pixel-wise sum over a gray image I; and Threshold_v(I) denotes thresholding the gray map I, specifically:
Threshold_v(I_{x,y}) = v, if I_{x,y} ≥ v; 0, otherwise
2. Judge whether the current vehicle is a bus according to η_1, specifically: if η_1 > TH_l, where TH_l is a set threshold, the feature line segments overlap the 3D model in a large proportion, the vehicle is considered a possible bus, and the next step applies the shortest-distance method for further judgment; if η_1 > TH_h, where TH_h is a set threshold with TH_h > TH_l, the vehicle is judged to be a bus and the judgment ends; if η_1 ≤ TH_l, the vehicle is judged to be a non-bus and the judgment ends.
3. Apply the shortest-distance algorithm to the vehicles that require further judgment after the previous step. Let ψ_h denote the 3D-model map with the overlap region removed, and let the feature-line-segment map with the overlap region removed be defined correspondingly; I(α) denotes the gray value of pixel α; p(α) denotes the position of pixel α in the image coordinate system; and the counter is initialized as sum = 0. The shortest-distance method then proceeds as follows:
(1) For each pixel α ∈ ψ_h with I(α) ≠ 0, establish a square search window of side length step centered at p(α).
(2) Compute the shortest distance d(α) from pixel α to the overlap-removed feature-line-segment map, specifically d(α) = min_β ||p(α) - p(β)||, where β ranges over the non-zero pixels of that map inside the search window and β_0 denotes the pixel attaining the minimum.
(3) If d(α) ≤ d_TH, set sum = sum + 1 and I(β_0) = 0, where d_TH is a set threshold.
(4) Repeat from step (1) until all pixels in ψ_h have been traversed.
4. Compute the matching coefficient η_2, specifically:
η_2 = sum / ∑ψ_h
Judge whether the current vehicle is a bus according to η_2, specifically: if η_2 > TH_d, where TH_d is a set threshold, the feature line segments overlap the 3D model in a large proportion and the vehicle is judged to be a bus; otherwise, the vehicle is judged to be a non-bus and the judgment ends.
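The decision cascade and the shortest-distance count described above can be sketched as follows. Here model_map (ψ) and feature_map are equal-sized binary images, the thresholds mirror the values used in the embodiment below, and η_1 is approximated as the fraction of model pixels covered by the dilated feature map, which is one plausible reading of the template-matching coefficient rather than the exact expression of the patent.

```python
import cv2
import numpy as np

def classify_bus(model_map, feature_map,
                 th_l=0.7, th_h=0.8, th_d=0.9, d_th=1.5, win=3):
    """Combined template / shortest-distance matching of Step 5 (sketch)."""
    model = (model_map > 0).astype(np.uint8)     # rendered 3D-model segments, 0/1
    feat = (feature_map > 0).astype(np.uint8)    # LSD feature segments, 0/1
    feat_dil = cv2.dilate(feat, np.ones((3, 3), np.uint8))

    overlap = model & feat_dil
    eta_1 = overlap.sum() / max(int(model.sum()), 1)   # approximate template-matching coefficient
    if eta_1 > th_h:
        return True                                    # confidently a bus
    if eta_1 <= th_l:
        return False                                   # confidently not a bus

    # Shortest-distance matching on the pixels left after removing the overlap region.
    model_rest = model & (1 - overlap)
    feat_rest = feat & (1 - overlap)
    count, half = 0, win // 2
    ys, xs = np.nonzero(model_rest)
    for y, x in zip(ys, xs):
        best, best_pos = None, None
        for dy in range(-half, half + 1):              # square search window around p(alpha)
            for dx in range(-half, half + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < feat_rest.shape[0] and 0 <= xx < feat_rest.shape[1] and feat_rest[yy, xx]:
                    d = np.hypot(dy, dx)
                    if best is None or d < best:
                        best, best_pos = d, (yy, xx)
        if best is not None and best <= d_th:
            count += 1
            feat_rest[best_pos] = 0                    # each feature pixel can be matched only once
    eta_2 = count / max(int(model_rest.sum()), 1)
    return eta_2 > th_d
```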
Compared with the prior art, the main contributions and characteristics of the present invention are: 1) the 3D model is constructed from the vehicle position information and the bus shape characteristics, so a complete vehicle model database is not needed; 2) the bus shape characteristics are used to make a preliminary judgment on the vehicle to be judged, which avoids performing 3D modeling and matching on every vehicle in the scene and reduces the amount of computation; 3) a combined matching algorithm is adopted: the template matching method is first used to match the 3D model against the feature line segments, and the shortest-distance method then only needs to operate on the 3D-model pixels and feature-line-segment pixels that remain after the first matching, which improves the computation accuracy while reducing the amount of computation. The present invention is particularly suitable for bus recognition and analysis in public security checkpoint monitoring.
Description of drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of a non-limiting embodiment made with reference to the accompanying drawings:
Fig. 1 is the main flow block diagram of the bus type recognition method of the present invention.
Fig. 2 is a schematic diagram of the embodiment of the invention, in which (a) illustrates the preliminary bus identification and (b), (c) show the projection of the bus onto the xOy plane.
Fig. 3 illustrates the process of establishing the 8 endpoints of the 3D model in the embodiment of the invention.
Fig. 4 illustrates the visible segments in the embodiment of the invention, in which (a) shows the segments of the bus visible from the camera viewpoint and (b) shows the invisible segments and the possibly visible segments of the bus.
Fig. 5 (a), (b) illustrate the LSD feature-line-segment extraction.
Fig. 6 illustrates the identification process in the embodiment of the invention: (a) the vehicle to be judged; (b) the feature-line-segment gray map; (c) the 3D-model gray map; (d) the superposition of the 3D-model gray map and the feature-line-segment gray map in the template matching method; (e) the overlap region between the 3D-model map and the feature-line-segment map; (f) the 3D-model map with the overlap region removed.
Fig. 7 illustrates the checkpoint monitoring scene of the embodiment of the invention, with the angle between the camera and the horizontal plane less than 45°.
Detailed description of the embodiments
The present invention is described in detail below in conjunction with a specific embodiment. The following embodiment will help those skilled in the art to further understand the present invention, but does not limit the present invention in any form. It should be pointed out that those skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the present invention.
Embodiment
The video sequence adopted in this embodiment is a public security checkpoint monitoring scene sequence.
The bus type recognition method of this embodiment comprises the following concrete steps:
Step 1: perform mixed-Gaussian background modeling on the video sequence to obtain the vehicle foreground image to be processed.
Step 2: in the world coordinate system, perform a preliminary bus identification on the detected vehicle.
Concrete steps are:
1. Extract contours from the vehicle foreground image to obtain N connected regions Ω_k, k = 1, 2, ..., N. For each connected region Ω_k, obtain the minimum-area rectangular region R_k, k = 1, 2, ..., N, that contains Ω_k.
2. In the world coordinate system, construct the judging point p_m from the two bottom points p_1, p_2 of rectangle R_k, specifically:
p_m.x = (p_1.x + p_2.x) / 2
p_m.y = (p_1.y + p_2.y) / 2 - l
p_m.z = p_1.z
where l denotes the bus body length, l = 10 in this embodiment, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
3. In the image coordinate system, preliminarily judge the vehicle type according to the positional relationship between p_m and R_k: if p_m ∈ R_k, the vehicle is preliminarily judged to be a bus and Step 3 is carried out; if p_m ∉ R_k, the vehicle is judged to be a non-bus and the judgment ends. As shown in Fig. 2(a), the rectangular frame in the figure is R_k, and the judging point p_m is constructed from its two bottom points p_1, p_2; because the bus body is long, the judging point satisfies p_m ∈ R_k after conversion to the image coordinate system.
Step 3: in the world coordinate system, approximate the bus as a rectangular parallelepiped and construct a 3D cuboid model of the current vehicle from the position information of R_k.
Concrete steps are:
1. Take the two bottom points p_1, p_2 of R_k as the two front bottom points of the 3D model.
2. Since the bus front can be approximated as a square, construct the two front top points p_3, p_4 of the bus 3D model from p_1, p_2, specifically:
p_3.x = p_1.x, p_3.y = p_1.y, p_3.z = p_1.z + h
p_4.x = p_2.x, p_4.y = p_2.y, p_4.z = p_2.z + h
where h denotes the bus height, h = 2.5 in this embodiment, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
3. According to the projection of the bus onto the xOy plane, as shown in Fig. 2(b) and (c), construct the two rear bottom points p_5, p_6 of the 3D model from the geometric relationship, specifically:
θ = arctan((p_1.y - p_2.y) / (p_2.x - p_1.x))
p_j.x = p_i.x - l·sin θ
p_j.y = p_i.y - l·cos θ
where (i, j) ∈ {(1, 5), (2, 6)}, θ denotes the angle between segment (p_1, p_2) and the y axis in the projection, as shown in Fig. 2(c); l denotes the bus body length, l = 10 in this embodiment; and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
4. Construct the two rear top points p_7, p_8 of the 3D model from p_5, p_6, specifically:
p_7.x = p_5.x, p_7.y = p_5.y, p_7.z = p_5.z + h
p_8.x = p_6.x, p_8.y = p_6.y, p_8.z = p_6.z + h
where h denotes the bus height, h = 2.5 in this embodiment, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
Fig. 3 is a schematic diagram of the construction process of the above 8 endpoints.
5. The 8 endpoints of the 3D model yield a set of 12 line segments, but only some of them are visible from the camera viewpoint, so the camera-visible segments must be selected as the model line segment set. The selection is as follows: as shown in Fig. 4(a), segments (p_1, p_2), (p_1, p_3), (p_2, p_4), (p_3, p_4), (p_3, p_7) and (p_4, p_8) are visible to the camera; as shown in Fig. 4(b), segment (p_5, p_6) is invisible to the camera, while segments (p_2, p_6) and (p_6, p_8) must be judged from the position of p_6, specifically: in the image coordinate system, if p_6 ∉ R(p_1234), then (p_2, p_6) and (p_6, p_8) are visible to the camera; if p_6 ∈ R(p_1234), they are invisible. R(p_1234) denotes the rectangle formed by points p_1, p_2, p_3, p_4. Segments (p_1, p_5) and (p_5, p_7) are judged in the same way. The camera-visible segments are selected as the line segment set of the 3D model, which completes the 3D model of the vehicle.
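As a rough illustration of this visibility test, the sketch below checks whether the projected p_6 falls inside the image region spanned by the projected front-face points p_1...p_4; the front face is approximated here by its axis-aligned bounding box in the image, and the projection itself is assumed to be done by the caller's camera calibration.

```python
import numpy as np

def rear_edges_visible(p6_img, p1_img, p2_img, p3_img, p4_img):
    """Decide whether edges (p2, p6) and (p6, p8) are visible to the camera (sketch).

    All arguments are image-coordinate points (u, v) obtained by projecting the
    model endpoints with the camera calibration; the edges are treated as visible
    when p6 falls outside the rectangle R(p_1234) spanned by the front face.
    """
    pts = np.array([p1_img, p2_img, p3_img, p4_img], dtype=float)
    u_min, v_min = pts.min(axis=0)
    u_max, v_max = pts.max(axis=0)
    u, v = p6_img
    inside = (u_min <= u <= u_max) and (v_min <= v <= v_max)
    return not inside      # outside R(p_1234) -> visible; inside -> occluded
```

Segments (p_1, p_5) and (p_5, p_7) can be tested the same way with p_5 in place of p_6.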
Step 4: apply the LSD line segment extraction algorithm to extract feature line segments from the vehicle obtained in Step 2; Fig. 5 illustrates the effect of the LSD extraction algorithm.
Step 5: match the 3D model obtained in Step 3 against the feature line segments obtained in Step 4, specifically:
1. Apply the template matching method to compute the matching coefficient η_1, which measures how large a proportion of the 3D-model gray map is covered by the overlap region between the 3D-model gray map and the dilated, thresholded LSD feature-line-segment gray map. Here the overlap region between the 3D-model gray map and the feature-line-segment gray map is shown in Fig. 6(e); ψ is the 3D-model gray map, whose pixel values are 0 or 1, shown in Fig. 6(c); the LSD feature-line-segment gray map is shown in Fig. 6(b); the morphological dilation of the image is shown in Fig. 6(d); ∑I denotes the pixel-wise sum over a gray image I; and Threshold_v(I) denotes thresholding the gray map I, specifically:
Threshold_v(I_{x,y}) = v, if I_{x,y} ≥ v; 0, otherwise
2. Judge whether the current vehicle is a bus according to η_1, specifically: if η_1 > TH_l, where TH_l is a set threshold, the feature line segments overlap the 3D model in a large proportion, the vehicle is considered a possible bus, and the next step applies the shortest-distance method for further judgment; if η_1 > TH_h, where TH_h is a set threshold, the vehicle is judged to be a bus and the judgment ends; if η_1 ≤ TH_l, the vehicle is judged to be a non-bus and the judgment ends. In this embodiment TH_l = 0.7 and TH_h = 0.8.
3. Apply the shortest-distance algorithm to the vehicles that require further judgment after the previous step. Let ψ_h denote the 3D-model map with the overlap region removed, shown in Fig. 6(f), and let the feature-line-segment map with the overlap region removed be defined correspondingly; I(α) denotes the gray value of pixel α; p(α) denotes the position of pixel α in the image coordinate system; and the counter is initialized as sum = 0. The shortest-distance method then proceeds as follows:
(1) For each pixel α ∈ ψ_h with I(α) ≠ 0, establish a 3 × 3 square search window centered at p(α).
(2) Compute the shortest distance d(α) from pixel α to the overlap-removed feature-line-segment map, specifically d(α) = min_β ||p(α) - p(β)||, where β ranges over the non-zero pixels of that map inside the search window and β_0 denotes the pixel attaining the minimum.
(3) If d(α) ≤ d_TH, set sum = sum + 1 and I(β_0) = 0; in this embodiment the threshold d_TH = 1.5.
(4) Repeat from step (1) until all pixels in ψ_h have been traversed.
4. Compute the matching coefficient η_2, specifically:
η_2 = sum / ∑ψ_h
Judge whether the current vehicle is a bus according to η_2, specifically: if η_2 > TH_d, where TH_d is a set threshold, the feature line segments overlap the 3D model in a large proportion and the vehicle is judged to be a bus; otherwise, the vehicle is judged to be a non-bus and the judgment ends. In this embodiment TH_d = 0.9.
In this embodiment, a video sequence containing 100 buses and 150 vehicles of other types was tested; the test results are shown in Table 1.
Table 1. Bus recognition results
Specific embodiments of the invention have been described above. It should be understood that the invention is not limited to the above specific implementations; those skilled in the art can make various variations or modifications within the scope of the claims, which do not affect the substance of the invention.

Claims (5)

1. A bus type recognition method, characterized in that it comprises the following steps:
Step 1: performing mixed-Gaussian background modeling on the surveillance video to obtain the vehicle foreground image to be processed;
Step 2: in the world coordinate system, performing a preliminary bus identification on the detected vehicle, the concrete steps being:
1. extracting contours from the vehicle foreground image obtained in Step 1 to obtain N connected regions Ω_k, k = 1, 2, ..., N, and for each connected region Ω_k obtaining the minimum-area rectangular region R_k, k = 1, 2, ..., N, that contains Ω_k;
2. in the world coordinate system, constructing the judging point p_m from the two bottom points p_1, p_2 of rectangle R_k:
p_m.x = (p_1.x + p_2.x) / 2
p_m.y = (p_1.y + p_2.y) / 2 - l
p_m.z = p_1.z
where l denotes the bus body length, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system;
3. in the image coordinate system, preliminarily judging whether the vehicle is a bus according to the positional relationship between p_m and R_k: if p_m ∈ R_k, the vehicle is preliminarily judged to be a bus and Step 3 is carried out; if p_m ∉ R_k, the vehicle is judged to be a non-bus and the judgment ends;
Step 3: in the world coordinate system, constructing a 3D model of the current vehicle according to the bus shape characteristics and the position information of R_k;
Step 4: applying the LSD line segment extraction algorithm to extract feature line segments from the vehicle obtained in Step 2;
Step 5: matching the 3D model obtained in Step 3 against the feature line segments obtained in Step 4 using a combination of the template matching method and the shortest-distance method, and obtaining the recognition result.
2. The bus type recognition method according to claim 1, characterized in that the endpoints of the 3D model in Step 3 are established as follows: the two bottom points p_1, p_2 of R_k are taken as the two front bottom points of the 3D model; the two front top points p_3, p_4 of the bus 3D model are then constructed from p_1, p_2:
p_3.x = p_1.x, p_3.y = p_1.y, p_3.z = p_1.z + h
p_4.x = p_2.x, p_4.y = p_2.y, p_4.z = p_2.z + h
where h denotes the bus height and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system;
the two rear bottom points p_5, p_6 of the 3D model are then constructed according to the projection of the bus onto the xOy plane:
θ = arctan((p_1.y - p_2.y) / (p_2.x - p_1.x))
p_j.x = p_i.x - l·sin θ
p_j.y = p_i.y - l·cos θ
where (i, j) ∈ {(1, 5), (2, 6)}, θ denotes the angle between segment (p_1, p_2) and the y axis in the projection, l denotes the bus body length, and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system; finally, the two rear top points p_7, p_8 of the 3D model are constructed from p_5, p_6:
p_7.x = p_5.x, p_7.y = p_5.y, p_7.z = p_5.z + h
p_8.x = p_6.x, p_8.y = p_6.y, p_8.z = p_6.z + h
where h denotes the bus height and (x, y, z) denotes the three-dimensional coordinates of a point in the world coordinate system.
3. The bus type recognition method according to claim 2, characterized in that in Step 3, after the construction of the 3D model endpoints, segments (p_1, p_2), (p_1, p_3), (p_2, p_4), (p_3, p_4), (p_3, p_7) and (p_4, p_8) are selected as camera-visible segments; segment (p_5, p_6) is an invisible segment for the camera; segments (p_2, p_6) and (p_6, p_8) are judged from the position of p_6: in the image coordinate system, if p_6 ∉ R(p_1234), then (p_2, p_6) and (p_6, p_8) are camera-visible segments; if p_6 ∈ R(p_1234), they are invisible segments; R(p_1234) denotes the rectangle formed by points p_1, p_2, p_3, p_4; segments (p_1, p_5) and (p_5, p_7) are judged in the same way; the camera-visible segments are selected as the line segment set of the 3D model.
4. The bus type recognition method according to claim 1, characterized in that in Step 5 the template matching method is first applied to the 3D-model gray map and the feature-line-segment gray map, the shortest-distance method is then applied for further judgment to the vehicles that satisfy the identification condition, and the shortest-distance method only needs to match the 3D-model pixels and feature-line-segment pixels that remain after the template matching.
5. The bus type recognition method according to claim 4, characterized in that the concrete steps of Step 5 are:
1. applying the template matching method to compute the matching coefficient η_1, which measures how large a proportion of the 3D-model gray map is covered by the overlap region between the 3D-model gray map and the dilated, thresholded LSD feature-line-segment gray map, where ψ is the 3D-model gray map, whose pixel values are 0 or 1; the LSD feature-line-segment gray map is morphologically dilated before the overlap is taken; ∑I denotes the pixel-wise sum over a gray image I; and Threshold_v(I) denotes thresholding the gray map I:
Threshold_v(I_{x,y}) = v, if I_{x,y} ≥ v; 0, otherwise;
2. judging whether the current vehicle is a bus according to η_1: if η_1 > TH_l, where TH_l is a set threshold, the feature line segments overlap the 3D model in a large proportion, the vehicle is considered a possible bus, and the next step applies the shortest-distance method for further judgment; if η_1 > TH_h, where TH_h is a set threshold with TH_h > TH_l, the vehicle is judged to be a bus and the judgment ends; if η_1 ≤ TH_l, the vehicle is judged to be a non-bus and the judgment ends;
3. applying the shortest-distance algorithm to the vehicles that require further judgment after the previous step, letting ψ_h denote the 3D-model map with the overlap region removed and the feature-line-segment map with the overlap region removed be defined correspondingly, where I(α) denotes the gray value of pixel α, p(α) denotes the position of pixel α in the image coordinate system, and the counter is initialized as sum = 0; the shortest-distance method then proceeds as follows:
(1) for each pixel α ∈ ψ_h with I(α) ≠ 0, establishing a square search window of side length step centered at p(α);
(2) computing the shortest distance d(α) from pixel α to the overlap-removed feature-line-segment map, specifically d(α) = min_β ||p(α) - p(β)||, where β ranges over the non-zero pixels of that map inside the search window and β_0 denotes the pixel attaining the minimum;
(3) if d(α) ≤ d_TH, setting sum = sum + 1 and I(β_0) = 0, where d_TH is a set threshold;
(4) repeating from step (1) until all pixels in ψ_h have been traversed;
4. computing the matching coefficient η_2:
η_2 = sum / ∑ψ_h
and judging whether the current vehicle is a bus according to η_2: if η_2 > TH_d, where TH_d is a set threshold, the feature line segments overlap the 3D model in a large proportion and the vehicle is judged to be a bus; otherwise, the vehicle is judged to be a non-bus and the judgment ends.
CN201210337115.0A 2012-09-12 2012-09-12 Bus type identifying method Expired - Fee Related CN102930242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210337115.0A CN102930242B (en) 2012-09-12 2012-09-12 Bus type identifying method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210337115.0A CN102930242B (en) 2012-09-12 2012-09-12 Bus type identifying method

Publications (2)

Publication Number Publication Date
CN102930242A true CN102930242A (en) 2013-02-13
CN102930242B CN102930242B (en) 2015-07-08

Family

ID=47645039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210337115.0A Expired - Fee Related CN102930242B (en) 2012-09-12 2012-09-12 Bus type identifying method

Country Status (1)

Country Link
CN (1) CN102930242B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301735A (en) * 2014-10-31 2015-01-21 武汉大学 Method and system for global encoding of urban traffic surveillance video
CN108932857A (en) * 2017-05-27 2018-12-04 西门子(中国)有限公司 A kind of method and apparatus controlling traffic lights
CN109614950A (en) * 2018-12-25 2019-04-12 黄梅萌萌 Remotely-sensed data on-line checking mechanism, method and storage medium
CN110307809A (en) * 2018-03-20 2019-10-08 中移(苏州)软件技术有限公司 A kind of model recognizing method and device
CN111340888A (en) * 2019-12-23 2020-06-26 首都师范大学 Light field camera calibration method and system without white image
WO2021175119A1 (en) * 2020-03-06 2021-09-10 华为技术有限公司 Method and device for acquiring 3d information of vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080294401A1 (en) * 2007-05-21 2008-11-27 Siemens Corporate Research, Inc. Active Shape Model for Vehicle Modeling and Re-Identification
CN101783076A (en) * 2010-02-04 2010-07-21 西安理工大学 Method for quick vehicle type recognition under video monitoring mode
CN101976341A (en) * 2010-08-27 2011-02-16 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080294401A1 (en) * 2007-05-21 2008-11-27 Siemens Corporate Research, Inc. Active Shape Model for Vehicle Modeling and Re-Identification
CN101783076A (en) * 2010-02-04 2010-07-21 西安理工大学 Method for quick vehicle type recognition under video monitoring mode
CN101976341A (en) * 2010-08-27 2011-02-16 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
D. Koller et al.: "Model-Based Object Tracking in Monocular Image Sequences of Road Traffic Scenes", International Journal of Computer Vision, vol. 10, no. 3, 30 June 1993 (1993-06-30), pages 257-281, XP000378021, DOI: 10.1007/BF01539538 *
Wei Xiaohui et al.: "Research on moving object detection method based on Gaussian mixture model", Journal of Applied Optics, vol. 31, no. 4, 31 July 2010 (2010-07-31), pages 574-578 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301735A (en) * 2014-10-31 2015-01-21 武汉大学 Method and system for global encoding of urban traffic surveillance video
CN104301735B (en) * 2014-10-31 2017-09-29 武汉大学 The overall situation coding method of urban transportation monitor video and system
CN108932857A (en) * 2017-05-27 2018-12-04 西门子(中国)有限公司 A kind of method and apparatus controlling traffic lights
CN110307809A (en) * 2018-03-20 2019-10-08 中移(苏州)软件技术有限公司 A kind of model recognizing method and device
CN110307809B (en) * 2018-03-20 2021-08-06 中移(苏州)软件技术有限公司 Vehicle type recognition method and device
CN109614950A (en) * 2018-12-25 2019-04-12 黄梅萌萌 Remotely-sensed data on-line checking mechanism, method and storage medium
CN111340888A (en) * 2019-12-23 2020-06-26 首都师范大学 Light field camera calibration method and system without white image
WO2021175119A1 (en) * 2020-03-06 2021-09-10 华为技术有限公司 Method and device for acquiring 3d information of vehicle
CN113435224A (en) * 2020-03-06 2021-09-24 华为技术有限公司 Method and device for acquiring 3D information of vehicle

Also Published As

Publication number Publication date
CN102930242B (en) 2015-07-08

Similar Documents

Publication Publication Date Title
Zhou et al. LIDAR and vision-based real-time traffic sign detection and recognition algorithm for intelligent vehicle
CN102930242A (en) Bus type identifying method
CN109190444B (en) Method for realizing video-based toll lane vehicle feature recognition system
CN102880877B (en) Target identification method based on contour features
Omachi et al. Detection of traffic light using structural information
CN102799859B (en) Method for identifying traffic sign
CN106127107A (en) The model recognizing method that multi-channel video information based on license board information and vehicle's contour merges
CN106156752B (en) A kind of model recognizing method based on inverse projection three-view diagram
CN103886760B (en) Real-time vehicle detecting system based on traffic video
CN103679205B (en) Assume based on shade and the Foregut fermenters method of layering HOG symmetrical feature checking
CN101783076A (en) Method for quick vehicle type recognition under video monitoring mode
CN102592114A (en) Method for extracting and recognizing lane line features of complex road conditions
CN102880863B (en) Method for positioning license number and face of driver on basis of deformable part model
CN108090429A (en) Face bayonet model recognizing method before a kind of classification
CN102902957A (en) Video-stream-based automatic license plate recognition method
CN103413145A (en) Articulation point positioning method based on depth image
CN105205785A (en) Large vehicle operation management system capable of achieving positioning and operation method thereof
CN103324958B (en) Based on the license plate locating method of sciagraphy and SVM under a kind of complex background
CN107886752B (en) A kind of high-precision vehicle positioning system and method based on transformation lane line
CN103544489A (en) Device and method for locating automobile logo
CN106919939B (en) A kind of traffic signboard tracks and identifies method and system
CN105404859A (en) Vehicle type recognition method based on pooling vehicle image original features
CN104102909A (en) Vehicle characteristic positioning and matching method based on multiple-visual information
CN105354533A (en) Bag-of-word model based vehicle type identification method for unlicensed vehicle at gate
CN107918775B (en) Zebra crossing detection method and system for assisting safe driving of vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150708