EP3104357A1 - Device for detecting vehicles on a traffic area - Google Patents

Device for detecting vehicles on a traffic area

Info

Publication number
EP3104357A1
Authority
EP
European Patent Office
Prior art keywords
cameras
virtual
traffic area
evaluation unit
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15171194.2A
Other languages
German (de)
English (en)
Inventor
Martin Mayer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kapsch TrafficCom AG
Original Assignee
Kapsch TrafficCom AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kapsch TrafficCom AG filed Critical Kapsch TrafficCom AG
Priority to EP15171194.2A
Priority to PCT/EP2016/061108
Publication of EP3104357A1
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/015 - Detecting movement of traffic to be counted or controlled with provision for distinguishing between two or more types of vehicles, e.g. between motor-cars and cycles
    • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 - Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Definitions

  • The present invention relates to a device for detecting vehicles on a traffic area in accordance with the preamble of claim 1.
  • A known method of recognizing, tracking and classifying objects is to use laser scanners.
  • The laser scanners are mounted above or to the side of the road surface and detect the objects as they travel past.
  • A disadvantage of this solution is that the laser scanners scan the objects in only one plane, so that complete detection of the objects is only possible as long as the vehicles are moving. If the objects do not move evenly through the scanning area, the measurement is impaired. For example, measurement of the object length is either not possible or only imprecise. In particular, this method is not well suited to traffic congestion or stop-and-go situations.
  • A further known solution is to detect the objects by means of stereo cameras.
  • The objects are detected from different viewing directions with at least two cameras. From the geometric position of corresponding points in the camera images, the position of the points can be calculated in three-dimensional space.
  • A disadvantage is that stereo cameras are very expensive instruments and that if even one of the two cameras is soiled or malfunctions, no calculations are possible from the images of the remaining unsoiled and functioning camera, resulting in a complete failure of the stereo camera system.
  • The aim of the invention is to provide a device for detecting vehicles on a traffic area which overcomes the disadvantages of the known prior art, which is inexpensive to install and operate, and which nevertheless allows precise and reliable detection of both moving and stationary vehicles on a large traffic area.
  • The device for detecting vehicles on a traffic area comprises a plurality of monocular digital cameras arranged in a distributed manner above the traffic area, transversely to the traffic area, wherein the viewing direction, i.e. the optical axis, of each camera is oriented downwards.
  • The viewing directions of the cameras lie in one plane, the cameras capture images at synchronized points in time, and an evaluation unit is provided which receives the images captured by the cameras via a wired or wireless data transmission path, wherein the relative positions of the cameras with regard to each other and to the traffic area, as well as their viewing directions, are known to the evaluation unit.
  • The evaluation unit is configured to combine at least two cameras, whose fields of view overlap each other in an image-capturing space, into a virtual camera, by calculating virtual images having virtual fields of view from the images captured by said cameras at the synchronized point in time and from the known positions and viewing directions of the combined cameras.
  • The image-capturing space is defined as the area underneath the cameras extending down to the traffic area.
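  • The calculation can be illustrated as follows (an illustrative sketch, not part of the patent text; all function and variable names are hypothetical). Assuming the known camera positions and viewing directions have been condensed into homographies that map each real image into the image plane of the virtual camera (e.g. under a planar road-surface assumption), a virtual image is obtained by warping and blending:

```python
import cv2
import numpy as np

def virtual_image(images, homographies, out_size):
    """Warp each real image into the virtual camera's image plane and
    average the overlapping regions.  'homographies' map real pixels to
    virtual pixels; out_size is (width, height) of the virtual image."""
    acc = np.zeros((out_size[1], out_size[0], 3), np.float32)
    weight = np.zeros((out_size[1], out_size[0]), np.float32)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, out_size).astype(np.float32)
        # Warp a mask of ones to know which virtual pixels this camera covers.
        mask = cv2.warpPerspective(np.ones(img.shape[:2], np.float32), H, out_size)
        acc += warped * mask[..., None]
        weight += mask
    return (acc / np.maximum(weight, 1e-6)[..., None]).astype(np.uint8)
```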
  • This device for detecting vehicles on a traffic area provides the advantage that the calculated virtual cameras can be positioned arbitrarily with regard to their positions and viewing directions. Thereby, a so-called "multilane free-flow" monitoring can be realized, wherein the vehicles are not confined to moving along predefined traffic lanes. Rather, the lanes can be altered by traffic guiding personnel, the vehicles can change between predefined traffic lanes within the traffic area during monitoring operation, or the vehicles can even use arbitrary portions of the traffic area.
  • The use of monocular cameras is attractive in terms of price, since many camera manufacturers already offer such cameras for general purposes. Soiling or malfunctioning of a single camera will not lead to a breakdown of the system, thereby providing an almost failsafe system.
  • The evaluation unit may generate a plurality of instances of virtual cameras, i.e. a plurality of virtual cameras.
  • The virtual cameras can be positioned arbitrarily with regard to their positions and viewing directions, and said positioning can be changed retroactively.
  • Circumferential views of a vehicle can be calculated such that a virtual camera is apparently made to "travel" sideways around the vehicle.
  • If traffic lanes on the traffic area are altered (added, removed or relocated), this can be compensated for by the software executed in the evaluation unit shifting the positions of the virtual cameras monitoring the traffic lanes, without service personnel having to readjust the actual cameras on location.
  • With this embodiment of the invention it is also possible to implement automatic number plate identification. Due to the combination of a plurality of real images into one virtual image, an increased resolution and dynamic range of the virtual image is achieved compared to the resolution and dynamic range of the images captured by the real cameras. Thereby the automatic number plate identification performs much better than in hitherto used systems, since the border lines and transition patterns between the characters of the number plate and its background are sharper than in the real images.
  • The evaluation unit may change the instances of virtual cameras over the course of time by altering the combinations of cameras, with the change in the instances of virtual cameras optionally being carried out as a function of properties of detected vehicles, such as their dimensions, vehicle speed or direction of travel.
  • For example, a virtual camera is centered with regard to the longitudinal axis of the vehicle, and further virtual cameras are positioned obliquely to the left and/or to the right thereof for counting wheels via a lateral view. This allows calculating the length of the vehicle and counting its number of axles.
  • The fields of view of at least three adjacent cameras may overlap each other in the image-capturing space, which is the area underneath the cameras extending down to the traffic area.
  • This embodiment enables multiple overlaying of portions of real images captured by the cameras, thereby tremendously increasing the image quality, particularly the resolution and dynamic range, of the calculated virtual images. This is a particular advantage when one or more cameras have been soiled.
  • The cameras are mounted to an instrument axis of a portal or a gantry, which instrument axis is oriented transversely across the traffic area.
  • The instrument axis can be configured as a physical axis, e.g. a beam or rail of the portal or gantry, or as a geometrical axis, i.e. a line along the portal or gantry.
  • One portal or gantry, respectively, is sufficient.
  • Said portal or gantry does not have to be arranged precisely transversely to the traffic area, since deviations can be compensated by the virtual image calculation software of the evaluation unit. Algorithms for this are known to skilled persons.
  • For the purpose of the invention it is preferred to use monocular digital cameras of the same type. This guarantees cheap purchase prices because of high piece numbers, and simple spare-part warehousing. Further, it is sufficient to use monocular digital cameras with an image resolution of at least 640x480 pixels. Hence, almost all commercially available, cheap cameras may be used. Of course, due to the ever-growing image resolutions of cheap commercially available cameras, it is to be expected that in the future cameras with higher resolutions might be employed without raising costs. On the other hand, a low resolution of the image sensors of the cameras provides the advantage of reduced computing and storage effort. As already explained above, superimposing the real images in the course of calculating virtual images enhances the image quality and resolution of the virtual image, which is another argument for using low-resolution cameras.
  • The plane in which the viewing directions of the cameras lie may be inclined forward or backward onto the traffic area, as viewed in the direction of travel of the vehicles.
  • This embodiment allows capturing the number plates at a less oblique angle compared to vertically captured images of the number plates. Thereby, the characters of the number plate can be recognized more easily and with less computational effort.
  • In a further embodiment, the plane which is inclined forward or backward onto the traffic area constitutes a first plane, and further monocular digital cameras are arranged in a distributed manner above the traffic area, transversely to the traffic area, with their viewing directions oriented downwards, wherein the viewing directions of said further cameras lie in a second plane that is oriented vertically onto the traffic area.
  • This embodiment allows for easy recognition of number plates by using virtual images calculated from real images captured by cameras assigned to the first plane, whereas an exact calculation of the length of vehicles can be carried out by using virtual images calculated from real images captured by cameras assigned to the second plane.
  • This embodiment further allows classifying vehicles in respect of various properties on the basis of different viewing angles.
  • Handing over a vehicle detected at the first plane to the second plane is used to track the vehicle along the path being travelled.
  • Hardware data compression units may be connected into the data transmission path between the cameras and the evaluation unit; these data compression units compress the image data either in a loss-free manner, e.g. as JPEG images with Huffman coding or as TIFF (tagged image file format) images, or in a lossy manner, e.g. as JPEG images or by wavelet compression. A sketch of the two paths follows below.
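  • As an illustrative sketch of the two compression paths (not the embedded hardware modules themselves; OpenCV is used here merely as a stand-in codec, and PNG substitutes for the loss-free Huffman-coded JPEG or TIFF variants named above):

```python
import cv2

frame = cv2.imread("camera_frame.png")  # hypothetical captured image

# Lossy path: JPEG, quality traded against bandwidth on the transmission path.
ok, jpeg_bytes = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])

# Loss-free path: PNG as a stand-in for Huffman-coded JPEG or TIFF.
ok, png_bytes = cv2.imencode(".png", frame, [cv2.IMWRITE_PNG_COMPRESSION, 6])

print(len(jpeg_bytes), len(png_bytes))  # compressed sizes in bytes
```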
  • The evaluation unit may also combine images of the cameras which have been captured successively. This embodiment allows detecting an entire vehicle and its length as the vehicle moves underneath the cameras in the course of time.
  • The detection of vehicles can be enhanced by taking into account their weights.
  • This is accomplished by building at least one weighing sensor, preferably a piezoelectric weighing sensor, into the traffic area in an area underneath the cameras.
  • The device according to the present invention may further be equipped with wireless transceivers adapted for tracking vehicles.
  • These wireless transceivers may be mounted on the portal or the gantry, and may further be adapted to communicate with tracking devices mounted in the vehicles, such as the so-called "GO boxes".
  • Fig. 1 shows a device 1 for detecting vehicles 2 on a traffic area 3.
  • The device 1 comprises a plurality of monocular digital cameras 5.1-5.n arranged at equal distances from each other above the traffic area 3, transversely to the traffic area 3.
  • The monocular digital cameras 5.1-5.n are of the same type and may be selected from general-purpose, inexpensive cameras available on the market. There are no specific requirements on the quality and resolution of the cameras. For instance, an image resolution of 640x480 pixels will be sufficient for the purpose of the present invention.
  • Although the cameras 5.1-5.n are arranged at equal distances from each other, this is not mandatory, because unequal distances can be balanced out computationally by means of simple geometric transformations.
  • The viewing directions 6.1-6.n of all cameras 5.1-5.n are oriented downwards, but not necessarily vertically downwards, as will be explained later.
  • The viewing direction 6.1-6.n of each camera 5.1-5.n coincides with the optical axis of the optical system of each camera, i.e. the image sensor, such as a CCD sensor, and a lens system.
  • The viewing direction 6.1-6.n defines the center of the field of view 7.1-7.n of each camera 5.1-5.n. It is preferred that the viewing directions 6.1-6.n of all cameras 5.1-5.n are arranged parallel to each other. This, however, is not an indispensable prerequisite for the present invention.
  • If the viewing directions 6.1-6.n of the cameras 5.1-5.n are not arranged parallel to each other, this can be balanced out computationally by applying simple geometric transformations, as sketched below.
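  • One such geometric transformation can be sketched as follows (illustrative only, with hypothetical names): a camera whose viewing direction deviates from the common orientation by a known rotation R can be rectified with the pure-rotation homography H = K·R·K⁻¹, K being the camera's intrinsic matrix:

```python
import cv2
import numpy as np

def rectify_rotation(img, K, R):
    """Compensate a known deviation R of the viewing direction by the
    pure-rotation homography H = K @ R @ inv(K)."""
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

# Example: undo a 3-degree tilt about the camera's x-axis
# (K is an assumed intrinsic matrix for a 640x480 sensor).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, _ = cv2.Rodrigues(np.array([np.deg2rad(3.0), 0.0, 0.0]))
# rectified = rectify_rotation(frame, K, R)  # frame: a captured image
```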
  • The cameras 5.1-5.n are mounted on an instrument axis 9 of a cross-beam 4b of a gantry 4, which gantry 4 further comprises two posts 4a connected by the cross-beam 4b, which cross-beam 4b traverses the traffic area 3.
  • The instrument axis 9 is oriented transversely across the traffic area 3.
  • The fields of view 7.1-7.n of a plurality of adjacent cameras 5.1-5.n overlap each other in an image-capturing space 10, which is the area underneath the cameras extending down to the traffic area 3.
  • The cameras 5.1-5.n are mounted so closely to each other that even at the margins of the traffic area 3 at least three fields of view overlap each other.
  • The image data of the images captured by the cameras 5.1-5.n are sent to data compression units 11.1-11.j via a first data transmission path 12 (wired or wireless).
  • The data compression units 11.1-11.j carry out data compression algorithms on the image data in order to considerably reduce the amount of data.
  • The data compression units 11.1-11.j are configured as embedded electronic hardware modules with built-in data compression algorithms.
  • The data compression algorithms are configured either as loss-free algorithms, e.g. JPEG with Huffman coding or TIFF coding, or as lossy algorithms, such as JPEG or wavelet algorithms.
  • The number of cameras 5.1-5.n does not necessarily correspond to the number of data compression units 11.1-11.j. As can be seen in the depicted example, each data compression unit 11.1-11.j cooperates with five cameras 5.1-5.n.
  • The compressed image data are sent from the data compression units 11.1-11.j via a second data transmission path 13 (wired or wireless) to an evaluation unit 14.
  • The evaluation unit 14 is configured as a computer, typically a server computer, which may be located either in a safe place adjacent to the traffic area 3 or remote from it.
  • The evaluation unit 14 is set up with the relative positions of the cameras 5.1-5.n with regard to each other and to the traffic area 3, as well as with the viewing directions 6.1-6.n.
  • The evaluation unit 14 is configured to execute software code that combines the images of at least two cameras 5.1-5.n whose fields of view 7.1-7.n overlap each other at least partly in the image-capturing space 10. By executing said software portions, the evaluation unit 14 combines the selected cameras 5.1-5.n into one or more virtual cameras 15.1-15.8. It is essential that the virtual images of the virtual cameras 15.1-15.8 are calculated from images captured by the real cameras 5.1-5.n at a synchronized point in time.
  • The virtual images are generated by the software code by computing a 2-dimensional feature set, e.g. in accordance with the FAST algorithms well known to those skilled in the art, which are for instance disclosed in Rosten, Edward, and Tom Drummond, "Machine learning for high-speed corner detection", Computer Vision - ECCV 2006, Springer Berlin Heidelberg, 2006, pp. 430-443.
  • These algorithms are usually embedded in standard computer-vision software libraries, such as OpenCV or AMD Framewave.
  • This 2-dimensional feature set is then looked up in the images captured at the same synchronized point in time by the other cameras 5.1-5.n combined into the virtual camera 15.1-15.8.
  • The position of an object having the 2-dimensional feature set is derived from the other images, thereby allowing all these images to be put together.
  • Putting together images as explained is known in the art as "stitching", see http://en.wikipedia.org/wiki/Image_stitching. A sketch of this feature-based stitching follows below.
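  • A minimal sketch of such feature-based stitching (illustrative only; since FAST supplies corner locations but no descriptors, ORB descriptors are added here as an assumption beyond the text, and all names are hypothetical):

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Detect FAST corners in two synchronized images, describe them with
    ORB, match them across the images, and warp img_b onto img_a's image
    plane via the estimated homography."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()
    kp_a, des_a = orb.compute(img_a, fast.detect(img_a, None))
    kp_b, des_b = orb.compute(img_b, fast.detect(img_b, None))
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # needs >= 4 matches
    canvas = cv2.warpPerspective(img_b, H, (img_a.shape[1] * 2, img_a.shape[0]))
    canvas[:, :img_a.shape[1]] = img_a  # overlay the reference image
    return canvas
```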
  • Delimiting the object, such as a vehicle 2 or its number plate 2a, from the background of the images is done by well-known foreground-background separation methods, such as blob detection; one possible realization is sketched below. It is advisable to position "lateral" virtual cameras such that two separated objects in the virtual image do not interfere with each other. This allows deriving the maximum height of a vehicle from side views "captured" by the lateral virtual cameras.
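  • One possible realization of such a foreground-background separation (an illustrative sketch; background subtraction via MOG2 is an assumption, as the text only names blob detection generically):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def vehicle_blobs(frame, min_area=2000):
    """Separate moving vehicles from the road surface and return their
    bounding boxes; min_area suppresses small noise blobs."""
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```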
  • In the overlapping fields of view, image portions (pixels) of interest are captured several times. The multiple occurrence of the image portions of interest in multiple images enables combining them several times, thereby enhancing their information content, such as their dynamic range.
  • By retrieving image portions and objects in the images captured at synchronized points in time, further pixels are allocated to the retrieved image portions and objects in the course of calculating the virtual images, which improves the "virtual" resolution of the retrieved image portions and objects in the virtual image.
  • In this way, the borders of characters in the number plates 2a of the vehicles 2 are sharpened, resulting in easier and more precise character recognition.
  • The sharpened borders of objects in the virtual images provide further advantages, such as a better recognition of the contours of vehicles.
  • Fig. 3 depicts three monocular digital cameras 5.7, 5.8, 5.9 having parallel viewing directions 6.7, 6.8, 6.9 and fields of view 7.7, 7.8, 7.9.
  • The cameras 5.7, 5.8, 5.9 capture partly overlapping images 27, 28, 29 at synchronized points in time.
  • The image data of the images 27, 28, 29 are sent to the evaluation unit 14, which analyzes the images 27, 28, 29 with regard to 2-dimensional feature sets, such as a 2-dimensional feature set corresponding to an object 25 present in all images 27, 28, 29. Having found this 2-dimensional feature set, the images 27, 28, 29 are "stitched" together, as has been explained above.
  • The evaluation unit 14 selects a portion of the images 27, 28, 29 as the area of a virtual image 26 and generates the virtual image 26 by means of pixel operations, such as summing up the pixels of the portions of the images 27, 28, 29 that correspond to the area of the virtual image 26, as sketched below.
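  • The pixel operation can be sketched as follows (illustrative; the portions are assumed to have already been aligned to the area of the virtual image 26, e.g. by the stitching step above). Accumulating several 8-bit samples in a wider data type is what widens the effective dynamic range and averages out sensor noise:

```python
import numpy as np

def superimpose(aligned_portions):
    """Sum corresponding pixels of image portions already aligned to the
    virtual image and rescale back to 8 bit."""
    acc = np.zeros(aligned_portions[0].shape, np.float32)
    for portion in aligned_portions:
        acc += portion.astype(np.float32)
    acc /= len(aligned_portions)  # back to the 0..255 range
    return np.clip(acc, 0, 255).astype(np.uint8)
```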
  • The system includes a plurality of cameras which direct multiple simultaneous streams of analog or digital input into an image transformation engine, which processes those streams to remove distortion and redundant information, creating a single output image in a cylindrical or spherical perspective. After removing distortion and redundant information, the pixels of the image data streams are seamlessly merged, such that a single stream of digital or analog video is output from said engine.
  • The output stream of digital or analog video is directed to an image clipper, which is controlled by a pan-tilt-rotation-zoom controller to select a portion of the merged panoramic or panospheric images for display and viewing.
  • The evaluation unit 14 is configured to generate a plurality of virtual cameras. These virtual cameras may have different virtual viewing directions and different virtual fields of view. For instance, as depicted in Fig. 1, the evaluation unit 14 combines the images captured by the two cameras 5.1 and 5.2 into a virtual camera 15.1 having such a virtual viewing direction that it functions as a first side-view camera for a first lane of the traffic area 3. Further, the evaluation unit 14 generates a second virtual camera 15.2 by combining the images captured by four cameras 5.1-5.4. This second virtual camera 15.2 has such a virtual viewing direction that it functions as a top-view camera for the first lane of the traffic area 3. The evaluation unit 14 also generates a third virtual camera 15.3 by combining the images captured by three cameras 5.5-5.7.
  • This third virtual camera 15.3 has such a virtual viewing direction that it functions as a second side-view camera for the first lane of the traffic area 3. Thereby, vehicles 2 moving along the first lane are surveilled from different virtual viewing directions by the three virtual cameras 15.1-15.3.
  • The evaluation unit 14 is further configured to form a group 16 consisting of a plurality of virtual cameras, such as the three virtual cameras 15.4, 15.5, 15.6, and to virtually move the group 16 of virtual cameras with respect to the traffic area 3.
  • The three virtual cameras 15.4, 15.5, 15.6 are combined from different real cameras 5.1-5.n such that they surveil a second lane of the traffic area 3 from different virtual viewing directions.
  • If the second lane is relocated, the evaluation unit virtually moves the entire group 16 of virtual cameras to a new position above the relocated second lane. This is done by simply changing the real cameras 5.1-5.n that are combined into the virtual cameras, as sketched below.
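  • In software terms this relocation is a pure bookkeeping operation, as the following sketch illustrates (hypothetical names and camera indices, not taken from the patent text):

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    """A virtual camera: the set of real cameras whose images are combined,
    plus the virtual pose they are combined into."""
    real_cameras: tuple   # indices into the physical cameras 5.1-5.n
    pose: str             # e.g. "side_left", "top", "side_right"

# A group like group 16, surveilling one lane from three virtual directions.
group_16 = [
    VirtualCamera(real_cameras=(8, 9), pose="side_left"),
    VirtualCamera(real_cameras=(9, 10, 11, 12), pose="top"),
    VirtualCamera(real_cameras=(12, 13), pose="side_right"),
]

def shift_group(group, offset):
    """'Move' the whole group across the traffic area by re-assigning the
    real cameras that are combined; no hardware is touched."""
    return [VirtualCamera(tuple(i + offset for i in vc.real_cameras), vc.pose)
            for vc in group]

group_16_relocated = shift_group(group_16, 3)  # lane moved three cameras over
```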
  • By combining images from a plurality of or even from all cameras 5.1-5.n, the evaluation unit 14 generates a virtual camera that "captures" 3-dimensional panorama images.
  • The evaluation unit 14 is further able to reposition a virtual camera 15.7 arbitrarily with regard to its position and viewing direction, which can also be done retroactively from stored images. Thereby, circumferential views of a vehicle can be generated. This is shown in Fig. 1, where the virtual camera 15.7 is first positioned at the left side of the traffic area, then travels to the center (numeral 15.7') and later reaches an end position (numeral 15.7") at the right side of the traffic area 3.
  • The virtual images of the virtual cameras 15.1-15.7 can be used by classifying units 17 in order to detect properties of objects appearing in the virtual images.
  • Such properties comprise number plate characters, the number of axles of a vehicle 2, dimensions of the vehicle 2, and the like.
  • For axle counting, a virtual camera is centered with regard to the longitudinal axis of the vehicle and further virtual cameras are positioned obliquely to the left and/or to the right thereof. The obliquely directed virtual cameras also serve for counting wheels via a lateral view.
  • The classifying units 17 are usually realized as software code portions executed by the evaluation unit 14.
  • The evaluation unit 14 can virtually move a virtual camera to and fro along a lane of the traffic area by combining images of the cameras which have been captured and stored successively.
  • The plane 8, which is inclined forward or backward onto the traffic area 3, constitutes a first plane.
  • Further monocular digital cameras 18.1-18.m are arranged spaced apart from each other along the cross-beam 4b of the gantry 4 above the traffic area 3.
  • The viewing directions 19.1-19.m of the further cameras 18.1-18.m lie in a second plane 20 that is oriented vertically onto the traffic area 3.
  • The evaluation unit 14 combines the cameras 18.1-18.m into one or more vertical virtual cameras 21 in the same way as has been explained above for the cameras 5.1-5.n and the virtual cameras 15.1-15.8.
  • The virtual images generated from the vertical virtual camera 21 are perfectly suited for measuring the length of a vehicle, particularly when successively generated virtual images are used for detecting the length of the vehicle.
  • Wireless transceivers 23 adapted for tracking vehicles 2 are mounted on the gantry 4. These transceivers 23 can work, e.g., according to the CEN- or UNI-DSRC, IEEE 802.11p WAVE, ETSI ITS-G5 or ISO 18000 RFID standards as the relevant communication technology for electronic toll collection, but other transmission technologies in proprietary realisations are possible. Depending on the standard used, the communication uses omnidirectional or directional antennas at the transceiver 23, which may result in a mounting location other than that shown in Fig. 2.
  • Fig. 4 shows an application of the invention for axle counting of vehicles.
  • In Fig. 4 the virtual fields of view of four virtual cameras CA, CB, CC, CD are depicted. These virtual cameras are offset from each other across a traffic area 3, wherein virtual camera CA is the leftmost camera and virtual camera CD is the rightmost camera.
  • If the images of the virtual cameras CA, CB, CC, CD are displayed successively, it seems to an observer that a camera moves across the traffic area 3, either from left to right (starting with the image of virtual camera CA) or from right to left (starting with the image of virtual camera CD).
  • The virtual cameras CA, CB, CC, CD may have different viewing directions.
  • The virtual field of view of virtual camera CA shows a part of the left side VAS of a first vehicle and a part of the top VBT of a second vehicle.
  • The virtual field of view of virtual camera CB shows a part of the top VAT of the first vehicle, a part of the left side VAS of the first vehicle depicting two wheels WA arranged one behind the other, the traffic area 3, a part of the right side VBS of the second vehicle depicting two wheels WB arranged one behind the other, and a part of the top VBT of the second vehicle.
  • In the image of virtual camera CB, the wheels WA of the first vehicle are represented by ellipses with small eccentricity, whereas the wheels WB of the second vehicle are represented by very elongated ellipses, i.e. ellipses with large eccentricity.
  • The image of virtual camera CC comprises the same elements as that of virtual camera CB, with the difference that the eccentricity of the ellipses representing the wheels WA of the first vehicle has increased and the eccentricity of the ellipses representing the wheels WB of the second vehicle has decreased.
  • The image of virtual camera CD differs from that of virtual camera CC in as much as the top of the second vehicle is not shown, the eccentricity of the ellipses representing the wheels WA of the first vehicle has further increased to a very elongated shape, and the eccentricity of the ellipses representing the wheels WB of the second vehicle has further decreased to almost the shape of a circle.
  • Hence, axle counting for the first vehicle, i.e. counting the wheels WA, will best be carried out by means of the image of virtual camera CB, whereas axle counting for the second vehicle, i.e. counting the wheels WB, will be carried out by means of the image of virtual camera CD, as sketched below.
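  • The eccentricity criterion can be sketched as follows (illustrative only; the wheel contours are assumed to have been extracted already, e.g. by the blob detection above, and all names are hypothetical):

```python
import cv2
import numpy as np

def wheel_eccentricity(contour):
    """Fit an ellipse to a wheel contour (needs >= 5 points) and return its
    eccentricity: 0 for a circle, values near 1 for a strongly oblique,
    very elongated view."""
    (_, _), (d1, d2), _ = cv2.fitEllipse(contour)  # axis diameters
    a, b = max(d1, d2) / 2.0, min(d1, d2) / 2.0    # semi-axes
    return np.sqrt(1.0 - (b / a) ** 2)

def best_view_for_axle_count(wheel_contours_per_view):
    """Given wheel contours per virtual camera (e.g. CA..CD), choose the
    view in which the wheels look most circular, i.e. the best lateral
    view for counting axles."""
    mean_ecc = {view: np.mean([wheel_eccentricity(c) for c in contours])
                for view, contours in wheel_contours_per_view.items()}
    return min(mean_ecc, key=mean_ecc.get)
```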

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
EP15171194.2A 2015-06-09 2015-06-09 Device for detecting vehicles on a traffic area Withdrawn EP3104357A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15171194.2A EP3104357A1 (fr) 2015-06-09 2015-06-09 Device for detecting vehicles on a traffic area
PCT/EP2016/061108 WO2016198242A1 (fr) 2015-06-09 2016-05-18 Device for detecting vehicles in a traffic area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP15171194.2A EP3104357A1 (fr) 2015-06-09 2015-06-09 Device for detecting vehicles on a traffic area

Publications (1)

Publication Number Publication Date
EP3104357A1 (fr) 2016-12-14

Family

ID=53434220

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15171194.2A Withdrawn EP3104357A1 (fr) 2015-06-09 2015-06-09 Dispositif de détection de véhicules sur une zone de trafic

Country Status (2)

Country Link
EP (1) EP3104357A1 (fr)
WO (1) WO2016198242A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200111235A (ko) * 2018-01-31 2020-09-28 3M Innovative Properties Company Virtual camera array for inspection of manufactured webs

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7728877B2 (en) * 2004-12-17 2010-06-01 Mitsubishi Electric Research Laboratories, Inc. Method and system for synthesizing multiview videos
WO2008012821A2 (fr) * 2006-07-25 2008-01-31 Humaneyes Technologies Ltd. Imaging for computer graphics
JP5239991B2 (ja) * 2009-03-25 2013-07-17 Fujitsu Limited Image processing device and image processing system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5657073A (en) 1995-06-01 1997-08-12 Panoramic Viewing Systems, Inc. Seamless multi-camera panoramic imaging with distortion correction and selectable field of view
JP2001103451A (ja) * 1999-09-29 2001-04-13 Toshiba Corporation Image monitoring system and image monitoring method
KR100852683B1 (ko) * 2007-08-13 2008-08-18 (주)한국알파시스템 License plate recognition system and license plate recognition method
KR20090030666A (ko) * 2007-09-20 2009-03-25 조성윤 Automatic weigh-in-motion system for high-speed vehicles
EP2306426A1 (fr) * 2009-10-01 2011-04-06 Kapsch TrafficCom AG Device for detecting vehicles on a traffic lane
WO2013128427A1 (fr) * 2012-03-02 2013-09-06 Leddartech Inc. System and method for multipurpose traffic detection and characterization
WO2015012219A1 (fr) * 2013-07-22 2015-01-29 Toshiba Corporation Vehicle monitoring device and vehicle monitoring method
KR101432512B1 (ko) * 2014-02-17 2014-08-25 (주) 하나텍시스템 Virtual image display device for road condition control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROSTEN, EDWARD; DRUMMOND, TOM: "Machine learning for high-speed corner detection", Computer Vision - ECCV 2006, Springer, 2006, pp. 430-443

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115004273A (zh) * 2019-04-15 2022-09-02 Digital reconstruction method, apparatus and system for a traffic road

Also Published As

Publication number Publication date
WO2016198242A1 (fr) 2016-12-15

Similar Documents

Publication Publication Date Title
TWI798305B System and method for updating highly automated driving maps
US8548229B2 (en) 2013-10-01 Method for detecting objects
KR101647370B1 (ko) Traffic information management system using camera and radar
EP3876141A1 (fr) Object detection method, related device, and computer storage medium
CN107273788B (zh) 在车辆中执行车道检测的成像***与车辆成像***
EP1796043B1 (fr) Détection d'objets
US9363483B2 (en) Method for available parking distance estimation via vehicle side detection
Premebida et al. Fusing LIDAR, camera and semantic information: A context-based approach for pedestrian detection
JP6270102B2 (ja) Moving-surface boundary recognition device, mobile equipment control system using the same, moving-surface boundary recognition method, and program for moving-surface boundary recognition
US20160232410A1 (en) Vehicle speed detection
JP2011505610A (ja) Method and device for mapping distance sensor data onto image sensor data
US20160247398A1 (en) Device for tolling or telematics systems
JPWO2017134936A1 (ja) Object detection device, apparatus control system, imaging device, object detection method, and program
KR20150029551 (ko) Determining the departure lane of a moving item merging into an arrival lane
EP3029602A1 (fr) Procédé et appareil de détection d'un espace d'entraînement libre
JP2016206801A (ja) Object detection device, mobile equipment control system, and object detection program
JP2017207874A (ja) Image processing device, imaging device, mobile equipment control system, image processing method, and program
EP3104357A1 (fr) Device for detecting vehicles on a traffic area
CN111465937B (zh) Face detection and recognition method using a light field camera system
CN110249366B (zh) Image feature output device, image recognition device, and storage medium
KR101073053B1 (ko) Automatic traffic information extraction system and extraction method thereof
JP3629935B2 (ja) Speed measurement method for moving bodies and speed measurement device using the method
CN112861599A (zh) Method and device for classifying objects on a road, computer program, and storage medium
JP2021092996A (ja) Measurement system, vehicle, measurement method, measurement device, and measurement program
TW202341006A (zh) Object tracking integration method and integration device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170615