CN111311656B - Moving object detection method and device suitable for vehicle-mounted fisheye camera - Google Patents
- Publication number
- CN111311656B CN111311656B CN202010106323.4A CN202010106323A CN111311656B CN 111311656 B CN111311656 B CN 111311656B CN 202010106323 A CN202010106323 A CN 202010106323A CN 111311656 B CN111311656 B CN 111311656B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a moving object detection method suitable for a vehicle-mounted fisheye camera, comprising the following steps: acquiring the current frame image and the previous frame image of the scene around the vehicle captured by the fisheye camera; acquiring matched feature points of the current frame image and the previous frame image to form feature point pairs; performing spherical normalized coordinate calculation on the matched feature point pairs; calculating the spherical offset distance l of each feature point pair; judging, according to the spherical offset distance l, whether the feature point of the current frame image in each feature point pair is a motion feature point; clustering the obtained motion feature points; and generating moving target regions. The invention also discloses a moving object detection device. The method and device can accomplish moving object detection based on a fisheye camera, constitute a low-cost video detection and recognition technology, and can be applied to the field of safe automobile driving.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a moving target detection method and device suitable for a vehicle-mounted fisheye camera.
Background
In recent years, with the rapid growth of car ownership, the incidence of traffic accidents has also risen rapidly, and traffic accidents have become a major public hazard. While a car is running, other moving vehicles, pedestrians, and other moving targets suddenly crossing the road are the most likely sources of traffic safety hazards. If moving targets on the road can be detected quickly and accurately, their trajectories tracked and predicted, and an alarm raised in time when there is a danger of collision, the incidence of traffic accidents can be greatly reduced, which is of great significance to road traffic safety. Moving object detection has therefore become an important research problem in the fields of automobile assisted driving and automatic driving.
Existing moving object detection technology can be divided, according to whether the sensor moves, into detection with a stationary sensor and detection with a moving sensor. The former is mainly applied to traffic monitoring, security, and the like; existing research results are plentiful, and the related theories and methods have gradually matured. The latter is mainly applied to intelligent vehicles, intelligent robots, and the like; it must distinguish the target's own motion from the apparent motion of the static scene caused by sensor movement, is more challenging, and still has many key technical problems to overcome. Currently, the sensors used for moving target detection while the camera itself is moving mainly comprise ultrasonic, laser, millimeter-wave radar, and vision-based sensors. Ultrasonic, laser, and radar sensors conveniently obtain scene distance information, but suffer from a limited amount of acquired information, mutual interference when several sensors work together, high price, and other problems, which hinder large-scale vehicle-mounted application. Vision sensors have the advantages of rich information acquisition, short sampling period, resistance to interference, light weight, low energy consumption, convenient use, and low cost, and have become increasingly popular; among them the fisheye camera receives particular attention. Compared with an ordinary camera, a fisheye camera has a much larger imaging field of view. In a vehicle, the driver's field of view is limited by the body structure, so applying a fisheye camera greatly reduces the driver's blind spots, provides environmental information over a larger area around the vehicle, and effectively reduces collision accidents.
However, the fisheye camera also introduces adverse factors such as distortion of the true shape of objects in the image, so that many traditional feature-based moving object detection methods are no longer applicable. In the prior art, to handle the imaging deformation of the fisheye camera, de-distortion steps such as multi-plane correction or cylindrical correction must be carried out on the fisheye image; such correction causes pixel precision loss, and image correction is generally time-consuming.
In addition, the Chinese patent application No. 201510591326.0 provides a method and apparatus for identifying the type of image feature points, but that method can only obtain the positions of all motion feature points in the image and cannot distinguish the positions of different moving objects in the image.
The Chinese patent application No. 201410520226.4 provides an obstacle detection method and device for detecting moving objects approaching the vehicle while it is running; it cannot detect objects moving in arbitrary directions. Moreover, that method clusters all feature point pairs in the image, using optical flow magnitude, direction, and position information to obtain several feature point sets, then judges whether each set is an approaching feature point set, and finally determines the approaching sets to be parts of moving objects. That is, the final detection result is merely the collection of approaching feature point sets, and the method cannot distinguish different moving targets with similar optical flow in the image.
Disclosure of Invention
To solve the above problems, the invention aims to provide a moving object detection method and device suitable for a vehicle-mounted fisheye camera. The method avoids the pixel precision loss caused by correction and the time consumed by image correction by computing the virtual unit sphere normalized coordinates of points; detecting motion feature points by computing the spherical offset of points on the virtual unit sphere yields a more accurate spherical offset and thus a better detection effect; and clustering the motion feature points on their pixel positions in the fisheye image, their optical flow directions, and their depth values yields the positions of the different moving targets in the fisheye image, realizing moving target detection.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a moving object detection method suitable for an on-vehicle fisheye camera, applied to a vehicle equipped with a monocular fisheye camera, comprising the steps of:
acquiring a current frame image and a previous frame image of a surrounding scene of the vehicle shot by the fisheye camera;
acquiring feature points matched with a current frame image and a previous frame image to form feature point pairs;
performing spherical normalized coordinate calculation on the matched characteristic point pairs, namely converting the coordinates of each characteristic point in the characteristic point pairs into spherical normalized coordinates;
calculating the spherical offset distance l of the feature point pairs;
judging, according to the spherical offset distance l, whether the feature point of the current frame image in each feature point pair is a motion feature point: a threshold l_thre is set for the spherical offset distance l obtained from the feature point pair; if l is larger than the set threshold l_thre, the feature point is considered a motion feature point; if l is smaller than the threshold l_thre, the offset distance l is considered to be caused by errors and the feature point is not a motion feature point;
clustering the obtained motion feature points;
and generating moving target regions from the clustered motion feature points.
Further, the step of obtaining feature points matched with the current frame image and the previous frame image to form feature point pairs specifically includes:
The Harris feature point detection method is combined with the Lucas–Kanade feature point tracking method to obtain matched feature point pairs in the current frame image and the previous frame image:
Let the current frame image be I_t and the previous frame image be I_{t-1}. First, the Harris feature point detection method is used to obtain the feature point set S_t in image I_t; then the Lucas–Kanade feature point tracking method is used to track the feature point set S_t in image I_{t-1}, obtaining the feature point set S_{t-1} matched with S_t. Feature points in S_t whose tracking failed are deleted, giving the set S_t′. The feature points in S_{t-1} and S_t′ correspond one to one, forming the matched feature point pairs of images I_{t-1} and I_t.
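As a concrete illustration of the corner detector named in this step, the Harris response R = det(M) − k·trace(M)² can be sketched in a few lines of NumPy. This is a minimal sketch with a 3×3 box window standing in for the usual Gaussian weighting; a real system would use an optimized implementation (for example OpenCV's) and pair it with Lucas–Kanade tracking as described above:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 on a grayscale image."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients
    def box3(a):                                     # 3x3 box window sum
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# A step corner: the strongest response appears at the corner of the square.
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
r, c = np.unravel_index(np.argmax(harris_response(img)), img.shape)
```

On this synthetic image the response peaks at the square's corner, while pure edges get a negative score (det ≈ 0, trace large), which is exactly why Harris points are stable enough to track between frames.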
The calculation of the spherical normalized coordinates of the matched characteristic point pairs specifically comprises the following steps:
Taking the camera optical center as the origin and the main optical axis as the Z axis, a camera coordinate system O-XYZ is established. Let P = (X, Y, Z)^T be the real-world scene point corresponding to a fisheye image feature point, and define the spherical normalized coordinates of the feature point as (x_s, y_s, z_s)^T, calculated as
(x_s, y_s, z_s)^T = (X, Y, Z)^T / sqrt(X² + Y² + Z²). (1)
For a monocular fisheye camera, however, the coordinates (X, Y, Z)^T of the point P are usually unknown, so the spherical normalized coordinates (x_s, y_s, z_s)^T of a feature point are computed from its pixel coordinates (u, v) through the fisheye camera imaging model. The calculation process is as follows.
According to the fisheye camera imaging model, the mapping of the real-world scene point P to the feature point (u, v) in the fisheye image is described by
u = f_x · r(θ) · cos φ + u_0, v = f_y · r(θ) · sin φ + v_0, (2)
where θ and φ are the incidence and azimuth angles of P,
cos θ = Z / sqrt(X² + Y² + Z²), tan φ = Y / X, (3)
and r(θ) is the radial projection polynomial
r(θ) = k_1 θ + k_3 θ³ + k_5 θ⁵ + k_7 θ⁷ + k_9 θ⁹. (4)
k_1, k_3, k_5, k_7, k_9, u_0, v_0, f_x, f_y are camera intrinsic parameters obtained by an off-line calibration algorithm.
For a known fisheye image feature point (u, v), equation (2) gives
r(θ) = sqrt( ((u − u_0)/f_x)² + ((v − v_0)/f_y)² ), φ = arctan2( (v − v_0)/f_y, (u − u_0)/f_x ).
Substituting r(θ) into equation (4) and solving yields θ. Combining θ and φ with equations (1) and (3), the spherical normalized coordinates (x_s, y_s, z_s)^T corresponding to the fisheye image feature point (u, v) are obtained as
x_s = sin θ · cos φ, y_s = sin θ · sin φ, z_s = cos θ. (5)
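The pixel-to-sphere conversion of equations (2)–(5) can be sketched as follows. The intrinsic parameter values are illustrative assumptions rather than calibrated values, and θ is recovered from the polynomial (4) by a simple Newton iteration:

```python
import numpy as np

# Hypothetical Kannala-Brandt-style intrinsics (illustrative values only,
# not calibrated parameters from the patent).
K = dict(k=[1.0, -0.05, 0.01, 0.0, 0.0], u0=320.0, v0=240.0, fx=300.0, fy=300.0)

def project(theta, phi, K):
    """Forward model, eqs. (2)/(4): scene direction (theta, phi) -> pixel (u, v)."""
    r = sum(ki * theta ** p for ki, p in zip(K['k'], (1, 3, 5, 7, 9)))
    return K['fx'] * r * np.cos(phi) + K['u0'], K['fy'] * r * np.sin(phi) + K['v0']

def pixel_to_sphere(u, v, K):
    """Invert eq. (2): pixel -> spherical normalized coordinates (x_s, y_s, z_s)."""
    mx, my = (u - K['u0']) / K['fx'], (v - K['v0']) / K['fy']
    r, phi = np.hypot(mx, my), np.arctan2(my, mx)
    theta = r                          # Newton iteration on eq. (4); k1 ~ 1 start
    for _ in range(20):
        f = sum(ki * theta ** p for ki, p in zip(K['k'], (1, 3, 5, 7, 9))) - r
        df = sum(p * ki * theta ** (p - 1) for ki, p in zip(K['k'], (1, 3, 5, 7, 9)))
        theta -= f / df
    return np.array([np.sin(theta) * np.cos(phi),      # eq. (5)
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

A round trip (project a direction, then invert the pixel) recovers the original unit vector, which is the property the motion point detection below relies on.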
further, the calculating the spherical offset distance l of the feature point pair specifically includes:
(1) Definition of spherical offset distance:
Let the camera coordinate systems at time t-1 and time t be O_{t-1}-X_{t-1}Y_{t-1}Z_{t-1} and O_t-X_tY_tZ_t. From time t-1 to time t, the rotation matrix of the camera is R and the translation vector is T = (T_x, T_y, T_z)^T. For a real-world scene point P, the fisheye image feature points corresponding to time t-1 and time t form a matched feature point pair, whose spherical normalized coordinates are p_{t-1} and p_t respectively. Let the coordinates of P in the camera coordinate systems at time t-1 and time t be X_{t-1} and X_t. If P is a stationary point, then
X_{t-1} = R X_t + T. (6)
Taking the cross product of both sides of equation (6) with the vector T and then the inner product with X_{t-1} yields
X_{t-1}^T ([T]× R X_t) = 0, (7)
where [T]× represents the antisymmetric matrix composed of the vector T. Since p_{t-1} and p_t are the normalized directions of X_{t-1} and X_t, equation (7) is equivalent to
p_{t-1}^T ([T]× R p_t) = 0. (8)
Let [T]× R p_t = n, n = (n_x, n_y, n_z)^T; n is exactly the normal vector, in the coordinate system at time t-1, of the plane O_{t-1}O_t p′_t (where p′_t = R p_t), and this plane intersects the virtual unit sphere at time t-1 in a great circle C_{t-1}. Substituting [T]× R p_t = n into equation (8) gives p_{t-1}^T n = 0; that is, if P is a stationary point, the angle between the vector O_{t-1}p_{t-1} and the vector n is 90 degrees, so the point p_{t-1} appears on the great circle C_{t-1}. The camera motion parameters further narrow the range of positions on C_{t-1} where p_{t-1} can appear. Within this position range, find the point q_{t-1} whose distance to p_{t-1} on the virtual unit sphere is smallest. Theoretically, when P is a stationary point, q_{t-1} and p_{t-1} coincide; in practice, owing to feature point detection errors and camera motion parameter estimation errors, they generally do not. When the two points do not coincide, the spherical distance between q_{t-1} and p_{t-1} is computed and defined as l, the spherical offset distance of the fisheye image feature point pair corresponding to P; when the two points coincide, the spherical offset distance is set directly to l = 0.
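The stationary-point constraint of equation (8) is easy to verify numerically. The rotation, translation, and scene point below are arbitrary illustrative values (a small yaw plus a forward translation):

```python
import numpy as np

def skew(t):
    # antisymmetric matrix [T]x such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

a = 0.05                                   # hypothetical yaw between frames
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
T = np.array([0.1, 0.0, 0.5])              # hypothetical translation, T_z > 0

P_t = np.array([1.0, 0.5, 4.0])            # static scene point at time t
P_prev = R @ P_t + T                       # its coordinates at time t-1, eq. (6)
p_t = P_t / np.linalg.norm(P_t)            # spherical normalized coordinates
p_prev = P_prev / np.linalg.norm(P_prev)

residual = p_prev @ (skew(T) @ R @ p_t)    # eq. (8): zero for a static point
```

The residual is zero up to floating-point error for any static point, while a point with its own motion generally violates the constraint; the spherical offset distance below turns this into a calibrated geometric quantity.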
(2) The calculation method of the spherical offset distance l comprises the following steps:
To determine the offset distance of the point P on the virtual unit sphere, the range of possible positions of p_{t-1} on the great circle C_{t-1} must first be found. This range is an arc AB on C_{t-1}, whose endpoints A and B depend on the camera motion parameters. Let the translation vector be T = (T_x, T_y, T_z)^T. Since the fisheye camera is mounted at the front or rear of the vehicle and the vehicle is driving forward or reversing, T_z ≠ 0; let T_z > 0. Points A and B are the projection points on the virtual unit sphere when P is, respectively, the point closest to the camera and the point at infinity. When P is the closest point, its projection is the intersection of the optical center line O_{t-1}O_t with the great circle C_{t-1} on the virtual unit sphere at time t-1, which is point A, with spherical normalized coordinates
A = T / ||T||. (9)
When P is a point at infinity, its projection point on the virtual unit sphere is point B, with spherical normalized coordinates
B = R p_t. (10)
After the coordinates of points A and B are determined, the point q_{t-1} on the arc AB closest to p_{t-1} is required. To this end, first find the point q′_{t-1} on the full great circle C_{t-1} closest to p_{t-1}, calculated as
n = [T]× R p_t,
m = n × p_{t-1},
q′_{t-1} = m × n.
The point q′_{t-1} on the great circle may or may not lie on the arc AB, giving three cases:
First case: the point q′_{t-1} on the great circle C_{t-1} closest on the sphere to p_{t-1} lies on the arc AB. Then q′_{t-1} is q_{t-1}, and the spherical offset distance of the point P is l = l_1, where
l_1 = arcsin( abs(p_{t-1}^T n) / ||n|| ),
and abs() represents the absolute value of the value within ().
Second case: the point q′_{t-1} closest on the sphere to p_{t-1} does not lie on the arc AB but is close to point B. Then point B is q_{t-1}, and the spherical offset distance of the point P is l = l_2, where
l_2 = arccos( p_{t-1}^T B / ||B|| ),
B representing the coordinates of the vector O_{t-1}B, calculated from equation (10), and ||B|| the length of the vector O_{t-1}B.
Third case: the point q′_{t-1} closest on the sphere to p_{t-1} does not lie on the arc AB but is close to point A. Then point A is q_{t-1}, and the spherical offset distance of the point P is l = l_3, where
l_3 = arccos( p_{t-1}^T A / ||A|| ),
A representing the coordinates of the vector O_{t-1}A, calculated from equation (9), and ||A|| the length of the vector O_{t-1}A.
To determine which of the above cases applies, the marking quantities r and s are calculated:
r_1 = A × B
r_2 = p_{t-1} × B
r_3 = A × p_{t-1}
r = r_1^T r_2
s = r_1^T r_3
where A × B represents the outer product of the vector O_{t-1}A and the vector O_{t-1}B, "×" is the sign of the vector outer product, and the coordinates of the vectors O_{t-1}A and O_{t-1}B are calculated by equations (9) and (10) respectively. When r ≥ 0 and s ≥ 0, the first case applies; otherwise the second or the third case applies. The value of l is therefore calculated as
l = l_1 if r ≥ 0 and s ≥ 0; l = l_2 if r < 0; l = l_3 if s < 0.
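The whole spherical offset computation can be sketched as follows, assuming the camera motion (R, T) and the unit vectors p_{t-1}, p_t are already available. When q′_{t-1} falls outside the arc AB, taking the nearer endpoint reproduces the second and third cases:

```python
import numpy as np

def spherical_offset(p_prev, p_cur, R, T):
    """Spherical offset distance l for a matched pair given camera motion (R, T).

    p_prev, p_cur: spherical normalized coordinates p_{t-1}, p_t (unit vectors).
    Degenerate configurations (p_cur parallel to T) are not handled here."""
    A = T / np.linalg.norm(T)                 # eq. (9): closest-point projection
    B = R @ p_cur
    B = B / np.linalg.norm(B)                 # eq. (10): infinity-point projection
    n = np.cross(T, R @ p_cur)                # normal of the great circle C_{t-1}
    r1 = np.cross(A, B)
    r = r1 @ np.cross(p_prev, B)              # marking quantity r
    s = r1 @ np.cross(A, p_prev)              # marking quantity s
    if r >= 0 and s >= 0:                     # q' lies on arc AB: l = l_1,
        # the angular distance from p_prev to the great circle
        return np.arcsin(abs(p_prev @ n) / np.linalg.norm(n))
    l2 = np.arccos(np.clip(p_prev @ B, -1.0, 1.0))   # distance to endpoint B
    l3 = np.arccos(np.clip(p_prev @ A, -1.0, 1.0))   # distance to endpoint A
    return min(l2, l3)                        # nearer endpoint: cases 2 / 3
```

For a static point the offset is zero up to numerical error, while displacing the t-1 observation off the epipolar great circle produces a clearly non-zero l, which is what the threshold l_thre then separates.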
further, the motion feature point cluster specifically comprises:
clustering the pixel coordinate positions of the motion feature points, the feature point pairs of the motion feature points, the formed optical flow directions and the depth values of the motion feature points to obtain a final motion target area formed by the motion feature points, wherein the depth information of the motion feature points is obtained by reading depth map values, a depth map corresponding to a fisheye image at each moment is obtained by calculating a depth model, and the depth model is obtained by offline and unsupervised training of a neural network;
the clustering method of the motion feature points comprises the steps of initializing all the motion feature points, setting class numbers of all the motion feature points as null, and setting mark symbols as unmarked; then, detecting a moving object according to the following steps:
step (1): selecting any unlabeled motion feature point without class numberMarking the seed points as seed points, assigning new class numbers, searching all neighbor points of the seed points according to a discrimination rule, assigning class numbers identical to the seed points to all neighbor points, and turning to the step (2); if no such point exists, the algorithm ends;
step (2): selecting any one of the motion feature points which have class numbers and are not marked, marking the motion feature points, searching all neighbor points of the motion feature points, distributing class numbers which are the same as those of the seed points to all the neighbor points, and turning to the step (3), and turning to the step (1) if the motion feature points are not present;
step (3): repeating the step (2).
Further, the discrimination rule is: any motion feature point is regarded as a neighbor point of the current point if it simultaneously satisfies the conditions that its pixel-coordinate distance from the current point, the difference between its optical flow direction and that of the current point, and the difference between its depth value and that of the current point are all smaller than the corresponding set thresholds.
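The region-growing steps (1)–(3) together with a discrimination rule of this form can be sketched as follows; the three threshold values are illustrative assumptions, not parameters from the patent:

```python
import numpy as np
from collections import deque

def cluster_motion_points(pts, pos_thr=30.0, ang_thr=0.3, depth_thr=1.0):
    """Region-growing clustering of motion feature points.

    pts: array whose rows are (u, v, optical-flow direction, depth).
    Returns a class number per point. Thresholds are illustrative assumptions."""
    pts = np.asarray(pts, dtype=float)
    n = len(pts)
    labels = [None] * n                      # class number, initially null

    def is_neighbor(i, j):                   # discrimination rule
        du = np.hypot(pts[i, 0] - pts[j, 0], pts[i, 1] - pts[j, 1])
        da = abs(pts[i, 2] - pts[j, 2])
        da = min(da, 2 * np.pi - da)         # wrap the flow-direction difference
        dd = abs(pts[i, 3] - pts[j, 3])
        return du < pos_thr and da < ang_thr and dd < depth_thr

    cls = 0
    for seed in range(n):                    # step (1): pick an unlabeled seed
        if labels[seed] is not None:
            continue
        cls += 1
        labels[seed] = cls
        queue = deque([seed])
        while queue:                         # steps (2)-(3): flood the class
            i = queue.popleft()
            for j in range(n):
                if labels[j] is None and is_neighbor(i, j):
                    labels[j] = cls
                    queue.append(j)
    return labels

# Two synthetic moving targets: nearby positions, similar flow, similar depth.
pts = np.array([[100, 100, 0.10, 5.0],
                [110, 105, 0.12, 5.2],
                [95, 108, 0.09, 4.9],
                [400, 300, 2.00, 12.0],
                [405, 310, 2.05, 12.3],
                [395, 295, 1.98, 11.8]])
labels = cluster_motion_points(pts)
```

Because depth participates in the rule, two targets with similar optical flow but different distances would still be split into separate classes, which is the point made in the advantages section below.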
The invention further provides a moving object detection device suitable for the vehicle-mounted fisheye camera, which is used for executing the moving object detection method suitable for the vehicle-mounted fisheye camera.
Compared with the prior art, the invention has the beneficial effects that:
the invention adopts the method for processing the imaging deformation of the fish-eye camera by calculating the virtual unit spherical normalized coordinates of the points, and can directly detect the moving target based on the characteristic point information in the fish-eye image by using the normalized coordinates, thereby omitting the steps of deformation removal such as multi-plane correction or cylindrical correction on the fish-eye image in the previous method for processing the imaging deformation of the fish-eye camera and avoiding the pixel precision loss problem caused by correction and the time-consuming problem caused by image correction.
The invention detects motion feature points by computing the virtual unit sphere offset of points. For the moving point detection problem of the fisheye camera, the epipolar constraint method commonly used in moving target detection with planar cameras could be generalized to the fisheye camera by computing the deviation distance between a point and the epipolar curve in the fisheye image. Compared with that approach, the method of the invention takes into account the range within which the true depth of the point can appear, so a more accurate spherical offset can be computed and the detection effect is better. By computing the spherical normalized coordinates of the feature points and their offset distance on the virtual unit sphere, the method detects motion feature points with higher precision than prior methods based on residuals of the optical flow direction. This is because the detection method of the invention considers not only whether the change of a spatial point's position violates the rule governing the positions of stationary spatial points between adjacent frames, but also the range of distances at which moving points in space can appear and the corresponding changes of their projection points on the virtual unit sphere.
When existing moving object detection methods cluster points into different objects, they mostly consider the optical flow magnitude, the optical flow direction, and the pixel positions of the points. The clustering method of the invention considers not only the optical flow direction and position of the motion feature points but also their depth values, so different moving targets with similar optical flow in the image can be distinguished, the positions of the different moving targets in the fisheye image are obtained, and moving target detection is realized.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the positional relationship between a stationary point P in space and its spherical normalized coordinate points on the virtual sphere at adjacent moments;
FIG. 3 is a diagram of the positional relationship between the arc AB and the point q′_{t-1} closest on the sphere to the point p_{t-1} on the great circle C_{t-1};
FIG. 4 is a functional block diagram of the moving object detection device of the present invention;
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The detection method is suitable for a moving target detection system with a vehicle-mounted fisheye camera. The fisheye camera is mounted at the front windshield, at the bumper, or at a corresponding position at the rear of the vehicle, and is used to monitor other moving targets approaching from the front or the rear while the vehicle is running, so as to realize a collision early-warning function. When the camera is installed, its optical axis should be kept as parallel as possible to the vehicle body (i.e., parallel to the ground) for ease of processing.
Referring to fig. 1, a moving object detection method suitable for an on-vehicle fisheye camera, applied to a vehicle mounted with a monocular fisheye camera, comprises the steps of:
step one: acquiring a current frame image and a previous frame image of a surrounding scene of the vehicle shot by the fisheye camera;
step two: acquiring feature points matched with a current frame image and a previous frame image to form feature point pairs; the step can utilize the Harris feature point detection method and the Lucas and Kanade's feature point tracking method to obtain matched feature point pairs in adjacent frame images, and specifically comprises the following steps: first, let the adjacent frame images be I respectively t-1 And I t Firstly, an image I is obtained by using a Harris feature point detection method t Feature point set S in (1) t Then, the Lucas and Kanade' S feature point tracking method is used for the feature point set S t In image I t-1 Tracking to obtain and S t Matched feature point set S t-1 Delete set S t Characteristic points of failure in tracking in the middle, and obtaining a set S' t ;S t-1 And S' t The characteristic points in the image I are in one-to-one correspondence to form the image I t-1 And I t Pairs of feature points that match.
The Harris feature point detection method and the Lucas–Kanade feature point tracking method are common general knowledge in the art, and the specific methods are not described in detail.
In an implementation, feature point detection can also be performed on both the current frame image and the previous frame image, and matching can then be performed on feature vectors formed from the positions of the feature points and their surrounding information, obtaining the matched feature point pairs. (This method is prior art and is not described in detail.)
Step three: and (3) performing spherical normalized coordinate calculation on the matched characteristic point pairs, namely converting the coordinates of each characteristic point in the characteristic point pairs into spherical normalized coordinates, wherein the spherical normalized coordinates are as follows:
Taking the camera optical center as the origin and the main optical axis as the Z axis, a camera coordinate system O-XYZ is established. Let P = (X, Y, Z)^T be the real-world scene point corresponding to a fisheye image feature point, and define the spherical normalized coordinates of the feature point as (x_s, y_s, z_s)^T, calculated as

(x_s, y_s, z_s)^T = (X, Y, Z)^T / sqrt(X^2 + Y^2 + Z^2). (1)
For a monocular fisheye camera, however, the coordinates (X, Y, Z)^T of the point P are unknown, so the spherical normalized coordinates (x_s, y_s, z_s)^T of a feature point are instead computed from its pixel coordinates (u, v) according to the fisheye camera imaging model. The calculation process is as follows:
According to the fisheye camera imaging model, the real-world scene point P maps to the feature point (u, v) in the fisheye image as described by equation (2),

u = f_x · r(θ) · cos(φ) + u_0, v = f_y · r(θ) · sin(φ) + v_0, (2)

wherein

θ = arccos( Z / sqrt(X^2 + Y^2 + Z^2) ), φ = arctan(Y / X), (3)

r(θ) = k_1·θ + k_3·θ^3 + k_5·θ^5 + k_7·θ^7 + k_9·θ^9. (4)
k_1, k_3, k_5, k_7, k_9, u_0, v_0, f_x, f_y are the camera intrinsic parameters, obtained by an offline calibration algorithm (the offline calibration algorithm is prior art; see KANNALA J, BRANDT S S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(8): 1335-1340).
For a known fisheye image feature point (u, v), equation (2) gives

r(θ)·cos(φ) = (u - u_0) / f_x, r(θ)·sin(φ) = (v - v_0) / f_y,

hence r(θ) = sqrt( ((u - u_0)/f_x)^2 + ((v - v_0)/f_y)^2 ) and φ = arctan2( (v - v_0)/f_y, (u - u_0)/f_x ). Substituting r(θ) into equation (4) and solving for θ, and then combining equations (3) and (1), the spherical normalized coordinates (x_s, y_s, z_s)^T corresponding to the fisheye image feature point (u, v) are obtained as

(x_s, y_s, z_s)^T = (sin θ · cos φ, sin θ · sin φ, cos θ)^T. (5)
The spherical normalized coordinate corresponding to a fisheye image pixel maps its real-space point onto a virtual unit sphere centered at the camera optical center. The pixel coordinates of the matched feature point pairs are converted into spherical normalized coordinates by equation (5), and the subsequent motion point detection is performed on these spherical coordinates, which avoids the imaging distortion that would affect detection if the fisheye pixel coordinates were used directly.
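The pixel-to-sphere conversion of equations (2)-(5) can be sketched in a few lines of numpy. The intrinsic values below (`K_POLY`, `U0`, `V0`, `FX`, `FY`) are made-up placeholders standing in for the offline-calibrated parameters:

```python
import numpy as np

# Assumed example intrinsics; real values come from offline calibration.
K_POLY = dict(k1=1.0, k3=-0.05, k5=0.002, k7=0.0, k9=0.0)
U0, V0, FX, FY = 640.0, 480.0, 320.0, 320.0

def r_of_theta(theta):
    """Odd polynomial r(theta) of equation (4)."""
    k = K_POLY
    return (k['k1']*theta + k['k3']*theta**3 + k['k5']*theta**5
            + k['k7']*theta**7 + k['k9']*theta**9)

def project(theta, phi):
    """Forward imaging model, equation (2): ray angles -> pixel (u, v)."""
    r = r_of_theta(theta)
    return FX*r*np.cos(phi) + U0, FY*r*np.sin(phi) + V0

def unproject(u, v):
    """Pixel (u, v) -> spherical normalized coordinates (x_s, y_s, z_s)."""
    mx, my = (u - U0)/FX, (v - V0)/FY
    r, phi = np.hypot(mx, my), np.arctan2(my, mx)
    # Solve r(theta) = r: roots of k9 t^9 + k7 t^7 + ... + k1 t - r = 0,
    # keeping the real root in [0, pi].
    k = K_POLY
    coeffs = [k['k9'], 0, k['k7'], 0, k['k5'], 0, k['k3'], 0, k['k1'], -r]
    theta = min(t.real for t in np.roots(coeffs)
                if abs(t.imag) < 1e-9 and 0 <= t.real <= np.pi)
    # Equation (5): point on the virtual unit sphere.
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])
```

Projecting a ray and unprojecting the resulting pixel recovers the same unit-sphere point, which is the round-trip property step three relies on.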
Step four: calculating the spherical offset distance l of each feature point pair, specifically as follows:
(1) Definition of spherical offset distance:
As shown in FIG. 2, let the camera coordinate systems at time t-1 and time t be O_{t-1}-X_{t-1}Y_{t-1}Z_{t-1} and O_t-X_tY_tZ_t. From time t-1 to time t, the rotation matrix of the camera is R and the translation vector is T = (T_x, T_y, T_z)^T. For a real-world scene point P, the fisheye image feature points corresponding to times t-1 and t form a matched feature point pair whose spherical normalized coordinates are p_{t-1} and p_t respectively. Let the coordinates of P in the camera coordinate systems at times t-1 and t be X_{t-1} and X_t. If P is a stationary point, then

X_{t-1} = R·X_t + T. (6)

Taking the cross product of both sides of equation (6) with the vector T and then the inner product with X_{t-1} gives

X_{t-1}^T · [T]_× · R · X_t = 0, (7)

and since X_{t-1} and X_t are parallel to p_{t-1} and p_t respectively,

p_{t-1}^T · [T]_× · R · p_t = 0, (8)

where [T]_× denotes the antisymmetric matrix formed from the vector T. Let n = [T]_× · R · p_t, n = (n_x, n_y, n_z)^T; n is exactly the normal vector, in the coordinate system at time t-1, of the plane O_{t-1}O_t p'_t (where p'_t = R·p_t). The plane O_{t-1}O_t p'_t intersects the virtual unit sphere at time t-1 in a great circle C_{t-1}. Substituting n into equation (8) gives p_{t-1}^T · n = 0, i.e., if P is a stationary point, the angle between the vector O_{t-1}p_{t-1} and the vector n is 90 degrees. Thus, when P is a stationary point, the point p_{t-1} lies on the great circle C_{t-1}, and the camera motion parameters further narrow the range of positions on C_{t-1} where p_{t-1} can appear. A point q_{t-1} is then found within this position range such that the distance from q_{t-1} to p_{t-1} on the virtual unit sphere is smallest. Theoretically, when P is a stationary point, q_{t-1} and p_{t-1} coincide; however, owing to feature point detection errors and camera motion parameter estimation errors, the two points generally do not coincide even when P is stationary. When the two points do not coincide, the distance l between q_{t-1} and p_{t-1} is calculated and defined as the spherical offset distance of the fisheye image feature point pair corresponding to P; when they coincide, the spherical offset distance is set directly to l = 0.
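The stationarity constraint of equation (8) is easy to check numerically. In this sketch the rotation R, translation T, and scene point X_t are arbitrary illustrative values; for a stationary point the residual p_{t-1}^T · [T]_× · R · p_t is zero up to floating-point error:

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix [T]x such that [T]x v = T x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Illustrative camera motion: small rotation about Y plus a forward
# translation with T_z != 0 (assumed values, not from the patent).
a = 0.05
R = np.array([[np.cos(a), 0, np.sin(a)],
              [0, 1, 0],
              [-np.sin(a), 0, np.cos(a)]])
T = np.array([0.1, 0.0, 0.5])

# A stationary world point seen in both frames: X_{t-1} = R X_t + T.
X_t = np.array([1.0, 2.0, 5.0])
X_tm1 = R @ X_t + T

# Spherical normalized observations (projection onto the unit sphere).
p_t = X_t / np.linalg.norm(X_t)
p_tm1 = X_tm1 / np.linalg.norm(X_tm1)

# Equation (8): p_{t-1} is orthogonal to n = [T]x R p_t for a rest point.
n = skew(T) @ R @ p_t
residual = float(p_tm1 @ n)
```

For a moving point the same residual is generally nonzero, which is what the spherical offset distance of step four measures geometrically.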
(2) The calculation method of the spherical offset distance l comprises the following steps:
Referring to FIG. 3, to determine the offset distance of the point P on the virtual unit sphere, the range of possible positions of the point p_{t-1} on the great circle C_{t-1} must first be determined. Let this range be the arc AB on C_{t-1}; the endpoints A and B of the arc are required. These two endpoints are related to the camera motion parameters: let the translation vector be T = (T_x, T_y, T_z)^T; since the fisheye camera is mounted at the front or rear of the vehicle and the vehicle is driving forward or reversing, T_z ≠ 0. Let T_z > 0. As shown in FIG. 2, points A and B are the projection positions on the virtual unit sphere when P is, respectively, the point closest to the camera and the point at infinity. When P is the point closest to the camera, point A is the intersection of the optical-center line O_{t-1}O_t with the great circle C_{t-1} on the virtual unit sphere at time t-1, with spherical normalized coordinates

A = T / ||T||. (9)

When P is a point at infinity, the projection point on the virtual unit sphere is point B, with spherical normalized coordinates

B = R·p_t. (10)
After the coordinates of points A and B are determined, the point q_{t-1} on the arc AB closest to the point p_{t-1} must be found. To this end, the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} is first determined, calculated as:

n = [T]_× · R · p_t
m = n × p_{t-1}
q'_{t-1} = m × n
The point q'_{t-1} on the great circle may or may not lie on the arc AB, as shown in FIG. 3:

First case: the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} lies on the arc AB; in this case q'_{t-1} is q_{t-1}, and the spherical offset distance l of the point P is l_1,

l_1 = arccos( abs(p_{t-1}^T · q'_{t-1}) / ||q'_{t-1}|| ),

where abs() denotes the absolute value of the quantity in parentheses.

Second case: the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} does not lie on the arc AB but is close to point B; in this case point B is q_{t-1}, and the spherical offset distance l of the point P is l_2,

l_2 = arccos( p_{t-1}^T · B / ||B|| ),

where B denotes the coordinates of the vector O_{t-1}B, calculated from equation (10), and ||B|| denotes the length of the vector O_{t-1}B.

Third case: the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} does not lie on the arc AB but is close to point A; in this case point A is q_{t-1}, and the spherical offset distance l of the point P is l_3,

l_3 = arccos( p_{t-1}^T · A / ||A|| ),

where A denotes the coordinates of the vector O_{t-1}A, calculated from equation (9), and ||A|| denotes the length of the vector O_{t-1}A.

To determine which of the above cases applies, the marker quantities r and s are calculated:

r_1 = A × B
r_2 = p_{t-1} × B
r_3 = A × p_{t-1}
r = r_1^T · r_2
s = r_1^T · r_3

where A × B denotes the cross product of the vectors O_{t-1}A and O_{t-1}B, "×" being the vector cross product symbol, and the coordinates of O_{t-1}A and O_{t-1}B are calculated from equations (9) and (10) respectively. When r ≥ 0 and s ≥ 0 the first case applies (the situation shown in FIG. 3(a)); otherwise the second case (FIG. 3(b)) or the third case (FIG. 3(c)) applies. The value of l is accordingly

l = l_1 when r ≥ 0 and s ≥ 0, and l = min(l_2, l_3) otherwise.
In the above derivation, since the depth range of the point P is not known in advance, points A and B were calculated by assuming P to be, respectively, the point closest to the camera and the point at infinity; in practical applications the depth of P can be bounded reasonably according to the actual situation, further constraining the values of A and B. In addition, the above conclusions assume T_z > 0; when T_z < 0, the camera motion can be considered in reverse, i.e., from time t to time t-1, and similar conclusions follow, which are not repeated here.
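The whole of step four can be condensed into one function. This is a sketch under assumptions: R, T, and the scene point are invented test values, and the closed-form l_1, l_2, l_3 expressions and the min(l_2, l_3) selection for the off-arc cases follow our reading of the construction above rather than formulas quoted verbatim from the patent:

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix [T]x built from a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def arc(p, q):
    """Great-circle distance between unit vectors p and q."""
    return float(np.arccos(np.clip(p @ q, -1.0, 1.0)))

def spherical_offset(p_tm1, p_t, R, T):
    A = T / np.linalg.norm(T)            # endpoint A, eq. (9)
    B = R @ p_t                          # endpoint B, eq. (10) (unit, as p_t is)
    n = skew(T) @ (R @ p_t)              # normal of the plane of C_{t-1}
    m = np.cross(n, p_tm1)
    q = np.cross(m, n)                   # q'_{t-1}: nearest point on C_{t-1}
    q = q / np.linalg.norm(q)
    r1, r2, r3 = np.cross(A, B), np.cross(p_tm1, B), np.cross(A, p_tm1)
    if r1 @ r2 >= 0 and r1 @ r3 >= 0:    # q'_{t-1} lies on arc AB -> l = l1
        return float(np.arccos(np.clip(abs(p_tm1 @ q), -1.0, 1.0)))
    return min(arc(p_tm1, B), arc(p_tm1, A))   # l2 / l3 off-arc cases

# Illustrative camera motion and scene point (arbitrary values).
a = 0.05
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
T = np.array([0.1, 0.0, 0.5])            # forward motion, T_z > 0
X_t = np.array([1.0, 2.0, 5.0])
p_t = X_t / np.linalg.norm(X_t)

X_static = R @ X_t + T                   # stationary point, eq. (6)
X_moving = X_static + np.array([0.0, 0.5, 0.0])   # independently moving point
l_static = spherical_offset(X_static / np.linalg.norm(X_static), p_t, R, T)
l_moving = spherical_offset(X_moving / np.linalg.norm(X_moving), p_t, R, T)
```

With these values the stationary point yields l near zero while the moving point yields l well above the example threshold l_thre = 0.005 used in step five.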
Step five: judging whether the characteristic points of the current frame image in the characteristic point pairs are motion characteristic points according to the spherical offset distance l: setting a threshold value l for a spherical offset distance l obtained from the feature point pairs thre (threshold value l) thre Is an empirical threshold, which can be determined based on experimental data statistics, in this embodiment l thre =0.005), calculates whether the spherical offset distance l is greater than the set threshold l thre If it is greater than the set threshold value l thre The feature points are considered to be motion feature points; if the spherical offset distance l is smaller than the threshold value l thre The offset distance l is considered to be caused by errors and is not a motion feature point;
Step six: clustering the obtained motion feature points, specifically:
The pixel coordinate positions of the motion feature points, the optical flow directions formed by the feature point pairs to which they belong, and the depth values of the motion feature points are clustered to obtain the final moving target regions formed by the motion feature points. The depth information of a motion feature point is obtained by reading the depth map value; the depth map corresponding to the fisheye image at each moment is computed by a depth model, and the depth model is obtained by offline, unsupervised training of a neural network (this is prior art; see Godard C, Mac Aodha O, Firman M, et al. Digging Into Self-Supervised Monocular Depth Estimation [J]. 2018.).
The clustering method for the motion feature points is similar to region growing. All motion feature points are initialized: the class number of every motion feature point is set to empty and its mark symbol is set to unmarked. Then the moving object is detected according to the following steps:

Step (1): select any unmarked motion feature point that has no class number, mark it as a seed point and assign it a new class number; find all neighbor points of the seed point according to the discrimination rule, assign the same class number as the seed point to all neighbor points, and go to step (2); if no such point exists, the algorithm ends.

The discrimination rule is that a motion feature point is a neighbor point of the seed point if it simultaneously satisfies the conditions:

Step (2): select any motion feature point that has a class number but is not yet marked; mark it, find all its neighbor points, assign them the same class number as the seed point, and go to step (3); if no such point exists, go to step (1).

Step (3): repeat step (2).
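The region-growing scheme of steps (1)-(3) can be sketched as follows. Since the patent's multi-condition discrimination rule is not reproduced here, a plain Euclidean distance threshold on the stacked per-point features (pixel position, optical flow, depth) is used as an assumed stand-in for the neighbor test:

```python
import numpy as np

def grow_clusters(points, dist_thresh):
    """Region-growing clustering of motion feature points.

    points: (N, d) array of per-point features; the neighbor rule
    (Euclidean distance below dist_thresh) is an assumed stand-in for
    the patent's discrimination rule.
    """
    n = len(points)
    labels = [None] * n          # class number, initially empty
    marked = [False] * n         # mark symbol, initially unmarked
    next_label = 0
    while True:
        # Step (1): pick any unmarked point that has no class number.
        seeds = [i for i in range(n) if labels[i] is None and not marked[i]]
        if not seeds:
            break                # no such point -> the algorithm ends
        labels[seeds[0]] = next_label
        frontier = [seeds[0]]
        # Steps (2)-(3): repeatedly expand labeled-but-unmarked points.
        while frontier:
            i = frontier.pop()
            if marked[i]:
                continue
            marked[i] = True
            for j in range(n):
                if labels[j] is None and \
                   np.linalg.norm(points[i] - points[j]) < dist_thresh:
                    labels[j] = labels[i]
                    frontier.append(j)
        next_label += 1
    return labels
```

Points sharing a class number in the returned list belong to the same moving object, matching the region generation of step seven.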
Step seven: generating moving target regions according to the class numbers of the clustered motion feature points. Feature points with the same class number belong to the same moving object, and a moving target region is generated for each moving object.
Referring to fig. 4, the present invention further provides a moving object detection apparatus suitable for a vehicle-mounted fisheye camera, for performing the moving object detection method suitable for a vehicle-mounted fisheye camera.
The invention handles the imaging distortion of the fisheye camera by computing virtual unit-sphere normalized coordinates of points, so that moving object detection can be performed directly on feature point information from the fisheye image. Motion feature points are detected first and then clustered; in the clustering, the depth value is considered in addition to the optical flow direction and position of the motion feature points, so that different moving objects with similar optical flow in the image can be distinguished. The positions of different moving objects in the fisheye image are thereby obtained, realizing moving object detection.
The principles and embodiments of the present invention have been described herein with reference to specific examples; this description is intended only to assist in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of these teachings also fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.
Claims (6)
1. A moving object detection method suitable for an on-vehicle fisheye camera, characterized by being applied to a vehicle in which a monocular fisheye camera is installed, the method comprising:
acquiring a current frame image and a previous frame image of a surrounding scene of the vehicle shot by the fisheye camera;
acquiring feature points matched with a current frame image and a previous frame image to form feature point pairs;
performing spherical normalized coordinate calculation on the matched feature point pairs, namely converting the coordinates of each feature point in the feature point pairs into spherical normalized coordinates;
calculating a spherical offset distance l of the characteristic point pair, wherein the spherical offset distance l is specifically as follows:
(1) Definition of spherical offset distance:
let the camera coordinate systems at time t-1 and time t be O_{t-1}-X_{t-1}Y_{t-1}Z_{t-1} and O_t-X_tY_tZ_t; from time t-1 to time t, the rotation matrix of the camera is R and the translation vector is T = (T_x, T_y, T_z)^T; for a real-world scene point P, the fisheye image feature points corresponding to times t-1 and t form a matched feature point pair whose spherical normalized coordinates are p_{t-1} and p_t respectively; let the coordinates of P in the camera coordinate systems at times t-1 and t be X_{t-1} and X_t; if P is a stationary point, then:

X_{t-1} = R·X_t + T, (6)

taking the cross product of both sides of equation (6) with the vector T and then the inner product with X_{t-1} gives:

X_{t-1}^T · [T]_× · R · X_t = 0, (7)

and since X_{t-1} and X_t are parallel to p_{t-1} and p_t respectively,

p_{t-1}^T · [T]_× · R · p_t = 0, (8)

wherein [T]_× denotes the antisymmetric matrix formed from the vector T, and p_{t-1}, p_t are the spherical normalized coordinates of the feature points in the feature point pair; let n = [T]_× · R · p_t, n = (n_x, n_y, n_z)^T; n is exactly the normal vector, in the coordinate system at time t-1, of the plane O_{t-1}O_t p'_t, wherein p'_t = R·p_t; the plane O_{t-1}O_t p'_t intersects the virtual unit sphere at time t-1 in a great circle C_{t-1}; substituting n into equation (8) gives p_{t-1}^T · n = 0, i.e., if P is a stationary point, the angle between the vector O_{t-1}p_{t-1} and the vector n is 90 degrees; when P is a stationary point, the point p_{t-1} appears on the great circle C_{t-1}, and the camera motion parameters narrow the range of positions on C_{t-1} where p_{t-1} can appear; a point q_{t-1} is found within this position range such that the distance from q_{t-1} to p_{t-1} on the virtual unit sphere is smallest; when the two points do not coincide, the distance l between q_{t-1} and p_{t-1} is calculated and defined as the spherical offset distance of the fisheye image feature point pair corresponding to the point P; when the two points coincide, the spherical offset distance is set directly to l = 0;
(2) The calculation method of the spherical offset distance l comprises the following steps:
in order to determine the offset distance of the point P on the virtual unit sphere, the range of possible positions of the point p_{t-1} on the great circle C_{t-1} is first determined; let this range be the arc AB on C_{t-1}, requiring the endpoints A and B of the arc; the two endpoints are related to the camera motion parameters: let the translation vector be T = (T_x, T_y, T_z)^T; since the fisheye camera is mounted at the front or rear of the vehicle and the vehicle is driving forward or reversing, T_z ≠ 0; let T_z > 0; points A and B are the projection positions on the virtual unit sphere when P is, respectively, the point closest to the camera and the point at infinity; when P is the point closest to the camera, point A is the intersection of the optical-center line O_{t-1}O_t with the great circle C_{t-1} on the virtual unit sphere at time t-1, with spherical normalized coordinates:

A = T / ||T||, (9)

when the point P is a point at infinity, the projection point on the virtual unit sphere is point B, with spherical normalized coordinates:

B = R·p_t, (10)
after the coordinates of points A and B are determined, the point q_{t-1} on the arc AB closest to the point p_{t-1} is to be found; to this end, the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} is first determined, calculated as:

n = [T]_× · R · p_t
m = n × p_{t-1}
q'_{t-1} = m × n
the point q'_{t-1} on the great circle may or may not lie on the arc AB, in three cases:

first case: the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} lies on the arc AB; in this case q'_{t-1} is q_{t-1}, and the spherical offset distance l of the point P is l_1,

l_1 = arccos( abs(p_{t-1}^T · q'_{t-1}) / ||q'_{t-1}|| ),

wherein abs() denotes the absolute value of the quantity in parentheses;

second case: the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} does not lie on the arc AB but is close to point B; in this case point B is q_{t-1}, and the spherical offset distance l of the point P is l_2,

l_2 = arccos( p_{t-1}^T · B / ||B|| ),

wherein B denotes the coordinates of the vector O_{t-1}B, calculated from equation (10), and ||B|| denotes the length of the vector O_{t-1}B;

third case: the point q'_{t-1} on the great circle C_{t-1} closest to p_{t-1} does not lie on the arc AB but is close to point A; in this case point A is q_{t-1}, and the spherical offset distance l of the point P is l_3,

l_3 = arccos( p_{t-1}^T · A / ||A|| ),

wherein A denotes the coordinates of the vector O_{t-1}A, calculated from equation (9), and ||A|| denotes the length of the vector O_{t-1}A;

the marker quantities r and s are calculated:

r_1 = A × B
r_2 = p_{t-1} × B
r_3 = A × p_{t-1}
r = r_1^T · r_2
s = r_1^T · r_3

wherein A × B denotes the cross product of the vectors O_{t-1}A and O_{t-1}B, "×" being the vector cross product symbol, and the coordinates of O_{t-1}A and O_{t-1}B are calculated from equations (9) and (10) respectively; when r ≥ 0 and s ≥ 0 the first case applies, otherwise the second or third case applies; the value of l is accordingly calculated as:

l = l_1 when r ≥ 0 and s ≥ 0, and l = min(l_2, l_3) otherwise;
judging whether the feature point of the current frame image in each feature point pair is a motion feature point according to the spherical offset distance l: a threshold l_thre is set for the spherical offset distance l obtained from the feature point pairs, and whether l is greater than the set threshold l_thre is calculated; if l is greater than the set threshold l_thre, the feature point is considered a motion feature point; if the spherical offset distance l is smaller than the threshold l_thre, the offset distance l is considered to be caused by errors and the point is not a motion feature point;
clustering the obtained motion feature points;
and generating a moving target area according to the motion feature points after the motion feature points are clustered.
2. The method for detecting a moving object suitable for a vehicle-mounted fisheye camera according to claim 1, wherein the step of obtaining feature points of a current frame image and a previous frame image to form feature point pairs is specifically:
the Harris feature point detection method is combined with the Lucas-Kanade feature point tracking method to obtain matched feature point pairs in the current frame image and the previous frame image:

let the current frame image be I_t and the previous frame image be I_{t-1}; first, the Harris feature point detection method is used to obtain the feature point set S_t in image I_t; then, the Lucas-Kanade feature point tracking method is used to track the feature point set S_t in image I_{t-1}, obtaining the feature point set S_{t-1} matched with S_t; the feature points in S_t for which tracking failed are deleted, yielding the set S'_t; the feature points in S_{t-1} and S'_t correspond one to one, forming the matched feature point pairs of images I_{t-1} and I_t.
3. The method for detecting a moving object suitable for a vehicle-mounted fisheye camera according to claim 1, wherein the calculating of spherical normalized coordinates of the matched feature point pair is specifically:
taking the camera optical center as the origin and the main optical axis as the Z axis, a camera coordinate system O-XYZ is established; let P = (X, Y, Z)^T be the real-world scene point corresponding to a fisheye image feature point, and define the spherical normalized coordinates of the feature point as (x_s, y_s, z_s)^T, calculated as:

(x_s, y_s, z_s)^T = (X, Y, Z)^T / sqrt(X^2 + Y^2 + Z^2), (1)

for a monocular fisheye camera, however, the coordinates (X, Y, Z)^T of the point P are unknown, so the spherical normalized coordinates (x_s, y_s, z_s)^T of a feature point are computed from its pixel coordinates (u, v) according to the fisheye camera imaging model; the calculation process is as follows:

according to the fisheye camera imaging model, the real-world scene point P maps to the feature point (u, v) in the fisheye image as described by equation (2),

u = f_x · r(θ) · cos(φ) + u_0, v = f_y · r(θ) · sin(φ) + v_0, (2)

wherein

θ = arccos( Z / sqrt(X^2 + Y^2 + Z^2) ), φ = arctan(Y / X), (3)

r(θ) = k_1·θ + k_3·θ^3 + k_5·θ^5 + k_7·θ^7 + k_9·θ^9, (4)

k_1, k_3, k_5, k_7, k_9, u_0, v_0, f_x, f_y are the camera intrinsic parameters, obtained by an offline calibration algorithm;

for a known fisheye image feature point (u, v), equation (2) gives

r(θ)·cos(φ) = (u - u_0) / f_x, r(θ)·sin(φ) = (v - v_0) / f_y,

whence r(θ) = sqrt( ((u - u_0)/f_x)^2 + ((v - v_0)/f_y)^2 ) and φ = arctan2( (v - v_0)/f_y, (u - u_0)/f_x ); substituting r(θ) into equation (4) and solving for θ, and then combining equations (3) and (1), the spherical normalized coordinates (x_s, y_s, z_s)^T corresponding to the fisheye image feature point (u, v) are obtained as:

(x_s, y_s, z_s)^T = (sin θ · cos φ, sin θ · sin φ, cos θ)^T. (5)
4. the method for detecting a moving object suitable for a vehicle-mounted fisheye camera according to claim 1, wherein the moving feature point cluster specifically comprises:
clustering the pixel coordinate positions of the motion feature points, the optical flow directions formed by the feature point pairs to which they belong, and the depth values of the motion feature points to obtain the final moving target regions formed by the motion feature points, wherein the depth information of a motion feature point is obtained by reading the depth map value, the depth map corresponding to the fisheye image at each moment is computed by a depth model, and the depth model is obtained by offline, unsupervised training of a neural network;
the clustering method for the motion feature points comprises: initializing all motion feature points, setting the class number of every motion feature point to empty and its mark symbol to unmarked; then detecting a moving object according to the following steps:

step (1): selecting any unmarked motion feature point (u_t^i, v_t^i) that has no class number, marking it as a seed point and assigning a new class number; finding all neighbor points of the seed point according to the discrimination rule, assigning the same class number as the seed point to all neighbor points, and going to step (2); if no such point exists, the algorithm ends;

step (2): selecting any motion feature point that has a class number but is not yet marked, marking it, finding all its neighbor points, assigning them the same class number as the seed point, and going to step (3); if no such point exists, going to step (1);

step (3): repeating step (2).
5. The method for detecting a moving object suitable for a vehicle-mounted fisheye camera according to claim 4, wherein the discrimination rule is that a motion feature point is a neighbor point of the seed point if it simultaneously satisfies the conditions:
6. A moving object detection apparatus adapted for use in an on-vehicle fisheye camera, characterized by performing the moving object detection method adapted for use in an on-vehicle fisheye camera according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010106323.4A CN111311656B (en) | 2020-02-21 | 2020-02-21 | Moving object detection method and device suitable for vehicle-mounted fisheye camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111311656A CN111311656A (en) | 2020-06-19 |
CN111311656B (en) | 2023-06-27
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |