CN106971381A - Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view - Google Patents

Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view

Info

Publication number
CN106971381A
CN106971381A (application CN201710150152.3A)
Authority
CN
China
Prior art keywords
image
demarcation
visual field
field line
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710150152.3A
Other languages
Chinese (zh)
Other versions
CN106971381B (en)
Inventor
张云洲
张亚洲
王常凯
陶亮
王争
崔家华
张杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201710150152.3A priority Critical patent/CN106971381B/en
Publication of CN106971381A publication Critical patent/CN106971381A/en
Application granted granted Critical
Publication of CN106971381B publication Critical patent/CN106971381B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view. Using back-projection, the field-of-view boundary line computed on the corrected images is projected back onto the wide-angle images. The method comprises the following steps: first, the distorted images are corrected using Zhang's calibration method and a polynomial distortion correction model; SIFT features are then extracted and mismatches are removed with the RANSAC algorithm, registering the two adjacent images; the field-of-view boundary line is generated on the corrected images by averaging; finally, using the pixel-coordinate correspondence between the images before and after distortion correction, the boundary line is projected back onto the wide-angle images. Because the distortion introduced by a distorting camera prevents conventional image analysis methods from being applied directly, the method uses back-mapping to obtain an accurate field-of-view boundary line on the distorted images. It has significant application value for realizing target handoff between wide-angle cameras with overlapping regions.

Description

Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view
Technical field
The invention belongs to the technical field of image processing and relates to a method for automatically generating the field-of-view boundary line for wide-angle cameras with overlapping fields of view.
Background art
Video surveillance systems are increasingly widely used in commercial, national defense, and military applications. Current research on intelligent surveillance systems focuses on target detection, tracking, behavior analysis, and recognition. Target tracking within the field of view is the most important key technology and the basis of the subsequent techniques. To obtain a wide field of view and track a target continuously over a long period, the field of view must be expanded by combining multiple cameras. This raises the problem of information exchange and fusion between different cameras: for target tracking under a camera combination, it is necessary to judge whether a target crosses between fields of view and to match the same target across neighboring cameras, so that target identity can be maintained. In addition, thanks to their large field of view and short focal length, wide-angle lenses extend the field of view of ordinary pinhole cameras and are frequently used in intelligent surveillance systems; however, wide-angle lenses introduce significant distortion, so conventional image content analysis algorithms are no longer directly applicable. To apply a particular vision algorithm, the method must be adapted to the imaging characteristics of wide-angle cameras.
The field-of-view (FOV: Field of View) boundary line refers, in a multi-camera system with overlapping fields of view, to the boundary of the commonly observed region between adjacent cameras. The common practice is to extract feature points from the two images, match them, filter out mismatches using the projective relation implied by the matched points, compute the correspondence between the two images from that projective relation, and generate the field-of-view boundary line of the overlapping region.
Under wide-angle cameras, however, the field-of-view boundary is not a straight line but a curve because of distortion. In this case, establishing feature matches and determining the boundary is considerably more difficult, and the field-of-view boundary line cannot be generated effectively.
Summary of the invention
To address the limited applicability of conventional field-of-view boundary line generation methods, the invention provides a field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view. Using back-projection, the boundary line computed on the corrected images is projected back onto the wide-angle images. Experiments show that the method effectively solves the problem of generating the multi-camera field-of-view boundary line in the presence of distortion, and it has considerable application value in image content analysis scenarios that require multiple wide-angle cameras.
In the field-of-view boundary line generation method of the invention, the wide-angle cameras are first calibrated and their images distortion-corrected; the field-of-view boundary line is generated on the corrected images through feature matching; the boundary line is then projected back onto the wide-angle images by back-projection to produce the final boundary line. Specifically, the distorted images are first corrected using Zhang's calibration method and a polynomial distortion model; SIFT features are extracted to register the two adjacent images, and the field-of-view boundary line is generated on the corrected images; a projection (lookup) table is then built from the pixel-coordinate correspondence between the images before and after distortion correction, and the boundary line is projected back onto the wide-angle images by table lookup to generate the final boundary line. The steps are as follows:
Step 1: Calibrate the wide-angle cameras using Zhang's calibration method to obtain the camera parameters and the parameters of the correction model.
For wide-angle camera correction, an overly complicated model is often counter-productive. In most current image processing applications, considering radial distortion alone already yields good correction results. Other forms of distortion (such as decentering and tangential distortion) are therefore ignored and only radial distortion is retained, using a simple forward polynomial correction model. Define r_u as the distance from the principal point to an arbitrary pixel P' in the ideal image, and r_d as the distance from the principal point to the corresponding pixel P in the distorted image. Since the image center coordinates before and after distortion coincide, the coordinate relation between the two images, in pixel units, is:

u' = u + (u - u0) * Σ_{i=1}^{n} k_i * r_d^(2i)
v' = v + (v - v0) * Σ_{i=1}^{n} k_i * r_d^(2i)      (1)
where k1, k2, k3, ..., kn are the distortion correction parameters to be optimized, (u, v) are the pixel coordinates of the distorted image, (u', v') are the pixel coordinates of the corrected image, and (u0, v0) are the principal point coordinates.
In general, truncating the polynomial model at the 5th order is sufficient to correct distorted wide-angle images for most purposes, so n is usually set to 5. Equation (1) can then be approximated by:

u' = u + (u - u0) * Σ_{i=1}^{5} k_i * r_d^(2i)
v' = v + (v - v0) * Σ_{i=1}^{5} k_i * r_d^(2i)      (2)
All parameters of this model are obtained with Zhang's method. Model (2) then gives the coordinate correspondence between the images before and after correction, completing the correction.
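As an illustration, a minimal NumPy sketch of the forward correction mapping of equation (2) is given below. It assumes the distortion coefficients k1..k5 and the principal point (u0, v0) have already been estimated (for example with Zhang's method); the function name and the numeric coefficients are illustrative, not taken from the patent.

```python
import numpy as np

def correct_point(u, v, u0, v0, k):
    """Map a distorted pixel (u, v) to its corrected position (u', v') with the
    forward polynomial model of equation (2) (radial distortion only, order 5).
    The units of r_d must match those used when the coefficients k were fitted."""
    r_d = np.hypot(u - u0, v - v0)                                   # distance to the principal point
    s = sum(ki * r_d ** (2 * (i + 1)) for i, ki in enumerate(k))     # sum_i k_i * r_d^(2i)
    return u + (u - u0) * s, v + (v - v0) * s

# Hypothetical coefficients, for illustration only.
k = [1.2e-7, -3.0e-13, 0.0, 0.0, 0.0]
print(correct_point(100.0, 80.0, 320.0, 240.0, k))
```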
Step 2: Extract SIFT features from the corrected images and perform feature matching. There is currently no unified, exact model for deciding whether two feature points match; associating feature points by the Euclidean distance between their descriptors is a fairly simple and effective approach. Given a pair of descriptors des_p, des_q, the distance between them is

d = sqrt( Σ_{i=0}^{127} (des_p(i) - des_q(i))^2 )
Because SIFT feature vectors are high-dimensional and some feature points are very similar, many non-matching pairs have distances close to those of true matching pairs and are hard to distinguish directly. If one assumes that, when a feature point has a true match among the candidate points, the distance to that match is significantly smaller than the distance to any non-matching point, then a feasible method is to compute the Euclidean distance from each feature point to all candidate points, find the smallest and the second-smallest distances, and take their ratio. When the ratio is below a set threshold, the point corresponding to the smallest distance is accepted as the match.
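A minimal OpenCV sketch of this SIFT extraction and ratio-test matching step is shown below; the image file names and the 0.7 threshold are assumptions for illustration, not values prescribed by the patent.

```python
import cv2

img1 = cv2.imread("corrected_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
img2 = cv2.imread("corrected_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# For each descriptor of image 1, take its two nearest neighbours in image 2
# and keep the match only if the nearest is clearly better than the second.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2:
        m, n = pair
        if m.distance < 0.7 * n.distance:    # ratio-test threshold (assumed)
            good.append(m)
```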
Step 3: Matching under the assumption of the previous step still produces a certain proportion of wrong matches, so the matching results must be screened before further processing. The RANSAC (Random Sample Consensus) algorithm effectively solves the problem of rejecting wrong matches in feature-based registration.
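Continuing the sketch above (kp1, kp2 and good come from the previous block), wrong matches can be rejected by estimating a projective transformation with RANSAC and keeping only the inliers; the 5.0-pixel reprojection threshold is an assumed value.

```python
import numpy as np

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC fits a homography and flags, per match, whether it is an inlier.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
```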
Step 4: With the matched feature point pairs obtained for the two images, average the horizontal coordinates of the matched points in each image separately and use the average as the x-coordinate of the boundary line; the field-of-view boundary line of the corrected images is thus generated from the matched feature points.
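A short continuation of the sketch, computing the boundary line position in each corrected image as the mean x-coordinate of the inlier matches (kp1, kp2, inliers follow from the blocks above):

```python
x_left  = float(np.mean([kp1[m.queryIdx].pt[0] for m in inliers]))   # boundary x in image 1
x_right = float(np.mean([kp2[m.trainIdx].pt[0] for m in inliers]))   # boundary x in image 2
# The boundary in each corrected image is the vertical line x = x_left (resp. x = x_right).
```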
Step 5: Using the coordinate correspondence between the images before and after correction obtained in Step 1, project the generated field-of-view boundary line back into the original wide-angle images by back-mapping.
Here, back-mapping is realized by building a pixel-position mapping table between the images before and after correction. The polynomial correction model above corrects the fisheye distortion in a relatively simple way, but it cannot guarantee that every pixel coordinate of the corrected image receives a pixel value. Missing positions in the corrected image are therefore filled by nearest-neighbour interpolation: using equation (1), each pixel P(x, y) of the distorted image is mapped to its coordinate (x', y') in the corrected image, and the distorted coordinate (x, y) is recorded at position (x', y') of the mapping table. For positions of the table that remain empty, the value of the nearest filled neighbour is taken, and when two neighbours are equally close the value to the left is used. Once all positions of the distorted image have been traversed, the mapping table is complete.
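The sketch below illustrates building such a corrected-to-distorted mapping table and then looking up the boundary line positions. It assumes the corrected and distorted images share the same nominal size, and it simply skips unfilled cells rather than applying the nearest-neighbour/left-preference filling rule described above, so it is a simplified stand-in for the patent's table construction.

```python
import numpy as np

def build_mapping_table(h, w, u0, v0, k):
    """For every distorted pixel (x, y), compute its corrected position (x', y')
    with equation (1)/(2) and record (x, y) at table[y', x']; -1 marks unfilled cells."""
    table = np.full((h, w, 2), -1, dtype=np.int32)
    ys, xs = np.mgrid[0:h, 0:w]
    r2 = (xs - u0) ** 2 + (ys - v0) ** 2
    s = sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))      # sum_i k_i * r_d^(2i)
    xc = np.rint(xs + (xs - u0) * s).astype(int)
    yc = np.rint(ys + (ys - v0) * s).astype(int)
    inside = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
    table[yc[inside], xc[inside]] = np.stack([xs[inside], ys[inside]], axis=-1)
    return table

def back_map_boundary(table, x_corrected):
    """Traverse the boundary column of the corrected image and look up where
    each point lies in the distorted image (skipping unfilled cells)."""
    return [tuple(table[y, x_corrected]) for y in range(table.shape[0])
            if table[y, x_corrected, 0] >= 0]
```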
The back-projection of the boundary line is then performed through the mapping table: the boundary line positions generated on the corrected image are traversed, and their corresponding positions in the distorted image are obtained by looking up the projection table. This completes the generation of the field-of-view boundary line on the adjacent distorted images, which can then be used for subsequent recognition, analysis, and processing.
Because the distortion introduced by a distorting camera prevents conventional image analysis methods from being applied directly, the method of the invention uses back-mapping: after the field-of-view boundary line has been generated on the corrected images, an accurate boundary line is obtained on the distorted images. This has significant application value for realizing target handoff between wide-angle cameras with overlapping regions.
Brief description of the drawings
Fig. 1 shows the distorted wide-angle image of the invention.
Fig. 2 shows the polynomial correction result of the invention.
Fig. 3 shows the feature point detection and matching result of the invention.
Fig. 4 shows the feature point matches after screening of the invention.
Fig. 5 shows the field-of-view boundary line generated on the corrected images of the invention.
Fig. 6 shows the field-of-view boundary line generated on the original images of the invention.
Embodiment
The specific implementation of the invention is described in detail below with reference to the accompanying drawings.
In this embodiment, the software environment is WINDOWS 7 and the simulation environment is MATLAB 2013a.
Step 1: Calibrate the wide-angle cameras. The wide-angle camera used has a field of view of 150° and a resolution of 640×480; the camera parameters obtained are as follows:
(1) principal point coordinates: u0 = 267.3437, v0 = 362.0111
(2) the distortion correction parameters ki are as follows:
k1 = 0.36; k2 = 0.38; k3 = 0.0016; k4 = 0.00000078; k5 = -1.3297×10^-10
Using relation (1), the correspondence between pixel coordinate positions before and after correction is obtained; image holes caused by the rescaling are filled with the nearest-neighbour method. The original image is shown in Fig. 1, and the corrected image in Fig. 2.
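For reference, a hedged sketch of obtaining camera parameters with Zhang's chessboard calibration via OpenCV is given below. Note that cv2.calibrateCamera estimates the standard Brown-Conrady coefficients (k1, k2, p1, p2, k3) rather than the forward polynomial of equation (1), so this only illustrates the calibration step, not the patent's exact model; the board size and file-name pattern are assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner chessboard corners (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):              # hypothetical calibration views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("principal point:", K[0, 2], K[1, 2])
print("distortion coefficients:", dist.ravel())
```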
Step 2: Extract SIFT features from the two corrected images. For each point in one image, compute the Euclidean distances to the feature descriptors of all points in the other image, find the smallest and the second-smallest distances, and take their ratio; when the ratio is below the set threshold, the point with the smallest distance is accepted as the match. All feature points are traversed to complete the feature matching. The matching result is shown in Fig. 3.
Step 3: The directly matched result clearly contains many wrong matches; outliers are removed with the RANSAC algorithm, whose procedure is as follows (a code sketch follows the list):
(1) Randomly select a group of samples from the data and assume they are all inliers.
(2) Fit a provisional model to the samples selected in (1).
(3) Test the remaining data against the fitted model; if enough points are classified as inliers of the hypothesis, the estimated model is considered reasonable.
(4) Re-estimate the model using all the hypothesized inliers, and evaluate it by the error between the inliers and the model; discard the model if too few points are classified as inliers, keep it if it is better than the current best model.
(5) Repeat the above steps until the number of model estimations reaches the set limit, or the probability that a point is classified as an inlier exceeds the specified probability.
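As referenced above, a minimal generic RANSAC sketch following steps (1)-(5) is given below. Purely for illustration it fits a 2D line to a point set; in the actual method the data model is a projective transformation matrix, for which the cv2.findHomography call shown earlier performs this loop internally.

```python
import numpy as np

def ransac_line(points, n_iter=1000, thresh=1.0, min_inliers=20):
    """Generic RANSAC loop: sample, fit a provisional model, classify inliers,
    re-fit on all inliers, and keep the best model (here a line y = a*x + b)."""
    rng = np.random.default_rng(0)
    best_model, best_count = None, 0
    for _ in range(n_iter):                                        # step (5): repeat
        i, j = rng.choice(len(points), size=2, replace=False)      # step (1): random sample
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)                                  # step (2): provisional model
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = points[residuals < thresh]                       # step (3): classify inliers
        if len(inliers) >= min_inliers and len(inliers) > best_count:
            a, b = np.polyfit(inliers[:, 0], inliers[:, 1], 1)     # step (4): re-fit on inliers
            best_model, best_count = (a, b), len(inliers)
    return best_model
```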
The matching result after removing wrong matches is shown in Fig. 4.
Step 4: Compute the average horizontal coordinate of the matched feature points from Step 3 and use it as the boundary line position; draw a vertical line to obtain the field-of-view boundary line of the corrected images, as shown in Fig. 5.
Step 5: Generate the mapping table with equation (1), and back-map the field-of-view boundary line generated in Step 4 to the original images by table lookup, producing the boundary line between the original images. Fig. 6 shows the generated field-of-view boundary line. It can be seen that, through distortion correction and back-mapping, the generated boundary line reflects the correspondence of the actual scene fairly accurately, and subsequent processing steps can follow.
In summary, the back-mapping-based field-of-view boundary line generation method realizes boundary line generation for wide-angle cameras with overlapping fields of view. Compared with generating the boundary line by direct feature extraction, the method detects more successful matching pairs, generates a more accurate field-of-view boundary line, and realizes target handoff.

Claims (2)

1. A field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view, characterized by the following steps:
Step 1: Using Zhang's calibration algorithm and a polynomial distortion correction model, compute the correction coefficients ki of the forward polynomial model considering radial distortion only, and apply distortion correction to the images;
Using the forward polynomial correction model, define r_u as the distance from the principal point to an arbitrary pixel P' in the ideal image; similarly, r_d is defined as the distance from the principal point to the corresponding pixel P in the distorted image; since the image center coordinates before and after distortion coincide, the coordinate relation between the images before and after correction, in pixel units, is:
u' = u + (u - u0) * Σ_{i=1}^{n} k_i * r_d^(2i)
v' = v + (v - v0) * Σ_{i=1}^{n} k_i * r_d^(2i)      (1)
where k1, k2, k3, ..., kn are the distortion correction parameters to be optimized, (u, v) are the pixel coordinates of the distorted image, (u', v') are the pixel coordinates of the corrected image, and (u0, v0) are the principal point coordinates;
Step 2: Extract SIFT features from the corrected images and match them by associating feature points according to the Euclidean distance between their descriptors;
Given a pair of descriptors des_p, des_q, the distance between them is
d = sqrt( Σ_{i=0}^{127} (des_p(i) - des_q(i))^2 )      (8)
Compute the Euclidean distance from each feature point to all candidate points, find the smallest and the second-smallest distances, and take their ratio; when the ratio is below a set threshold, the point corresponding to the smallest distance is accepted as the match;
Step 3: Using the RANSAC algorithm with a projective transformation matrix as the data model, remove the wrong matches from the results of Step 2;
Step 4: With the matched feature point pairs obtained for the two images, average the horizontal coordinates of the matched feature points in each image separately and use the average as the x-coordinate of the boundary line; the field-of-view boundary line of the corrected images is generated from the matched feature points;
Step 5: Generate a mapping table according to the pixel-coordinate correspondence between the images before and after correction obtained in Step 1, and back-map the boundary line obtained in Step 4 to the original images by table lookup, obtaining the final field-of-view boundary line.
2. The field-of-view boundary line generation method of claim 1, further characterized in that the value of n is 5.
CN201710150152.3A 2017-03-14 2017-03-14 Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view Active CN106971381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710150152.3A CN106971381B (en) 2017-03-14 2017-03-14 Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710150152.3A CN106971381B (en) 2017-03-14 2017-03-14 Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view

Publications (2)

Publication Number Publication Date
CN106971381A (en) 2017-07-21
CN106971381B CN106971381B (en) 2019-06-18

Family

ID=59329373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710150152.3A Active CN106971381B (en) 2017-03-14 2017-03-14 Field-of-view boundary line generation method for wide-angle cameras with overlapping fields of view

Country Status (1)

Country Link
CN (1) CN106971381B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140044325A1 (en) * 2012-08-09 2014-02-13 Hologic, Inc. System and method of overlaying images of different modalities
CN103997624A (en) * 2014-05-21 2014-08-20 江苏大学 Overlapped domain dual-camera target tracking system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Senem Velipasalar et al.: "Recovering field of view lines by using projective invariants", 2004 International Conference on Image Processing (ICIP) *
Yang Jun et al.: "Camera field-of-view boundary line recovery based on SIFT and projective transformation" (基于SIFT及射影变换的摄像机视野分界线恢复), 《图像图形技术研究与应用》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053385A (en) * 2018-01-24 2018-05-18 桂林电子科技大学 Real-time fisheye video correction system and method
CN110430400A (en) * 2019-08-12 2019-11-08 中国人民解放军火箭军工程大学 Ground plane area detection method of binocular movable camera
CN110430400B (en) * 2019-08-12 2020-04-24 中国人民解放军火箭军工程大学 Ground plane area detection method of binocular movable camera
CN116912517A (en) * 2023-06-06 2023-10-20 阿里巴巴(中国)有限公司 Method and device for detecting camera view field boundary
CN116912517B (en) * 2023-06-06 2024-04-02 阿里巴巴(中国)有限公司 Method and device for detecting camera view field boundary

Also Published As

Publication number Publication date
CN106971381B (en) 2019-06-18

Similar Documents

Publication Publication Date Title
Xue et al. Learning to calibrate straight lines for fisheye image rectification
Sochor et al. Traffic surveillance camera calibration by 3d model bounding box alignment for accurate vehicle speed measurement
CN107240124B (en) Cross-lens multi-target tracking method and device based on space-time constraint
Boltes et al. Automatic extraction of pedestrian trajectories from video recordings
CN112883819A (en) Multi-target tracking method, device, system and computer readable storage medium
CN102313536B (en) Method for barrier perception based on airborne binocular vision
Chang et al. Tracking Multiple People Under Occlusion Using Multiple Cameras.
CN110992263B (en) Image stitching method and system
TWI639136B (en) Real-time video stitching method
Tang et al. ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans
CN111723801B (en) Method and system for detecting and correcting target in fisheye camera picture
CN110189375B (en) Image target identification method based on monocular vision measurement
CN101860729A (en) Target tracking method for omnidirectional vision
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
US8428313B2 (en) Object image correction apparatus and method for object identification
CN112287867B (en) Multi-camera human body action recognition method and device
CN105809626A (en) Self-adaption light compensation video image splicing method
CN111160291B (en) Human eye detection method based on depth information and CNN
CN104376575A (en) Pedestrian counting method and device based on monitoring of multiple cameras
CN107248174A (en) A kind of method for tracking target based on TLD algorithms
GB2430736A (en) Image processing
CN113255608B (en) Multi-camera face recognition positioning method based on CNN classification
CN106971381A (en) A kind of wide angle camera visual field line of demarcation generation method with the overlapping ken
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN107492080A (en) Exempt from calibration easily monocular lens image radial distortion antidote

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant