CN106504287B - Monocular vision object space positioning system based on template - Google Patents
- Publication number
- CN106504287B (application CN201610910942.2A)
- Authority
- CN
- China
- Prior art keywords
- central point
- vertex
- template
- image
- calibrating template
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
A template-based monocular vision object space positioning system, belonging to the field of space positioning technology, solves the problems of target positioning failure and low positioning accuracy. Its technical features include: a calibrating template, used to calibrate the camera and to detect the two-dimensional position of the template in image space; a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibrating template to a three-dimensional coordinate system; and a derivation module, which derives the distance and azimuth of the template relative to the camera in three-dimensional space from the projection of the calibrating template on the image plane, thereby positioning the monocular vision target. Effect: the template substitutes for the moving target, and detection, orientation and ranging are performed on the template; computational complexity is low, and moving targets can be positioned quickly and effectively.
Description
Technical field
The invention belongs to the field of space positioning technology, and is specifically a template-based monocular vision object space positioning system.
Technical background
Accurate positioning of targets plays a very important role in target recognition and in image understanding and analysis, and locating targets against complex backgrounds has important applications in fields such as military affairs, industrial monitoring, and traffic control and management. In research on target positioning, Jorge Lobo et al. proposed a three-dimensional reconstruction method combining inertial information and vision, using an inertial sensor together with binocular vision to recover the three-dimensional parameters of the ground plane and of line segments normal to it. Hao Yingming et al. of the Shenyang Institute of Automation, Chinese Academy of Sciences, preset artificial markers on the target, obtained three-dimensional information of the environment through binocular stereo vision, and computed in real time the positional relationship of a mobile robot relative to the markers. Shi et al. studied dual-view three-dimensional localization of moving targets in video surveillance, extracting feature points with the SURF algorithm and introducing GPS in outdoor environments to establish a unified coordinate system for target positioning.
Monocular vision is a space positioning technique that uses a single camera. New methods continue to emerge, and cross-fertilization with multiple disciplines further drives the development of monocular vision target positioning. Existing target positioning methods mainly build a target localization model from visual features and determine the target's spatial position from its projection. Visual features such as color, texture, edges and optical flow are easy to extract but are susceptible to environmental influences; their poor stability causes positioning failure and low positioning accuracy. Although wavelet features, local features and feature bases can improve positioning accuracy, their extraction algorithms are computationally expensive, which is unfavorable for real-time target positioning. Positioning algorithms that are fast, accurate and robust are therefore the current research focus of monocular vision target positioning.
Summary of the invention
The invention aims to solve the problem that, when existing target positioning methods determine the target's spatial position from its projection, visual features such as color, texture, edges and optical flow are easy to extract but susceptible to environmental influences and poorly stable, causing positioning failure and low positioning accuracy.
The technical solution of the present invention is as follows:
A monocular vision target positioning system based on a calibrating template, comprising:
a calibrating template, used to calibrate the camera and to detect the two-dimensional position of the template in image space;
a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibrating template to a three-dimensional coordinate system; and
a derivation module, which derives the distance and azimuth of the template relative to the camera in three-dimensional space from its projection on the image plane, thereby positioning the monocular vision target.
Further, the calibrating template uses diagonally placed black squares as its basic form and has a central point and vertices; the completeness of the central point and vertices is defined, and the vertices and central point are encoded.
Further, the derivation module comprises:
a parameter determination module, which uses the calibrating template to calibrate the camera and determines the initial calibration parameters;
an image capture module, which binds the calibrating template to the object to be measured and performs image acquisition; and
a computing module, which:
searches the acquired image for corner points, extracts the complete and half-complete central point and vertices, and computes the Euclidean pixel distance of one central point and vertex pair in the acquired image;
determines the center of the image acquired by monocular vision;
computes, from the central point and vertex Euclidean pixel distance of the acquired image's calibrating template and the initial calibration parameters, the Euclidean space distance between the acquired image center and the camera center;
computes the Euclidean pixel distance between the acquired image center and the calibrating template central point; and
from the above Euclidean space distance between the acquired image center and the camera center and the Euclidean pixel distance between the acquired image center and the calibrating template central point, computes the distance and azimuth of the calibrating template center relative to the camera center.
Further, in the parameter determination module:
The central point O of the calibrating template is placed on the central axis of the camera lens group, parallel to the lens plane.
The calibrating template is translated horizontally from near to far while being photographed continuously, yielding a calibration image at each distance. In each calibration image all corner points are detected; the complete central point, half-complete central point, complete vertices and half-complete vertices are extracted; and the corresponding central point and vertices are matched according to the vertex encoding, giving the central-point and vertex coordinates.
With central point O(x_0, y_0) and vertices B(x_B, y_B), E(x_E, y_E), the Euclidean pixel distance d_BO between B and O and the Euclidean pixel distance d_EO between E and O in the calibration image are

d_BO = √((x_B − x_0)² + (y_B − y_0)²), d_EO = √((x_E − x_0)² + (y_E − y_0)²) (1), (2)

d_l denotes the Euclidean pixel distance between a black-square vertex and the central point:

d_l = E(d_BO, d_EO) (3)

The initial calibration parameter β_i is determined as

β_i = D_KO / d_l (4)

where D_KO is the Euclidean space distance between the calibration image central point O and the camera center point K.
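Equations (1)-(4) can be sketched as a single calibration routine; taking E(·,·) as the arithmetic mean of the two pixel distances is an assumption of this sketch:

```python
import math

def calibration_parameter(o_px, b_px, e_px, dist_ko):
    """Sketch of eqs. (1)-(4): from the detected central point O and vertices
    B, E of one calibration image taken at a known camera distance D_KO,
    derive the initial calibration parameter beta_i."""
    d_bo = math.dist(b_px, o_px)      # eq. (1): pixel distance B-O
    d_eo = math.dist(e_px, o_px)      # eq. (2): pixel distance E-O
    d_l = (d_bo + d_eo) / 2.0         # eq. (3): d_l = E(d_BO, d_EO), E taken as mean
    return dist_ko / d_l              # eq. (4): beta_i = D_KO / d_l
```

One β_i is produced per calibration distance; which β_i is applied at measurement time is selected by the calibration procedure of the second step.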
Further, in the computing module:
The central point of the acquired image is denoted P(x, y). From the Euclidean pixel distance d_l between the central point and a vertex of the calibrating template in the acquired image, and the initial calibration parameter β_i, the Euclidean space distance D_KP between the acquired image center P and the camera center point K is computed:

D_KP = β_i × d_l (5)

The Euclidean pixel distance between the calibrating template central point O(x_0, y_0) and the acquired image center P(x, y) is d_OP:

d_OP = √((x − x_0)² + (y − y_0)²) (6)

where (x_0, y_0) are the coordinates of template central point O in the acquired image.
The Euclidean space distance D and azimuth α between the calibrating template central point and the camera center are then

D = √(D_KP² + (β_i × d_OP)²), α = arctan(β_i × d_OP / D_KP) (7)
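The closing step of the computing module — combining the depth D_KP with the pixel offset d_OP into a distance and azimuth — can be sketched as below; treating β_i × d_OP as the lateral spatial offset of the template centre is an assumption:

```python
import math

def template_pose(o_px, p_px, d_l, beta_i):
    """Sketch of the computing module: distance D and azimuth alpha of the
    template centre O relative to the camera centre K, from the centre-vertex
    pixel distance d_l and calibration parameter beta_i."""
    d_kp = beta_i * d_l                  # eq. (5): depth along the optical axis
    d_op = math.dist(o_px, p_px)         # eq. (6): pixel offset of O from image centre P
    lateral = beta_i * d_op              # assumed spatial offset of O (not in the text)
    dist = math.hypot(d_kp, lateral)     # straight-line distance K-O
    alpha = math.degrees(math.atan2(lateral, d_kp))  # azimuth off the optical axis
    return dist, alpha
```

When O coincides with the image centre the azimuth is 0°, matching the calibration setup in which the template centre lies on the lens axis.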
Further, the completeness of the central point and vertices of the calibrating template is defined as follows:
According to the template form, a four-quadrant partition is defined around each central point or vertex, the four subregions being denoted I_1, I_2, I_3, I_4. For any central point or vertex, n pixels are chosen at equal intervals along any direction i and averaged, giving the pixel mean of that point in subregion I_1; the pixel means of the other three subregions are found in the same way. Completeness of a central point or vertex is then defined:
a complete central point or vertex: the two diagonal subregion pairs each agree within TH_1, and the black and white subregions differ by more than TH_2;
a half-complete central point or vertex: the black/white contrast exceeds TH_2 but full diagonal agreement does not hold;
an incomplete central point or vertex: neither condition holds, and the point is not used;
where TH_1 measures the similarity of the diagonal black or diagonal white subregion pixels in the image, and TH_2 the difference between the black and white subregion pixels in the image.
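Under one reading of the thresholds TH_1 and TH_2 (the exact inequalities are not reproduced in the text, so the comparisons below are assumptions), the completeness classification can be sketched as:

```python
def completeness(m1, m2, m3, m4, th1=30, th2=80):
    """Classify a candidate point from its four subregion pixel means m1..m4
    (quadrants I1..I4).  th1 bounds how alike each diagonal pair must be,
    th2 sets the minimum black/white contrast between neighbouring quadrants;
    both values here are illustrative, not the patent's."""
    diag_ok = abs(m1 - m3) < th1 and abs(m2 - m4) < th1
    contrast = abs((m1 + m3) / 2 - (m2 + m4) / 2)
    if diag_ok and contrast > th2:
        return "complete"
    if contrast > th2:           # one diagonal pair disturbed, contrast kept
        return "half-complete"
    return "incomplete"
```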
Further, the specific steps of encoding the vertices and central point of the calibrating template are:
the pixel means on the four subregions of the calibrating template central point O are recorded, and the code of each of the four subregions is determined from its pixel mean, black being coded 0 and white being coded 1;
the pixel mean of each subregion of every vertex is determined, and each vertex is encoded in the same way, black coded 0 and white coded 1;
the color code of each point is thus determined from the pixel means of the four subregions of the central point and vertices.
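The four-subregion binary coding can be sketched as follows; the subregion ordering and the mid-gray binarisation threshold are assumptions:

```python
def encode_point(means, threshold=128):
    """Map the four subregion pixel means of a centre or vertex to a 4-bit
    code string: 0 for black (mean below threshold), 1 for white."""
    bits = [0 if m < threshold else 1 for m in means]
    return "".join(str(b) for b in bits)
```

Matching a detected point against the code dictionary then reduces to a string comparison against the expected codes of O, B, C, E and F.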
Beneficial effects:
Existing target positioning systems need additional auxiliary sensors or ranging equipment, require coordinate-system transformation or remodeling, and have high computational complexity; they cannot provide accurate target parameter information in real time for subsequent processing, causing computations such as target tracking, target recognition and sound-source positioning to fail. Meanwhile, testing cost is excessive, and multi-sensor information fusion can reduce accuracy and places high demands on optimization algorithms, which is unfavorable for small devices requiring real-time operation and low power consumption.
The present invention addresses the problems in existing object space positioning technology that camera calibration methods are complicated and targets cannot be positioned accurately in real time. It provides a monocular vision object space positioning system based on a calibrating template (hereinafter the template, i.e. the calibrating template), which derives the distance and azimuth of the template relative to the camera in three-dimensional space from its projection on the image plane. When the template is attached close to the front of a moving target, the measured moving target can be positioned in the same way, assisting subsequent research on video moving-target tracking, moving-target positioning, microphone sound-source array positioning, and pedestrian target positioning. Using only a monocular camera, the system can perform the initial positioning of a moving target at the acquisition front end, saving the coordinate-system transformation steps brought by auxiliary positioning means such as laser rangefinders and infrared rangefinders, reducing test equipment cost, and improving real-time positioning measurement efficiency.
The system of the present invention substitutes the template for the moving target to perform auxiliary orientation and ranging. It calibrates the camera with the template and computes the initial parameters, detects the position of the template center in the image, i.e. the template's two-dimensional position in image space, and, from the initial calibration parameters, maps the template's two-dimensional image-space coordinate system to a three-dimensional coordinate system, obtaining the target's spatial position (including direction angle and distance). Determining the target's spatial position in this way overcomes the shortcoming that moving targets cannot be accurately identified in the image plane under overlap and partial occlusion, especially in practical applications such as pedestrian detection and sound-source array orientation; as a beneficial means of obtaining initial positioning parameters, it highlights the engineering practical value of the method.
Description of the drawings
Fig. 1 is a schematic diagram of the type I and type II calibrating templates;
Fig. 2 is a schematic diagram of equidistant sampling along an arbitrary direction;
Fig. 3 is a schematic diagram of central-point completeness (n = 1);
Fig. 4 is a schematic diagram of the encoding of central point O;
Fig. 5 is a schematic diagram of the encoding of vertex E;
Fig. 6 is a schematic diagram of the camera calibration method;
Fig. 7 is a schematic diagram of template positioning;
Fig. 8 shows the camera calibration process;
Fig. 9 is a schematic diagram of detection results at a target distance of 3 m;
Fig. 10 is a schematic diagram of the positioning test of example 2;
Fig. 11 is a schematic diagram of the positioning test of example 3;
Fig. 12 is a schematic diagram of the positioning test of example 4;
Fig. 13 is a schematic diagram of the positioning test of example 5;
Fig. 14 is an explanatory diagram of the monocular vision target positioning process.
Specific embodiment
The present invention is explained in further detail below with specific embodiments and with reference to the accompanying drawings.
A monocular vision target positioning method based on a calibrating template.
First step: define the calibrating template and the associated quantities, as follows.
(1) Defining the calibrating template
As shown in Fig. 1, the calibrating template takes the two forms labeled type I and type II in Fig. 1, using diagonally placed black squares (the two-square feature) as the basic form, which is defined as the camera calibration template style. The type I and type II templates are used independently and are printed in black and white on A4 paper (international standard size 210 mm × 297 mm), centered both horizontally and vertically, the side length of each black square being l (unit: millimeters).
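The basic two-square form can be rendered programmatically; the following is a minimal ASCII sketch, assuming (arbitrarily) that the black squares sit corner-to-corner on the main diagonal, with '#' for black and '.' for white:

```python
def render_template(l):
    """Render a 2l x 2l two-square template: two l x l black squares meeting
    at the central point, the other two quadrants white.  Which diagonal
    carries the black squares (type I vs type II) is chosen arbitrarily."""
    return ["".join("#" if (x < l) == (y < l) else "." for x in range(2 * l))
            for y in range(2 * l)]
```

The shared corner of the two black squares is the central point O; the outer black-square corners correspond to the template vertices.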
(2) completeness on central point and vertex is defined
In Fig. 1, the O point of I type and II type calibrating template is defined as the central point of calibrating template, rear abbreviation central point;A,B,
C, D, E and F point are defined as the vertex of calibrating template, rear abbreviation vertex.
According to template form, the four-quadrant subregion of central point is defined, as shown in Fig. 2, four subregions are denoted as I respectively1、I2、
I3、I4, to the n pixel that any central point is equidistantly chosen on any direction i, mean value isCorresponding central point is in I1
The pixel mean value of subregionAre as follows:
Similarly find out the pixel mean value of four subregions of central pointCentral point completeness is defined as follows:
Complete central point:
Half complete central point:
Incomplete central point:It is not present;
Wherein, TH1Indicate diagonal black square or the similarity degree of diagonal white area pixel, TH in image2Indicate figure
The difference degree of black and white area pixel as in.The definition of vertex completeness and central point are similarly.
In the present embodiment, the number of sampled points n is obtained with a downward-rounding (floor) operation. In Fig. 2, the black and white marks indicate the sampling positions; as one embodiment, points are taken along the 45°, 135°, 225° and 315° directions, with point count n = 4. The pixel means of the four subregions are computed with the above method to determine the completeness of each point.
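The equidistant sampling behind each subregion mean can be sketched as follows; the direction-to-quadrant assignment and the rounding to integer pixel coordinates are assumptions, and `img` is a plain row-major list of gray values:

```python
import math

def subregion_mean(img, cx, cy, angle_deg, n):
    """Average n pixels taken at unit steps from (cx, cy) along direction
    angle_deg; with image y pointing down, the 45/135/225/315 degree
    directions point into the four quadrants around the candidate point."""
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    vals = []
    for k in range(1, n + 1):
        x = int(round(cx + k * dx))
        y = int(round(cy + k * dy))
        vals.append(img[y][x])
    return sum(vals) / n
```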
(3) Encoding the vertices and central point
In the present embodiment, the type I template is used for illustration. The pixel means on the four subregions of the calibrating template central point O are recorded, and the code of each of the four subregions is determined from its pixel mean, black being coded 0 and white being coded 1.
The pixel mean of each subregion of every vertex is determined, and each vertex is encoded in the same way, black coded 0 and white coded 1.
The color code of each point is thus determined from the pixel means of the four subregions of the central point and vertices. The central-point and vertex encodings are illustrated in Fig. 4 and Fig. 5, and the code dictionary is shown in Table 1.
Second step: calibrate the camera with the template and determine the initial calibration parameter β_i.
The central point O of the calibrating template is placed on the central axis of the camera lens group, parallel to the lens plane (or focal plane); the positional relationship is shown in Fig. 6.
The calibrating template is translated horizontally from near to far while being photographed continuously, yielding a calibration image at each distance. In each calibration image all corner points are detected using the Harris corner detection algorithm; the complete central point, half-complete central point, complete vertices and half-complete vertices are extracted; and the corresponding central point and vertices are matched according to the vertex encoding, giving the central-point and vertex coordinates. Taking central point O(x_0, y_0) and vertices B(x_B, y_B) and E(x_E, y_E) as an example, it is known that

d_BO = √((x_B − x_0)² + (y_B − y_0)²), d_EO = √((x_E − x_0)² + (y_E − y_0)²)

where d_BO is the Euclidean pixel distance between B and O in the image and d_EO that between E and O. With d_l denoting the Euclidean pixel distance between a black-square vertex and the central point,

d_l = E(d_BO, d_EO)

With D_KO the Euclidean space distance between the image central point (here coinciding with template central point O) and the camera lens central point K, the initial calibration parameter β_i is thereby determined:

β_i = D_KO / d_l
Third step: the calibrating template is fixed to the object to be measured, so that the calibrating template position stands in for the position of the object, and image acquisition and video capture are carried out, as shown in Fig. 7. Specifically, the template is fixed at the center of the moving target, video images are acquired by the camera, and the video frames are converted into a sequence of grayscale images.
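The grayscale conversion of a video frame can be sketched as below; the patent does not specify a conversion formula, so the common ITU-R BT.601 luma weights are assumed, with a frame represented as rows of (r, g, b) tuples:

```python
def to_grayscale(frame):
    """Convert one RGB frame (rows of (r, g, b) tuples, values 0-255) to a
    grayscale image using BT.601 luma weights: Y = 0.299 R + 0.587 G + 0.114 B."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in frame]
```

In practice a camera library would supply the frames; only the per-pixel conversion is shown here.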
Fourth step: corner search and computation are performed on each grayscale image frame using the Harris corner detection algorithm.
Fifth step: the completeness of each corner point is checked, and the complete and half-complete central point and vertices are extracted.
Sixth step: table lookup and matching are performed according to the central-point and vertex code table; the coordinates of central point O and vertices B, C, E, F are determined, and one vertex and the central point are taken to compute their Euclidean pixel distance, denoted d_l.
Seventh step: the image center P(x, y) is determined; the Euclidean pixel distance d_OP between template central point O and image center P is computed, and the Euclidean space distance D_KP between the image center and the camera center is determined from the initial parameter β_i and d_l.
Eighth step: from d_OP and D_KP, the distance D and azimuth α between the template center and the camera are computed.
As a result, the specific method of steps 4 to 8 can also be stated as follows:
Using the corner detection method, all corner points in the input image (acquired image) or video frame are detected; according to corner completeness, the central point and vertices are checked. According to the code table, the central point and each vertex are matched, giving the central-point and vertex coordinates, and the Euclidean pixel distance d_l between the template central point and a vertex in the image is computed. The input image central point is denoted P(x, y), and the Euclidean pixel distance between template central point O and image center P is d_OP. From the initial calibration parameter β_i and the Euclidean pixel distance d_l between central point and vertex, the Euclidean space distance between image center P and the camera center is determined as

D_KP = β_i × d_l

The template center is located and the target is oriented and ranged: from D_KP and d_OP, the Euclidean space distance D and azimuth α between the template central point and the camera are determined.
The present embodiment solves the problem that initialization positioning algorithms in existing target positioning technology are complicated, often requiring auxiliary positioning means such as laser rangefinders and infrared rangefinders, with high testing cost and poor real-time performance. It provides a template-based monocular vision target positioning method: with a monocular camera, the template substitutes for the moving target in auxiliary orientation and ranging. Algorithm complexity is low, test equipment cost is reduced, and the real-time performance and accuracy of target positioning are greatly strengthened, providing the necessary initialization parameters and real-time calibration guarantees for practical applications such as subsequent target tracking, target detection, and sound-source array initial positioning.
By adopting the above technical solution, the template-based monocular vision target positioning method provided in this embodiment has the following beneficial effects compared with the prior art:
Existing target positioning methods need additional auxiliary sensors or ranging equipment, require coordinate-system transformation or remodeling, and have high computational complexity; they cannot provide accurate target parameter information in real time for subsequent processing, causing computations such as target tracking, target recognition and sound-source positioning to fail. Meanwhile, testing cost is excessive, and multi-sensor information fusion can reduce accuracy and places high demands on optimization algorithms, which is unfavorable for small devices requiring real-time operation and low power consumption.
The present embodiment substitutes the template for the moving target and performs detection, orientation and ranging on the template. The camera is calibrated with the template and the initial calibration parameters are computed; the template center is detected using corner detection and corner completeness; and, from the initial parameters, the two-dimensional image-space template position is mapped to a three-dimensional spatial position, giving the distance and direction angle of the moving target from the camera. The method can be applied efficiently in indoor and outdoor scenes and is suitable for positioning single or multiple moving targets. It solves the problem that feature extraction and detection are difficult for moving targets affected by illumination, occlusion and other environmental factors; computational complexity is low, moving targets can be positioned quickly and effectively, and the determined spatial positions of moving targets can be applied effectively in subsequent processing such as target tracking, pedestrian detection and sound-source array orientation, giving the method definite engineering use value.
As another embodiment, corresponding to the method in the above embodiment, the present embodiment proposes a monocular vision target positioning system based on a calibrating template, comprising:
a calibrating template, used to calibrate the camera and to detect the two-dimensional position of the template in image space;
a coordinate mapping module, which maps the two-dimensional image-space coordinate system of the calibrating template to a three-dimensional coordinate system; and
a derivation module, which derives the distance and azimuth of the template relative to the camera in three-dimensional space from its projection on the image plane, thereby positioning the monocular vision target.
Further, the calibrating template uses diagonally placed black squares as its basic form and has a central point and vertices; the completeness of the central point and vertices is defined, and the vertices and central point are encoded.
Further, the derivation module comprises:
a parameter determination module, which uses the calibrating template to calibrate the camera and determines the initial calibration parameters;
an image capture module, which binds the calibrating template to the object to be measured and performs image acquisition; and
a computing module, which:
searches the acquired image for corner points, extracts the complete and half-complete central point and vertices, and computes the Euclidean pixel distance of one central point and vertex pair in the acquired image;
determines the center of the image acquired by monocular vision;
computes, from the central point and vertex Euclidean pixel distance of the acquired image's calibrating template and the initial calibration parameters, the Euclidean space distance between the acquired image center and the camera center;
computes the Euclidean pixel distance between the acquired image center and the calibrating template central point; and
from the above Euclidean space distance between the acquired image center and the camera center and the Euclidean pixel distance between the acquired image center and the calibrating template central point, computes the distance and azimuth of the calibrating template center relative to the camera center.
Further, in the parameter determination module:
The central point O of the calibrating template is placed on the central axis of the camera lens group, parallel to the lens plane.
The calibrating template is translated horizontally from near to far while being photographed continuously, yielding a calibration image at each distance. In each calibration image all corner points are detected; the complete central point, half-complete central point, complete vertices and half-complete vertices are extracted; and the corresponding central point and vertices are matched according to the vertex encoding, giving the central-point and vertex coordinates.
With central point O(x_0, y_0) and vertices B(x_B, y_B), E(x_E, y_E), the Euclidean pixel distance d_BO between B and O and the Euclidean pixel distance d_EO between E and O in the calibration image are

d_BO = √((x_B − x_0)² + (y_B − y_0)²), d_EO = √((x_E − x_0)² + (y_E − y_0)²) (1), (2)

d_l denotes the Euclidean pixel distance between a black-square vertex and the central point:

d_l = E(d_BO, d_EO) (3)

The initial calibration parameter β_i is determined as

β_i = D_KO / d_l (4)

where D_KO is the Euclidean space distance between the calibration image central point O and the camera center point K.
Further, in the computing module:
The central point of the acquired image is denoted P(x, y). From the Euclidean pixel distance d_l between the central point and a vertex of the calibrating template in the acquired image, and the initial calibration parameter β_i, the Euclidean space distance D_KP between the acquired image center P and the camera center point K is computed:

D_KP = β_i × d_l (5)

The Euclidean pixel distance between the calibrating template central point O(x_0, y_0) and the acquired image center P(x, y) is d_OP:

d_OP = √((x − x_0)² + (y − y_0)²) (6)

where (x_0, y_0) are the coordinates of template central point O in the acquired image.
The Euclidean space distance D and azimuth α between the calibrating template central point and the camera center are then

D = √(D_KP² + (β_i × d_OP)²), α = arctan(β_i × d_OP / D_KP) (7)
Further, the completeness of the central point and vertices of the calibrating template is defined as follows:
According to the template form, a four-quadrant partition is defined around each central point or vertex, the four subregions being denoted I_1, I_2, I_3, I_4. For any central point or vertex, n pixels are chosen at equal intervals along any direction i and averaged, giving the pixel mean of that point in subregion I_1; the pixel means of the other three subregions are found in the same way. Completeness of a central point or vertex is then defined:
a complete central point or vertex: the two diagonal subregion pairs each agree within TH_1, and the black and white subregions differ by more than TH_2;
a half-complete central point or vertex: the black/white contrast exceeds TH_2 but full diagonal agreement does not hold;
an incomplete central point or vertex: neither condition holds, and the point is not used;
where TH_1 measures the similarity of the diagonal black or diagonal white subregion pixels in the image, and TH_2 the difference between the black and white subregion pixels in the image.
Further, the specific steps of encoding the vertices and central point of the calibrating template are:
the pixel means on the four subregions of the calibrating template central point O are recorded, and the code of each of the four subregions is determined from its pixel mean, black being coded 0 and white being coded 1;
the pixel mean of each subregion of every vertex is determined, and each vertex is encoded in the same way, black coded 0 and white coded 1;
the color code of each point is thus determined from the pixel means of the four subregions of the central point and vertices.
Embodiment 1: camera calibration and determination of the initial calibration parameters
The present embodiment is the camera calibration process: the initial calibration parameters are determined and the proposed method is tested. In an indoor or outdoor environment, the camera is installed and fixed on top of a robot or tripod and shoots horizontally; a cross-grid camera is used here, and camera images are obtained automatically. Within the field of view of the lens, the moving target holds the template and translates continuously between 0.5 m and 5 m; the camera takes a picture at a target distance of 0.5 m and again every time the distance increases by 0.5 m. During shooting, the template faces the camera as squarely as possible, and the template central point is kept coincident with the center of the camera picture.
Example parameters: picture format PNG, picture size 1920 × 1080, number of calibration images 10.
The calibration process of this example is shown in Fig. 8, and the detection results at a target distance of 3 m from the camera are shown in Fig. 9. The computed initial calibration parameters are shown in Table 2. The measured distance and direction angle are computed using the initial parameters β_i and compared with the actual distance and actual angle to obtain the distance error and angle error; since the template center always coincides with the camera center during calibration, the actual angle is 0°. The calibration errors are shown in Table 3: the distance error is within 15 mm and the angle error within 1.2°, meeting the allowable error indices.
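The error evaluation used in this and the following tests — comparing measured (distance, angle) pairs against ground truth — can be sketched as:

```python
def calibration_errors(measured, actual):
    """Per-sample absolute distance error (mm) and angle error (degrees)
    between measured and ground-truth (distance, angle) pairs.  The 15 mm
    and 1.2 degree figures reported in the text are observed maxima, not
    thresholds enforced here."""
    return [(abs(md - ad), abs(ma - aa))
            for (md, ma), (ad, aa) in zip(measured, actual)]
```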
Embodiment 2: performance test of the template-based monocular vision target positioning method
Building on embodiment 1, the target is positioned with the monocular camera using the template according to the initial calibration parameters. In an indoor environment, the camera is installed and fixed on top of a robot or tripod and shoots horizontally; a cross-grid camera is used, and camera images are obtained automatically. Within the field of view of the lens, the moving target holds the template and moves according to set distances and directions; the actual target distance and actual angle are determined with a ground calibration scale, the captured images are processed with the present method, and the distance error and angle error between measured and actual values are computed.
Example parameters: picture format PNG, picture size 1920 × 1080, number of calibration images 8. The positioning process of this example is shown in Fig. 10. The measured distance and direction angle are computed using the initial parameters β_i and compared with the actual distance and actual angle to obtain the distance error and angle error; the positioning results and method performance test results are shown in Table 4. The distance error is within 15 mm and the angle error within 1.2°, meeting the allowable error indices.
Embodiment 3: corridor environment, single moving target
This embodiment applies the invention to positioning a single moving target in a corridor with the camera shooting from a static position. Under this condition, the camera is mounted and fixed on top of a robot or tripod, shooting horizontally. Within the lens's field of view, one human target holds the template, keeping it facing the camera as much as possible, and approaches the camera from far to near at a speed of 0.6 m/s within the field of view. Based on the camera calibration process of Embodiment 1, the moving target in the video is positioned.
Embodiment parameters: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Figure 11. Frames 10, 30, 50, 70, 90, 110, 130 and 150 are taken as examples to illustrate the moving target's motion and the detection results; the target positioning results are listed in Table 5.
Embodiment 4: outdoor environment, single moving target
This embodiment applies the invention to positioning a single moving target in an outdoor square with the camera shooting from a static position. Under this condition, the camera is mounted and fixed on top of a robot or tripod, shooting horizontally. Within the lens's field of view, a human target holds the template, keeping it facing the camera as much as possible, and moves relative to the camera from far to near at a speed of 0.6 m/s within the field of view. Based on the camera calibration process of Embodiment 1, the moving target in the video is positioned.
Embodiment parameters: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Figure 12. Frames 10, 30, 50, 70, 90, 110, 130 and 150 are taken as examples to illustrate the moving target's motion and the detection results. The target positioning results are listed in Table 6.
Embodiment 5: outdoor environment, two moving targets
This embodiment applies the invention to positioning two moving targets in an outdoor square with the camera shooting from a static position. Under this condition, the camera is mounted and fixed on top of a robot or tripod, shooting horizontally. Within the lens's field of view, target 1 holds template I and target 2 holds template II, each keeping the template facing the camera as much as possible, and each walks toward the camera from far to near at a speed of 0.6 m/s within the field of view. Based on the camera calibration process of Embodiment 1, the moving targets in the video are positioned.
Embodiment parameters: video format MP4, 160 video frames, video image size 1920 × 1080.
The positioning process of this example is shown in Figure 12. Frames 10, 30, 50, 70, 90, 110, 130 and 150 are taken as examples to illustrate the moving targets' motion and the detection results; the target positioning results are listed in Table 7.
Appended tables:
Table 1: template I center point-vertex coding table;
Table 2: initial calibration parameters;
Table 3: initial calibration positioning test;
Table 4: Example 2 positioning results;
Table 5: Example 3 positioning results;
Table 6: Example 4 positioning results;
Table 7: Example 5 positioning results.
Table 1: template I center point-vertex coding table
Table 2: initial calibration parameters
Table 3: initial calibration positioning test
Table 4: Example 2 positioning results
Table 5: Example 3 positioning results
Table 6: Example 4 positioning results
Table 7: Example 5 positioning results
The above is only a preferred specific embodiment of the invention, but the protection scope of the invention is not limited thereto. Any equivalent substitution or modification of the invention's technical solution and inventive concept made, within the technical scope disclosed by the invention, by a person skilled in the art shall be covered by the protection scope of the invention.
Claims (5)
1. A monocular vision target positioning system based on a calibration template, characterized by comprising:
a calibration template, used to calibrate the camera, whose two-dimensional position in image space is detected;
a coordinate mapping module, which maps the two-dimensional coordinate system of the calibration template's image space to a three-dimensional coordinate system, and a derivation module, which derives the distance and azimuth of the monocular vision target relative to the camera in three-dimensional space from the projection of the calibration template in the image plane;
the calibration template uses a square with diagonal black patches as its basic form and has center points and vertices; the completeness of the center points and vertices is defined, and the vertices and center points are encoded;
the steps defining the completeness of the center points and vertices of the calibration template are:
according to the template form, the four quadrant regions of a center point or vertex are defined and denoted I1, I2, I3 and I4 respectively; for any center point or vertex, n pixels are sampled equidistantly along any direction i, and the pixel mean of the corresponding center point or vertex over region I1 is:
similarly, the pixel means of all four regions of the center point or vertex are obtained; the completeness of a center point or vertex is defined as follows:
a complete center point or vertex:
a half-complete center point or vertex:
an incomplete center point or vertex: does not exist;
where TH1 indicates the degree of similarity between the pixels of the diagonal black or diagonal white regions in the image, and TH2 indicates the degree of difference between the black and white region pixels in the image.
2. The monocular vision target positioning system based on a calibration template as described in claim 1, wherein the derivation module comprises:
a parameter determination module, which calibrates the camera with the calibration template and determines the initial calibration parameters;
an image capture module, which binds the calibration template to the object to be measured and captures images;
a computing module, which:
searches the captured image for corner points, extracts the complete and half-complete center points and vertices, and computes the Euclidean pixel distance between one pair of center point and vertex in the captured image;
determines the center point of the monocular-vision captured image;
computes the Euclidean pixel distance between the captured image's center point and the camera center according to the calibration template's center point-vertex Euclidean pixel distance and the initial calibration parameters;
computes the Euclidean pixel distance between the captured image's center point and the calibration template's center point;
and, from the above Euclidean pixel distance between the captured image center point and the camera center and the Euclidean pixel distance between the captured image center point and the calibration template center point, computes the distance and azimuth between the calibration template center and the camera center.
3. The monocular vision target positioning system based on a calibration template as described in claim 2, wherein the parameter determination module:
places the calibration template center point O on the central axis of the camera lens group, parallel to the lens plane;
translates the calibration template horizontally from near to far, photographing samples to obtain a calibration image at each distance; detects all corner points of each calibration image, extracts the complete center points, half-complete center points, complete vertices and half-complete vertices, matches center points with vertices according to the vertex codes, and obtains the center point and vertex coordinates;
computes the Euclidean pixel distance dBO of points B and O and the Euclidean pixel distance dEO of points E and O in the calibration image:
where O(x0, y0), B(xB, yB), E(xE, yE);
dl denotes the Euclidean pixel distance between a black patch vertex and the center point:
dl = E(dBO, dEO) (3)
determines the initial calibration parameter βi:
where DKO is the Euclidean spatial distance between the calibration image center point O and the camera center point K.
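The parameter-determination step above can be sketched for one calibration image. The patent's formula (4) is not reproduced in this text; `beta_i = D_KO / d_l` is inferred from formula (5) (DKP = βi × dl), and the operator E(dBO, dEO) in formula (3) is unspecified here, so the mean is an assumed stand-in:

```python
import math

# Sketch of claim 3's initial calibration for a single calibration
# image; beta_i = D_KO / d_l is inferred, not quoted, from the patent.
def euclidean_px(p, q):
    """Euclidean pixel distance between two image points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def initial_parameter(O, B, E, D_KO):
    """beta_i for one calibration image.

    O: template center point; B, E: two black-patch vertices (image
    coordinates); D_KO: measured spatial distance from the template
    center O to the camera center K.
    """
    d_BO = euclidean_px(B, O)
    d_EO = euclidean_px(E, O)
    d_l = (d_BO + d_EO) / 2   # formula (3); E() assumed to be the mean
    return D_KO / d_l         # inferred formula (4)
```

Repeating this at each sampled distance yields the table of initial calibration parameters βi referred to in Table 2.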
4. The monocular vision target positioning system based on a calibration template as described in claim 2, wherein the computing module:
denotes the center point of the captured image as P(x, y), and computes the Euclidean spatial distance DKP between the captured image center point P and the camera center point K from the captured image's calibration template center point-vertex Euclidean pixel distance dl and the initial calibration parameter βi:
DKP = βi × dl (5)
the Euclidean pixel distance between the calibration template center point O(x0, y0) and the captured image center point P(x, y) is dOP:
where (x0, y0) is the coordinate of the template center point O in the captured image;
the Euclidean spatial distance D and azimuth α between the calibration template center and the camera center are then:
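A sketch of this computing step follows. Formulas (5) and (6) appear in the text, but the closing expressions for D and α do not; the right-triangle geometry below (camera-axis distance DKP perpendicular to the scaled image-plane offset βi × dOP) is an assumption consistent with them, not the patent's stated formula:

```python
import math

# Sketch of claim 4's distance/azimuth computation under an assumed
# right-triangle geometry; the patent's closing formulas are not
# reproduced in this text.
def locate(P, O, d_l, beta_i):
    """Return (D, alpha): assumed spatial distance and azimuth (degrees)
    of the template center O relative to the camera center K.

    P: captured-image center point; O: template center point (pixels);
    d_l: center-vertex pixel distance; beta_i: calibration parameter.
    """
    D_KP = beta_i * d_l                           # formula (5)
    d_OP = math.hypot(O[0] - P[0], O[1] - P[1])   # formula (6)
    D_OP = beta_i * d_OP     # assumed: same pixel-to-space scale
    D = math.hypot(D_KP, D_OP)
    alpha = math.degrees(math.atan2(D_OP, D_KP))
    return D, alpha
```

When the template center coincides with the image center (as in the calibration setup of Embodiment 1), dOP = 0 and the azimuth is 0°, matching the text's statement that the actual angle during calibration is 0°.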
5. The monocular vision target positioning system based on a calibration template as described in claim 1, wherein the specific steps of encoding the vertices and center points of the calibration template are:
the pixel means over the 4 regions of the calibration template center point O are recorded respectively; the code of each of the 4 regions is determined from its pixel mean, with black encoded as 0 and white encoded as 1;
the pixel mean of each region of any vertex is determined and the vertex is encoded in the same way, black as 0 and white as 1;
the color code of a center point or vertex is determined from the pixel means of its four regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610910942.2A CN106504287B (en) | 2016-10-19 | 2016-10-19 | Monocular vision object space positioning system based on template |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106504287A CN106504287A (en) | 2017-03-15 |
CN106504287B true CN106504287B (en) | 2019-02-15 |
Family
ID=58294328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610910942.2A Expired - Fee Related CN106504287B (en) | 2016-10-19 | 2016-10-19 | Monocular vision object space positioning system based on template |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106504287B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108828965B (en) * | 2018-05-24 | 2021-09-14 | 联想(北京)有限公司 | Positioning method, electronic equipment, intelligent household system and storage medium |
CN108761436B (en) * | 2018-08-27 | 2023-07-25 | 上海岗消网络科技有限公司 | Flame vision distance measuring device and method |
CN109636859B (en) * | 2018-12-24 | 2022-05-10 | 武汉大音科技有限责任公司 | Single-camera-based calibration method for three-dimensional visual inspection |
CN109887025B (en) * | 2019-01-31 | 2021-03-23 | 沈阳理工大学 | Monocular self-adjusting fire point three-dimensional positioning method and device |
CN113538578B (en) * | 2021-06-22 | 2023-07-25 | 恒睿(重庆)人工智能技术研究院有限公司 | Target positioning method, device, computer equipment and storage medium |
CN114018215B (en) * | 2022-01-04 | 2022-04-12 | 智道网联科技(北京)有限公司 | Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW505820B (en) * | 2000-04-04 | 2002-10-11 | Philips Electronics Na | Method of calibrating a camera, method of determining a focal length of a camera, and camera control system |
CN103487034A (en) * | 2013-09-26 | 2014-01-01 | 北京航空航天大学 | Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target |
CN103578133A (en) * | 2012-08-03 | 2014-02-12 | 浙江大华技术股份有限公司 | Method and device for reconstructing two-dimensional image information in three-dimensional mode |
CN104484883A (en) * | 2014-12-24 | 2015-04-01 | 河海大学常州校区 | Video-based three-dimensional virtual ship positioning and track simulation method |
CN106651957A (en) * | 2016-10-19 | 2017-05-10 | 大连民族大学 | Monocular vision target space positioning method based on template |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWM505820U (en) * | 2015-04-22 | 2015-08-01 | Waitemata Hands Co Ltd | Improved structure of socks |
- 2016-10-19: CN application CN201610910942.2A filed; patent CN106504287B/en, not active (Expired - Fee Related)
Non-Patent Citations (2)
Title |
---|
Research on Camera Positioning Technology Based on Monocular Vision; Zhou Na; China Master's Theses Full-text Database; 2007-06-15 (No. 6); full text |
Research on Camera Positioning Technology Based on Monocular Vision; Zhou Na; China Master's Theses Full-text Database; 2007-06-15 (No. 6); sections 4.1-4.3 |
Also Published As
Publication number | Publication date |
---|---|
CN106504287A (en) | 2017-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106504287B (en) | Monocular vision object space positioning system based on template | |
CN106651957B (en) | Monocular vision object space localization method based on template | |
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
Wöhler | 3D computer vision: efficient methods and applications | |
CN106017436B (en) | BIM augmented reality setting-out system based on total station and photogrammetric technology | |
CA2835306C (en) | Sensor positioning for 3d scanning | |
CN108510551B (en) | Method and system for calibrating camera parameters under long-distance large-field-of-view condition | |
CN102622747B (en) | Camera parameter optimization method for vision measurement | |
CN106408601B (en) | A kind of binocular fusion localization method and device based on GPS | |
CN104200086A (en) | Wide-baseline visible light camera pose estimation method | |
CN102788559A (en) | Optical vision measuring system with wide-field structure and measuring method thereof | |
CN109341668B (en) | Multi-camera measuring method based on refraction projection model and light beam tracking method | |
CN104173054A (en) | Measuring method and measuring device for height of human body based on binocular vision technique | |
CN103759669A (en) | Monocular vision measuring method for large parts | |
CN108469254A (en) | A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose | |
CN110223355B (en) | Feature mark point matching method based on dual epipolar constraint | |
CN104036542A (en) | Spatial light clustering-based image surface feature point matching method | |
Jun et al. | An extended marker-based tracking system for augmented reality | |
CN102930551B (en) | Camera intrinsic parameters determined by utilizing projected coordinate and epipolar line of centres of circles | |
Chen et al. | A self-recalibration method based on scale-invariant registration for structured light measurement systems | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN108958256A (en) | A kind of vision navigation method of mobile robot based on SSD object detection model | |
CN114066985B (en) | Method for calculating hidden danger distance of power transmission line and terminal | |
CN109785388B (en) | Short-distance accurate relative positioning method based on binocular camera | |
CN105894505A (en) | Quick pedestrian positioning method based on multi-camera geometrical constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190215 Termination date: 20211019 |