CN105333818B - 3d space measuring method based on monocular-camera - Google Patents
- Publication number
- CN105333818B (application CN201410339869.9A)
- Authority
- CN
- China
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The present invention provides a 3D space measuring method based on a monocular camera, including: focusing separately on a point to be measured on the ground and on an auxiliary test point, and obtaining the corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the point to be measured and the auxiliary test point have different object distances, and the camera installation parameters are unchanged between the two imagings; and performing an associated calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point, to obtain the relative coordinates of the point to be measured. The reference frame of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis on the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis. The present invention calculates the spatial coordinates of the point to be measured using the internal parameters of the monocular camera, without measuring the camera installation parameters or calibrating a reference object, which saves cost and simplifies operation.
Description
Technical field
The present invention relates to the field of video surveillance, and in particular to a 3D space measuring method based on a monocular camera.
Background technology
A monocular camera cannot form 3D vision from a single shot. Without a reference object of known dimensions to assist the measurement, and without knowing the mounting height of the camera pole and the angle between the lens axis and the ground, the relative coordinates between the subject and the camera mounting point cannot be measured.
In the prior art, for a single shot from a monocular camera, a reference object of known dimensions is photographed; from the pixels occupied by the reference object in the image, the actual size of the object, and the angle between the camera and the ground, the proportional relationship between object dimensions and the pixels of the captured image is calculated. In subsequent shots, with the camera installation parameters unchanged, the actual size of a subject can then be calculated simply from the number of pixels it occupies. Given the angle between the camera and the ground and the height of the mounting pole, the relative coordinates between the subject and the camera can be calculated.
As can be seen from the above, single-shot measurement with a monocular camera depends on many external conditions: the camera installation data must be known, such as the pole height and the angle between the camera and the ground; a calibration object must be used, with manual auxiliary calibration to obtain the conversion parameters; and the camera installation parameters cannot be changed in subsequent use, otherwise recalibration is required. Adaptability is therefore poor.
Alternatively, the prior art shoots the same object with a binocular camera, or with a monocular camera from two different positions and angles, and achieves 3D perception from the object's different positions in the images. But this requires two cameras, or a movement rail and drive device for a single camera, at higher cost.
Summary of the invention
In view of this, the present invention provides a 3D space measuring method based on a monocular camera, the method including:
focusing separately on a point to be measured on the ground and on an auxiliary test point, and obtaining the corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the point to be measured and the auxiliary test point have different object distances, and the camera installation parameters are unchanged between the two imagings;
performing an associated calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point, to obtain the relative coordinates of the point to be measured, where the reference frame of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis on the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis.
The present invention also provides a 3D space measurement apparatus based on a monocular camera, the apparatus including:
an imaging parameters acquiring unit, configured to focus separately on a point to be measured on the ground and on an auxiliary test point and to obtain the corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the point to be measured and the auxiliary test point have different object distances, and the camera installation parameters are unchanged between the two imagings;
a relative coordinate computing unit, configured to perform an associated calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point, to obtain the relative coordinates of the point to be measured, where the reference frame of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis on the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis.
The present invention calculates the spatial relative coordinates of the point to be measured using the internal parameters of the monocular camera, without measuring the camera installation parameters or calibrating a reference object, which saves labor, material, and time costs and simplifies operation.
Brief description of the drawings
Fig. 1 is a schematic diagram of the logical structure of the monocular-camera-based 3D space measurement apparatus and its underlying hardware environment in one embodiment of the present invention.
Fig. 2 is a flowchart of the 3D space measuring method based on a monocular camera in one embodiment of the present invention.
Fig. 3 is a schematic diagram of the monocular camera installation.
Fig. 4 is a schematic diagram of optical imaging in one embodiment of the present invention.
Fig. 5 is a schematic diagram of the image height of an imaging point along the Y-axis direction of the image sensor in one embodiment of the present invention.
Fig. 6 is a schematic diagram of the lens imaging principle.
Fig. 7 is a schematic diagram of the image height of an imaging point along the Z-axis direction of the image sensor in one embodiment of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
The present invention provides a 3D space measurement apparatus based on a monocular camera. The following description takes a software implementation as an example, but the present invention does not exclude other implementations such as hardware or logic devices. As shown in Fig. 1, the hardware environment in which the apparatus runs includes a CPU, memory, nonvolatile storage, and other hardware. The apparatus, as a virtual device at the logical level, is run by the CPU. The apparatus includes an imaging parameters acquiring unit and a relative coordinate computing unit. Referring to Fig. 2, the use and operation of the apparatus include the following steps:
Step 101: the imaging parameters acquiring unit focuses separately on the point to be measured on the ground and on the auxiliary test point, and obtains the corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the point to be measured and the auxiliary test point have different object distances, and the camera installation parameters are unchanged between the two imagings.
Step 102: the relative coordinate computing unit performs an associated calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point, to obtain the relative coordinates of the point to be measured; the reference frame of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis on the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis.
Without changing the monocular camera installation parameters (position, height, head angle, etc.), the present invention images the point to be measured and the auxiliary test point, performs an associated calculation on the two imagings according to the imaging parameters, and obtains the spatial coordinates of the point to be measured. The specific processing is as follows.
As shown in Fig. 3, the monocular camera is mounted vertically at point E by a pole. A reference frame is established with point E as the origin, and the position coordinates of the point to be measured are calculated relative to this frame. The X axis of the frame is the projection of the camera optical axis direction on the ground, the Y axis is the pole direction, and the Z axis is perpendicular to the XY plane. In the figure, the intersection A of the object AD with the ground is the point to be measured, and point B on the ground, lying on the projection of the camera optical axis along the ground, is the auxiliary test point.
As shown in Fig. 4, a simplified structure of the camera is given, in which the lens optical center is the virtual optical center formed by the multiple lens elements of the camera. The distance (along the camera optical axis direction) between the lens optical center and the camera mounting fulcrum is r, and it may change between the two imagings. A further quantity shown in the figure is the angle between the camera optical axis and the ground.
Without changing the camera installation parameters (height, optical-axis angle, orientation), the monocular camera focuses on point A and on point B respectively, obtaining the corresponding imaging parameters. The imaging point of A on the image sensor is point a, and the imaging point of B on the image sensor is point b; for the derivation, the object points are taken as projected onto the plane perpendicular to the optical axis. P1 is the object plane of the first imaging, i.e., the plane where point A lies; P2 is the object plane of the second imaging, i.e., the plane where point B lies. This yields the image distance V1, focal length F1, and lens-optical-center-to-mounting-fulcrum distance r1 of the first imaging, and the image distance V2, focal length F2, and lens-optical-center-to-mounting-fulcrum distance r2 of the second imaging.
The image height is calculated from the position of the imaging point on the image sensor, as shown in Fig. 5. In the figure, the upper imaging point is point b and the lower imaging point is point a. Since a physical size is proportional to the number of corresponding pixels along that direction, the physical size (image height) of an imaging point along the XY plane can be calculated. In the figure, S is the physical size of the image sensor's effective pixel range along the XY plane; S1 is the vertical distance from point a to the sensor's central horizontal line, i.e., the image height of A; S2 is the vertical distance from point b to the sensor's central horizontal line, i.e., the image height of B.
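As an illustrative sketch of this pixel-to-physical conversion (the sensor size and pixel counts below are hypothetical, not values from the patent):

```python
def image_height(offset_px: float, total_px: float, sensor_size: float) -> float:
    """Physical image height corresponding to a pixel offset from the
    sensor's central line, using size proportional to pixel count."""
    return offset_px / total_px * sensor_size

# Hypothetical sensor: 4.8 mm effective height spanning 1080 rows;
# point a lies 270 rows from the central horizontal line.
S1 = image_height(270, 1080, 4.8)  # 1.2 mm
```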
Through the above process, the image distance V1, focal length F1, image height S1, and lens-optical-center-to-mounting-fulcrum distance r1 corresponding to point A are obtained, as are the image distance V2, focal length F2, image height S2, and lens-optical-center-to-mounting-fulcrum distance r2 corresponding to point B. An associated calculation is then performed on the two imagings using these imaging parameters to obtain the relative coordinates of the point to be measured A. The calculation process is described below with reference to Fig. 4.
According to the Gaussian imaging formula, the object distance U1 of point A and the object distance U2 of point B are calculated respectively, from which the distance between the two imaging object planes (along the optical axis direction) follows:
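The formulas at this step appear only as images in the source; reconstructed from the Gaussian imaging formula and the matching expressions repeated in claim 4, they read:

```latex
\frac{1}{U}+\frac{1}{V}=\frac{1}{F}
\quad\Rightarrow\quad
U_1=\frac{V_1 F_1}{V_1-F_1},\qquad
U_2=\frac{V_2 F_2}{V_2-F_2}

j+m=(U_1+r_1)-(U_2+r_2)
   =\frac{V_1 F_1}{V_1-F_1}-\frac{V_2 F_2}{V_2-F_2}+(r_1-r_2)
```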
According to the lens imaging principle shown in Fig. 6, the object heights of points A and B relative to the camera optical axis, i.e., the values of n and k in Fig. 4, are found respectively. From the geometrical relationships in Fig. 4, formula (8) can be derived, and substituting formulas (4), (6), and (7) into it gives L1; similarly, the geometrical relationships give formula (10), and substituting formulas (4), (6), and (7) into it gives L2. Therefore L = L1 + L2, the X-axis coordinate of the point to be measured A, denoted Ax.
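The intermediate expressions here are likewise images in the source; reconstructed from the forms repeated in claims 2 and 4, the object heights and the two horizontal components are:

```latex
n=\frac{U_1}{V_1}S_1=\frac{F_1 S_1}{V_1-F_1},\qquad
k=\frac{U_2}{V_2}S_2=\frac{F_2 S_2}{V_2-F_2}

L_1=\frac{(U_1+r_1)(j+m)}{\sqrt{(j+m)^2+(n+k)^2}},\qquad
L_2=\frac{n\,(n+k)}{\sqrt{(j+m)^2+(n+k)^2}},\qquad
A_x=L_1+L_2
```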
Let Az be the distance from point A, in object plane P1, to the plane through the optical axis perpendicular to the ground, and let az be the distance from its image point a to the same plane; then Az and az are related by the imaging magnification (formula (13)). As before, since a physical size is proportional to the number of corresponding pixels along that direction, the physical size of imaging point a along the Z-axis direction (the image height az) is calculated. In Fig. 7, Q is the physical size of the image sensor's effective pixel range along the Z-axis direction.
Substituting formula (2) into formula (13) gives Az, the Z-axis coordinate of the point to be measured A. The Y-axis coordinate Ay of point A is 0. The coordinates (Ax, Ay, Az) of point A are relative to the preset coordinate system at point E; if the physical coordinates of point E are known, the actual geographic position coordinates of point A can be obtained by adding the calculated relative coordinates of A to the coordinates of E.
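The Az expression, which appears only as an image at this point in the source, can be reconstructed from claim 3:

```latex
A_z=\frac{F_1\,a_z}{V_1-F_1}
```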
It can thus be seen that, by imaging twice (or more) with a monocular camera and using the camera's internal data, such as the image distance, focal length, and image sensor size, the position of the point to be measured can be obtained through an associated calculation on the imaging parameters. In this process there is no need to measure the camera installation parameters or to calibrate a reference object, which saves labor, material, and time costs and simplifies operation.
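The full computation can be sketched in a few lines. This is an illustrative reconstruction from the formulas in claims 2 to 4, not code from the patent, and the numeric values in the usage note below are made up:

```python
from math import hypot

def relative_coords(V1, F1, S1, r1, V2, F2, S2, r2, az):
    """Relative (Ax, Az) of the measured point, per claims 2-4.

    V: image distance, F: focal length, S: image height in the XY plane,
    r: lens-center-to-fulcrum distance, az: image height along Z.
    Subscript 1: measured point; subscript 2: auxiliary test point.
    """
    U1 = V1 * F1 / (V1 - F1)      # object distances, Gaussian formula
    U2 = V2 * F2 / (V2 - F2)
    jm = (U1 + r1) - (U2 + r2)    # object-plane separation along the axis
    n = F1 * S1 / (V1 - F1)       # object heights relative to the axis
    k = F2 * S2 / (V2 - F2)
    h = hypot(jm, n + k)
    Ax = (U1 + r1) * jm / h + n * (n + k) / h   # L1 + L2
    Az = F1 * az / (V1 - F1)
    return Ax, Az
```

With illustrative values V1 = 1.1, F1 = 1, S1 = 0.1, V2 = 1.2, F2 = 1, S2 = 0.2, r1 = r2 = 0, az = 0.1 (all hypothetical), this yields Ax = 57/√29 ≈ 10.58 and Az = 1.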
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (8)
1. A 3D space measuring method based on a monocular camera, characterized in that the method includes:
focusing separately on a point to be measured on the ground and on an auxiliary test point, and obtaining the corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the point to be measured and the auxiliary test point have different object distances, and the camera installation parameters are unchanged between the two imagings;
performing an associated calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point, to obtain the relative coordinates of the point to be measured, where the reference frame of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis on the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis.
2. The method as described in claim 1, characterized in that:
the X-axis coordinate Ax in the relative coordinates of the point to be measured is:
A_x = \frac{\left(\frac{V_1 F_1}{V_1 - F_1} + r_1\right)\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right) + \frac{F_1 S_1}{V_1 - F_1}\left(\frac{S_1 F_1}{V_1 - F_1} + \frac{S_2 F_2}{V_2 - F_2}\right)}{\sqrt{\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\frac{F_1 S_1}{V_1 - F_1} + \frac{F_2 S_2}{V_2 - F_2}\right)^2}}
wherein:
V1 is the image distance for the point to be measured;
F1 is the focal length for the point to be measured;
S1 is the image height of the point to be measured along the XY plane;
r1 is the distance, along the camera optical axis direction, from the lens optical center to the camera mounting fulcrum for the point to be measured;
V2 is the image distance for the auxiliary test point;
F2 is the focal length for the auxiliary test point;
S2 is the image height of the auxiliary test point along the XY plane;
r2 is the distance, along the camera optical axis direction, from the lens optical center to the camera mounting fulcrum for the auxiliary test point.
3. The method as described in claim 1, characterized in that:
the Z-axis coordinate Az in the relative coordinates of the point to be measured is:
A_z = \frac{F_1 a_z}{V_1 - F_1}
wherein:
V1 is the image distance for the point to be measured;
F1 is the focal length for the point to be measured;
az is the image height of the point to be measured along the Z-axis direction.
4. The method as claimed in claim 2, characterized in that:
the specific calculation process of Ax is:
the object distance U1 of the point to be measured and the object distance U2 of the auxiliary test point are respectively:
U_1 = \frac{V_1 F_1}{V_1 - F_1}

U_2 = \frac{V_2 F_2}{V_2 - F_2}
hence the distance j + m between the two imaging object planes P1 and P2 along the camera optical axis direction is:
j + m = (U_1 + r_1) - (U_2 + r_2) = \frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + (r_1 - r_2)
the object height n of the point to be measured perpendicular to the camera optical axis and the object height k of the auxiliary test point perpendicular to the camera optical axis are found respectively:
n = \frac{U_1}{V_1} S_1 = \frac{F_1 S_1}{V_1 - F_1}

k = \frac{U_2}{V_2} S_2 = \frac{F_2 S_2}{V_2 - F_2}
substituting the above parameters into the corresponding geometric formulas for L1 and L2 respectively gives:
L_1 = \frac{\left(\frac{V_1 F_1}{V_1 - F_1} + r_1\right)\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)}{\sqrt{\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\frac{F_1 S_1}{V_1 - F_1} + \frac{F_2 S_2}{V_2 - F_2}\right)^2}}

L_2 = \frac{\frac{F_1 S_1}{V_1 - F_1}\left(\frac{S_1 F_1}{V_1 - F_1} + \frac{S_2 F_2}{V_2 - F_2}\right)}{\sqrt{\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\frac{F_1 S_1}{V_1 - F_1} + \frac{F_2 S_2}{V_2 - F_2}\right)^2}}
A_x = L_1 + L_2.
5. A 3D space measurement apparatus based on a monocular camera, characterized in that the apparatus includes:
an imaging parameters acquiring unit, configured to focus separately on a point to be measured on the ground and on an auxiliary test point and to obtain the corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the point to be measured and the auxiliary test point have different object distances, and the camera installation parameters are unchanged between the two imagings;
a relative coordinate computing unit, configured to perform an associated calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point, to obtain the relative coordinates of the point to be measured, where the reference frame of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis on the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis.
6. The apparatus as claimed in claim 5, characterized in that:
the relative coordinate computing unit calculates the X-axis coordinate Ax of the point to be measured as:
A_x = \frac{\left(\frac{V_1 F_1}{V_1 - F_1} + r_1\right)\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right) + \frac{F_1 S_1}{V_1 - F_1}\left(\frac{S_1 F_1}{V_1 - F_1} + \frac{S_2 F_2}{V_2 - F_2}\right)}{\sqrt{\left(\frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\frac{F_1 S_1}{V_1 - F_1} + \frac{F_2 S_2}{V_2 - F_2}\right)^2}}
wherein:
V1 is the image distance for the point to be measured;
F1 is the focal length for the point to be measured;
S1 is the image height of the point to be measured along the XY plane;
r1 is the distance, along the camera optical axis direction, from the lens optical center to the camera mounting fulcrum for the point to be measured;
V2 is the image distance for the auxiliary test point;
F2 is the focal length for the auxiliary test point;
S2 is the image height of the auxiliary test point along the XY plane;
r2 is the distance, along the camera optical axis direction, from the lens optical center to the camera mounting fulcrum for the auxiliary test point.
7. The apparatus as claimed in claim 5, characterized in that:
the relative coordinate computing unit calculates the Z-axis coordinate Az of the point to be measured as:
A_z = \frac{F_1 a_z}{V_1 - F_1}
wherein:
V1 is the image distance for the point to be measured;
F1 is the focal length for the point to be measured;
az is the image height of the point to be measured along the Z-axis direction.
8. The apparatus as claimed in claim 6, characterized in that:
the specific process by which the relative coordinate computing unit calculates Ax is:
the object distance U1 of the point to be measured and the object distance U2 of the auxiliary test point are respectively:
U1 = V1F1/(V1 - F1)
U2 = V2F2/(V2 - F2)
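These expressions are the thin-lens relation 1/F = 1/U + 1/V solved for the object distance U; a small sketch with hypothetical image distances and focal lengths:

```python
def object_distance(V: float, F: float) -> float:
    """Solve the thin-lens equation 1/F = 1/U + 1/V for the object distance U = VF/(V - F)."""
    return V * F / (V - F)

# Hypothetical values, one shared unit throughout:
U1 = object_distance(51.0, 50.0)  # 51*50/1 = 2550.0
U2 = object_distance(52.0, 50.0)  # 52*50/2 = 1300.0
print(U1, U2)
```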
So the distance j+m, along the optical-axis direction, between the object planes P1 and P2 of the two imagings is:
j + m = (U1 + r1) - (U2 + r2) = V1F1/(V1 - F1) - V2F2/(V2 - F2) + (r1 - r2)
Then find the object height n of the tested point perpendicular to the camera optical axis and the object height k of the subtest point perpendicular to the camera optical axis, respectively:
n = (U1/V1)·S1 = F1S1/(V1 - F1)
k = (U2/V2)·S2 = F2S2/(V2 - F2)
Substituting the above parameters into the geometric formulas L1 = (U1 + r1)(j + m)/sqrt((j + m)^2 + (n + k)^2) and L2 = n(n + k)/sqrt((j + m)^2 + (n + k)^2), respectively, gives:
L1 = (V1F1/(V1 - F1) + r1)(V1F1/(V1 - F1) - V2F2/(V2 - F2) + r1 - r2) / sqrt((V1F1/(V1 - F1) - V2F2/(V2 - F2) + r1 - r2)^2 + (F1S1/(V1 - F1) + F2S2/(V2 - F2))^2)
L2 = (F1S1/(V1 - F1))(S1F1/(V1 - F1) + S2F2/(V2 - F2)) / sqrt((V1F1/(V1 - F1) - V2F2/(V2 - F2) + r1 - r2)^2 + (F1S1/(V1 - F1) + F2S2/(V2 - F2))^2)
Ax = L1 + L2.
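Taken together, the steps of claim 8 reduce to a short computation; the sketch below chains the intermediate quantities U1, U2, j+m, n and k into Ax = L1 + L2. All input values are hypothetical and share one unit:

```python
import math

def x_coordinate(V1, F1, S1, r1, V2, F2, S2, r2):
    """Ax = L1 + L2 computed from the imaging parameters of the two focusings (claim 8)."""
    U1 = V1 * F1 / (V1 - F1)      # object distance, tested point
    U2 = V2 * F2 / (V2 - F2)      # object distance, subtest point
    jm = (U1 + r1) - (U2 + r2)    # j + m: axial gap between the object planes
    n = F1 * S1 / (V1 - F1)       # object height of tested point
    k = F2 * S2 / (V2 - F2)       # object height of subtest point
    hyp = math.hypot(jm, n + k)   # sqrt((j+m)^2 + (n+k)^2)
    L1 = (U1 + r1) * jm / hyp
    L2 = n * (n + k) / hyp
    return L1 + L2

# Hypothetical parameters: tested point focused at V1 = 51, subtest point at V2 = 52
print(x_coordinate(51.0, 50.0, 2.0, 10.0, 52.0, 50.0, 1.0, 10.0))
```

With these sample numbers U1 = 2550, U2 = 1300, j+m = 1250, n = 100 and k = 25, so both L1 and L2 come out positive, as the geometry requires when the tested point lies farther from the camera than the subtest point.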
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410339869.9A CN105333818B (en) | 2014-07-16 | 2014-07-16 | 3d space measuring method based on monocular-camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105333818A CN105333818A (en) | 2016-02-17 |
CN105333818B true CN105333818B (en) | 2018-03-23 |
Family
ID=55284479
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410339869.9A Active CN105333818B (en) | 2014-07-16 | 2014-07-16 | 3d space measuring method based on monocular-camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105333818B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109931906B (en) * | 2019-03-28 | 2021-02-23 | 华雁智科(杭州)信息技术有限公司 | Camera ranging method and device and electronic equipment |
CN110225400B (en) * | 2019-07-08 | 2022-03-04 | 北京字节跳动网络技术有限公司 | Motion capture method and device, mobile terminal and storage medium |
CN113115017B (en) * | 2021-03-05 | 2022-03-18 | 上海炬佑智能科技有限公司 | 3D imaging module parameter inspection method and 3D imaging device |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101183206A (en) * | 2006-11-13 | 2008-05-21 | 华晶科技股份有限公司 | Method for calculating distance and actuate size of shot object |
CN101344376A (en) * | 2008-08-28 | 2009-01-14 | 上海交通大学 | Measuring method for spacing circle geometric parameter based on monocular vision technology |
KR20110025724A (en) * | 2009-09-05 | 2011-03-11 | 백상주 | Method for measuring height of a subject using camera module |
CN102168954B (en) * | 2011-01-14 | 2012-11-21 | 浙江大学 | Monocular-camera-based method for measuring depth, depth field and sizes of objects |
CN103049918A (en) * | 2011-10-17 | 2013-04-17 | 天津市亚安科技股份有限公司 | Method for accurately calculating size of actual target in video frequency monitoring |
CN102661717A (en) * | 2012-05-09 | 2012-09-12 | 河北省电力建设调整试验所 | Monocular vision measuring method for iron tower |
CN103206919A (en) * | 2012-07-31 | 2013-07-17 | 广州三星通信技术研究有限公司 | Device and method used for measuring object size in portable terminal |
CN103033132B (en) * | 2012-12-20 | 2016-05-18 | 中国科学院自动化研究所 | Plane survey method and device based on monocular vision |
CN103292695B (en) * | 2013-05-10 | 2016-02-24 | 河北科技大学 | A kind of single eye stereo vision measuring method |
CN103471500B (en) * | 2013-06-05 | 2016-09-21 | 江南大学 | A kind of monocular camera machine vision midplane coordinate and the conversion method of 3 d space coordinate point |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105451012B (en) | 3-D imaging system and three-D imaging method | |
CN102927917B (en) | Many orders vision measurement method of iron tower | |
CN111220130B (en) | Focusing measurement method and terminal capable of measuring object at any position in space | |
CN108574825B (en) | Method and device for adjusting pan-tilt camera | |
ES2894935T3 (en) | Three-dimensional distance measuring apparatus and method therefor | |
CN102810205A (en) | Method for calibrating camera shooting or photographing device | |
CN108833912A (en) | A kind of measurement method and system of video camera machine core optical axis center and field angle | |
CN109862345B (en) | Method and system for testing field angle | |
CN107167118B (en) | It is a kind of based on the parallel multi-thread stabilization real time laser measurement method of non-coding | |
CN104764401B (en) | A kind of engine flexible angle of cant and center of oscillation measuring method | |
CN102519434A (en) | Test verification method for measuring precision of stereoscopic vision three-dimensional recovery data | |
CN110136047B (en) | Method for acquiring three-dimensional information of static target in vehicle-mounted monocular image | |
CN104807405B (en) | Three-dimensional coordinate measurement method based on light ray angle calibration | |
CN105333818B (en) | 3d space measuring method based on monocular-camera | |
CN102589529B (en) | Scanning close-range photogrammetry method | |
CN110505468A (en) | A kind of augmented reality shows the test calibration and deviation correction method of equipment | |
CN106643567A (en) | Lane deviation system production line calibration board verification method and system | |
CN105513074B (en) | A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system | |
CN111131801B (en) | Projector correction system and method and projector | |
CN106959378B (en) | Single width motion blur image speed calculation method | |
CN108335333A (en) | A kind of linear camera scaling method | |
CN116563370A (en) | Distance measurement method and speed measurement method based on monocular computer vision | |
CN206583440U (en) | A kind of projected image sighting distance detecting system | |
CN114062265B (en) | Evaluation method for stability of support structure of vision system | |
US20200364933A1 (en) | Image processing apparatus, image processing method and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||