CN109389638B - Camera position determining method and system - Google Patents


Info

Publication number
CN109389638B
CN109389638B
Authority
CN
China
Prior art keywords
camera
coordinate
identification
coordinate system
axis component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710686610.5A
Other languages
Chinese (zh)
Other versions
CN109389638A (en)
Inventor
周鑫
陶澍
张鹏
李锐
何川丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ivreal Technology Co ltd
Original Assignee
Chongqing Ivreal Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ivreal Technology Co ltd filed Critical Chongqing Ivreal Technology Co ltd
Priority to CN201710686610.5A
Publication of CN109389638A
Application granted
Publication of CN109389638B

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a camera position determining method and system. The camera position determining method comprises the following steps: calibrating at least two identification positions on an image acquired by the camera; aligning the at least two identification positions on a photographic near surface and a photographic far surface of the camera respectively by using position alignment equipment; acquiring at least two near surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic near surface, and at least two far surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic far surface; and performing geometric processing on the at least two near surface three-dimensional coordinates and the at least two far surface three-dimensional coordinates to obtain the position of the camera. With this position determining method, determining the position of the camera takes less time, and the user only needs to align the identification positions with the position alignment equipment, so the difficulty of determining the position of the camera is reduced.

Description

Camera position determining method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a system for determining the position of a camera.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims and the detailed description. The description herein is not admitted to be prior art by inclusion in this section.
Video shooting is one of the key technologies of virtual reality (VR). The shooting effect of a video often affects the visual experience of users to a large extent, so determining the position, direction and field of view (fov) of a camera in a virtual scene is important. At present, in order to find the position of a real camera in a virtual scene, a chessboard pattern in the Open Source Computer Vision Library (OpenCV), for example, is used to find the intrinsic parameters of the camera and then the extrinsic parameters; this is time-consuming and constrained, which greatly limits the popularization and promotion of the related technologies.
Disclosure of Invention
In view of the above, it is desirable to provide a method for determining a position of a camera to quickly obtain a position of the camera in a virtual scene.
The invention provides a camera position determining method, which comprises the following steps:
calibrating at least two identification positions on an image acquired by the camera;
aligning the at least two identification positions on a photographic near surface and a photographic far surface of the camera respectively by using position alignment equipment;
acquiring at least two near surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic near surface and at least two far surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic far surface;
and performing geometric processing on the at least two near surface three-dimensional coordinates and the at least two far surface three-dimensional coordinates to obtain the position of the camera.
Further, the at least two identification bits include a first identification bit and a second identification bit;
the at least two near-surface three-dimensional coordinates comprise a first coordinate and a second coordinate, and the at least two far-surface three-dimensional coordinates comprise a third coordinate and a fourth coordinate;
the first identification position corresponds to the first coordinate and the third coordinate; the second identification position corresponds to the second coordinate and the fourth coordinate.
Further, the performing geometric processing on the at least two near surface three-dimensional coordinates and the at least two far surface three-dimensional coordinates to obtain the position of the camera includes:
geometrically processing the first coordinate and the third coordinate to obtain a first straight line passing through the first coordinate and the third coordinate;
geometrically processing the second coordinate and the fourth coordinate to obtain a second straight line passing through the second coordinate and the fourth coordinate;
and acquiring the intersection point of the first straight line and the second straight line to obtain the position of the camera.
Further, the method further comprises:
obtaining a camera coordinate of the camera in a camera coordinate system according to the position of the camera;
and when the distances from the at least two identification positions to the central point of the image are equal, calculating the field angle of the camera by using the camera coordinates.
Further, the calculating the field angle of the camera by using the camera coordinates includes:
using the formula:
y/v=2*tan(fov/2)*z
wherein fov represents the camera's field of view; y represents the y-axis component of the camera in the camera coordinate system; z represents a z-axis component in the camera coordinate system; v represents the absolute value of the difference between the y-axis component of one of the at least two identification bits and the y-axis component of the center of the image acquired by the camera under the image coordinate system.
Further, the calculating the field angle of the camera by using the coordinates in the camera coordinate system includes:
using the formula:
x/u=(y/v)*(c_width/c_height);
y/v=2*tan(fov/2)*z
wherein fov represents the camera's field of view; x represents the x-axis component of the camera in the camera coordinate system; y represents the y-axis component of the camera in the camera coordinate system; z represents the z-axis component in the camera coordinate system; u represents the absolute value of the difference between the x-axis component of one of the at least two identification positions and the x-axis component of the center of the image acquired by the camera in the image coordinate system; v represents the absolute value of the difference between the y-axis component of one of the at least two identification positions and the y-axis component of the center of the image acquired by the camera in the image coordinate system; c_width represents the width of the display image; c_height represents the height of the display image.
Further, the obtaining of the camera coordinates of the camera in the camera coordinate system according to the position of the camera includes:
and performing matrix conversion processing on the position of the camera to obtain the coordinates of the camera in a camera coordinate system.
Further, the method further comprises:
and obtaining the shooting direction of the image acquired by the camera by using the first straight line and the second straight line.
Further, "obtaining a shooting direction of the image captured by the camera using the first line and the second line" includes:
and when the intersection point corresponds to the central point of the image acquired by the camera and an angular bisector of an included angle formed by the first straight line and the second straight line passes through the central point, calculating the shooting direction of the image acquired by the camera by using the unit vectors of the first straight line and the second straight line.
The present invention also provides a camera position determining system, the system comprising:
the calibration device is used for calibrating at least two identification positions on the image acquired by the camera;
the position alignment equipment is used for respectively aligning the at least two identification positions on the photographic near surface and the photographic far surface of the camera;
the position acquisition device is used for acquiring at least two near surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic near surface, and at least two far surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic far surface;
and the processing device is used for performing geometric processing on the at least two near surface three-dimensional coordinates and the at least two far surface three-dimensional coordinates to obtain the position of the camera.
The camera position determining method and system provided by the invention calibrate at least two non-coincident identification positions at arbitrary locations on an image acquired by a camera, then align the identification positions at positions on the photographic near surface and the photographic far surface of the camera using position alignment equipment, and acquire the current position at the moment of each alignment, where the current position comprises the X-axis, Y-axis and Z-axis components in the world coordinate system. The position of the camera is then obtained from the at least four acquired positions (two on the photographic near surface and two on the photographic far surface). The whole process does not involve finding the intrinsic and extrinsic parameters of the camera, so determining the position of the camera saves time; moreover, the user only needs to align the identification positions with the position alignment equipment, which reduces the difficulty of determining the position of the camera, further lowers the threshold for shooting virtual reality video, and facilitates the popularization and promotion of virtual reality technology.
Further, after the position of the camera is obtained, and provided that the at least two identification positions are equidistant from the center of the image, the field angle of the camera can be obtained from the at least four three-dimensional coordinates; the user can determine the shooting range from the field angle and use it to properly adjust the position and direction of the camera lens.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a camera position determining method according to a first embodiment of the present invention.
Fig. 2 is a flowchart of a camera position determining method according to a second embodiment of the present invention.
Fig. 3 is a schematic diagram of exemplary identification positions and alignment positions in accordance with an embodiment of the present invention.
Fig. 4 is a schematic diagram of an exemplary configuration of a camera position determining system according to an embodiment of the present invention.
Description of the main elements
Camera position determination system 1
Calibration device 11
Position alignment device 12
Position acquisition device 13
Processing apparatus 14
Coordinate conversion device 15
Field angle calculation device 16
Shooting direction calculating device 17
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings. In addition, the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, and the described embodiments are merely a subset of the embodiments of the present invention, rather than a complete embodiment. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Fig. 1 is a flowchart of a camera position determining method according to a first embodiment of the present invention. As shown in fig. 1, the camera position determination method may include the steps of:
step 101: and calibrating at least two identification positions on the image acquired by the camera.
In this embodiment, the camera is used to capture an image, which may be an image reflecting in real time the changes of objects currently within the camera's field of view.
It is understood that during image capture, the optical axis of the camera may be allowed a certain amount of pitch and yaw, but the roll needs to be controlled within a sufficiently small range.
It can be understood that, in order to facilitate later image processing, the optical axis of the camera lens can be kept horizontal, with the camera's framing picture perpendicular to the horizontal plane.
It is understood that, here, the at least two identification positions can be calibrated at any mutually non-coincident positions on the image acquired by the camera.
It is understood that the at least two identification bits may be obtained by:
1) marking points on the light incident surface of the camera lens, so that the camera obtains an image with calibrated identification positions; or,
2) after the camera acquires the image, calibrating the acquired image through image processing, thereby obtaining calibrated identification positions displayed on the image.
In a specific application example of the present embodiment, the at least two identification positions are obtained in manner 1). Referring to fig. 3, as shown in fig. 3A, points P and Q are marked on the light incident surface a of the camera lens.
It is understood that the at least two identification bits are two-dimensional identification bits.
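As an illustration of manner 2), the following is a minimal sketch of overlaying two calibrated identification positions on each frame acquired by the camera. It assumes Python with OpenCV and hypothetical pixel coordinates MARK_P and MARK_Q (chosen on either side of the image center at equal distances, matching the special case used later for the field angle calculation); it is a sketch of the idea, not the patent's own implementation.

```python
import cv2

# Hypothetical identification positions (pixel coordinates) for a 1280x720
# image whose center is (640, 360); any two non-coincident points will do.
MARK_P = (640, 260)
MARK_Q = (640, 460)

cap = cv2.VideoCapture(0)  # the real camera whose position is to be determined
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Overlay the calibrated identification positions on the acquired image.
    for mark in (MARK_P, MARK_Q):
        cv2.drawMarker(frame, mark, color=(0, 0, 255),
                       markerType=cv2.MARKER_CROSS, markerSize=20, thickness=2)
    cv2.imshow("identification positions", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```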
Step 102: aligning the at least two identification positions on the photographic near surface and the photographic far surface of the camera respectively by using a position alignment device.
In this embodiment, the photographing near plane and the photographing far plane are two planes which are close to and far from the camera in a three-dimensional space, respectively.
It can be understood that the user can observe himself in the image in real time through a virtual reality head-mounted display, which also displays the at least two identification positions. The user can move between the photographic near surface and the photographic far surface according to the at least two identification positions while using the position alignment device to find the positions aligning with the at least two identification positions on each surface, recording the spatial position at the moment of each alignment.
It will be appreciated that the user may also align the at least two identification locations via images displayed by other displays.
It will be appreciated that, for a normal camera, the photographic near surface may be selected about 0.1 meters from the camera lens and the photographic far surface about 4 meters from the camera lens.
Step 103: acquiring at least two near surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic near surface, and at least two far surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic far surface.
In this embodiment, the position alignment device is used to find and align the positions corresponding to the identification positions on the photographic near surface and the photographic far surface in three-dimensional space (the real environment); alignment can be understood as the position alignment device and the identification position coinciding with each other on the image acquired by the camera.
In a specific application example of the present embodiment, as shown in fig. 3B in conjunction with fig. 3, the light incident surface a, the photographic near surface b, and the photographic far surface c are located at different positions. According to the points P and Q marked on the light incident surface a, the user can hold the position alignment device, find the alignment positions according to the identification positions corresponding to points P and Q on the image acquired by the camera, and record the near surface three-dimensional coordinates P1 and Q1 and the far surface three-dimensional coordinates P2 and Q2 at the moment of alignment.
It will be appreciated that the position alignment device may be a virtual reality tracking device; further, the virtual reality tracking device may be a virtual reality sensing handle.
Step 104: performing geometric processing on the at least two near surface three-dimensional coordinates and the at least two far surface three-dimensional coordinates to obtain the position of the camera.
In this embodiment, the geometric processing may be coordinate conversion, or may be the construction of associated straight lines in space.
In this embodiment, at least two non-coincident identification positions at arbitrary locations can be calibrated on an image acquired by the camera, the identification positions are then aligned at positions on the photographic near surface and the photographic far surface of the camera using a position alignment apparatus, and the current position is acquired at the moment of each alignment, where the current position comprises the X-axis, Y-axis and Z-axis components in the world coordinate system. The position of the camera is then obtained from the at least four acquired positions (two on the photographic near surface and two on the photographic far surface). The whole process does not involve finding the intrinsic and extrinsic parameters of the camera, so determining the position of the camera saves time, and the user only needs to align the identification positions with the position alignment equipment, which reduces the difficulty of determining the position of the camera and further lowers the threshold for shooting virtual reality video.
It is understood that the at least two identification positions may be calibrated simultaneously, or one identification position may be calibrated after the photographic near surface and photographic far surface positions of another have already been determined.
It can be understood that the position of the camera can be obtained using two identification positions, with their two corresponding coordinates on the photographic near surface and two on the photographic far surface; when more than two identification positions are used, the additional identification positions and corresponding coordinates help improve the accuracy of the obtained camera position.
Fig. 2 is a flowchart of a camera position determining method according to a second embodiment of the present invention. As shown in fig. 2, in the present embodiment, two identification positions are used to obtain four coordinates, from which the position of the camera is obtained. The camera position determining method of the present embodiment includes the following steps:
step 201: and calibrating a first identification position and a second identification position on the image acquired by the camera.
Step 202: and respectively aligning the first identification position and the second identification position on the photographic near surface and the photographic far surface of the camera by using position alignment equipment.
In this embodiment, aligning the first identification position and the second identification position can be accomplished by the user holding the position alignment device and wearing a virtual reality head-mounted display for displaying real-time images of the camera, and the images displayed in the virtual reality head-mounted display can reflect the current position of the user and the position of the identification position. The user can find the photographic near surface and the photographic far surface according to the displayed images in the virtual reality head-mounted display.
When the user walks to the photographic near surface, the user moves the hand-held position alignment device to align the first identification position and the second identification position in turn, recording the position at each alignment. Similarly, when the user walks to the photographic far surface, the user moves the hand-held position alignment device to align the first identification position and the second identification position, again recording the positions at alignment.
It will be appreciated that in one particular application, the user may hold the virtual reality handle to perform the position alignment operation.
Step 203: acquiring a first coordinate and a second coordinate obtained when the position alignment equipment aligns the first identification position and the second identification position on the photographic near surface, and a third coordinate and a fourth coordinate obtained when it aligns the first identification position and the second identification position on the photographic far surface.
In this embodiment, with reference to fig. 3, as shown in fig. 3B, with the virtual reality handle held by the user: on the photographic near surface b, the positions aligning with the identification positions corresponding to points P and Q in the image are found, giving coordinates P1 and Q1; on the photographic far surface c, the positions aligning with the identification positions corresponding to points P and Q in the image are found, giving coordinates P2 and Q2.
Step 204: performing geometric processing on the first coordinate and the third coordinate to obtain a first straight line passing through the first coordinate and the third coordinate; and performing geometric processing on the second coordinate and the fourth coordinate to obtain a second straight line passing through the second coordinate and the fourth coordinate.
It is understood that the geometric processing here may be connecting the coordinates on the photographic near surface and the photographic far surface that correspond to the same identification position; that is, the first coordinate and the third coordinate form a first straight line, and the second coordinate and the fourth coordinate form a second straight line. By the imaging principle, the first straight line and the second straight line necessarily intersect at a point.
In the present embodiment, as shown in fig. 3B in conjunction with fig. 3, the coordinates P1 and P2 are connected to form the straight line L1, and the coordinates Q1 and Q2 are connected to form the straight line L2.
Step 205: acquiring the intersection point of the first straight line and the second straight line to obtain the position of the camera.
In the present embodiment, the intersection position is obtained by using the principle that the first straight line and the second straight line inevitably intersect at one point, and the intersection position is the position of the camera.
In the present embodiment, as shown in fig. 3B in conjunction with fig. 3, the straight line L1 and the straight line L2 intersect at the point O.
Fig. 3 is a schematic diagram of exemplary identification positions and alignment positions in accordance with an embodiment of the present invention. As shown in fig. 3A, points P and Q are marked on the light incident surface a of the camera lens. The coordinate of point P is (u_1, v_1) and the coordinate of point Q is (u_2, v_2), where u represents the x-axis component and v represents the y-axis component of the planar coordinate system, and the central point is (u_0, v_0). Here, u_1 = u_0 = u_2 and v_0 = (v_1 + v_2)/2 are satisfied.
It can be understood that, since points P and Q are marked on the light incident surface a, positions corresponding to points P and Q necessarily appear in the image acquired by the camera, and these position marks are the calibrated identification positions (the first identification position and the second identification position).
In fig. 3B, three-dimensional coordinates P1 and Q1 are obtained on the photographic near plane B by the position alignment apparatus, and three-dimensional coordinates P2 and Q2 are obtained on the photographic far plane c by the position alignment apparatus.
After the three-dimensional coordinates P1 and Q1 on the photographic near surface b and the three-dimensional coordinates P2 and Q2 on the photographic far surface c are obtained, P1 and P2 are connected and extended to obtain the straight line L1, and Q1 and Q2 are connected and extended to obtain the straight line L2; the intersection point O of the straight line L1 and the straight line L2 is then obtained, and the position of the intersection point O is the position of the camera in the world coordinate system, (X_w, Y_w, Z_w).
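In practice the recorded coordinates carry measurement noise, so the two straight lines may be slightly skew rather than exactly concurrent; a common numerical treatment is to take the midpoint of the shortest segment between them as the intersection point O. A minimal sketch, assuming Python with NumPy and hypothetical coordinate values:

```python
import numpy as np

def intersect_lines(p1, p2, q1, q2):
    """Midpoint of the shortest segment between line P1-P2 and line Q1-Q2."""
    d1, d2 = p2 - p1, q2 - q1
    w0 = p1 - q1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only if the two lines are parallel
    s = (b * e - c * d) / denom    # parameter of the closest point on line 1
    t = (a * e - b * d) / denom    # parameter of the closest point on line 2
    return ((p1 + s * d1) + (q1 + t * d2)) / 2.0

# Hypothetical coordinates recorded by the position alignment device.
P1 = np.array([0.0,  0.05, 0.1]); P2 = np.array([0.0,  2.0, 4.0])  # first identification position
Q1 = np.array([0.0, -0.05, 0.1]); Q2 = np.array([0.0, -2.0, 4.0])  # second identification position
O = intersect_lines(P1, P2, Q1, Q2)  # camera position in world coordinates
print(O)  # -> [0. 0. 0.] for these example values
```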
Here, because of the particularity of points P and Q (the two points are located on either side of the center point of the light incident surface at equal distances from it, so that u_1 = u_0 = u_2 and v_0 = (v_1 + v_2)/2 are satisfied), it can be seen from the coordinates of markers P and Q that the bisector of the angle between the straight line L1 and the straight line L2 necessarily passes through the center of the light incident surface a, and the sum of the unit vector of the straight line L1 and the unit vector of the straight line L2 is the shooting direction of the image obtained by the camera.
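Continuing the sketch above (same NumPy variables), the shooting direction is the sum of the two unit vectors; normalizing that sum gives a unit direction:

```python
u1 = (P2 - P1) / np.linalg.norm(P2 - P1)   # unit vector of straight line L1
u2 = (Q2 - Q1) / np.linalg.norm(Q2 - Q1)   # unit vector of straight line L2
direction = (u1 + u2) / np.linalg.norm(u1 + u2)
print(direction)  # -> [0. 0. 1.] for the example values: along the optical axis
```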
Obtaining the following camera matrix relation according to the camera position in the world coordinate system:

[x_c, y_c, z_c, 1]^T = [R, t; 0^T, 1] * [X_w, Y_w, Z_w, 1]^T

where R is a 3 × 3 orthonormal rotation matrix, t is a three-dimensional translation vector, and the vector 0 is (0, 0, 0). The position (x_c, y_c, z_c) of the camera in camera coordinates is thus obtained.
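A minimal sketch of this matrix conversion, assuming NumPy; the rotation R and translation t below are placeholders (in practice they would come from the tracking system's calibration, and are not values given by the invention):

```python
import numpy as np

R = np.eye(3)    # placeholder 3x3 orthonormal rotation matrix
t = np.zeros(3)  # placeholder three-dimensional translation vector

def world_to_camera(p_world, R, t):
    """Apply the homogeneous transform [R t; 0 1] to a world-space point."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    x_c, y_c, z_c, _ = M @ np.append(p_world, 1.0)
    return np.array([x_c, y_c, z_c])

# e.g. convert the intersection point O found above into camera coordinates
print(world_to_camera(np.array([0.0, 0.0, 0.0]), R, t))
```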
Using the formula y_c/v = 2*tan(fov/2)*z_c, fov is obtained, where fov represents the field angle; v represents the absolute value of the difference between the y-axis component of identification position P or Q on the light incident surface a and v_0, i.e. v = Abs(v_1 - v_0) = Abs(v_2 - v_0).
Similarly, in combination with the above formula, the formula x_c/u = (y_c/v)*(c_width/c_height) can also be used to obtain fov, where u represents the absolute value of the difference between the x-axis component of marker P or Q on the light incident surface a and u_0, i.e. u = Abs(u_1 - u_0) = Abs(u_2 - u_0); here c_width and c_height represent the width and height of the light incident surface a, respectively, and accordingly the ratio of width to height is unchanged in the displayed image.
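Rearranging the first relation gives fov = 2*arctan(y_c / (2*v*z_c)); a short sketch with hypothetical numbers (NumPy assumed):

```python
import numpy as np

def field_angle(y_c, z_c, v):
    """Solve y_c / v = 2 * tan(fov / 2) * z_c for fov (in radians)."""
    return 2.0 * np.arctan(y_c / (2.0 * v * z_c))

# Hypothetical values: camera coordinates y_c, z_c and the offset
# v = Abs(v1 - v0) of an identification position from the image center.
fov = field_angle(y_c=0.36, z_c=1.0, v=0.5)
print(np.degrees(fov))  # ~39.6 degrees for these example numbers
```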
After the position of the camera is obtained, under the condition that the at least two identification positions are equidistant from the center of the image, the field angle of the camera can be obtained according to the at least four three-dimensional coordinates, a user can determine a shooting range according to the field angle, and the position of a lens of the camera and the shooting direction of the obtained image can be properly adjusted by using the shooting range.
In this embodiment, the identification positions are obtained by "marking points on the light incident surface of the camera lens, so that the camera can obtain an image with calibrated identification positions". It can be understood that the identification positions can instead be obtained by locking and marking positions on the image acquired by the camera through image processing, which is not described herein again.
Fig. 4 is a schematic diagram of an exemplary configuration of a camera position determining system according to an embodiment of the present invention. As shown in fig. 4, the camera position determining system 1 includes a calibration device 11, a position alignment device 12, a position acquisition device 13, a processing device 14, a coordinate conversion device 15, a field angle calculation device 16, and a shooting direction calculation device 17. The system calibrates at least two identification positions on an image acquired by a camera, uses the calibrated identification positions to find the position information of the corresponding positions on the photographic near surface and the photographic far surface, combines all the position information, and obtains the position of the camera through geometric processing; an ordinary user can complete the corresponding operations and the position calculation processing with the position alignment equipment.
The calibration device 11 may be configured to calibrate at least two identification positions on an image acquired by the camera.
A position alignment device 12 operable to align the at least two identification bits on a camera near side and a camera far side of the camera, respectively.
And a position obtaining device 13, configured to obtain at least two near plane three-dimensional coordinates obtained when the position alignment apparatus aligns the at least two identification bits on the photographic near plane, and at least two far plane three-dimensional coordinates obtained when the position alignment apparatus aligns the at least two identification bits on the photographic far plane.
And the processing device 14 is used for performing geometric processing on the at least two near plane three-dimensional coordinates and the at least two far plane three-dimensional coordinates to obtain the position of the camera.
And the coordinate conversion device 15 can be used for obtaining the camera coordinates of the camera in a camera coordinate system according to the position of the camera.
And the field angle calculation device 16 is used for calculating the field angle of the camera by utilizing the camera coordinates when the distances from the at least two identification positions to the central point of the image are equal.
And the shooting direction calculating device 17 is configured to calculate the shooting direction of the image acquired by the camera by using the unit vectors of the first straight line and the second straight line when the intersection point corresponds to the central point of the image acquired by the camera and an angular bisector of the included angle formed by the first straight line and the second straight line passes through the central point.
It will be appreciated that to accomplish the present invention, the camera position determination system may include one or more devices.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units, modules or devices recited in the system, device or terminal device claims may also be implemented by the same unit, module or device through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention.

Claims (8)

1. A camera position determination method, characterized in that the method comprises:
calibrating at least two identification positions on an image acquired by the camera;
aligning the at least two identification positions on a photographic near surface and a photographic far surface of the camera respectively by using position alignment equipment;
acquiring at least two near surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic near surface and at least two far surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic far surface;
performing geometric processing on the at least two near-surface three-dimensional coordinates and the at least two far-surface three-dimensional coordinates to obtain the position of the camera;
obtaining a camera coordinate of the camera in a camera coordinate system according to the position of the camera;
when the distances from the at least two identification positions to the central point of the image are equal, calculating the angle of view of the camera by using the camera coordinates, wherein the calculating the angle of view of the camera by using the camera coordinates comprises:
using the formula:
y/v=2*tan(fov/2)*z
wherein fov represents the camera's field of view; y represents the y-axis component of the camera in the camera coordinate system; z represents a z-axis component in the camera coordinate system; v represents the absolute value of the difference between the y-axis component of one of the at least two identification bits and the y-axis component of the center of the image acquired by the camera under the image coordinate system.
2. The camera position determination method according to claim 1, wherein the at least two identification bits include a first identification bit and a second identification bit;
the at least two near-surface three-dimensional coordinates comprise a first coordinate and a second coordinate, and the at least two far-surface three-dimensional coordinates comprise a third coordinate and a fourth coordinate;
the first identification position corresponds to the first coordinate and the third coordinate; the second identification position corresponds to the second coordinate and the fourth coordinate.
3. The camera position determination method of claim 2, wherein geometrically processing the at least two near plane three-dimensional coordinates and the at least two far plane three-dimensional coordinates to obtain the position of the camera comprises:
geometrically processing the first coordinate and the third coordinate to obtain a first straight line passing through the first coordinate and the third coordinate;
geometrically processing the second coordinate and the fourth coordinate to obtain a second straight line passing through the second coordinate and the fourth coordinate;
and acquiring the intersection point of the first straight line and the second straight line to obtain the position of the camera.
4. The camera position determination method of claim 1, wherein said calculating the field of view of the camera using the camera coordinate system coordinates comprises:
using the formula:
x/u=(y/v)*(c_width/c_height);
y/v=2*tan(fov/2)*z
wherein fov represents the camera's field of view; x represents the x-axis component of the camera in the camera coordinate system; y represents the y-axis component of the camera in the camera coordinate system; z represents the z-axis component in the camera coordinate system; u represents the absolute value of the difference between the x-axis component of one of the at least two identification positions and the x-axis component of the center of the image acquired by the camera in the image coordinate system; v represents the absolute value of the difference between the y-axis component of one of the at least two identification positions and the y-axis component of the center of the image acquired by the camera in the image coordinate system; c_width represents the width of the display image; c_height represents the height of the display image.
5. The camera position determination method of claim 1, wherein said deriving camera coordinates of said camera in a camera coordinate system based on said camera position comprises:
and performing matrix conversion processing on the position of the camera to obtain the coordinates of the camera in a camera coordinate system.
6. The camera position determination method of claim 3, further comprising:
and obtaining the shooting direction of the image acquired by the camera by using the first straight line and the second straight line.
7. The camera position determining method according to claim 6, wherein obtaining a shooting direction of the image acquired by the camera using the first line and the second line comprises:
and when the intersection point corresponds to the central point of the image acquired by the camera and an angular bisector of an included angle formed by the first straight line and the second straight line passes through the central point, calculating the shooting direction of the image acquired by the camera by using the unit vectors of the first straight line and the second straight line.
8. A camera position determination system, characterized in that the system comprises:
the calibration device is used for calibrating at least two identification positions on the image acquired by the camera;
the position alignment equipment is used for respectively aligning the at least two identification positions on the photographic near surface and the photographic far surface of the camera;
the position acquisition device is used for acquiring at least two near surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic near surface and at least two far surface three-dimensional coordinates obtained when the position alignment equipment aligns the at least two identification positions on the photographic far surface;
the processing device is used for carrying out geometric processing on the at least two near-surface three-dimensional coordinates and the at least two far-surface three-dimensional coordinates to obtain the position of the camera;
the processing device is further used for obtaining camera coordinates of the camera in a camera coordinate system according to the position of the camera; when the distances from the at least two identification positions to the central point of the image are equal, calculating the field angle of the camera by utilizing the camera coordinates;
wherein the calculating the field angle of the camera using the camera coordinates comprises:
using the formula:
y/v=2*tan(fov/2)*z
wherein fov represents the camera's field of view; y represents the y-axis component of the camera in the camera coordinate system; z represents a z-axis component in the camera coordinate system; v represents the absolute value of the difference between the y-axis component of one of the at least two identification bits and the y-axis component of the center of the image acquired by the camera under the image coordinate system.
CN201710686610.5A 2017-08-08 2017-08-08 Camera position determining method and system Active CN109389638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710686610.5A CN109389638B (en) 2017-08-08 2017-08-08 Camera position determining method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710686610.5A CN109389638B (en) 2017-08-08 2017-08-08 Camera position determining method and system

Publications (2)

Publication Number Publication Date
CN109389638A CN109389638A (en) 2019-02-26
CN109389638B true CN109389638B (en) 2020-11-06

Family

ID=65413885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710686610.5A Active CN109389638B (en) 2017-08-08 2017-08-08 Camera position determining method and system

Country Status (1)

Country Link
CN (1) CN109389638B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN105913417A (en) * 2016-04-05 2016-08-31 天津大学 Method for geometrically constraining pose based on perspective projection line

Also Published As

Publication number Publication date
CN109389638A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
US11087531B2 (en) System and method for determining geo-location(s) in images
JP4825980B2 (en) Calibration method for fisheye camera.
JP2874710B2 (en) 3D position measuring device
CN104279960B (en) Method for measuring size of object by mobile equipment
WO2017022033A1 (en) Image processing device, image processing method, and image processing program
CN108288291A (en) Polyphaser calibration based on single-point calibration object
CN108629829B (en) Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera
CN106331527A (en) Image splicing method and device
CN111192235A (en) Image measuring method based on monocular vision model and perspective transformation
JP2013171523A (en) Ar image processing device and method
Feng et al. Inertial measurement unit aided extrinsic parameters calibration for stereo vision systems
CN111882608A (en) Pose estimation method between augmented reality glasses tracking camera and human eyes
Liu et al. Epipolar rectification method for a stereovision system with telecentric cameras
CN107977998B (en) Light field correction splicing device and method based on multi-view sampling
Kurillo et al. Framework for hierarchical calibration of multi-camera systems for teleimmersion
CN109682312B (en) Method and device for measuring length based on camera
CN109389638B (en) Camera position determining method and system
JP3221384B2 (en) 3D coordinate measuring device
JP2006215939A (en) Free viewpoint image composition method and device
WO2022184928A1 (en) Calibration method of a portable electronic device
Kudinov et al. The algorithm for a video panorama construction and its software implementation using CUDA technology
Barazzetti Planar metric rectification via parallelograms
Li et al. Method for horizontal alignment deviation measurement using binocular camera without common target
Duan et al. Camera self-calibration method based on two vanishing points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
Denomination of invention: Method and system for determining camera position
Effective date of registration: 20221102
Granted publication date: 20201106
Pledgee: Chongqing Longshang financing Company Limited by Guarantee
Pledgor: CHONGQING IVREAL TECHNOLOGY CO.,LTD.
Registration number: Y2022500000092
PC01 Cancellation of the registration of the contract for pledge of patent right
Date of cancellation: 20231018
Granted publication date: 20201106
Pledgee: Chongqing Longshang financing Company Limited by Guarantee
Pledgor: CHONGQING IVREAL TECHNOLOGY CO.,LTD.
Registration number: Y2022500000092