CN111553944A - Method and device for determining camera layout position, terminal equipment and storage medium - Google Patents


Publication number
CN111553944A
CN111553944A (application number CN202010207264.XA)
Authority
CN
China
Prior art keywords
points
angle
camera layout
target
visibility
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010207264.XA
Other languages
Chinese (zh)
Other versions
CN111553944B (en)
Inventor
洪智慧 (Hong Zhihui)
许秋子 (Xu Qiuzi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd filed Critical Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202010207264.XA priority Critical patent/CN111553944B/en
Publication of CN111553944A publication Critical patent/CN111553944A/en
Priority to PCT/CN2021/080725 priority patent/WO2021190331A1/en
Application granted granted Critical
Publication of CN111553944B publication Critical patent/CN111553944B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The application belongs to the technical field of machine vision, and relates to a method and a device for determining a camera layout position, a terminal device and a storage medium. The method comprises the following steps: acquiring a preset mark point and n camera layout points in a scene space; dividing the angle range in which the mark point rotates along the vertical direction into a plurality of angle intervals; respectively calculating the horizontal rotation visibility corresponding to each angle interval according to the position of the mark point and the positions of the n camera layout points; calculating the stereoscopic visibility of the mark point according to the horizontal rotation visibility of each angle interval; and if the stereoscopic visibility exceeds a set threshold, determining the n camera layout points as camera layout position points of the optical motion capture system. Through the embodiments of the application, it can be quickly and accurately verified whether an object to be detected can be captured from the determined camera layout points, and thus whether the positions of those layout points are reasonable.

Description

Method and device for determining camera layout position, terminal equipment and storage medium
Technical Field
The present application belongs to the technical field of machine vision, and in particular, to a method and an apparatus for determining a camera layout position, a terminal device, and a storage medium.
Background
In optical motion capture systems, reasonable camera layout positions are important to effectively capture images with each camera.
At present, related camera layout methods usually just blindly increase the number of cameras to expand the shooting space. As a result, once the object to be detected is occluded, or the cameras have blind zones, there is no way to quickly and accurately verify whether the object to be detected can be captured by cameras at the determined position points, i.e., whether those camera position points are reasonably distributed.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present application and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, a terminal device and a storage medium for determining a camera layout position, which can solve the technical problem that in the prior art, it is impossible to quickly and accurately verify whether an object to be detected can be captured by each determined camera layout position point.
In a first aspect of the embodiments of the present application, there is provided a method for determining a layout position of a camera, which is applied to an optical motion capture system, the method including:
acquiring preset mark points and n camera layout points in a scene space, wherein n is an integer greater than or equal to 4;
dividing the angle range of the mark point rotating along the vertical direction into a plurality of angle intervals;
respectively calculating the horizontal rotation visibility corresponding to each angle interval according to the positions of the mark points and the positions of the n camera layout points, wherein the horizontal rotation visibility is used for measuring the probability that the inclination angle of the mark points in the vertical direction is in the corresponding angle interval and the mark points can be successfully captured by the cameras at the n camera layout points during horizontal rotation;
calculating to obtain the stereoscopic visibility of the mark points according to the horizontal rotation visibility of each angle interval, wherein the stereoscopic visibility is used for measuring the probability of successful capture by the cameras at the n camera layout points when the mark points rotate randomly;
and if the stereoscopic visibility exceeds a set threshold value, determining the n camera layout points as the camera layout position points of the optical motion capture system.
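The five steps above can be sketched as a short validation routine. This is an illustrative sketch, not the patent's implementation; the function name and the interval/visibility callbacks are hypothetical placeholders standing in for the detailed procedures described below:

```python
# Hedged sketch of the first-aspect method. interval_fn and
# horizontal_visibility_fn are hypothetical placeholders for the
# interval-division and per-interval visibility steps.

def validate_camera_layout(marker, cameras, threshold,
                           interval_fn, horizontal_visibility_fn):
    # Divide the 0-180 degree vertical rotation range into angle intervals
    intervals = interval_fn(marker, cameras)
    # Horizontal rotation visibility for each interval
    h = [horizontal_visibility_fn(marker, cameras, iv) for iv in intervals]
    # Stereoscopic visibility = sum of (interval size / 180) * visibility
    v = sum(((hi - lo) / 180.0) * hv for (lo, hi), hv in zip(intervals, h))
    # Accept the layout if the stereoscopic visibility exceeds the threshold
    return v > threshold
```

Used with stub callbacks, the routine simply aggregates per-interval scores into one accept/reject decision.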
In some embodiments of the present application, the dividing the angle range in which the mark point rotates in the vertical direction into a plurality of angle sections includes:
for each camera layout point, calculating the included angle between the line connecting the camera layout point and the mark point and the vertical direction axis, so as to obtain n included angles;
and dividing the 0°-180° angle range in which the mark point rotates along the vertical direction into n + 1 angle intervals, using the n included angles as dividing endpoints.
In some embodiments of the present application, the horizontal rotation visibility of any one of the plurality of angle intervals is calculated by:
selecting a target angle from the target angle interval;
constructing a target straight line which comprises the mark point and has an included angle with a vertical direction shaft as the target angle;
constructing n-1 shielding planes based on the positions of the rest n-1 camera layout points except the target camera layout point in the n camera layout points and the positions of the mark points, wherein each shielding plane comprises the target straight line, and the included angle between the connecting line of the target camera layout point and the mark point and the vertical direction axis is the upper angle limit of the target angle interval;
determining a target occlusion plane in the n-1 occlusion planes, wherein a visible space of the target occlusion plane includes more than 2 camera layout points in the n camera layout points, and the visible space of the target occlusion plane is one of two spaces obtained by dividing the scene space by the target occlusion plane;
respectively calculating the angular interval ratio of each target shielding plane, wherein the angular interval ratio is the ratio of the included angle between the target shielding plane and the adjacent shielding plane to 360 degrees, and the adjacent shielding plane is the first shielding plane in the n-1 shielding planes which is reached by the corresponding target shielding plane in a rotating mode along the appointed direction by taking the target straight line as an axis;
and adding the proportion of the angle intervals of the target shielding planes to obtain the horizontal rotation visibility of the target angle intervals.
In some embodiments of the present application, the constructing n-1 occlusion planes based on the positions of the remaining n-1 camera layout points of the n camera layout points except for the target camera layout point and the positions of the marker points includes:
intercepting a line segment from the target straight line;
and respectively connecting two end points of the line segment with each camera layout point in the n-1 camera layout points to construct n-1 shielding planes.
In some embodiments of the present application, after determining the target shielding planes among the n-1 shielding planes and before calculating the angular interval ratio of each target shielding plane, the method further includes:
for each target shielding plane, calculating the included angle between the 2 line segments formed by connecting any 2 of the more than 2 camera layout points contained in the visible space of the target shielding plane with the mark point;
and removing any target shielding plane for which such an included angle is not within the set angle range.
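The filter above needs the angle at the mark point between two camera directions (the triangulation angle). A minimal sketch under the assumption that points are 3-D tuples; the [30°, 150°] range mentioned in the comment is only an illustrative example, not a value stated in the patent:

```python
import math

def triangulation_angle(marker, cam_a, cam_b):
    """Angle at the marker between segments marker->cam_a and marker->cam_b, in degrees."""
    u = [a - m for a, m in zip(cam_a, marker)]
    v = [b - m for b, m in zip(cam_b, marker)]
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # Clamp to avoid domain errors from floating-point rounding
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# A plane might be kept only if every camera pair in its visible space
# subtends an angle within a set range, e.g. [30, 150] degrees (assumed value).
```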
In some embodiments of the application, the calculating the stereoscopic visibility of the mark point according to the horizontal rotation visibility of each of the angle intervals includes:
respectively calculating the ratio of the size of each angle interval to 180 degrees to obtain the vertical angle ratio of each angle interval;
and calculating the stereoscopic visibility of the mark point by combining the vertical angle ratio of each angle interval and the horizontal rotation visibility of each angle interval.
In some embodiments of the application, the calculating the stereoscopic visibility of the mark point by combining the vertical angle ratio of each of the angle intervals and the horizontal rotation visibility of each of the angle intervals includes:
for each angle interval, multiplying the vertical angle ratio and the horizontal rotation visibility of the angle interval to obtain the stereoscopic visibility of the angle interval;
and adding the stereoscopic visibilities of the angle intervals to obtain the stereoscopic visibility of the mark point.
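The combination step above is a weighted sum: each interval's horizontal rotation visibility is weighted by its vertical angle ratio. A minimal sketch (illustrative helper, not from the patent text):

```python
def stereoscopic_visibility(interval_sizes_deg, horizontal_visibilities):
    """Combine per-interval scores as described: sum over intervals of
    (interval size / 180) * horizontal rotation visibility."""
    assert abs(sum(interval_sizes_deg) - 180) < 1e-9  # intervals cover 0-180 degrees
    ratios = [s / 180.0 for s in interval_sizes_deg]  # vertical angle ratios
    return sum(p * h for p, h in zip(ratios, horizontal_visibilities))
```

For instance, two equal 90° intervals with visibilities 1.0 and 0.5 combine to 0.75.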
In a second aspect of the embodiments of the present application, there is provided an apparatus for determining a layout position of cameras, which is applied to an optical motion capture system, the apparatus including:
a position point acquisition module, configured to acquire a preset mark point and n camera layout points in a scene space, wherein n is an integer greater than or equal to 4;
the vertical rotation angle dividing module is used for dividing the angle range of the mark point rotating along the vertical direction into a plurality of angle intervals;
the horizontal rotation visibility calculation module is used for calculating the horizontal rotation visibility corresponding to each angle interval according to the position of the mark point and the positions of the n camera layout points, wherein the horizontal rotation visibility is used for measuring the probability that the mark point, with its inclination angle in the vertical direction within the corresponding angle interval, can be successfully captured by the cameras at the n camera layout points during horizontal rotation;
the stereoscopic visibility calculation module is used for calculating the stereoscopic visibility of the mark points according to the horizontal rotation visibility of each angle interval, and the stereoscopic visibility is used for measuring the probability of successful capture of the mark points by the cameras at the n camera layout points when the mark points rotate randomly;
a camera layout position determining module, configured to determine the n camera layout points as camera layout position points of the optical motion capture system if the stereoscopic visibility exceeds a set threshold.
In a third aspect of the embodiments of the present application, there is provided a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for determining a camera layout position as described in any one of the above when executing the computer program.
In a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for determining a camera layout position as described in any one of the above.
In a fifth aspect of embodiments of the present application, there is provided a computer program product which, when run on a terminal device, causes the terminal device to perform the steps of the method for determining a camera layout position as described in any one of the above.
Compared with the prior art, the embodiments of the present application have the following advantages: first, a mark point and n camera layout points are preset in a scene space, and the angle range in which the mark point rotates along the vertical direction is divided into a plurality of angle intervals; then the horizontal rotation visibility corresponding to each angle interval is calculated, to measure the probability that the mark point, with its inclination angle in the vertical direction within the corresponding angle interval, can be successfully captured by cameras at the n camera layout points during horizontal rotation; next, the stereoscopic visibility of the mark point is calculated from the horizontal rotation visibility of each angle interval, to measure the probability that the mark point is successfully captured by cameras at the n camera layout points when it rotates arbitrarily; finally, the calculated stereoscopic visibility of the mark point is compared with a set threshold, thereby determining whether the positions of the n camera layout points are reasonable.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating a method for determining a camera layout position according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an exemplary scene of an optical motion capture system provided by an embodiment of the present application;
fig. 3 is a schematic flowchart of step S120 in a method for determining a layout position of a camera according to an embodiment of the present application;
fig. 4 is a schematic flowchart of step S130 in a method for determining a layout position of a camera according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of an occlusion plane in a method for determining a layout position of a camera according to an embodiment of the present application;
fig. 6 is a schematic flowchart of step S430 in a method for determining a layout position of a camera according to an embodiment of the present application;
FIG. 7 is an exemplary diagram of an object occlusion plane in a method for determining a camera layout position according to an embodiment of the present application;
FIG. 8 is another schematic flow chart of a method for determining a layout position of a camera according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of step S140 in a method for determining a layout position of a camera according to an embodiment of the present application;
FIG. 10 is a block diagram of an apparatus for determining a layout position of cameras according to an embodiment of the present application;
fig. 11 is a schematic block diagram of a terminal device in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
A motion capture system is a high-technology device for accurately measuring the motion of a target object in three-dimensional space. Based on computer graphics principles, the motion of the target object (tracker) is recorded in the form of images by a plurality of video capture devices arranged in the space, and the image data is then processed by a computer to obtain the spatial position of the target object at different time instants.
In particular, the task of motion capture is accomplished by monitoring and tracking particular light points on the target object. For any point in space, as long as it can be seen by two cameras simultaneously, its spatial position at that moment can be determined from the images shot by the two cameras and the camera parameters. In general, key parts of the target object, such as human joints or hips, are pasted with highly reflective materials or luminous points, called "Markers", so that the cameras can easily capture them. The optical motion capture system only recognizes and processes these markers to determine and calculate the spatial position of each light point at each instant.
In optical motion capture systems, reasonable camera layout positions are important to efficiently utilize the images captured by each camera. A good camera layout has, but is not limited to, the following features: first, the target capture object can be captured by at least two cameras even when it is partially occluded, which facilitates high-quality tracking of the target and reduces unnecessary post-processing of data; second, the triangulation angle lies within a reasonable measurement angle interval, so that the results of subsequent camera calibration and 2D-3D reconstruction are more stable and reliable, with smaller errors.
However, existing camera layout strategies generally do not consider these two conditions, and instead only blindly increase the number of cameras to expand the shooting space. As a result, once the object to be detected is occluded or the cameras have blind zones, the related art cannot quickly and accurately verify whether the object can be captured by cameras at the determined position points, i.e., whether those camera positions are reasonably arranged; this may cause a series of problems such as large 3D reconstruction errors, unstable tracking, tracking cross points, or even tracking loss.
The method for determining the camera layout position provided in the embodiment of the present application can be applied to the above-mentioned optical motion capture system in which at least 4 camera layout points are preset.
Fig. 1 shows a schematic flow chart of a method for determining a camera layout position provided in an embodiment of the present application, where the method includes:
step S110, obtaining a preset mark point and n camera layout points in a scene space, wherein n is an integer greater than or equal to 4.
It is understood that the scene space may be any scene space of an existing optical motion capture system, and this is not particularly limited in the embodiments of the present application. The preset mark point can be any mark point on the surface of a predetermined target capture object in the scene space. For example, it may be the center point of the target capture object. The positions of the preset marking points and the positions of the layout points of the cameras are generally marked by using the same coordinate rule, for example, a three-dimensional coordinate system (x, y, z). Of course, the position of the preset mark point and the position of each camera layout point may be marked by other manners such as polar coordinates, which is not specifically limited in this embodiment of the application.
Fig. 2 is a schematic diagram of an exemplary scene of an optical motion capture system according to an embodiment of the present application, in which a scene space has a preset mark point M and preset 4 camera layout points 1, 2, 3, and 4.
It should be noted that it is assumed that cameras at the n camera layout points can capture the preset mark point without being blocked.
And step S120, dividing the angle range of the mark point rotating along the vertical direction into a plurality of angle intervals.
It is understood that the vertical direction refers to a direction passing through the marked point and perpendicular to the horizontal plane. The angle of rotation in the vertical direction is in the range of 0-180 °. Therefore, the angle range in which the mark point is rotated in the vertical direction may be divided into a plurality of angle sections with the vertical direction as a reference direction.
As shown in fig. 3, in an embodiment of the present application, step S120 specifically includes:
step S310, for each camera layout point, calculating an included angle between a connecting line of the camera layout point and the mark point and a vertical direction axis to obtain n included angle angles.
It can be understood that the included angle between this connecting line and the vertical direction axis may be calculated in any existing manner of solving the angle between two straight lines, which is not specifically limited in the embodiments of the present application.
Step S320, using the n included angles as dividing endpoints, divide the 0°-180° angle range in which the mark point rotates along the vertical direction into n + 1 angle intervals.
Specifically, the n included angles may be denoted as {a1, a2, ..., an | ai ∈ [0°, 180°]}. Sorting them in ascending order gives n sorted included angles. The angle differences between each pair of adjacent sorted angles are calculated in turn, yielding n - 1 difference values; together with the first sorted angle (its difference from 0°) and the difference between 180° and the n-th sorted angle, these form n + 1 values, namely {a1, a2 - a1, ..., 180 - an} (with the ai taken in sorted order). Finally, according to these n + 1 values, the 0°-180° angle range in which the mark point rotates along the vertical direction is divided into n + 1 angle intervals, the lengths of the intervals being {a1, a2 - a1, ..., 180 - an}. The proportion of each angle interval within the whole range is, in order, p1, p2, ..., pn+1, where p1 = a1/180, p2 = (a2 - a1)/180, ..., pn = (an - an-1)/180, pn+1 = (180 - an)/180, and n is an integer greater than or equal to 4.
For example, when n is 5, suppose the n included angles calculated in step S310 are: 50°, 30°, 20°, 120°, 80°. The 5 sorted included angles are: 20°, 30°, 50°, 80°, 120°. Calculating the differences between adjacent sorted angles in turn gives 10°, 20°, 30°, 40°; together with the first sorted angle, 20°, and the difference between 180° and the 5th sorted angle, 180° - 120° = 60°, the angle range in which the mark point rotates along the vertical direction is divided into 6 angle intervals: {[0°, 20°), [20°, 30°), [30°, 50°), [50°, 80°), [80°, 120°), [120°, 180°]}. The proportions of the angle intervals within the whole range are, in order: p1 = 20/180, p2 = 10/180, p3 = 20/180, p4 = 30/180, p5 = 40/180, p6 = 60/180.
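The division procedure in this example can be sketched as follows (illustrative helper, not code from the patent):

```python
def divide_angle_intervals(angles_deg):
    """Divide the 0-180 degree range into n+1 intervals, using the n included
    angles as endpoints; return the interval bounds and their proportions."""
    a = sorted(angles_deg)                       # ascending included angles
    bounds = [0.0] + a + [180.0]                 # n + 2 boundary values
    intervals = list(zip(bounds[:-1], bounds[1:]))
    proportions = [(hi - lo) / 180.0 for lo, hi in intervals]
    return intervals, proportions

intervals, props = divide_angle_intervals([50, 30, 20, 120, 80])
# intervals: [0,20), [20,30), [30,50), [50,80), [80,120), [120,180]
# props:     20/180, 10/180, 20/180, 30/180, 40/180, 60/180
```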
Step S130, respectively calculating a horizontal rotation visibility corresponding to each angle interval according to the positions of the mark points and the positions of the n camera layout points, wherein the horizontal rotation visibility is used for measuring the probability that the inclination angle of the mark points in the vertical direction is within the corresponding angle interval and the mark points can be successfully captured by the cameras at the n camera layout points during horizontal rotation.
In a plane, a straight line L passing through a point can be carried onto any other straight line through that point by a horizontal rotation of 0-360°; extending this by analogy to three-dimensional space, a spatial plane A passing through a point can be carried onto any other plane through that point by a horizontal rotation of 0-360° combined with an oblique rotation of 0-180°. That is, any spatial plane through a point can be reached by a 0-360° horizontal rotation and a 0-180° oblique rotation. Therefore, the stereoscopic visibility of the mark point can be divided into two stages: oblique rotation and horizontal rotation.
As shown in fig. 4, in one embodiment of the present application, the horizontal rotation visibility of any one of the plurality of angle intervals is calculated by:
and step S410, selecting a target angle from the target angle interval.
Still in the above example, when the target angle interval is [50 °, 80 °), the target angle may be arbitrarily selected in the target angle interval, for example, the target angle is 60 °.
And step S420, constructing a target straight line which contains the mark points and has an included angle with the vertical direction axis as the target angle.
For example, as shown in fig. 5, when n is 4, the mark point is M, the target camera layout point is 1, and the target angle is 60°, a target straight line containing the mark point M and forming a 60° angle with the vertical direction axis can be constructed as: the straight line passing through the mark point M and the target camera layout point 1.
Step S430, constructing n-1 shielding planes based on the positions of the rest n-1 camera layout points except the target camera layout point in the n camera layout points and the positions of the mark points, wherein each shielding plane comprises the target straight line, and the included angle between the connecting line of the target camera layout point and the mark point and the vertical direction axis is the upper angle limit of the target angle interval.
As shown in fig. 6, in an embodiment of the present application, step S430 specifically includes:
and step S610, intercepting a line segment from the target straight line.
For example, as shown in fig. 5, a line segment 1M is cut from a straight line passing through the mark point M and the target camera layout point 1.
And S620, respectively connecting two end points of the line segment with each camera layout point in the n-1 camera layout points to construct n-1 shielding planes.
For example, as shown in fig. 5, when n is 4, the mark point is M, the target camera layout point is 1, and the target angle is 60 °, two end points 1 and M of the line segment 1M are respectively connected to each of the remaining 3 camera layout points 2, 3, and 4, so as to construct 3 occlusion planes, i.e., a plane 1M2, a plane 1M3, and a plane 1M 4.
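Connecting the two segment endpoints with a camera layout point reduces each occlusion plane to a plane through three points. A sketch under the assumption that points are 3-D tuples, with the plane stored as a normal vector and offset:

```python
def plane_from_points(p, q, r):
    """Plane through three 3-D points, as (normal n, offset d) with n . x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],          # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    d = sum(n[i] * p[i] for i in range(3))
    return n, d

def occlusion_planes(seg_end_a, seg_end_b, camera_points):
    """One plane per remaining camera layout point; each contains segment a-b."""
    return [plane_from_points(seg_end_a, seg_end_b, c) for c in camera_points]
```

With 3 remaining camera points, this yields the 3 planes of the n = 4 example (plane 1M2, plane 1M3, plane 1M4).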
It should be noted that after the n-1 occlusion planes are constructed based on the positions of the remaining n-1 camera layout points (excluding the target camera layout point) and the position of the mark point, the scene space is divided into 2×(n-1) regions; for example, when n is 4, the scene space is divided into 6 spatial regions as shown in fig. 5.
Step S440, determining a target occlusion plane in the n-1 occlusion planes, wherein a visible space of the target occlusion plane includes more than 2 camera layout points in the n camera layout points, and the visible space of the target occlusion plane is one of two spaces obtained by dividing the scene space by the target occlusion plane.
Still in the above example, as shown in fig. 7, when n is 4, the mark point is M, the target camera layout point is 1, and the target angle is 60°, the 3 constructed occlusion planes are: plane 1M2, plane 1M3 and plane 1M4. Starting with occlusion plane 1M2, the plane divides the scene space into two spaces: a visible space (the black area in fig. 7) and an invisible space (the gray area in fig. 7, i.e. the area behind the black area that is not visible). It is then determined whether 2 or more of the 4 camera layout points are located in the visible space of occlusion plane 1M2. In this example, camera layout points 3 and 4 are located in the visible space of occlusion plane 1M2, so occlusion plane 1M2 satisfies the condition that its visible space includes 2 or more of the n camera layout points and can be determined as a target occlusion plane.
Similarly, it may be determined whether the occlusion plane 1M3 and the occlusion plane 1M4 are the target occlusion plane in turn.
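Which half-space a camera layout point falls in can be decided from the sign of its signed distance to the plane. The sketch below assumes, as a simplification (the patent's choice of visible side is defined by its figures), that the visible space is the half-space the plane normal points into; all coordinates are hypothetical:

```python
import numpy as np

def is_target_plane(normal, M, cameras, min_pts=2, tol=1e-9):
    # The sign of dot(normal, C - M) tells which half-space camera point C
    # lies in; the plane qualifies as a target occlusion plane when at least
    # min_pts camera points lie strictly in the visible half-space.
    signed = [float(np.dot(normal, C - M)) for C in cameras]
    return sum(s > tol for s in signed) >= min_pts

M = np.array([0.0, 0.0, 0.0])               # mark point (hypothetical)
normal = np.array([0.0, 0.0, 1.0])          # occlusion-plane normal (hypothetical)
cameras = [np.array([0.0, 1.0, 2.0]),       # camera layout points (hypothetical)
           np.array([1.0, 0.0, 3.0]),
           np.array([0.0, 0.0, -1.0]),
           np.array([1.0, 1.0, -2.0])]
qualifies = is_target_plane(normal, M, cameras)   # two cameras on the visible side
```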
As shown in fig. 8, in an embodiment of the present application, after step S440 and before step S450, the method further includes:
Step S810, for each target occlusion plane, calculating the included angle between the 2 line segments formed by connecting any 2 of the more than 2 camera layout points contained in its visible space respectively with the mark point.
It can be understood that the included angle between the line segments may be calculated by any existing method for finding the angle between two straight lines, which is not specifically limited in the embodiments of the present application.
Step S820, removing any target occlusion plane for which, among the more than 2 camera layout points contained in its visible space, the included angle between the 2 line segments formed by connecting any 2 of those camera layout points respectively with the mark point is not within the set angle range.
It should be noted that, for two camera layout points observing the same mark point at the same time, if the included angle between the 2 line segments formed by connecting the two camera layout points respectively with the mark point is within a set angle range, for example within [45°, 135°], the two camera layout points can capture the mark point well. By removing target occlusion planes that do not satisfy this condition, the rationality of the camera layout point placement can be further improved.
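This pair-angle filter reduces to a dot-product test at the mark point. A sketch using the example range [45°, 135°], with hypothetical coordinates:

```python
import numpy as np

def pair_angle_ok(Ca, Cb, M, lo=45.0, hi=135.0):
    """True if the angle at M between segments M-Ca and M-Cb lies in [lo, hi] degrees."""
    u, v = Ca - M, Cb - M
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))  # clip guards rounding
    return lo <= ang <= hi

M = np.array([0.0, 0.0, 0.0])   # mark point at the origin (hypothetical)
ok = pair_angle_ok(np.array([1.0, 0.0, 1.0]),
                   np.array([-1.0, 0.0, 1.0]), M)   # the two segments are 90 degrees apart
```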
Step S450, respectively calculating the angular interval ratio of each target occlusion plane, wherein the angular interval ratio is the ratio of the included angle between the target occlusion plane and its adjacent occlusion plane to 360°, and the adjacent occlusion plane is the first of the n-1 occlusion planes reached by rotating the corresponding target occlusion plane about the target straight line in a specified direction.
It can be understood that the included angle between a target occlusion plane and its adjacent occlusion plane may be calculated by any existing method for finding the angle between two planes, which is not specifically limited in the embodiments of the present application. The adjacent occlusion plane is the first of the n-1 occlusion planes reached by rotating the corresponding target occlusion plane about the target straight line in a specified direction; for example, occlusion plane 1M2 may be the first of the 3 occlusion planes reached by rotating the target occlusion plane 1M3 or 1M4 of the above example in the specified direction. The specified direction may be clockwise, counterclockwise, or any other direction that can be calculated or determined in advance, which is not specifically limited in the embodiments of the present application.
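The angle swept when rotating one occlusion plane about the target straight line to reach its neighbour can be computed by projecting each plane's in-plane direction onto the plane perpendicular to the axis and taking a signed angle. In this sketch the right-hand rule about the axis stands in for the patent's "specified direction", and the axis and plane directions are hypothetical:

```python
import numpy as np

def rotation_angle_deg(v_from, v_to, axis):
    """Angle in [0, 360) through which the half-plane containing v_from rotates
    about `axis` (right-hand rule) to reach the half-plane containing v_to."""
    a = axis / np.linalg.norm(axis)
    u = v_from - np.dot(v_from, a) * a   # components perpendicular to the axis
    w = v_to - np.dot(v_to, a) * a
    ang = np.degrees(np.arctan2(np.dot(np.cross(u, w), a), np.dot(u, w)))
    return ang % 360.0

# Angular interval ratio: angle to the adjacent plane divided by 360 degrees.
z = np.array([0.0, 0.0, 1.0])                     # target straight line as axis (hypothetical)
ratio = rotation_angle_deg(np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 1.0, 0.0]), z) / 360.0
```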
Step S460, adding the angular interval ratios of the target occlusion planes to obtain the horizontal rotation visibility of the target angle interval.
For example, assume that the target occlusion planes of the target angle interval are 1M3 and 1M4, where the angular interval ratio of target occlusion plane 1M3 is 0.3 and that of target occlusion plane 1M4 is 0.45; the horizontal rotation visibility of the target angle interval is then 0.3 + 0.45 = 0.75.
Step S140, calculating to obtain the stereoscopic visibility of the mark points according to the horizontal rotation visibility of each angle interval, wherein the stereoscopic visibility is used for measuring the probability that the mark points can be successfully captured by the cameras at the n camera layout points when the mark points rotate randomly.
It will be appreciated that a mark point may take any orientation when rotated in space; therefore the stereoscopic visibility of the mark point is determined by combining the horizontal rotation visibility with the vertical angle ratio, so as to better measure the visibility of the mark point under arbitrary rotation.
As shown in fig. 9, in an embodiment of the present application, step S140 specifically includes:
Step S910, respectively calculating the ratio of the size of each angle interval to 180°, to obtain the vertical angle ratio of each angle interval.
For example, when n is 5, the n included angles calculated in step S210 are 50°, 30°, 20°, 120° and 80°, and the 0°-180° angle range of the mark point rotating along the vertical direction is divided into 6 angle intervals: {[0°, 20°), [20°, 30°), [30°, 50°), [50°, 80°), [80°, 120°), [120°, 180°]}. The proportion of each angle interval in the angle range is, in sequence: p1 = 20/180, p2 = 10/180, p3 = 20/180, p4 = 30/180, p5 = 40/180, p6 = 60/180.
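The interval construction in this example can be reproduced directly: the n included angles are sorted, 0° and 180° are appended as outer endpoints, and each interval's width is divided by 180°:

```python
angles = [50, 30, 20, 120, 80]                     # the n = 5 included angles from step S210
endpoints = [0] + sorted(angles) + [180]           # dividing endpoints of the 0-180 degree range
intervals = list(zip(endpoints[:-1], endpoints[1:]))   # the n + 1 = 6 angle intervals
ratios = [(b - a) / 180 for a, b in intervals]     # vertical angle ratio of each interval
```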
Step S920, calculating the stereoscopic visibility of the mark point by combining the vertical angle ratio of each angle interval with the horizontal rotation visibility of each angle interval.
In an embodiment of the present application, step S920 specifically includes:
for each angle interval, multiplying the vertical angle ratio and the horizontal rotation visibility of the angle interval to obtain the stereoscopic visibility of the angle interval;
and adding the stereoscopic visibilities of the angle intervals to obtain the stereoscopic visibility of the mark point.
For example, when n is 5, the vertical angle ratio of each angle interval is: p1 = 20/180, p2 = 10/180, p3 = 20/180, p4 = 30/180, p5 = 40/180, p6 = 60/180. Assume the horizontal rotation visibility calculated for each angle interval is, in sequence: q1 = 0.05, q2 = 0.1, q3 = 0.15, q4 = 0.2, q5 = 0.3, q6 = 0.2.
The stereoscopic visibility V of the mark point is:
V = p1·q1 + p2·q2 + p3·q3 + p4·q4 + p5·q5 + p6·q6 ≈ 0.194.
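With the p and q values listed in this example, the weighted sum evaluates to 35/180 ≈ 0.194, which can be checked directly:

```python
p = [20/180, 10/180, 20/180, 30/180, 40/180, 60/180]   # vertical angle ratios
q = [0.05, 0.10, 0.15, 0.20, 0.30, 0.20]               # horizontal rotation visibilities
V = sum(pi * qi for pi, qi in zip(p, q))               # stereoscopic visibility of the mark point
```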
In general, the greater the stereoscopic visibility V of the mark point, the greater the probability that the mark point can be successfully captured by the cameras at the n camera layout points when it rotates arbitrarily. The stereoscopic visibility can therefore be used to simulate and measure the probability that the mark point, held in place but rotating in any direction, is successfully captured by the cameras at the n camera layout points.
Step S150, if the stereoscopic visibility exceeds a set threshold, determining the n camera layout points as camera layout position points of the optical motion capture system.
The set threshold may be set according to the requirements of the actual application scenario, and may be set to a numerical value such as 0.5, 0.75, and the like, for example.
To sum up, in the embodiments of the present application, a mark point and n camera layout points are first preset in a scene space, and the angle range of the mark point rotating along the vertical direction is divided into a plurality of angle intervals; the horizontal rotation visibility of each angle interval is then respectively calculated, to measure the probability that, when the inclination angle of the mark point in the vertical direction lies within the corresponding angle interval, the mark point can be successfully captured by the cameras at the n camera layout points during horizontal rotation; next, the stereoscopic visibility of the mark point is calculated from the horizontal rotation visibility of each angle interval, to measure the probability that the mark point can be successfully captured by the cameras at the n camera layout points when it rotates arbitrarily; finally, the calculated stereoscopic visibility of the mark point is compared with a set threshold to determine whether the positions of the n camera layout points are reasonable.
As shown in fig. 10, an embodiment of the present application provides an apparatus for determining a layout position of cameras, which is applied to an optical motion capture system, and the apparatus includes:
a position point obtaining module 1010, configured to obtain mark points and n camera layout points preset in a scene space, where n is an integer greater than or equal to 4;
a vertical rotation angle dividing module 1020, configured to divide an angle range in which the mark point rotates in the vertical direction into a plurality of angle intervals;
a horizontal rotation visibility calculating module 1030, configured to calculate, according to the positions of the mark points and the positions of the n camera layout points, a horizontal rotation visibility corresponding to each angle interval, where the horizontal rotation visibility is used to measure a probability that an inclination angle of the mark point in a vertical direction is within the corresponding angle interval and the mark point can be successfully captured by a camera located at the n camera layout points during horizontal rotation;
a stereoscopic visibility calculating module 1040, configured to calculate a stereoscopic visibility of the mark point according to the horizontal rotation visibility of each angle interval, where the stereoscopic visibility is used to measure a probability that the mark point can be successfully captured by the cameras at the n camera layout points when the mark point rotates arbitrarily;
a camera layout position determining module 1050, configured to determine the n camera layout points as camera layout position points of the optical motion capture system if the stereoscopic visibility exceeds a set threshold.
Further, the vertical rotation angle division module 1020 includes:
an included angle calculating unit, configured to calculate, for each camera layout point, the included angle between the line connecting the camera layout point with the mark point and the vertical direction axis, to obtain n included angles;
and a rotation angle dividing unit, configured to divide the 0°-180° angle range of the mark point rotating along the vertical direction into n+1 angle intervals, using the n included angles as dividing endpoints.
Further, the horizontal rotation visibility calculating module 1030 includes:
the target angle selecting unit is used for selecting a target angle from the target angle interval;
the target straight line construction unit is used for constructing a target straight line which comprises the mark points and has an included angle with a vertical direction shaft as the target angle;
a shielding plane construction unit, configured to construct n-1 shielding planes based on positions of n-1 camera layout points of the n camera layout points except for the target camera layout point and positions of the mark points, where each shielding plane includes the target straight line, and an included angle between a connecting line of the target camera layout point and the mark point and a vertical direction axis is an upper angle limit of the target angle interval;
a target occlusion plane determining unit, configured to determine a target occlusion plane in the n-1 occlusion planes, where a visible space of the target occlusion plane includes more than 2 camera layout points in the n camera layout points, and the visible space of the target occlusion plane is one of two spaces obtained by dividing the scene space by the target occlusion plane;
the angle interval proportion calculation unit is used for calculating the angle interval proportion of each target shielding plane respectively, the angle interval proportion is the ratio of an included angle between the target shielding plane and an adjacent shielding plane to 360 degrees, and the adjacent shielding plane is the first shielding plane in the n-1 shielding planes which is reached by the corresponding target shielding plane in a rotating mode along the specified direction by taking the target straight line as an axis;
and the angular interval ratio adding unit is used for adding the angular interval ratios of the target shielding planes to obtain the horizontal rotation visibility of the target angular interval.
Further, the apparatus for determining a camera layout position further comprises:
a line segment included angle calculating module, configured to calculate, for each target shielding plane, the included angle between the 2 line segments formed by connecting any 2 of the more than 2 camera layout points contained in its visible space respectively with the mark point;
and a target shielding plane removing module, configured to remove any target shielding plane for which, among the more than 2 camera layout points contained in its visible space, the included angle between the 2 line segments formed by connecting any 2 of those camera layout points respectively with the mark point is not within the set angle range.
Further, the stereoscopic visibility calculating module 1040 includes:
a vertical angle ratio calculating subunit, configured to respectively calculate the ratio of the size of each angle interval to 180°, to obtain the vertical angle ratio of each angle interval;
and a stereoscopic visibility calculating subunit, configured to calculate the stereoscopic visibility of the mark point by combining the vertical angle ratio of each angle interval with the horizontal rotation visibility of each angle interval.
In this embodiment, specific implementation and corresponding effects of each step may refer to the above-mentioned method embodiment, and are not described herein again.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Fig. 11 shows a schematic block diagram of a terminal device provided in an embodiment of the present application, and only shows a part related to the embodiment of the present application for convenience of description.
As shown in fig. 11, the terminal device 11 of this embodiment includes: a processor 110, a memory 111 and a computer program 112 stored in said memory 111 and executable on said processor 110. The processor 110, when executing the computer program 112, implements the steps in the above-described embodiments of the method for determining the camera layout position, such as the steps S110 to S150 shown in fig. 1. Alternatively, the processor 110, when executing the computer program 112, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 1010 to 1050 shown in fig. 10.
Illustratively, the computer program 112 may be partitioned into one or more modules/units that are stored in the memory 111 and executed by the processor 110 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program 112 in the terminal device 11.
The terminal device 11 may be any type of terminal device. Those skilled in the art will appreciate that fig. 11 is only an example of the terminal device 11, and does not constitute a limitation to the terminal device 11, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device 11 may further include an input-output device, a network access device, a bus, etc.
The Processor 110 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 111 may be an internal storage unit of the terminal device 11, such as a hard disk or a memory of the terminal device 11. The memory 111 may also be an external storage device of the terminal device 11, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 11. Further, the memory 111 may also include both an internal storage unit and an external storage device of the terminal device 11. The memory 111 is used for storing the computer program and other programs and data required by the terminal device 11. The memory 111 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/server and method may be implemented in other ways. For example, the above-described apparatus/server embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for determining a camera layout position for use in an optical motion capture system, the method comprising:
acquiring preset mark points and n camera layout points in a scene space, wherein n is an integer greater than or equal to 4;
dividing the angle range of the mark point rotating along the vertical direction into a plurality of angle intervals;
respectively calculating the horizontal rotation visibility corresponding to each angle interval according to the positions of the mark points and the positions of the n camera layout points, wherein the horizontal rotation visibility is used for measuring the probability that the inclination angle of the mark points in the vertical direction is in the corresponding angle interval and the mark points can be successfully captured by the cameras at the n camera layout points during horizontal rotation;
calculating to obtain the stereoscopic visibility of the mark points according to the horizontal rotation visibility of each angle interval, wherein the stereoscopic visibility is used for measuring the probability of successful capture by the cameras at the n camera layout points when the mark points rotate randomly;
and if the stereoscopic visibility exceeds a set threshold value, determining the n camera layout points as the camera layout position points of the optical motion capture system.
2. The method of determining a camera layout position as claimed in claim 1, wherein said dividing an angular range of the rotation of the marker point in the vertical direction into a plurality of angular intervals comprises:
for each camera layout point, calculating an included angle between a connecting line of the camera layout point and the mark point and a vertical direction axis, to obtain n included angles;
and dividing the 0°-180° angle range of the mark point rotating along the vertical direction into n+1 angle intervals, with the n included angles as dividing endpoints.
3. The method of determining a camera layout position according to claim 2, wherein the visibility of horizontal rotation for a target angle interval of any of the plurality of angle intervals is calculated by:
selecting a target angle from the target angle interval;
constructing a target straight line which comprises the mark point and has an included angle with a vertical direction shaft as the target angle;
constructing n-1 shielding planes based on the positions of the rest n-1 camera layout points except the target camera layout point in the n camera layout points and the positions of the mark points, wherein each shielding plane comprises the target straight line, and the included angle between the connecting line of the target camera layout point and the mark point and the vertical direction axis is the upper angle limit of the target angle interval;
determining a target occlusion plane in the n-1 occlusion planes, wherein a visible space of the target occlusion plane includes more than 2 camera layout points in the n camera layout points, and the visible space of the target occlusion plane is one of two spaces obtained by dividing the scene space by the target occlusion plane;
respectively calculating the angular interval ratio of each target shielding plane, wherein the angular interval ratio is the ratio of the included angle between the target shielding plane and the adjacent shielding plane to 360 degrees, and the adjacent shielding plane is the first shielding plane in the n-1 shielding planes which is reached by the corresponding target shielding plane in a rotating mode along the appointed direction by taking the target straight line as an axis;
and adding the proportion of the angle intervals of the target shielding planes to obtain the horizontal rotation visibility of the target angle intervals.
4. The method of determining camera layout positions according to claim 3, wherein the constructing n-1 occlusion planes based on the positions of the remaining n-1 of the n camera layout points except for the target camera layout point and the positions of the marker points comprises:
intercepting a line segment from the target straight line;
and respectively connecting two end points of the line segment with each camera layout point in the n-1 camera layout points to construct n-1 shielding planes.
5. The method for determining the layout position of the camera according to claim 3, wherein after determining the target occlusion planes of the n-1 occlusion planes, before respectively calculating the angular interval ratios of the respective target occlusion planes, further comprises:
for each target shielding plane, calculating included angles between 2 line segments formed by connecting any 2 camera layout points in more than 2 camera layout points contained in the visible space of the target shielding plane with the marking points respectively;
and removing any target shielding plane for which, among the more than 2 camera layout points contained in its visible space, the included angle between the 2 line segments formed by respectively connecting any 2 of those camera layout points with the mark point is not within the set angle range.
6. The method of determining camera layout positions as claimed in any of claims 2 to 5, wherein said calculating the stereoscopic visibility of the marker points according to the horizontal rotation visibility of each of the angle intervals comprises:
respectively calculating the ratio of the size of each angle interval to 180 degrees, to obtain the vertical angle ratio of each angle interval;
and calculating to obtain the three-dimensional visibility of the mark point by combining the vertical angle ratio of each angle interval and the horizontal rotation visibility of each angle interval.
7. The method of determining camera layout positions as claimed in claim 6, wherein said calculating the stereoscopic visibility of the marked points in combination with the vertical angle ratio of each of the angle intervals and the horizontal rotational visibility of each of the angle intervals comprises:
for each angle interval, multiplying the vertical angle ratio and the horizontal rotation visibility of the angle interval to obtain the stereoscopic visibility of the angle interval;
and adding the stereoscopic visibilities of the angle intervals to obtain the stereoscopic visibility of the mark point.
8. An apparatus for determining camera layout positions, for use in an optical motion capture system, the apparatus comprising:
a position point acquisition module, configured to acquire a marker point and n camera layout points preset in a scene space, where n is an integer greater than or equal to 4;
a vertical rotation angle dividing module, configured to divide the angle range through which the marker point rotates in the vertical direction into a plurality of angle intervals;
a horizontal rotation visibility calculation module, configured to calculate the horizontal rotation visibility corresponding to each angle interval according to the position of the marker point and the positions of the n camera layout points, the horizontal rotation visibility measuring the probability that the marker point, when its inclination angle in the vertical direction lies within the corresponding angle interval, is successfully captured by the cameras at the n camera layout points during horizontal rotation;
a stereoscopic visibility calculation module, configured to calculate the stereoscopic visibility of the marker point according to the horizontal rotation visibility of each angle interval, the stereoscopic visibility measuring the probability that the marker point is successfully captured by the cameras at the n camera layout points when the marker point rotates arbitrarily;
and a camera layout position determining module, configured to determine the n camera layout points as the camera layout position points of the optical motion capture system if the stereoscopic visibility exceeds a set threshold.
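Once the per-interval horizontal rotation visibilities are available, the module chain of claim 8 reduces to an accept/reject decision. A minimal sketch under stated assumptions: the threshold value 0.9 and equal-sized intervals are placeholders (the claims specify neither), and the horizontal visibilities are taken as precomputed inputs since their computation belongs to earlier claims not excerpted here.

```python
def accept_camera_layout(horiz_vis_per_interval, interval_size_deg, threshold=0.9):
    """Tail of the claim-8 pipeline: combine per-interval horizontal
    rotation visibilities into the marker point's stereoscopic
    visibility, then accept the n camera layout points as the layout
    positions iff it exceeds the set threshold (0.9 is hypothetical)."""
    stereo = sum((interval_size_deg / 180.0) * v for v in horiz_vis_per_interval)
    return stereo > threshold, stereo
```

For example, six equal 30-degree intervals with visibilities averaging 0.92 yield a stereoscopic visibility of 0.92, which clears the assumed 0.9 threshold.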
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of determining camera layout positions according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of determining camera layout positions according to any one of claims 1 to 7.
CN202010207264.XA 2020-03-23 2020-03-23 Method, device, terminal equipment and storage medium for determining camera layout position Active CN111553944B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010207264.XA CN111553944B (en) 2020-03-23 2020-03-23 Method, device, terminal equipment and storage medium for determining camera layout position
PCT/CN2021/080725 WO2021190331A1 (en) 2020-03-23 2021-03-15 Camera layout position determining method and apparatus, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010207264.XA CN111553944B (en) 2020-03-23 2020-03-23 Method, device, terminal equipment and storage medium for determining camera layout position

Publications (2)

Publication Number Publication Date
CN111553944A true CN111553944A (en) 2020-08-18
CN111553944B CN111553944B (en) 2023-11-28

Family

ID=72004165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010207264.XA Active CN111553944B (en) 2020-03-23 2020-03-23 Method, device, terminal equipment and storage medium for determining camera layout position

Country Status (2)

Country Link
CN (1) CN111553944B (en)
WO (1) WO2021190331A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112393884A (en) * 2020-10-26 2021-02-23 西北工业大学 Light path layout solving method based on spherical coordinate system
WO2021190331A1 (en) * 2020-03-23 2021-09-30 深圳市瑞立视多媒体科技有限公司 Camera layout position determining method and apparatus, terminal device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110255775A1 (en) * 2009-07-31 2011-10-20 3Dmedia Corporation Methods, systems, and computer-readable storage media for generating three-dimensional (3d) images of a scene
US20160240054A1 (en) * 2015-02-17 2016-08-18 Mengjiao Wang Device layout optimization for surveillance devices
CN108830132A (en) * 2018-04-11 2018-11-16 深圳市瑞立视多媒体科技有限公司 A kind of sphere points distributing method and capture ball, system for optical motion capture
CN110851225A (en) * 2019-11-12 2020-02-28 厦门市美亚柏科信息股份有限公司 Method for visually displaying dynamic layout of incremental primitive, terminal device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469322A (en) * 2014-12-24 2015-03-25 重庆大学 Camera layout optimization method for large-scale scene monitoring
CN104850693B (en) * 2015-01-19 2018-02-16 深圳市顺恒利科技工程有限公司 A kind of monitoring device layout method and device
CN106021803B (en) * 2016-06-06 2019-04-16 中国科学院长春光学精密机械与物理研究所 A kind of method and system of the optimal arrangement of determining image capture device
US10565726B2 (en) * 2017-07-03 2020-02-18 Qualcomm Incorporated Pose estimation using multiple cameras
CN109461189A (en) * 2018-09-04 2019-03-12 顺丰科技有限公司 Pose calculation method, device, equipment and the storage medium of polyphaser
CN111553944B (en) * 2020-03-23 2023-11-28 深圳市瑞立视多媒体科技有限公司 Method, device, terminal equipment and storage medium for determining camera layout position


Also Published As

Publication number Publication date
CN111553944B (en) 2023-11-28
WO2021190331A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
CN108307113B (en) Image acquisition method, image acquisition control method and related device
CN112686950B (en) Pose estimation method, pose estimation device, terminal equipment and computer readable storage medium
US10769811B2 (en) Space coordinate converting server and method thereof
CN111553944B (en) Method, device, terminal equipment and storage medium for determining camera layout position
CN110880159A (en) Image splicing method and device, storage medium and electronic device
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
CN113793392A (en) Camera parameter calibration method and device
CN117745845A (en) Method, device, equipment and storage medium for determining external parameter information
CN107067441A (en) Camera marking method and device
CN112102378A (en) Image registration method and device, terminal equipment and computer readable storage medium
CN113870190B (en) Vertical line detection method, device, equipment and storage medium
US7158665B2 (en) Image processing device for stereo image processing
CN109374919B (en) Method and device for determining moving speed based on single shooting device
CN111223139B (en) Target positioning method and terminal equipment
CN115131273A (en) Information processing method, ranging method and device
CN114170326B (en) Method and device for acquiring origin of camera coordinate system
JP7229102B2 (en) Three-dimensional measurement device using images, three-dimensional measurement method using images, and three-dimensional measurement program using images
CN110796596A (en) Image splicing method, imaging device and panoramic imaging system
CN111462309B (en) Modeling method and device for three-dimensional head, terminal equipment and storage medium
CN113343739B (en) Relocating method of movable equipment and movable equipment
Subedi et al. An extended method of multiple-camera calibration for 3D vehicle tracking at intersections
CN116045813B (en) Rotating shaft calibration method, device, equipment and medium
CN116503387B (en) Image detection method, device, equipment, system and readable storage medium
US20230419533A1 (en) Methods, storage media, and systems for evaluating camera poses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant