CN113674356A - Camera screening method and related device - Google Patents

Camera screening method and related device

Info

Publication number
CN113674356A
Authority
CN
China
Prior art keywords
camera
observed
target
cameras
bounding box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110819845.3A
Other languages
Chinese (zh)
Inventor
林鹏
张凯
何曾范
李乾坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110819845.3A priority Critical patent/CN113674356A/en
Publication of CN113674356A publication Critical patent/CN113674356A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a camera screening method and a related device. The camera screening method comprises: acquiring position information of a target to be observed; acquiring, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed, wherein the first set comprises a first type of camera and a second type of camera; screening out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion, wherein the field of view is determined differently for different types of cameras; and screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, wherein the imaging size and the observation angle are obtained differently for different types of cameras. In this way, the optimal camera can be screened out more accurately and quickly.

Description

Camera screening method and related device
Technical Field
The application relates to the technical field of video surveillance, and in particular to a camera screening method and a related device.
Background
Augmented virtual environment (AVE) technology builds a three-dimensional model of the real world, calibrates cameras pre-installed in the actual scene, and fuses the cameras' two-dimensional pictures into the three-dimensional model in real time. With the rapid development of computer and network technology, the methods for building a three-dimensional model of a real scene are multiplying, for example total-station scanning, lidar scanning, or oblique photogrammetry by unmanned aerial vehicle. When cameras are installed at key positions in the scene, they are added into the three-dimensional model during modeling and calibrated. After any observation target is subsequently selected in the three-dimensional model, the cameras can be used for remote observation, and the current state can be monitored in real time.
Although the whole scene and the exact position of each camera can be displayed visually in the three-dimensional model, for a selected target to be observed it is not known in advance which cameras can see the target, nor which camera observes it best, so the cameras must be screened according to the target to be observed and the camera positions. Common camera screening strategies consider only one type of camera and rely on few screening criteria; moreover, the existing approach traverses all cameras directly to find the optimal observation camera, so as the number of cameras grows, the processing time grows with it, which severely degrades the interactive experience.
Disclosure of Invention
The application provides a camera screening method and a related device, so as to more accurately and quickly screen out an optimal camera.
In order to solve the above technical problem, a technical solution adopted by the application is to provide a camera screening method, comprising: acquiring position information of a target to be observed; acquiring, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed, wherein the first set comprises a first type of camera and a second type of camera; screening out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion, wherein the field of view is determined differently for different types of cameras; and screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, wherein the imaging size and the observation angle are obtained differently for different types of cameras.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a camera screening apparatus, comprising a processor and a memory coupled to each other, and the processor and the memory cooperate with each other to implement the camera screening method described in any of the above embodiments.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an apparatus having a storage function, on which program data is stored, the program data being executable by a processor to implement the camera screening method described in any of the above embodiments.
Different from the prior art, the beneficial effects of the application are as follows. In the camera screening method provided by the application, all cameras in the space containing the target to be observed first undergo a first round of screening according to the position information of the target, yielding the first set; a second round of screening on the first set then yields the second set, formed by all cameras in whose field of view the target lies without occlusion; finally, the camera with the best observation effect is screened out of the second set based on imaging size and observation angle. The multi-round screening process thus shortens the processing time, and the screening jointly considers the distance to the target, whether the target lies in a camera's field of view, the imaging size of the target, the camera's observation angle, and occlusion, so that the finally selected camera indeed offers the best observation effect. In addition, multiple camera types are considered, and different types of cameras can all be screened, which further improves the screening effect.
Drawings
In order to illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the application, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a camera screening method according to the present application;
FIG. 2 is a flowchart illustrating an embodiment corresponding to step S101 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment corresponding to step S103 in FIG. 1;
fig. 4 is a schematic flowchart of an embodiment corresponding to step S301 in fig. 3 when the camera is a gun camera;
fig. 5 is a schematic flowchart of an embodiment corresponding to step S301 in fig. 3 when the camera is a dome camera;
FIG. 6 is a flowchart illustrating an embodiment corresponding to step S302 in FIG. 3;
FIG. 7 is a schematic flow chart diagram illustrating one embodiment of a camera screening framework of the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a camera screening apparatus according to the present application;
fig. 9 is a schematic structural diagram of an embodiment of a device with a storage function according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a camera screening method according to the present application, the camera screening method specifically includes:
s101: and acquiring the position information of the target to be observed.
Specifically, referring to fig. 2, fig. 2 is a flowchart illustrating an embodiment corresponding to step S101 in fig. 1. The specific implementation process of the step S101 may be:
s201: and receiving point locations selected by a user in a pre-established three-dimensional model, and taking objects where the point locations are located as targets to be observed.
Specifically, in this embodiment, before step S201 the method may further include: performing three-dimensional modeling of a factory, campus, or even city-scale target scene by total-station scanning, lidar scanning, unmanned-aerial-vehicle oblique photogrammetry, or the like, to obtain the three-dimensional model; all the cameras in the target scene must also be added to the three-dimensional model. In the established three-dimensional model, the user may select any observation target; for example, the point P under the mouse can be obtained by a mouse click, thereby determining the target to be observed.
S202: and obtaining a normal vector of a plane where the point location is located, and taking the normal vector as the orientation of the observation target.
Because the whole three-dimensional model is drawn from triangular facets formed by vertices, the triangle containing point P can be found first, and the normal vector n of the triangle's plane can be obtained from the cross product of two of its edges; this n can also be regarded as the normal vector at point P, i.e., the orientation of the observation target.
S203: the minimum bounding box of the target to be observed and first coordinates of all vertexes of the minimum bounding box under a world coordinate system are obtained.
Specifically, each object in the whole three-dimensional model can be regarded as a single body, so all vertices constituting the target to be observed can be found, and the minimum bounding box of the target can be calculated from the acquired vertex set. The minimum bounding box is the smallest cuboid that can enclose the target to be observed, and the calculation proceeds roughly as follows: first compute a covariance matrix from the vertex set of the target to be observed, obtain its three eigenvectors, project all vertices of the target onto the three eigenvectors, and finally determine the extents of the minimum bounding box along the three directions and the first coordinates of its eight vertices in the world coordinate system.
S204: and obtaining the central point of the minimum bounding box according to the first coordinates of all the vertexes, and taking the second coordinate of the central point as the position information of the target to be observed.
Specifically, the first coordinates of the eight vertices of the minimum bounding box are denoted V = {vi | i = 0, 1, …, 7}, where vi = (xi, yi, zi). The center point C = (xc, yc, zc) of the minimum bounding box is calculated as: xc = (x0 + x1 + … + x7)/8, yc = (y0 + y1 + … + y7)/8, zc = (z0 + z1 + … + z7)/8.
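By way of illustration only (not part of the application), the computation of steps S203 to S204 can be sketched as follows; the covariance/eigenvector procedure follows the description above, and all names are illustrative:

import numpy as np

def min_bounding_box(vertices):
    """Steps S203-S204 (sketch): oriented minimum bounding box of an
    N x 3 vertex set via the covariance/eigenvector procedure."""
    mean = vertices.mean(axis=0)
    centered = vertices - mean
    cov = np.cov(centered.T)                       # 3 x 3 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)               # three eigenvectors (columns)
    proj = centered @ eigvecs                      # project vertices onto the axes
    lo, hi = proj.min(axis=0), proj.max(axis=0)    # box extents per direction
    # eight corners in the eigenvector basis, mapped back to world coordinates
    corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                        for y in (lo[1], hi[1])
                        for z in (lo[2], hi[2])])
    corners_world = corners @ eigvecs.T + mean     # first coordinates v0..v7
    center = corners_world.mean(axis=0)            # second coordinate C (S204)
    return corners_world, center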
S102: acquiring a first collection formed by all cameras within a preset range from a target to be observed according to the position information; wherein the first collection includes a first type of camera and a second type of camera.
Specifically, in this embodiment, step S102 may be implemented as: obtain the maximum of the effective observation distances of all cameras in the three-dimensional model; determine the predetermined range as a sphere centered at the center of the minimum bounding box with that maximum as its radius; and obtain the first set of all cameras within the predetermined range. Further, in this embodiment, the cameras in the first set may include a first type of camera and a second type of camera, for example a gun camera and a dome camera.
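A minimal sketch of this first-round screening, assuming each camera record carries its world position and effective observation distance (the field names are illustrative assumptions):

import numpy as np

def first_round_screening(cameras, box_center):
    """Step S102 (sketch): keep cameras inside a sphere centered at the
    bounding-box center whose radius is the maximum effective distance."""
    radius = max(cam["effective_distance"] for cam in cameras)
    return [cam for cam in cameras
            if np.linalg.norm(np.asarray(cam["position"]) - box_center) <= radius]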
S103: screening out a second collection formed by all cameras of which the target to be observed is in the visual field range and is not shielded from the first collection; the determination modes of the vision ranges corresponding to different types of cameras are different.
Specifically, referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment corresponding to step S103 in fig. 1. The specific implementation process of step S103 may be:
s301: all cameras of the first type in the first collection are screened out, wherein the target to be observed is in the visual field range of the cameras, all cameras of the second type in the first collection are screened out, and all cameras form an intermediate collection.
Specifically, when the first type of camera is a gun camera, please refer to fig. 4, a schematic flowchart of an embodiment corresponding to step S301 in fig. 3 when the camera is a gun camera. The step of screening out, from all cameras of the first type in the first set, all cameras in whose field of view the target to be observed lies specifically includes:
s401: for each bolt, the first coordinates of all vertices of the minimum bounding box are converted into third coordinates in the camera coordinate system where the bolt is located.
Specifically, for a gun camera, assume its attitude and position (i.e., the position of its optical center) in the three-dimensional model are R and t respectively, where R is a 3 × 3 matrix and t is a 3 × 1 vector, representing the rotation and translation that transform a point from the camera coordinate system to the world coordinate system. According to R and t, the first coordinate vi of each of the eight vertices of the minimum bounding box in the world coordinate system is converted into the third coordinate vi′ in the camera coordinate system:
vi′ = R⁻¹(vi − t), i = 0, 1, …, 7.
s402: and converting the third coordinates of all the vertexes of the minimum bounding box into fourth coordinates in a pixel coordinate system of the picture shot by the current gun camera.
Specifically, writing the third coordinate as vi′ = (xc, yc, zc), the conversion is:
u = fx·xc/zc + cx, v = fy·yc/zc + cy;
where (u, v) is the fourth coordinate of the vertex in the pixel coordinate system and fx, fy, cx, cy are intrinsic parameters of the gun camera. A corner of the picture shot by the gun camera can be taken as the origin of the pixel coordinate system, with the width direction of the picture as the X axis and the height direction as the Y axis, so that the abscissa and ordinate of every pixel in the picture are greater than or equal to 0.
S403: and judging whether the fourth coordinates of all the vertexes of the minimum bounding box are positioned in the range of the picture shot by the gun camera.
Specifically, assume the width and height of the picture currently shot by the gun camera are width and height respectively; then step S403 may be implemented as: for the fourth coordinate of each vertex obtained in step S402, judge whether its abscissa is greater than or equal to 0 and less than width, and its ordinate greater than or equal to 0 and less than height; formulated as:
0 ≤ u < width and 0 ≤ v < height.
s404: if yes, the target to be observed is within the visual field range of the gun camera.
Specifically, only when the fourth coordinates of all vertices of the minimum bounding box satisfy the above condition is the target to be observed considered within the field of view of the gun camera.
S405: otherwise, the target to be observed is outside the visual field range of the gun camera.
Specifically, as soon as the fourth coordinate of even one vertex of the minimum bounding box fails the above condition, the target to be observed is considered outside the field of view of the gun camera.
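Steps S401 to S405 amount to projecting the eight vertices through the gun camera's extrinsics and intrinsics and requiring every projection to land in the frame. A sketch under the conventions above (R, t map camera coordinates to world coordinates, so R⁻¹ = Rᵀ); the depth check is an added robustness assumption:

import numpy as np

def in_gun_camera_fov(corners_world, R, t, fx, fy, cx, cy, width, height):
    """Steps S401-S405 (sketch): True if all eight bounding-box vertices
    project inside the picture of a fixed gun camera."""
    for v in corners_world:
        xc, yc, zc = R.T @ (v - t)       # S401: third coordinate, world -> camera
        if zc <= 0:                      # vertex behind the image plane
            return False
        u = fx * xc / zc + cx            # S402: fourth coordinate (pinhole)
        w = fy * yc / zc + cy
        if not (0 <= u < width and 0 <= w < height):   # S403
            return False                 # S405: one vertex out of frame suffices
    return True                          # S404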
Referring to fig. 5, when the second type of camera is a dome camera: fig. 5 is a schematic flowchart of an embodiment corresponding to step S301 in fig. 3 when the camera is a dome camera. The step of screening out, from all cameras of the second type in the first set, all cameras in whose field of view the target to be observed lies specifically includes:
s501: and aiming at each ball machine, obtaining the dead angle range of the ball machine under the current focal length.
Specifically, step S501 may be implemented as: obtain the blind angle of the dome camera relative to the Z axis of the world coordinate system according to the field angle of the dome camera and the angle through which it can be raised toward the Z axis of the world coordinate system. In one specific embodiment, the blind angle may be obtained as:
blindAngle = π/2 − PITCH − halfFov;
where blindAngle denotes the blind angle with respect to the Z axis of the world coordinate system; the dome camera can rotate freely through 360° horizontally, and the angle through which it can be raised in the vertical direction (i.e., the Z-axis direction of the world coordinate system) is assumed to be PITCH; halfFov denotes half the field angle of the dome camera, which can be calculated as:
halfFov = arctan(√(width² + height²) / (2f));
where width and height denote the resolution of the dome camera and f denotes its focal length.
Because the angle through which the dome camera can be raised in the vertical direction enters the blind-angle calculation, the calculated blind angle is more accurate.
S502: and judging whether all vertexes of the minimum bounding box are out of the dead angle range.
Specifically, step S502 may be implemented as:
A. obtain the first included angle between the Z axis of the world coordinate system and the line connecting each vertex of the minimum bounding box with the optical center of the dome camera. The first included angle is calculated as:
angle = cos⁻¹((vi − tb)·Z / ‖vi − tb‖);
where vi is the first coordinate of a vertex of the minimum bounding box in the world coordinate system, tb is the coordinate of the optical center of the dome camera in the world coordinate system, and Z = (0, 0, 1) denotes the Z axis of the world coordinate system.
B. judge whether the first included angles of all vertices of the minimum bounding box are greater than or equal to the blind angle. Specifically, when angle < blindAngle, the vertex lies within the blind-angle range of the dome camera.
S503: and if all the vertexes of the minimum bounding box are positioned outside the dead angle range, converting the first coordinates of all the vertexes of the minimum bounding box into third coordinates under a camera coordinate system where the ball machine is positioned.
Specifically, since the dome camera can rotate through 360°, its rotation matrix R is calculated as follows. Take the line connecting the position tb of the dome camera (i.e., its optical center) with the center point C of the minimum bounding box as the direction of the z axis of the camera coordinate system in the world coordinate system, i.e., zc_w = (C − tb); the directions xc_w and yc_w of the x and y axes of the camera coordinate system in the world coordinate system are then:
xc_w = Z × zc_w, yc_w = zc_w × xc_w;
where × denotes the vector cross product and Z = (0, 0, 1) denotes the Z axis of the world coordinate system. Normalizing the three vectors gives z′c_w = zc_w/‖zc_w‖, x′c_w = xc_w/‖xc_w‖, y′c_w = yc_w/‖yc_w‖, and the rotation matrix R from the camera coordinate system to the world coordinate system is:
R = [x′c_w y′c_w z′c_w] (the three normalized vectors as its columns).
Thus, for each vertex vi of the minimum bounding box, its third coordinate vi′ in the camera coordinate system is:
vi′ = R⁻¹(vi − tb).
in addition, in synchronization with step S503, if at least one vertex of the minimum bounding box is within the dead angle range, the process proceeds to step S508.
S504: and converting the third coordinates of all the vertexes of the minimum bounding box into fourth coordinates under a pixel coordinate system where the picture shot by the current dome camera is located.
Specifically, writing the third coordinate as vi′ = (xc, yc, zc), the fourth coordinate is calculated as:
u = fx·xc/zc + cx, v = fy·yc/zc + cy;
where (u, v) is the coordinate in the pixel coordinate system and fx, fy, cx, cy are intrinsic parameters of the dome camera.
S505: and judging whether the fourth coordinates of all the vertexes of the minimum bounding box are positioned in the range of the picture shot by the ball machine.
Specifically, assume the width and height of the picture currently shot by the dome camera are width and height respectively; then step S505 may be implemented as: for the fourth coordinate of each vertex obtained in step S504, judge whether its abscissa is greater than or equal to 0 and less than width, and its ordinate greater than or equal to 0 and less than height; formulated as:
0 ≤ u < width and 0 ≤ v < height.
s506: if yes, the target to be observed is within the visual field range of the dome camera under the current focal length, and the step S508 is performed.
S507: otherwise, the target to be observed is out of the visual field range of the dome camera under the current focal length, and the step S508 is performed.
S508: and judging whether all the focal lengths of the current ball machine traverse or not.
S509: if all the focal lengths of the current dome camera are traversed, whether the target to be observed is in the visual field range of the dome camera under at least one focal length is judged, and the step S511 or the step S512 is carried out.
S510: if all the focal lengths of the current ball machine are not traversed, the focal length of the current ball machine is adjusted, and the step S501 is returned.
S511: and if the target to be observed is in the visual field range of the ball machine under at least one focal length, the target to be observed is in the visual field range of the ball machine.
S512: and if the target to be observed is outside the visual field range of the dome camera under all the focal lengths, the target to be observed is outside the visual field range of the dome camera.
Through this process, as long as the dome camera has one focal length at which the target to be observed can be captured, the dome camera is retained and enters the subsequent steps.
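By way of illustration, steps S501 to S512 can be sketched as follows, combining the blind-angle formula above, the look-at rotation of step S503, and the projection test, iterated over the dome camera's focal lengths (all angles in radians; the per-focal-length intrinsics table and all names are illustrative assumptions):

import numpy as np

def look_at_rotation(cam_pos, target):
    """Step S503 (sketch): rotation R (camera -> world) for a dome camera
    aimed from cam_pos at target, with the world Z axis as up reference."""
    z = target - cam_pos
    x = np.cross([0.0, 0.0, 1.0], z)
    y = np.cross(z, x)
    return np.stack([x / np.linalg.norm(x),
                     y / np.linalg.norm(y),
                     z / np.linalg.norm(z)], axis=1)

def dome_sees_target(corners_world, cam_pos, pitch, intrinsics_per_focal,
                     width, height):
    """Steps S501-S512 (sketch): the dome camera sees the target if, at some
    focal length, every vertex is outside the blind cone and projects in-frame."""
    center = corners_world.mean(axis=0)
    R = look_at_rotation(cam_pos, center)
    for fx, fy, cx, cy, f in intrinsics_per_focal:
        half_fov = np.arctan(np.hypot(width, height) / (2.0 * f))
        blind_angle = np.pi / 2 - pitch - half_fov          # S501 (reconstructed)
        visible = True
        for v in corners_world:
            d = v - cam_pos
            angle = np.arccos(d[2] / np.linalg.norm(d))     # S502: angle to world Z
            if angle < blind_angle:
                visible = False                             # vertex in blind cone
                break
            xc, yc, zc = R.T @ d                            # S503: world -> camera
            u = fx * xc / zc + cx                           # S504
            w = fy * yc / zc + cy
            if not (0 <= u < width and 0 <= w < height):    # S505
                visible = False
                break
        if visible:
            return True                                     # S506/S511
    return False                                            # S512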
S302: and screening out a second collection formed by all cameras of which the target to be observed is not shielded from the intermediate collection.
Specifically, the occlusion determination can be performed in the same manner for both the gun camera and the dome camera; see fig. 6, a schematic flowchart of an embodiment corresponding to step S302 in fig. 3. Step S302 specifically includes:
s601: and obtaining the imaging area of the minimum bounding box under the pixel coordinate system according to the fourth coordinates of all the vertexes of the minimum bounding box.
S602: aiming at each pixel point in the imaging area, acquiring a fifth coordinate of the current pixel point on the virtual plane; the virtual plane is located between the camera and the target to be observed and is perpendicular to the optical axis of the camera.
Specifically, assume a pixel point p(u, v) within the imaging area has a fifth coordinate p′ = (x, y, 1) on the virtual plane z = 1 in front of the camera; then:
x = (u − cx)/fx, y = (v − cy)/fy;
where fx, fy, cx, cy are intrinsic parameters of the camera and p′ is the fifth coordinate in the camera coordinate system. Of course, in other embodiments the virtual plane in front of the camera may be chosen differently, for example z = 2, z = 3, etc., as long as the virtual plane is perpendicular to the optical axis of the camera.
S603: and converting the fifth coordinate into a sixth coordinate in a world coordinate system.
Specifically, the sixth coordinate P″ is obtained by converting the p′ of step S602 into the world coordinate system according to the attitude R and position t of the camera:
P″ = Rp′ + t.
s604: and emitting a ray passing through the sixth coordinate from the optical center of the camera by adopting a ray tracing mode, and obtaining the intersection point of the ray and the three-dimensional model.
Specifically, obtaining the intersection of the ray with the three-dimensional model is the process of intersecting the ray along the vector (P″ − t) with the whole three-dimensional model, where t denotes the position of the camera.
S605: responding to the intersection point positioned in the minimum bounding box or positioned on the surface of the minimum bounding box, and enabling the current pixel point not to be shielded; otherwise, the current pixel point is shielded.
Specifically, if the intersection point lies between the camera and the minimum bounding box, the current pixel point is considered occluded.
S606: and responding to the situation that all the pixel points in the imaging area are not shielded, and the target to be observed is not shielded.
Specifically, following steps S602 to S605, all pixel points in the imaging area of the target to be observed are traversed and tested for occlusion; only if no pixel point is occluded is the camera determined to observe the target to be observed without occlusion.
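A sketch of the per-pixel occlusion test of steps S601 to S606 follows; intersect_scene stands in for whatever ray-model intersection routine the rendering engine provides (e.g., a BVH query), and both it and inside_or_on_box are assumptions, not part of the application:

import numpy as np

def target_unoccluded(pixels, cam_pos, R, fx, fy, cx, cy,
                      intersect_scene, inside_or_on_box):
    """Steps S601-S606 (sketch): cast a ray through every pixel of the
    target's imaging area; the target is unoccluded only if every first
    hit lies inside or on the minimum bounding box."""
    for (u, v) in pixels:
        # S602: fifth coordinate on the virtual plane z = 1 (camera frame)
        p_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        # S603: sixth coordinate in the world frame, P'' = R p' + t
        p_world = R @ p_cam + cam_pos
        # S604: ray from the optical center through the sixth coordinate
        hit = intersect_scene(cam_pos, p_world - cam_pos)
        # S605: a hit in front of the box (or no usable hit) occludes the pixel
        if hit is None or not inside_or_on_box(hit):
            return False
    return True                                              # S606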
Of course, in other embodiments the order of steps S301 and S302 may be exchanged, but the whole screening process is more efficient when step S301 precedes step S302.
S104: screening out a camera with the best observation effect according to the imaging size of the target to be observed in each camera in the second aggregate and the observation angle of each camera; the imaging size and the observation angle corresponding to different types of cameras are different in obtaining mode.
Specifically, when the camera is a gun camera, the imaging size of the target to be observed in the gun camera may be obtained as follows. Referring again to fig. 4, after step S404 determines that the target to be observed is within the field of view of the camera, the method further includes: obtaining, in the pixel coordinate system, the minimum polygon enclosing all the vertices (i.e., by a minimum convex hull algorithm) from the fourth coordinates of all vertices of the minimum bounding box, and taking the area of the minimum polygon as the imaging size of the target to be observed in the camera.
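Since the minimum polygon enclosing the eight projected vertices is their convex hull, the imaging size can be computed, for example, as follows (a sketch assuming SciPy is available):

import numpy as np
from scipy.spatial import ConvexHull

def imaging_size(pixel_coords):
    """Area of the minimum polygon (convex hull) enclosing the projected
    vertices of the minimum bounding box; pixel_coords is an N x 2 array
    of fourth coordinates (u, v)."""
    hull = ConvexHull(np.asarray(pixel_coords))
    return hull.volume   # for 2-D input, ConvexHull.volume is the polygon area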
In addition, when the camera is a gun camera, the observation angle of the gun camera relative to the target to be observed may be obtained as follows.
A. Obtain the first included angle α between the direction opposite to the optical axis z = (0, 0, 1) of the gun camera and the orientation n of the observation target. The first included angle α is calculated as:
α = cos⁻¹(−zw·n / (‖zw‖·‖n‖)), zw = R⁻¹z;
where zw is the coordinate of the optical axis of the gun camera in the world coordinate system and R denotes the attitude matrix. Of course, in other embodiments a fourth included angle between the optical axis of the gun camera and the orientation n of the observation target may be obtained instead, and the difference between π and the fourth included angle taken as the first included angle.
B. Obtain the second included angle β between the optical axis of the gun camera and the vector (C − t) formed by the line connecting the optical center t of the gun camera with the center point C of the minimum bounding box; the observation angle θ is half of the sum of the first included angle α and the second included angle β, i.e., θ = (α + β)/2. The second included angle β is calculated as:
β = cos⁻¹((C − t)·zw / (‖C − t‖·‖zw‖)).
The smaller both α and β are, the more directly the gun camera faces the target to be observed, and the better the observation angle.
When the camera is a dome camera, the imaging size of the target to be observed in the dome camera may be obtained as follows. Referring again to fig. 5, after step S506 determines that the target to be observed is within the field of view of the dome camera at the current focal length, the method further includes: obtaining, in the pixel coordinate system, the minimum polygon enclosing all the vertices from the fourth coordinates of all vertices of the minimum bounding box. After step S511 determines that the target to be observed is within the field of view of the dome camera, the method further includes: taking the maximum of the areas of all the minimum polygons as the imaging size of the target to be observed in the camera. For a gun camera, the imaging size of the target to be observed in the shot picture is fixed. For a dome camera, the magnification can change, the focal length changes with the magnification, and a larger magnification means a larger focal length and a larger image of the same object; therefore all focal lengths at which the target to be observed lies within the field of view must first be obtained for the dome camera's different magnifications, the corresponding imaging size calculated from the pixel coordinates of the vertices of the minimum bounding box at each, and the largest imaging size selected as the final imaging size for the dome camera.
In addition, when the camera is a dome camera, the observation angle of the dome camera relative to the target to be observed may be obtained as follows: obtain the third included angle γ between the orientation n of the observation target and the vector (tb − C) formed by the line connecting the optical center tb of the dome camera with the center point C of the minimum bounding box; the observation angle θ is the third included angle γ, calculated as:
γ = cos⁻¹((tb − C)·n / (‖tb − C‖·‖n‖)).
The smaller γ is, the more directly the dome camera faces the target to be observed, and the better the observation angle.
Further, step S104 may be implemented as: obtain the first ratio of the imaging size of the target to be observed in the camera to the picture size of the camera, and take the difference between the first ratio and the normalized observation angle as a score; the camera with the highest score is the best camera. The score is calculated as:
Fi = Si/(widthi × heighti) − θi/π;
where Si denotes the imaging size of the target to be observed in camera i, widthi and heighti denote the width and height of the picture of camera i, and θi denotes the observation angle of camera i. From the calculations above, θi = γ for a dome camera and θi = (α + β)/2 for a gun camera. Considering that the cameras may differ in resolution, the imaging size Si is normalized by the picture size, and the observation angle θi is likewise normalized (here by π); the camera with the largest F value among all remaining cameras is therefore the camera with the best observation effect.
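Putting the score together, the final selection of step S104 can be sketched as follows (the division by π as the angle normalization follows the reconstruction above and is an assumption):

import numpy as np

def best_camera(candidates):
    """Step S104 (sketch): candidates are tuples
    (camera_id, S_i, width_i, height_i, theta_i); returns the id with the
    largest score F_i = S_i / (width_i * height_i) - theta_i / pi."""
    def score(c):
        _, s, w, h, theta = c
        return s / (w * h) - theta / np.pi
    return max(candidates, key=score)[0]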
In a specific application scenario, the camera screening method may proceed as follows:
A. acquire the position information of the target to be observed;
B. acquire, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed;
C. for each camera in the first set, sequentially obtain whether the target to be observed is within its field of view, the imaging size, the observation angle, and the occlusion condition;
D. screen out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion;
E. screen out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera. An instruction may then be sent to the camera with the best observation effect in the real scene, so that it observes the target to be observed.
In another specific application scenario, the camera screening method may proceed as follows:
A. acquire the position information of the target to be observed;
B. acquire, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed;
C. for each camera in the first set, obtain whether the target to be observed is within its field of view and the occlusion condition;
D. screen out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion;
E. for each camera in the second set, calculate the imaging size of the target to be observed in the camera and the observation angle of the camera;
F. screen out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera. An instruction may then be sent to the camera with the best observation effect in the real scene, so that it observes the target to be observed.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a camera screening framework according to an embodiment of the present application, where the camera screening framework specifically includes:
the first obtaining module 10 is configured to obtain position information of a target to be observed.
The first screening module 12 is coupled to the first obtaining module 10 and is configured to acquire, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed, wherein the first set comprises a first type of camera and a second type of camera.
The second screening module 14 is coupled to the first screening module 12 and is configured to screen out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion, wherein the field of view is determined differently for different types of cameras.
The third screening module 16 is coupled to the second screening module 14 and is configured to screen out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, wherein the imaging size and the observation angle are obtained differently for different types of cameras.
Please refer to fig. 8, a schematic structural diagram of an embodiment of a camera screening apparatus according to the present application. The camera screening apparatus includes a processor 20 and a memory 22 coupled to each other, which cooperate to implement the camera screening method described in any of the above embodiments. In this embodiment, the processor 20 may also be referred to as a CPU (Central Processing Unit). The processor 20 may be an integrated circuit chip with signal processing capability. The processor 20 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In addition, the camera screening apparatus provided in the present application may further include other structures, such as a common display screen, a communication circuit, etc., which are not described in the present application.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a device with a storage function according to the present application. The device 30 with a storage function stores program data 300, and the program data 300 can be executed by a processor to implement the camera screening method described in any of the above embodiments. The program data 300 may be stored in the storage device in the form of a software product and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage device includes various media capable of storing program code, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
The above embodiments are merely examples, and not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure, or their direct or indirect application to other related arts, are included in the scope of the present disclosure.

Claims (14)

1. A camera screening method, comprising:
acquiring position information of a target to be observed;
acquiring, according to the position information, a first set formed by all cameras within a predetermined range of the target to be observed, wherein the first set comprises a first type of camera and a second type of camera;
screening out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion, wherein the field of view is determined differently for different types of cameras;
screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, wherein the imaging size and the observation angle are obtained differently for different types of cameras.
2. The camera screening method according to claim 1, wherein the step of screening out, from the first set, a second set formed by all cameras in whose field of view the target to be observed lies without occlusion comprises:
screening out, from all cameras of the first type in the first set, all cameras in whose field of view the target to be observed lies, and screening out, from all cameras of the second type in the first set, all cameras in whose field of view the target to be observed lies, all the screened cameras forming an intermediate set;
screening out, from the intermediate set, a second set formed by all cameras for which the target to be observed is not occluded.
3. The camera screening method according to claim 2, wherein the step of acquiring the position information of the target to be observed comprises:
receiving a point selected by a user in a pre-established three-dimensional model, and taking the object at the point as the target to be observed;
obtaining a normal vector of the plane in which the point lies, and taking the normal vector as the orientation of the observation target;
acquiring a minimum bounding box of the target to be observed and first coordinates of all vertices of the minimum bounding box in a world coordinate system;
obtaining the center point of the minimum bounding box from the first coordinates of all the vertices, and taking the second coordinate of the center point as the position information of the target to be observed.
4. The camera screening method according to claim 3, wherein the first type of camera is a gun camera, and the step of screening out, from all cameras of the first type in the first set, all cameras in whose field of view the target to be observed lies comprises:
for each camera, converting the first coordinates of all vertices of the minimum bounding box into third coordinates in the camera coordinate system of the camera;
converting the third coordinates of all the vertices of the minimum bounding box into fourth coordinates in the pixel coordinate system of the picture currently shot by the camera;
judging whether the fourth coordinates of all the vertices of the minimum bounding box lie within the range of the picture shot by the camera;
if so, the target to be observed is within the field of view of the camera.
5. The camera screening method according to claim 4, wherein if the target to be observed is within the field of view of the camera, the method further comprises:
obtaining, in the pixel coordinate system, the minimum polygon enclosing all the vertices of the minimum bounding box by using the fourth coordinates of all the vertices;
taking the area of the minimum polygon as the imaging size of the target to be observed in the camera.
6. The camera screening method according to claim 3, wherein the second type of camera is a dome camera, and the step of screening out, from all cameras of the second type in the first set, all cameras in whose field of view the target to be observed lies comprises:
for each camera, obtaining the blind-angle range of the camera at the current focal length;
judging whether all vertices of the minimum bounding box are outside the blind-angle range;
if so, converting the first coordinates of all the vertices of the minimum bounding box into third coordinates in the camera coordinate system of the camera;
converting the third coordinates of all the vertices of the minimum bounding box into fourth coordinates in the pixel coordinate system of the picture currently shot by the camera;
judging whether the fourth coordinates of all the vertices of the minimum bounding box lie within the range of the picture shot by the camera;
if so, the target to be observed is within the field of view of the camera at the current focal length; otherwise, the target to be observed is outside the field of view of the camera at the current focal length;
in response to all focal lengths of the camera having been traversed, judging whether the target to be observed is within the field of view of the camera at at least one focal length;
if so, the target to be observed is within the field of view of the camera.
7. The camera screening method according to claim 6, wherein
the step of obtaining the blind-angle range of the camera at the current focal length comprises: obtaining the blind angle of the camera relative to the Z axis of the world coordinate system according to the field angle of the camera and the angle through which it can be raised toward the Z axis of the world coordinate system;
the step of judging whether all vertices of the minimum bounding box are outside the blind-angle range comprises:
obtaining a first included angle between the Z axis of the world coordinate system and the line connecting each vertex of the minimum bounding box with the optical center of the dome camera;
judging whether the first included angles of all the vertices of the minimum bounding box are greater than or equal to the blind angle.
8. The camera screening method according to claim 6, wherein after the step of the target to be observed being within the field of view of the camera at the current focal length, the method further comprises:
obtaining, in the pixel coordinate system, the minimum polygon enclosing all the vertices of the minimum bounding box by using the fourth coordinates of all the vertices;
taking the maximum of the areas of all the minimum polygons as the imaging size of the target to be observed in the camera.
9. The camera screening method according to claim 3, wherein before the step of screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera, the method further comprises:
in response to the camera being a gun camera, obtaining a first included angle between the direction opposite to the optical axis of the camera and the orientation of the observation target, and a second included angle between the optical axis of the camera and the vector formed by the line connecting the optical center of the camera with the center point of the minimum bounding box; wherein the observation angle is half of the sum of the first included angle and the second included angle;
in response to the camera being a dome camera, obtaining a third included angle between the orientation of the observation target and the vector formed by the line connecting the optical center of the camera with the center point of the minimum bounding box; wherein the observation angle is the third included angle.
10. The camera screening method according to claim 1, wherein the step of screening out the camera with the best observation effect according to the imaging size of the target to be observed in each camera of the second set and the observation angle of each camera comprises:
obtaining a first ratio of the imaging size of the target to be observed in the camera to the picture size of the camera, and taking the difference between the first ratio and the normalized observation angle as a score;
taking the camera with the highest score as the best camera.
11. The camera screening method according to claim 4 or 6, wherein the step of screening out, from the intermediate set, a second set formed by all cameras for which the target to be observed is not occluded comprises:
obtaining the imaging area of the minimum bounding box in the pixel coordinate system from the fourth coordinates of all the vertices of the minimum bounding box;
for each pixel point in the imaging area, acquiring a fifth coordinate of the current pixel point on a virtual plane; wherein the virtual plane lies between the camera and the target to be observed and is perpendicular to the optical axis of the camera;
converting the fifth coordinate into a sixth coordinate in the world coordinate system;
emitting, by ray tracing, a ray from the optical center of the camera through the sixth coordinate, and obtaining the intersection point of the ray with the three-dimensional model;
in response to the intersection point lying inside the minimum bounding box or on its surface, the current pixel point is not occluded; otherwise, the current pixel point is occluded;
in response to none of the pixel points in the imaging area being occluded, the target to be observed is not occluded.
12. The camera screening method according to claim 3, wherein the step of acquiring, according to the position information, the first set formed by all cameras within a predetermined range of the target to be observed comprises:
obtaining the maximum of the effective observation distances of all the cameras in the three-dimensional model;
determining the predetermined range as a sphere centered at the center of the minimum bounding box with the maximum as its radius;
obtaining the first set of all cameras within the predetermined range.
13. A camera screening apparatus, comprising a processor and a memory coupled to each other, wherein the processor and the memory cooperate with each other to implement the camera screening method according to any one of claims 1 to 12.
14. An apparatus having a storage function, characterized in that program data are stored thereon, which program data are executable by a processor to implement the camera screening method according to any one of claims 1 to 12.
CN202110819845.3A 2021-07-20 2021-07-20 Camera screening method and related device Pending CN113674356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110819845.3A CN113674356A (en) 2021-07-20 2021-07-20 Camera screening method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110819845.3A CN113674356A (en) 2021-07-20 2021-07-20 Camera screening method and related device

Publications (1)

Publication Number Publication Date
CN113674356A (en) 2021-11-19

Family

ID=78539633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110819845.3A Pending CN113674356A (en) 2021-07-20 2021-07-20 Camera screening method and related device

Country Status (1)

Country Link
CN (1) CN113674356A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442805A (en) * 2022-01-06 2022-05-06 上海安维尔信息科技股份有限公司 Monitoring scene display method and system, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491339A (en) * 2012-06-11 2014-01-01 华为技术有限公司 Video acquisition method, video acquisition equipment and video acquisition system
US20150046129A1 (en) * 2013-08-07 2015-02-12 Axis Ab Method and system for selecting position and orientation for a monitoring camera
CN104881870A (en) * 2015-05-18 2015-09-02 浙江宇视科技有限公司 Live monitoring starting method and device for to-be-observed point
CN108174090A (en) * 2017-12-28 2018-06-15 北京天睿空间科技股份有限公司 Ball machine interlock method based on three dimensions viewport information
CN108986161A (en) * 2018-06-19 2018-12-11 亮风台(上海)信息科技有限公司 A kind of three dimensional space coordinate estimation method, device, terminal and storage medium
CN113079369A (en) * 2021-03-30 2021-07-06 浙江大华技术股份有限公司 Method and device for determining image pickup equipment, storage medium and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MI, Y. et al., "Feature Matching Algorithm Design and Verification in Rotates Camera Normal Region Based on ROS System", 2019 IEEE International Conference on Mechatronics and Automation (ICMA), 31 December 2019 (2019-12-31), pages 342-347 *
罗川 (LUO, Chuan), "模型变形视频测量的相机位置坐标与姿态角确定" [Determining camera position coordinates and attitude angles in video measurement of model deformation], 实验流体力学 (Journal of Experiments in Fluid Mechanics), vol. 24, no. 6, 31 December 2020 (2020-12-31), pages 88-91 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination