CN111667540B - Multi-camera system calibration method based on pedestrian head recognition - Google Patents

Multi-camera system calibration method based on pedestrian head recognition

Info

Publication number
CN111667540B
Authority
CN
China
Prior art keywords
camera
ellipse
coordinate system
calculating
image
Prior art date
Legal status
Active
Application number
CN202010520089.XA
Other languages
Chinese (zh)
Other versions
CN111667540A (en)
Inventor
关俊志
耿虎军
高峰
柴兴华
陈彦桥
王雅涵
张泽勇
彭会湘
陈韬亦
Current Assignee
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN202010520089.XA
Publication of CN111667540A
Application granted
Publication of CN111667540B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a multi-camera system calibration method based on pedestrian head recognition, and belongs to the technical field of computer vision. Each frame of image is processed and the ellipse of the head in the image is extracted with a convolutional neural network (CNN); the three-dimensional position of the head in each frame is calculated in the camera coordinate system from the position and size of the ellipse; any one camera coordinate system is selected as the world coordinate system and the external parameters of the other cameras are calculated; the obtained external parameters are optimized, and the camera world coordinate system is aligned with the selected world coordinate system. The invention takes the human head as a feature point and the point cloud formed by its motion trajectory as a virtual calibration object, and provides a method for calculating the three-dimensional coordinates of the head from a monocular single-frame image, converting the extrinsic calibration problem of multiple cameras into a three-dimensional point-cloud alignment problem. Real-time, online, accurate extrinsic calibration of the multi-camera system is thus completed by calculating the relative pose between the three-dimensional point clouds.

Description

Multi-camera system calibration method based on pedestrian head recognition
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a multi-camera system calibration method based on pedestrian head recognition.
Background
With the rapid development of computer vision and related fields of artificial intelligence, multi-camera systems are ever more widely applied in scene reconstruction, smart-city security monitoring, airport monitoring, motion capture, sports video analysis, industrial measurement and other fields. In recent years, solutions using cameras as input have rapidly gained a strong market position owing to their high performance, convenience and stability. Although multi-camera systems have great advantages in information processing and integration, their stable and normal operation requires an accurate and fast calibration process.
Traditional calibration methods rely on known scene structure information; they usually involve manufacturing a precise calibration object, a complex calibration procedure and high-precision prior calibration data, and require complex operation by professionals. Moreover, every time the position of the camera set changes, the calibration must be carried out again.
Disclosure of Invention
To solve these technical problems, the invention provides a multi-camera system calibration method based on pedestrian head recognition, which takes the people commonly present in a scene as calibration objects, enables online real-time calibration of the camera system, and provides a basis for later applications such as monitoring-scene understanding.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a multi-camera system calibration method based on pedestrian head recognition comprises the following steps:
(1) Enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by a plurality of cameras to obtain synchronized videos;
(2) Intercepting at least three frames of images from the video of each camera;
(3) Processing each frame of image, and extracting the head ellipse of the pedestrian in the image by using a convolutional neural network to obtain the central point position of the ellipse and the lengths of the long axis and the short axis;
(4) Calculating the three-dimensional position of the ellipse in each frame of image under the coordinate system of the camera according to the position of the central point of the ellipse and the lengths of the long axis and the short axis;
(5) Selecting any one camera coordinate system as a first world coordinate system, and calculating external parameters of other cameras;
(6) Optimizing the obtained camera external parameters, establishing a second world coordinate system with a certain point in the room as the origin, and performing alignment conversion between the second world coordinate system and the first world coordinate system to obtain the positions of all cameras in the second world coordinate system, thereby completing the calibration of the multi-camera system.
Further, the specific manner of the step (3) is as follows:
(301) Segmenting the image, and selecting rectangular frames whose aspect ratio lies within [2/3, 3/2] as candidate frames;
(302) Performing convolution operation on all candidate frames by using a convolution neural network, and selecting the candidate frame with the highest score as an image of the head of the pedestrian in the image;
(303) Converting the candidate frame with the highest score into an ellipse, obtaining the pedestrian's head ellipse, the center position of the ellipse and the lengths of the long and short axes.
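By way of illustration, the following Python sketch shows one way steps (301) and (303) might be realized; the patent does not spell out the box-to-ellipse conversion, so the ellipse inscribed in the detection box is assumed here, and the function names are illustrative only.

```python
def is_candidate(w, h):
    """Step (301) aspect-ratio filter: keep boxes with w/h within [2/3, 3/2]."""
    return 2.0 / 3.0 <= w / h <= 3.0 / 2.0

def box_to_ellipse(box):
    """Step (303), assumed rule: take the ellipse inscribed in the
    highest-scoring box (x, y, w, h) and return its center in pixels
    plus the long- and short-axis lengths."""
    x, y, w, h = box
    center = (x + w / 2.0, y + h / 2.0)
    return center, max(w, h), min(w, h)
```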
Further, the specific manner of step (4) is as follows:
(401) Obtaining the pixel coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the ellipse center position and the lengths of the long and short axes; the pixel coordinates are then converted into physical coordinates (x_s, y_s) according to the camera's internal parameters;
(402) Calculating the ellipse area A from the ellipse center position and the lengths of the long and short axes, and from it the Z-axis coordinate of the ellipse in the camera coordinate system:
Z_s = R_s · √(π · cos θ / A)
where R_s is the radius of the sphere used to model the pedestrian's head and θ is an intermediate variable, θ = arctan √(x_s² + y_s²);
(403) Calculating the X-axis and Y-axis coordinates of the ellipse in the camera coordinate system, X_s = x_s · Z_s and Y_s = y_s · Z_s, obtaining the three-dimensional coordinates (X_s, Y_s, Z_s) of the ellipse in the camera coordinate system.
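A minimal Python sketch of steps (401) to (403), assuming square pixels, a pinhole intrinsic matrix K and a known head radius R_s in metres; the function name and the default R_s = 0.1 are assumptions of the sketch, not values from the patent.

```python
import numpy as np

def head_position_camera_frame(center_px, major_px, minor_px, K, R_s=0.1):
    """Recover the head's 3-D position in the camera frame from the image
    ellipse, per steps (401)-(403)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u_s, v_s = center_px
    # (401) pixel coordinates -> normalized physical coordinates
    x_s, y_s = (u_s - cx) / fx, (v_s - cy) / fy
    # (402) ellipse area in normalized image-plane units (square pixels assumed)
    f = 0.5 * (fx + fy)
    A = np.pi * (major_px / 2.0) * (minor_px / 2.0) / f**2
    theta = np.arctan(np.sqrt(x_s**2 + y_s**2))      # intermediate variable
    Z_s = R_s * np.sqrt(np.pi * np.cos(theta) / A)   # from A ≈ π·ε²·cosθ and Z_s = R_s/ε
    # (403) back-project to X and Y
    return np.array([x_s * Z_s, y_s * Z_s, Z_s])
```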
Further, the specific manner of step (5) is as follows:
(501) Recording the discrete three-dimensional point clouds r_k(t) of the pedestrian's head positions under the different cameras, where k is the camera index;
(502) Selecting as the first world coordinate system the camera coordinate system of the camera denoted 1, whose discrete three-dimensional point cloud is r_1(t);
(503) Calculating the center points m_1 and m_k of the point clouds of camera 1 and camera k:
m_1 = (1/N) · Σ_t r_1(t),  m_k = (1/N) · Σ_t r_k(t)
where N is the total number of discrete points;
(504) Moving the coordinate origin of each point cloud to its center:
g_1(t) = r_1(t) - m_1,  g_k(t) = r_k(t) - m_k
where g_1(t) and g_k(t) are the discrete three-dimensional point clouds after moving the coordinate origin;
(505) From the equation g_k(t) = R_k · g_1(t), obtaining the rotation matrix R_k of camera k with respect to camera 1 by singular value decomposition;
(506) Calculating the offset vector c_k of camera k relative to camera 1:
c_k = m_k - R_k · m_1
R_k and c_k are the external parameters of camera k, namely:
r_k(t) = R_k · r_1(t) + c_k
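A minimal Python sketch of steps (501) to (506), using the least-squares SVD method of Arun et al. (cited as document [3] in the embodiment below); the function name and the (N, 3) array layout are assumptions of the sketch.

```python
import numpy as np

def extrinsics_from_point_clouds(P1, Pk):
    """Estimate R_k and c_k such that Pk ≈ P1 @ R_k.T + c_k, i.e.
    r_k(t) = R_k·r_1(t) + c_k for matched rows of the (N, 3) arrays
    P1 (camera 1) and Pk (camera k)."""
    m1, mk = P1.mean(axis=0), Pk.mean(axis=0)   # (503) point-cloud centers
    G1, Gk = P1 - m1, Pk - mk                   # (504) centered clouds
    H = G1.T @ Gk                               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R_k = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # (505) rotation matrix
    c_k = mk - R_k @ m1                         # (506) offset vector
    return R_k, c_k
```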
Compared with the prior art, the invention has the following beneficial effects:
1. The invention provides an effective multi-camera system calibration method that achieves good calibration results without additional calibration objects or a complicated calibration process.
2. The method is simple to implement and performs automatic online calibration without shutting the multi-camera system down, greatly improving calibration efficiency.
3. Online calibration of multi-camera systems has long been a research hotspot in the field, and current methods fall roughly into two classes. One class comprises calibration methods based on traditional calibration objects; although these can achieve good results, they place high demands on the manufacturing precision of the calibration object, the calibration procedure is cumbersome, and online calibration is impossible. The other class comprises self-calibration methods, which need no specially made calibration object and establish correspondences between cameras from feature points in the images; however, when the viewing angle between cameras is large, such feature correspondences cannot be established, making these methods hard to apply in real scenes. In view of this, the invention is the first to use the human head as a feature point and the point cloud formed by the head's motion trajectory as a virtual calibration object, and provides a monocular-depth-based method for calculating the three-dimensional coordinates of the head from a single frame, converting the extrinsic calibration problem of multiple cameras into a three-dimensional point-cloud alignment problem. Real-time, online, accurate extrinsic calibration of the multi-camera system is then completed by calculating the relative pose between the three-dimensional point clouds. This approach is an important innovation over the prior art.
Drawings
Fig. 1 is a flowchart of a calibration method for a multi-camera system according to an embodiment of the present invention.
Fig. 2 is a schematic projection diagram of a ball in an image according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a human head extracted by a convolutional neural network in the embodiment of the present invention.
Detailed description of the invention
To help those of ordinary skill in the art understand and implement the present invention, the invention is further described in detail below with reference to the accompanying drawings and examples, it being understood that the embodiments described herein are merely illustrative and explanatory and do not limit the invention.
A multi-camera system calibration method based on pedestrian head recognition comprises the following steps:
step 1, after a multi-camera system is installed, firstly, a single pedestrian walks in a camera monitoring area, and then, a plurality of cameras record videos at the same time to obtain synchronized videos;
step 2, at least three frames of images are intercepted from each video;
step 3, processing each frame of image and extracting the ellipse of the head in the image using a convolutional neural network (CNN), comprising the following steps:
step 3.1, carrying out a segmentation operation on the image, and selecting rectangular frames whose aspect ratio lies within [2/3, 3/2] as candidate frames;
step 3.2, performing a convolution operation on all candidate frames using the convolutional neural network, and selecting the candidate frame with the highest score as the image of the head in the image;
step 3.3, converting that candidate frame into an ellipse.
Step 4, calculating the three-dimensional position of the human head in each frame under the camera coordinate system according to the position and the size of the ellipse, comprising the following steps:
step 4.1, obtaining the coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the ellipse parameters, then obtaining (x_s, y_s) from the camera's internal parameters, and further obtaining the intermediate variable θ = arctan √(x_s² + y_s²);
step 4.2, calculating the ellipse area A from the ellipse parameters; combining A ≈ π · ε² · cos θ with Z_s = R_s/ε gives Z_s = R_s · √(π · cos θ / A);
step 4.3, finally calculating X_s = x_s · Z_s and Y_s = y_s · Z_s.
Step 5, selecting any one of the camera coordinate systems as the world coordinate system, and calculating the external parameters of the other cameras.
Step 6, optimizing the obtained camera external parameters, and performing alignment conversion between the camera world coordinate system and the selected world coordinate system.
The following is a more specific example:
referring to fig. 1, a method for calibrating a multi-camera system based on pedestrian head recognition includes the following steps:
step 1, after a multi-camera system is installed, firstly, a single pedestrian walks in a camera monitoring area, and then, a plurality of cameras record videos simultaneously to obtain synchronized videos;
step 2, at least three frames of images are intercepted from each video;
step 3, processing each frame of image and extracting an ellipse of the head in the image using a CNN, obtaining the detection result shown in fig. 3, including the following substeps:
step 3.1, performing a segmentation operation on the image with an image segmentation algorithm, and selecting rectangular frames whose aspect ratio lies within [2/3, 3/2] as candidate frames; the specific image segmentation algorithm is described in document [1]:
[1] K. E. A. van de Sande, J. R. R. Uijlings, T. Gevers & A. W. M. Smeulders. Segmentation as selective search for object recognition. In International Conference on Computer Vision, pages 1879–1886, Nov 2011.
step 3.2, performing a convolution operation on all rectangular frames using a convolutional neural network, and selecting the rectangular frame with the highest score as the image of the head in the image; the specific head detection algorithm based on a convolutional neural network is described in document [2]:
[2] T. H. Vu, A. Osokin & I. Laptev. Context-Aware CNNs for Person Head Detection. In IEEE International Conference on Computer Vision, pages 2893–2901, Dec 2015.
step 3.3, converting the rectangular frame obtained by the detection into an ellipse.
Step 4, calculating the three-dimensional position of the human head in each frame under the camera coordinate system according to the position and the size of the ellipse, comprising the following substeps:
step 4.1, the projection of a sphere into the image is illustrated in fig. 2; obtain the coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the ellipse parameters, then obtain (x_s, y_s) from the camera's internal parameters, and further obtain θ = arctan √(x_s² + y_s²);
step 4.2, calculate the ellipse area A from the ellipse parameters; combining A ≈ π · ε² · cos θ with Z_s = R_s/ε gives Z_s = R_s · √(π · cos θ / A);
step 4.3, finally calculate the three-dimensional position of the head in the camera coordinate system: X_s = x_s · Z_s, Y_s = y_s · Z_s.
Step 5, calculating the external parameters of the cameras by adopting a singular value decomposition algorithm, selecting any one of the camera coordinate systems as a world coordinate system, and calculating the external parameters of other cameras, wherein the method comprises the following substeps:
step 5.1, as the person moves, the three-dimensional head positions observed under the different cameras form discrete three-dimensional point clouds r_k(t) and r_1(t), where k is the camera index. The transformation between two point clouds is given by formula (1), where the rotation R_k and offset c_k of camera k relative to the first camera are the external parameters of camera k:
r_k(t) = R_k · r_1(t) + c_k    (1)
First, the center points of the two point clouds are calculated:
m_1 = (1/N) · Σ_t r_1(t),  m_k = (1/N) · Σ_t r_k(t)
and the coordinate origin of each point cloud is moved to its center:
g_1(t) = r_1(t) - m_1,  g_k(t) = r_k(t) - m_k
This yields g_k(t) = R_k · g_1(t), from which R_k is obtained by a singular value decomposition algorithm.
step 5.2, finally, the offset vector is calculated: c_k = m_k - R_k · m_1.
The specific algorithm is described in document [3]:
[3] K. S. Arun, T. S. Huang & S. D. Blostein. Least-Squares Fitting of Two 3-D Point Sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pages 698–700, Sept 1987.
Step 6, optimizing the obtained camera external parameters, and performing alignment conversion between the camera world coordinate system and the selected world coordinate system. The reprojection error of the method is 1.6 pixels, the attitude error is 0.6 degrees and the offset error is 1.1 percent, so the calibration result is accurate.
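Putting the pieces together, an end-to-end sketch of the pipeline might look as follows; sample_frames and detect_head_ellipse are hypothetical stand-ins for the synchronized frame capture of step 2 and the CNN detector of step 3, while head_position_camera_frame and extrinsics_from_point_clouds are the sketches given earlier.

```python
import numpy as np

def calibrate(videos, intrinsics, R_s=0.1):
    """For each camera k, return the extrinsics (R_k, c_k) relative to
    camera 1, whose coordinate system serves as the first world frame."""
    clouds = []
    for video, K in zip(videos, intrinsics):
        points = []
        for frame in sample_frames(video):             # step 2: >= 3 synchronized frames
            center, a, b = detect_head_ellipse(frame)  # step 3: CNN head ellipse
            points.append(head_position_camera_frame(center, a, b, K, R_s))
        clouds.append(np.asarray(points))              # step 4: per-camera point cloud
    poses = [(np.eye(3), np.zeros(3))]                 # camera 1 defines the frame
    poses += [extrinsics_from_point_clouds(clouds[0], c) for c in clouds[1:]]  # step 5
    return poses
```

For step 6, if a rigid transform (R_w, c_w) mapping camera-1 coordinates into the room-anchored second world frame is known, camera k's pose can be re-expressed there as R_k · R_w^T and c_k - R_k · R_w^T · c_w; the patent leaves the choice of room origin and the optimization itself unspecified.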
In summary, the method processes each frame of image and extracts the ellipse of the head in the image using a CNN; calculates the three-dimensional position of the head in each frame in the camera coordinate system from the position and size of the ellipse; selects any one camera coordinate system as the world coordinate system and calculates the external parameters of the other cameras; and optimizes the obtained external parameters and aligns the camera world coordinate system with the selected world coordinate system. The invention takes the human head as a feature point and the point cloud formed by its motion trajectory as a virtual calibration object, and provides a method for calculating the three-dimensional coordinates of the head from a monocular single-frame image, converting the extrinsic calibration problem of multiple cameras into a three-dimensional point-cloud alignment problem. Real-time, online, accurate extrinsic calibration of the multi-camera system is thus completed by calculating the relative pose between the three-dimensional point clouds.
The above description is only one embodiment of the present invention, and is not intended to limit the present invention. Any modification, improvement or the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A multi-camera system calibration method based on pedestrian head recognition is characterized by comprising the following steps:
(1) Enabling a single pedestrian to walk in a camera monitoring area, and simultaneously recording videos by a plurality of cameras to obtain synchronized videos;
(2) Intercepting at least three frames of images from the video of each camera;
(3) Processing each frame of image, and extracting the head ellipse of the pedestrian in the image by using a convolutional neural network to obtain the central point position of the ellipse and the lengths of the long axis and the short axis;
(4) Calculating the three-dimensional position of the ellipse in each frame of image under the coordinate system of the camera according to the position of the central point of the ellipse and the lengths of the long axis and the short axis;
(5) Selecting any one camera coordinate system as a first world coordinate system, and calculating external parameters of other cameras;
(6) Optimizing the obtained camera external parameters, establishing a second world coordinate system with a certain point in the room as the origin, and performing alignment conversion between the second world coordinate system and the first world coordinate system to obtain the positions of all cameras in the second world coordinate system, thereby completing the calibration of the multi-camera system.
2. The method for calibrating a multi-camera system based on pedestrian head recognition according to claim 1, wherein the step (3) is implemented by:
(301) Segmenting the image, and selecting rectangular frames whose aspect ratio lies within [2/3, 3/2] as candidate frames;
(302) Performing convolution operation on all candidate frames by using a convolution neural network, and selecting the candidate frame with the highest score as an image of the head of the pedestrian in the image;
(303) Converting the candidate frame with the highest score into an ellipse, obtaining the pedestrian's head ellipse, the center position of the ellipse, the length of the long axis and the length of the short axis.
3. The method for calibrating a multi-camera system based on pedestrian head recognition according to claim 1, wherein the step (4) is implemented by:
(401) Obtaining the pixel coordinates (u_s, v_s) of the ellipse center in the image coordinate system from the ellipse center position and the lengths of the long and short axes; the pixel coordinates are then converted into physical coordinates (x_s, y_s) according to the camera's internal parameters;
(402) Calculating the ellipse area A from the ellipse center position and the lengths of the long and short axes, and from it the Z-axis coordinate of the ellipse in the camera coordinate system:
Z_s = R_s · √(π · cos θ / A)
where R_s is the radius of the sphere used to model the pedestrian's head and θ is an intermediate variable, θ = arctan √(x_s² + y_s²);
(403) Calculating the X-axis and Y-axis coordinates of the ellipse in the camera coordinate system, X_s = x_s · Z_s and Y_s = y_s · Z_s, obtaining the three-dimensional coordinates (X_s, Y_s, Z_s) of the ellipse in the camera coordinate system.
4. The method for calibrating a multi-camera system based on pedestrian head recognition according to claim 1, wherein the step (5) is implemented by:
(501) Recording the discrete three-dimensional point clouds r_k(t) of the pedestrian's head positions under the different cameras, where k is the camera index;
(502) Selecting as the first world coordinate system the camera coordinate system of the camera denoted 1, whose discrete three-dimensional point cloud is r_1(t);
(503) Calculating the center points m_1 and m_k of the point clouds of camera 1 and camera k:
m_1 = (1/N) · Σ_t r_1(t),  m_k = (1/N) · Σ_t r_k(t)
where N is the total number of discrete points;
(504) Moving the coordinate origin of each point cloud to its center:
g_1(t) = r_1(t) - m_1,  g_k(t) = r_k(t) - m_k
where g_1(t) and g_k(t) are the discrete three-dimensional point clouds after moving the coordinate origin;
(505) From the equation g_k(t) = R_k · g_1(t), obtaining the rotation matrix R_k of camera k with respect to camera 1 by singular value decomposition;
(506) Calculating the offset vector c_k of camera k relative to camera 1:
c_k = m_k - R_k · m_1
R_k and c_k are the external parameters of camera k, namely:
r_k(t) = R_k · r_1(t) + c_k
CN202010520089.XA 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition Active CN111667540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010520089.XA CN111667540B (en) 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010520089.XA CN111667540B (en) 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition

Publications (2)

Publication Number Publication Date
CN111667540A CN111667540A (en) 2020-09-15
CN111667540B true CN111667540B (en) 2023-04-18

Family

ID=72386357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010520089.XA Active CN111667540B (en) 2020-06-09 2020-06-09 Multi-camera system calibration method based on pedestrian head recognition

Country Status (1)

Country Link
CN (1) CN111667540B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077519B (en) * 2021-03-18 2022-12-09 中国电子科技集团公司第五十四研究所 Multi-phase external parameter automatic calibration method based on human skeleton extraction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10088971B2 (en) * 2014-12-10 2018-10-02 Microsoft Technology Licensing, Llc Natural user interface camera calibration

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1804541A (en) * 2005-01-10 2006-07-19 北京航空航天大学 Spatial three-dimensional position attitude measurement method for video camera
JP2013003970A (en) * 2011-06-20 2013-01-07 Nippon Telegr & Teleph Corp <Ntt> Object coordinate system conversion device, object coordinate system conversion method and object coordinate system conversion program
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN108648241A (en) * 2018-05-17 2018-10-12 北京航空航天大学 A kind of Pan/Tilt/Zoom camera field calibration and fixed-focus method
CN109903341A (en) * 2019-01-25 2019-06-18 东南大学 Join dynamic self-calibration method outside a kind of vehicle-mounted vidicon
CN110390695A (en) * 2019-06-28 2019-10-29 东南大学 The fusion calibration system and scaling method of a kind of laser radar based on ROS, camera
CN111127568A (en) * 2019-12-31 2020-05-08 南京埃克里得视觉技术有限公司 Camera pose calibration method based on space point location information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Fuqiang et al. "Review of precision measurement technology of mirror binocular vision". Acta Optica Sinica, 2018, Vol. 38, No. 8 (full text). *

Also Published As

Publication number Publication date
CN111667540A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
Xu et al. Flycap: Markerless motion capture using multiple autonomous flying cameras
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
Tang et al. ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans
WO2023184968A1 (en) Structured scene visual slam method based on point line surface features
CN109949375A (en) A kind of mobile robot method for tracking target based on depth map area-of-interest
CN108416428B (en) Robot vision positioning method based on convolutional neural network
CN113077519B (en) Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN110009732B (en) GMS feature matching-based three-dimensional reconstruction method for complex large-scale scene
Lin et al. Topology aware object-level semantic mapping towards more robust loop closure
WO2021098080A1 (en) Multi-spectral camera extrinsic parameter self-calibration algorithm based on edge features
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN103607554A (en) Fully-automatic face seamless synthesis-based video synthesis method
CN106447601A (en) Unmanned aerial vehicle remote image mosaicing method based on projection-similarity transformation
CN110555408A (en) Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation
CN114036969B (en) 3D human body action recognition algorithm under multi-view condition
CN110135277B (en) Human behavior recognition method based on convolutional neural network
CN110378995B (en) Method for three-dimensional space modeling by using projection characteristics
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
WO2024007485A1 (en) Aerial-ground multi-vehicle map fusion method based on visual feature
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
CN114612933B (en) Monocular social distance detection tracking method
CN111667540B (en) Multi-camera system calibration method based on pedestrian head recognition
Wei [Retracted] Deep‐Learning‐Based Motion Capture Technology in Film and Television Animation Production
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant