CN106803275A - 2D panoramic video generation based on camera pose estimation and spatial sampling - Google Patents

2D panoramic video generation based on camera pose estimation and spatial sampling

Info

Publication number
CN106803275A
CN106803275A (application CN201710089112.2A)
Authority
CN
China
Prior art keywords
camera
point
sampling
sigma
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710089112.2A
Other languages
Chinese (zh)
Inventor
王兆其
李兆歆
邓果
邓果一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhongke Guangshi Cultural Technology Co Ltd
Original Assignee
Suzhou Zhongke Guangshi Cultural Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhongke Guangshi Cultural Technology Co Ltd filed Critical Suzhou Zhongke Guangshi Cultural Technology Co Ltd
Priority to CN201710089112.2A priority Critical patent/CN106803275A/en
Publication of CN106803275A publication Critical patent/CN106803275A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a 2D panoramic video generation method based on camera pose estimation and spatial sampling. From an input video, the position and attitude of the camera for each video frame are calibrated using the matched feature points of adjacent frames and a multi-view model, forming a camera pose set. A 2D sampling surface is fitted to the spatial distribution of the poses in this set, and n sample points are chosen on the surface. A spatial metric distance is defined from camera position and attitude; for each sample point, the video frame whose camera is the nearest neighbour of the current sample point in the camera pose set is selected as that sample point's image. A path is then chosen on the 2D sampling surface, and the sample-point images along the path form an image sequence that gives a panoramic presentation of the object.

Description

2D panoramic video generation based on camera pose estimation and spatial sampling
Technical field
The present invention relates to the fields of digital image processing and computer vision, and in particular to a method that estimates the pose of the camera from a picture sequence shot over a period of time and generates a 2D panoramic video.
Background technology
Feature extraction and description are basic links of feature-based image processing and computer vision; the detection performance of the detector and the representational power of the descriptor directly determine the efficiency and precision of image processing. In practical problems, images may be disturbed by noise and background, and may undergo changes of viewpoint, illumination, scale, translation, rotation and affine transformation. Choosing image features and descriptors that are both representative and robust to these disturbances is therefore a crucial problem.
Triangulation is used to recover three-dimensional information from a two-dimensional image pair or video sequence, including the pose information of the imaging cameras and the structural information of the scene.
Bundle adjustment is the standard optimization algorithm for feature-based three-dimensional reconstruction in computer vision; it jointly refines the computed camera optical centres and the reconstructed three-dimensional points.
The content of the invention
Object of the invention: to overcome the deficiencies of the prior art, the invention provides a 2D panoramic video generation method based on camera pose estimation. The poses of a picture sequence are estimated, and the pose information is projected into a sample space to generate a 2D panoramic video.
Technical scheme: to achieve the above object, a 2D panoramic video generation method based on camera pose estimation and spatial sampling comprises the following steps:
(1) From a group of input video frames, calibrate the initial position and attitude of the camera using the matched feature points between adjacent video frames and a multi-view model; optimize the camera position and attitude corresponding to each frame with the bundle adjustment algorithm, finally obtaining an accurate camera position and attitude for every video frame and forming the camera pose set;
(2) Fit a 2D sampling surface to the spatial distribution of the positions and attitudes in the camera pose set, and choose n sample points on the surface;
(3) Define a spatial metric distance from camera position and attitude; for each sample point, select from the camera pose set the video frame whose camera is the nearest neighbour of the current sample point as the image of that sample point;
(4) Select a path on the spatial 2D sampling surface; the sample-point images along the path form an image sequence, and displaying the scene content recorded by this sequence constitutes a spatial panoramic view.
The concrete operation steps of step (1) are:
a) Decide whether a candidate point p is a feature point by the criterion |I(x) - I(p)| > ε, where I(x) is the pixel value of any point on the circle around p, I(p) is the pixel value of the candidate point, and ε is the difference threshold; p is a corner if N points on the circle satisfy the criterion. Optimal feature points are screened with a machine-learning method, and multiple feature points at adjacent positions are removed with a non-maximum suppression algorithm;
b) Build an image pyramid over a multi-scale space to give the feature points scale invariance;
c) For rotational invariance of the feature points, compute by image moments the centroid of the patch of radius r around the feature point; the vector from the feature point coordinates to the centroid defines the direction of the feature point;
d) Calibrate the intrinsic matrix K and the distortion coefficient matrix M of the camera with Zhang Zhengyou's calibration method;
e) Use the epipolar constraint of the matched feature points between adjacent images: every matched pair x and x' satisfies x'^T F x = 0; compute the fundamental matrix F from n feature point pairs randomly sampled with the RANSAC method;
f) Convert the fundamental matrix F to the essential matrix E = K'^T F K in normalized image coordinates; perform singular value decomposition of E, obtaining four possible solutions for the extrinsic matrix Rt2 of the adjacent camera;
g) Triangulate three-dimensional points with the four possible camera extrinsic matrices, and use the constraint that the three-dimensional points must always lie in front of the camera to select the single correct extrinsic matrix among the four; after the extrinsic matrices of all two-view models have been computed, average them to reduce the maximum error;
h) Transform the two-view model coordinate systems into a unified camera coordinate system, then run bundle adjustment on the whole multi-view model, minimizing the re-projection error by adjusting the camera poses and the positions of the three-dimensional point cloud; adding all two-view models to the multi-view model completes its construction.
Fitting a 2D sampling surface to the spatial distribution of the camera pose set and choosing n sample points on it, as described in step (2), is done by interpolating a 2D sampling surface from the generated camera pose information:
a) A plane can be defined by its normal vector n = (a, b, c); by the point-to-plane distance formula the plane is ax + by + cz + d = 0. Setting c = 1, the equation becomes ax + by + z = -d, so every camera coordinate point (x_i, y_i, z_i) must satisfy a·x_i + b·y_i + d = -z_i.
Applying least squares gives the normal equations

[ Σx_i x_i  Σx_i y_i  Σx_i ]   [ a ]     [ Σx_i z_i ]
[ Σy_i x_i  Σy_i y_i  Σy_i ] · [ b ] = - [ Σy_i z_i ]
[ Σx_i      Σy_i      N    ]   [ d ]     [ Σz_i     ]

Taking the centroid of all frame pose positions as the coordinate origin removes the third row:

[ Σx_i x_i  Σx_i y_i ]   [ a ]     [ Σx_i z_i ]
[ Σy_i x_i  Σy_i y_i ] · [ b ] = - [ Σy_i z_i ]

Cramer's rule then yields the plane parameters:
D = Σxx·Σyy - Σxy·Σxy
a = (Σyz·Σxy - Σxz·Σyy)/D
b = (Σxy·Σxz - Σxx·Σyz)/D
n = [a, b, 1]^T
b) Generate n sample points Q = (x, y, z, q0, q1, q2, q3)^T on the surface.
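The least-squares plane fit above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation; with c fixed to 1, each camera position gives one equation a·x + b·y + d = -z, which `numpy.linalg.lstsq` solves directly:

```python
import numpy as np

# Fit the plane a*x + b*y + z + d = 0 to the camera centres by least
# squares, as in step (2). Function and variable names are ours.
def fit_sampling_plane(points):
    """points: (n, 3) array of camera positions; returns (normal, d)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    rhs = -pts[:, 2]
    (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    normal = np.array([a, b, 1.0])   # n = [a, b, 1]^T as in the text
    return normal, d

# Usage: camera centres lying exactly on z = -(0.5*x + 0.2*y + 1)
cams = np.array([[0, 0, -1], [1, 0, -1.5], [0, 1, -1.2], [1, 1, -1.7]])
n, d = fit_sampling_plane(cams)
```

Because these four synthetic camera centres lie exactly on a plane, the fit recovers a = 0.5, b = 0.2, d = 1 up to rounding.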
Defining the spatial metric distance from camera position and attitude as described in step (3), and selecting for each sample point the video frame in the camera pose set whose camera is the nearest neighbour of the current sample point as that sample point's image, proceeds as follows:
The 6-DOF pose q = (X, R) ∈ S of each frame is projected into the sample space; the distance between neighbouring poses is then defined as p(q0, q1) = Wt·||F(X0, X1)|| + Wr·||f(R0, R1)||;
Using the distance p, the K video frames nearest to a sample point are taken as the images corresponding to that sample point.
For the path selected on the spatial 2D sampling surface in step (4), the sample-point images along the path form an image sequence; displaying the scene content recorded by these images, the images along the whole path constitute a spatial panoramic view. The concrete steps are:
a) Select a path on the fitted two-dimensional sampling surface in the sample space;
b) The image sequence shown at each sample point is the sequence whose camera pose is at the smallest distance from the sample point in the sample space, using the spatial metric distance of step (3).
Beneficial effects: the 2D panoramic video generation method of the present invention, based on camera pose estimation and spatial sampling, estimates the poses of a picture sequence and projects the pose information into a sample space for video sampling; the resulting image sequence gives a comprehensive, all-round presentation of the object.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the Gaussian pyramid model;
Fig. 3 shows the four possible solutions of the camera extrinsic matrix Rt.
Specific embodiment
The present invention is further elucidated below with reference to the accompanying drawings and an embodiment.
Embodiment
As shown in Fig. 1, the 2D panoramic video generation method of the present invention, based on camera pose estimation and spatial sampling, proceeds as follows:
Step (1) extracts and matches two-dimensional image feature points in the captured image sequence to track feature points across the picture sequence:
1.1) To decide whether a point p is a feature point, draw a circle of 16 pixels centred on p. If at least n contiguous pixels among the 16 on the circle are all brighter than Ip + t or all darker than Ip - t, p is judged to be a feature point; otherwise it is not. Here Ip is the grey value of the point p, t is a threshold, and the value of n is typically set to 12. Equivalently, with I(x) the pixel value of any point on the circle, I(p) the candidate point's pixel value, and ε the difference threshold, p is a corner if N points on the circle satisfy |I(x) - I(p)| > ε;
1.2) Screen the optimal feature points with a machine-learning method: train a decision tree with the ID3 algorithm, feed the 16 circle pixels of 1.1) into the decision tree, and use it to filter out the optimal FAST feature points;
1.3) Remove locally dense feature points with non-maximum suppression: compute a response value for each feature point, defined as the sum of the absolute deviations between the point P and the 16 pixels on its circle; among neighbouring feature points, keep the one with the larger response and delete the rest;
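The segment test of step 1.1) can be sketched as follows. This is our own minimal illustration (parameter names and the bright dot test image are ours, not the patent's): p is accepted when at least n contiguous pixels on the 16-pixel circle of radius 3 are all brighter than Ip + t or all darker than Ip - t.

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 around the candidate, as (dx, dy).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=12):
    ip = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (1, -1):                       # brighter run, then darker run
        flags = [(v - ip) * sign > t for v in ring]
        run = best = 0
        for f in flags + flags:                # doubled to handle wrap-around
            run = run + 1 if f else 0
            best = max(best, min(run, 16))
        if best >= n:
            return True
    return False

# Usage: an isolated bright dot on a dark background passes the test
img = np.zeros((9, 9), dtype=np.uint8)
img[4, 4] = 255
```

A real detector would follow this with the decision-tree screening of 1.2) and the non-maximum suppression of 1.3).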
1.4) Build an image pyramid over a multi-scale space so that the feature points are scale invariant. Set a scale factor scaleFactor (default 1.2) and a number of pyramid levels nlevels (default 8); shrink the original image proportionally into nlevels images, the scaled images being I' = I / scaleFactor^k (k = 1, 2, ..., nlevels). The features extracted from the nlevels images of different scales together serve as the feature points of the image;
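The pyramid of step 1.4) can be sketched as below. This is an assumption-laden sketch (simple index striding stands in for proper resampling, and our level 0 keeps the original image):

```python
import numpy as np

# Build an nlevels-deep pyramid, shrinking by scaleFactor at each level.
def build_pyramid(img, scale_factor=1.2, nlevels=8):
    levels = []
    for k in range(nlevels):
        s = scale_factor ** k
        h = max(1, int(round(img.shape[0] / s)))
        w = max(1, int(round(img.shape[1] / s)))
        ys = np.arange(h) * img.shape[0] // h   # nearest-index rows
        xs = np.arange(w) * img.shape[1] // w   # nearest-index columns
        levels.append(img[np.ix_(ys, xs)])
    return levels

img = np.zeros((100, 80), dtype=np.uint8)
pyr = build_pyramid(img)
```

With the defaults this yields 8 levels; level 1 of a 100x80 image is 83x67 (each side divided by 1.2 and rounded).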
1.5) Rotational invariance of the feature points: the direction of a FAST feature point is determined with the moment (intensity centroid) method. The moments of the patch of radius r around the feature point are m_pq = Σ_(x,y) x^p y^q I(x, y), where I(x, y) is the image grey value; the centroid of the patch is C = (m10/m00, m01/m00). With the feature point at the origin O, the angle of the vector from the feature point to the centroid is the direction of the feature point: θ = arctan2(m01, m10);
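The intensity-centroid orientation of step 1.5) can be sketched directly from the moment definitions (a sketch with our own patch size and names):

```python
import numpy as np

# Orientation of a keypoint patch via image moments:
# m10 = sum x*I(x, y), m01 = sum y*I(x, y), theta = atan2(m01, m10).
def keypoint_orientation(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - w // 2          # coordinates relative to the keypoint centre
    ys = ys - h // 2
    m10 = float(np.sum(xs * patch))
    m01 = float(np.sum(ys * patch))
    return np.arctan2(m01, m10)

# Usage: mass concentrated to the right of the centre gives direction ~ 0
patch = np.zeros((7, 7))
patch[3, 6] = 1.0
theta = keypoint_orientation(patch)
```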
1.6) For each feature point, consider its 31x31 neighbourhood. Unlike the original BRIEF algorithm, after Gaussian smoothing of the image, a pair of random points is generated in the 31x31 window and a 5x5 subwindow is taken centred on each random point; the binary code compares the sums of the pixels in the two subwindows rather than the two random points alone, which makes the binary descriptor more robust to noise;
1.7) Feature point matching: set a threshold; when the similarity of the descriptors of two pictures exceeds it, the points are judged to be the same feature point.
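The thresholded matching of step 1.7) can be sketched for binary descriptors, where similarity is the (negated) Hamming distance. A sketch with our own names and threshold, not the patent's code:

```python
import numpy as np

# Brute-force matching of 256-bit binary descriptors with a
# Hamming-distance threshold.
def match_descriptors(desc_a, desc_b, max_dist=30):
    """desc_a, desc_b: (n, 32) uint8 arrays; returns (i, j, dist) matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.unpackbits(np.bitwise_xor(desc_b, d), axis=1).sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, int(dists[j])))
    return matches

# Usage: the all-ones descriptor matches its identical copy at distance 0
a = np.array([[0xFF] * 32], dtype=np.uint8)
b = np.array([[0x00] * 32, [0xFF] * 32], dtype=np.uint8)
m = match_descriptors(a, b)
```

Real systems usually add a ratio test (best vs. second-best distance) on top of the absolute threshold.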
Step (2) calibrates the position and attitude of the camera from the matched point pairs between adjacent images, computes the three-dimensional points corresponding to the image matches by triangulation, solves the camera poses, and optimizes the camera attitude of each frame with bundle adjustment:
2.1) Using Zhang Zhengyou's calibration method, compute the intrinsic matrix K of the camera;
2.2) Using the epipolar constraint between the matched point pairs of adjacent images, every matched pair X and X' satisfies X'^T F X = 0; compute the fundamental matrix F from n matched pairs (default 8) randomly sampled with the RANSAC method;
2.3) Convert the fundamental matrix F to the essential matrix E = K'^T F K in normalized image coordinates, and perform the singular value decomposition E = U diag(1, 1, 0) V^T. The first camera extrinsic matrix is taken as Rt1 = [I | 0]; the four possible solutions for the extrinsic matrix Rt2 of the adjacent camera are then:
Rt2 = [U W V^T | +u3]
Rt2 = [U W V^T | -u3]
Rt2 = [U W^T V^T | +u3]
Rt2 = [U W^T V^T | -u3]
where u3 is the third column of U and W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]];
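The four-solution decomposition of step 2.3) can be sketched with a plain SVD (our own sketch; the synthetic essential matrix below assumes identity rotation and a unit sideways translation):

```python
import numpy as np

# SVD of the essential matrix E gives the four candidate [R | t] solutions
# Rt2 = (U W V^T | +-u3), (U W^T V^T | +-u3).
def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U                                  # keep proper rotations
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    u3 = U[:, 2]
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    return [(R1, u3), (R1, -u3), (R2, u3), (R2, -u3)]

# Usage: E = [t]_x R with R = I and t = (1, 0, 0)
t = np.array([1.0, 0.0, 0.0])
tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
E = tx @ np.eye(3)
sols = decompose_essential(E)
```

As step 2.4) describes, the single physically valid pair among the four is the one that places triangulated points in front of both cameras.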
2.4) Triangulate the three-dimensional points with the four possible extrinsic matrices: the intersection of the two camera sight lines is the spatial position of the three-dimensional point, computed from the camera projection equations xi = K·Rti·X. Using the fact that the three-dimensional points must always lie in front of the camera, the single correct solution is selected among the four; after the extrinsic matrices of all two-view models have been computed, they are averaged to reduce the maximum error;
2.5) Perform bundle adjustment on each two-view model: input the camera intrinsic matrix, the calibrated camera pose matrices, the three-dimensional point cloud and the two-dimensional image projection coordinates of each point into the fitting function; fit with the Levenberg-Marquardt algorithm, adjusting the spatial positions of the three-dimensional point cloud so as to reduce the re-projection error between each re-projected three-dimensional point and its original point on the two-dimensional image;
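A toy version of this refinement can be sketched on a single point (our own Gauss-Newton sketch with a numerical Jacobian; the patent uses Levenberg-Marquardt over all cameras and points, which adds a damping term to the same normal equations):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3D point X into a camera (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def refine_point(K, cams, obs, X0, iters=10):
    """Refine X so its reprojection error over all cameras is minimal."""
    X = X0.astype(float).copy()
    for _ in range(iters):
        r = np.concatenate([project(K, R, t, X) - o for (R, t), o in zip(cams, obs)])
        J = np.zeros((len(r), 3))
        for k in range(3):                      # numerical Jacobian, column k
            dX = np.zeros(3); dX[k] = 1e-6
            rp = np.concatenate([project(K, R, t, X + dX) - o for (R, t), o in zip(cams, obs)])
            J[:, k] = (rp - r) / 1e-6
        X -= np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step
    return X

# Usage: two cameras observe a point; a perturbed estimate is pulled back
K = np.diag([500.0, 500.0, 1.0])
X_true = np.array([0.2, -0.1, 4.0])
cams = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([-1.0, 0.0, 0.0]))]
obs = [project(K, R, t, X_true) for R, t in cams]
X = refine_point(K, cams, obs, X_true + np.array([0.05, -0.05, 0.3]))
```

Full bundle adjustment stacks camera pose parameters into the same residual vector and exploits the sparse block structure of J.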
2.6) Unify all two-view model coordinate systems into one camera coordinate system: take the first two-view model as the attitude reference of the multi-view model, compute the transformation matrix of the extrinsic matrix Rt of each camera in subsequent two-view models relative to the initial value of the corresponding camera matrix in the multi-view model, and use this transformation to convert the three-dimensional point information of each two-view model into the multi-view model. Then run bundle adjustment on the whole multi-view model, minimizing the re-projection error by adjusting the camera poses and the positions of the three-dimensional point cloud; adding all two-view models to the multi-view model completes its construction.
Step (3) projects the 6-DOF camera poses into the sample space and performs 6-DOF pose sampling and equal-distance sampling of the image sequence:
3.1) The 6-DOF pose q = (X, R) ∈ S of each frame is projected into the sample space; the distance between neighbouring frames is defined as p(q0, q1) = Wt·||F(X0, X1)|| + Wr·||f(R0, R1)||,
where Wt and Wr are weight coefficients and Wr·||f(R0, R1)|| = Wr·(1 - R0·R1);
3.2) Using the distance p, compute the total distance along the picture sequence, then sample the sample space at equal distance intervals.
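The frame distance of step 3.1) can be sketched as follows (a sketch under our own choices: Euclidean distance for the translational term, the quaternion dot product for the rotational term as in Wr·(1 - R0·R1), and unit weights):

```python
import numpy as np

# p(q0, q1) = Wt * ||X0 - X1|| + Wr * (1 - <R0, R1>), with positions X and
# unit quaternions R describing each frame's 6-DOF pose.
def pose_distance(q0, q1, wt=1.0, wr=1.0):
    X0, R0 = q0
    X1, R1 = q1
    trans = np.linalg.norm(np.asarray(X0) - np.asarray(X1))
    rot = 1.0 - abs(float(np.dot(R0, R1)))   # abs: q and -q are the same rotation
    return wt * trans + wr * rot

# Usage: same orientation, positions 5 apart -> distance 5
identity = np.array([1.0, 0.0, 0.0, 0.0])
qa = (np.array([0.0, 0.0, 0.0]), identity)
qb = (np.array([3.0, 4.0, 0.0]), identity)
d = pose_distance(qa, qb)
```

The weights Wt and Wr trade off how strongly position versus orientation differences separate two frames in the sample space.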
Step (4) fits a two-dimensional sampling plane in the sample space, chooses n sample points on the plane, and generates the 2D panoramic video:
4.1) Fit a two-dimensional sampling plane in the sample space and generate n sample points Q = (x, y, z, q0, q1, q2, q3)^T;
4.2) The image shown at each sample point is the image whose camera pose vector is at the smallest distance from the sample point in the sample space; mathematically,
distance = argmin p(Q, X)
where p is the distance function in the sample space, Q is a sample point, and X is the pose point of each frame's camera in the sample space.
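The selection of step 4.2) is an argmin over the sampling distance. A minimal sketch (plain Euclidean distance between positions stands in here for the full pose distance p):

```python
import numpy as np

# For a sample point Q, pick the index of the frame whose camera position
# minimises the distance to Q.
def nearest_frame(sample_pos, frame_positions):
    d = np.linalg.norm(np.asarray(frame_positions, dtype=float)
                       - np.asarray(sample_pos, dtype=float), axis=1)
    return int(np.argmin(d))

# Usage: three camera positions along a line; the middle one is closest
frames = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
idx = nearest_frame([1.2, 0.1, 0.0], frames)
```

Walking a path over the sample points and emitting the selected frames in order yields the panoramic image sequence of step (4).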
The above embodiment only illustrates the technical idea and features of the present invention; its purpose is to let those familiar with the technical field understand the content of the invention and implement it accordingly, and it does not limit the scope of protection of the invention. All equivalent changes or modifications made according to the spirit of the invention shall fall within the scope of protection of the present invention.

Claims (5)

  1. A 2D panoramic video generation method based on camera pose estimation and spatial sampling, characterized in that the method comprises the following steps:
    (1) from a group of input video frames, calibrating the initial position and attitude of the camera using the matched feature points between adjacent video frames and a multi-view model, and further optimizing the camera position and attitude corresponding to each frame with the bundle adjustment algorithm to form an accurate camera pose set;
    (2) fitting a 2D sampling surface to the spatial distribution of the positions and attitudes in the camera pose set, and choosing n sample points on the surface;
    (3) defining a spatial metric distance from the camera positions and attitudes of step (2); for each sample point, selecting from the camera pose set the video frame whose camera is closest to the current sample point under the spatial metric distance as the image of that sample point;
    (4) selecting a path on the spatial 2D sampling surface; the sample-point images along the path form an image sequence whose content constitutes a panoramic view of the scene; multiple paths may be selected to build multiple panoramic views.
  2. The 2D panoramic video generation method based on camera pose estimation and spatial sampling according to claim 1, characterized in that the concrete operation steps of step (1) are:
    a) deciding whether a candidate point p is a feature point by the criterion |I(x) - I(p)| > ε, where I(x) is the pixel value of any point on the circle around p, I(p) is the pixel value of the candidate point, and ε is the difference threshold, p being a corner if N points on the circle satisfy the criterion; screening the optimal feature points with a machine-learning method, and removing multiple feature points at adjacent positions with a non-maximum suppression algorithm;
    b) building an image pyramid over a multi-scale space to give the feature points scale invariance;
    c) for rotational invariance, computing by image moments the centroid of the patch of radius r around the feature point, the vector from the feature point coordinates to the centroid defining the direction of the feature point;
    d) calibrating the intrinsic matrix K and the distortion coefficient matrix M of the camera with Zhang Zhengyou's calibration method;
    e) using the epipolar constraint of the matched feature points between adjacent images, every matched pair x and x' satisfying x'^T F x = 0, and computing the fundamental matrix F from n feature point pairs randomly sampled with the RANSAC method;
    f) converting the fundamental matrix F to the essential matrix E = K'^T F K in normalized image coordinates and performing singular value decomposition of E, obtaining four possible solutions for the extrinsic matrix Rt2 of the adjacent camera;
    g) triangulating three-dimensional points with the four possible camera extrinsic matrices, and using the constraint that the three-dimensional points must always lie in front of the camera to select the single correct extrinsic matrix among the four; after the extrinsic matrices of all two-view models have been computed, averaging them to reduce the maximum error;
    h) transforming the two-view model coordinate systems into a unified camera coordinate system, then running bundle adjustment on the whole multi-view model, minimizing the re-projection error by adjusting the camera poses and the positions of the three-dimensional point cloud; adding all two-view models to the multi-view model completes its construction.
  3. The 2D panoramic video generation method based on camera pose estimation and spatial sampling according to claim 1, characterized in that fitting a 2D sampling surface to the spatial distribution of the camera pose set and choosing n sample points on it, as described in step (2), is done by interpolating a 2D sampling surface from the generated camera pose information:
    a) a plane can be defined by its normal vector n = (a, b, c); by the point-to-plane distance formula the plane is ax + by + cz + d = 0; setting c = 1, the equation becomes ax + by + z = -d, so all camera coordinate points must satisfy

    [ x_0  y_0  1 ]           [ -z_0 ]
    [ x_1  y_1  1 ]   [ a ]   [ -z_1 ]
    [ ...  ...  . ] · [ b ] = [  ... ]
    [ x_n  y_n  1 ]   [ d ]   [ -z_n ]

    applying least squares gives the normal equations

    [ Σx_i x_i  Σx_i y_i  Σx_i ]   [ a ]     [ Σx_i z_i ]
    [ Σy_i x_i  Σy_i y_i  Σy_i ] · [ b ] = - [ Σy_i z_i ]
    [ Σx_i      Σy_i      N    ]   [ d ]     [ Σz_i     ]

    taking the centroid of all frame pose positions as the coordinate origin removes the third row:

    [ Σx_i x_i  Σx_i y_i ]   [ a ]     [ Σx_i z_i ]
    [ Σy_i x_i  Σy_i y_i ] · [ b ] = - [ Σy_i z_i ]

    and Cramer's rule yields the plane parameters:
    D = Σxx·Σyy - Σxy·Σxy
    a = (Σyz·Σxy - Σxz·Σyy)/D
    b = (Σxy·Σxz - Σxx·Σyz)/D
    n = [a, b, 1]^T
    b) generating n equally spaced sample points Q = (x, y, z, q0, q1, q2, q3)^T on the surface.
  4. The 2D panoramic video generation method based on camera pose estimation and spatial sampling according to claim 1, characterized in that defining the spatial metric distance from camera position and attitude in step (3), and selecting for each sample point the video frame in the camera pose set whose camera is nearest to the current sample point under the spatial metric distance as the image of that sample point, proceeds as follows:
    the 6-DOF pose q = (X, R) ∈ S of each frame is projected into the sample space, and the distance between neighbouring poses is defined as p(q0, q1) = Wt·||F(X0, X1)|| + Wr·||f(R0, R1)||;
    using the distance p, the video frame nearest to a sample point is taken as the image corresponding to that sample point.
  5. The 2D panoramic video generation method based on camera pose estimation and spatial sampling according to claim 1, characterized in that selecting a path on the spatial 2D sampling surface in step (4), where the sample-point images along the path form an image sequence whose displayed scene content, over the whole 2D sampling surface, constitutes a spatial panoramic view, proceeds as follows:
    a) selecting any path on the fitted two-dimensional sampling surface in the sample space;
    b) the image shown at each sample point is the video frame of the camera nearest to the sample point in the sample space, using the spatial metric distance defined in claim 4.
CN201710089112.2A 2017-02-20 2017-02-20 2D panoramic video generation based on camera pose estimation and spatial sampling Pending CN106803275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710089112.2A CN106803275A (en) 2017-02-20 2017-02-20 2D panoramic video generation based on camera pose estimation and spatial sampling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710089112.2A CN106803275A (en) 2017-02-20 2017-02-20 2D panoramic video generation based on camera pose estimation and spatial sampling

Publications (1)

Publication Number Publication Date
CN106803275A true CN106803275A (en) 2017-06-06

Family

ID=58988651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710089112.2A Pending CN106803275A (en) 2017-02-20 2017-02-20 2D panoramic video generation based on camera pose estimation and spatial sampling

Country Status (1)

Country Link
CN (1) CN106803275A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833270A (en) * 2017-09-28 2018-03-23 浙江大学 Real-time three-dimensional object reconstruction method based on depth camera
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) A kind of image depth estimation method and system based on CNN and depth filter
CN109035327A (en) * 2018-06-25 2018-12-18 北京大学 Panorama camera Attitude estimation method based on deep learning
CN109781003A (en) * 2019-02-11 2019-05-21 华侨大学 Method for determining the next optimal measurement pose of a structured-light vision system
CN110298884A (en) * 2019-05-27 2019-10-01 重庆高开清芯科技产业发展有限公司 A kind of position and orientation estimation method suitable for monocular vision camera in dynamic environment
CN110463205A (en) * 2017-03-22 2019-11-15 高通股份有限公司 Sphere pole projection for efficient compression of 360-degree video
CN111325796A (en) * 2020-02-28 2020-06-23 北京百度网讯科技有限公司 Method and apparatus for determining pose of vision device
CN112102411A (en) * 2020-11-02 2020-12-18 中国人民解放军国防科技大学 Visual positioning method and device based on semantic error image
CN112669381A (en) * 2020-12-28 2021-04-16 北京达佳互联信息技术有限公司 Pose determination method and device, electronic equipment and storage medium
CN114152937A (en) * 2022-02-09 2022-03-08 西南科技大学 External parameter calibration method for rotary laser radar
CN114325524A (en) * 2020-09-29 2022-04-12 上海联影医疗科技股份有限公司 Magnetic resonance image reconstruction method, device and system and storage medium
CN114598809A (en) * 2022-01-18 2022-06-07 影石创新科技股份有限公司 Method for selecting view angle of panoramic video, electronic device, computer program product and readable storage medium
CN116051630A (en) * 2023-04-03 2023-05-02 慧医谷中医药科技(天津)股份有限公司 High-frequency 6DoF attitude estimation method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
CN103903263A (en) * 2014-03-26 2014-07-02 苏州科技学院 Algorithm for 360-degree omnibearing distance measurement based on Ladybug panorama camera images
US9465976B1 (en) * 2012-05-18 2016-10-11 Google Inc. Feature reduction based on local densities for bundle adjustment of images
CN106204625A (en) * 2016-07-27 2016-12-07 大连理工大学 A kind of variable focal length flexibility pose vision measuring method
US9569847B1 (en) * 2011-11-16 2017-02-14 Google Inc. General and nested Wiberg minimization

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110463205B (en) * 2017-03-22 2023-05-09 高通股份有限公司 Sphere projection for efficient compression of 360 degree video
CN110463205A (en) * 2017-03-22 2019-11-15 高通股份有限公司 Sphere pole projection for efficient compression of 360 degree video
CN107833270A (en) * 2017-09-28 2018-03-23 浙江大学 Real-time object dimensional method for reconstructing based on depth camera
CN107833270B (en) * 2017-09-28 2020-07-03 浙江大学 Real-time object three-dimensional reconstruction method based on depth camera
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) Image depth estimation method and system based on CNN and depth filter
CN108615244B (en) * 2018-03-27 2019-11-15 中国地质大学(武汉) Image depth estimation method and system based on CNN and depth filter
CN109035327A (en) * 2018-06-25 2018-12-18 北京大学 Panoramic camera attitude estimation method based on deep learning
CN109035327B (en) * 2018-06-25 2021-10-29 北京大学 Panoramic camera attitude estimation method based on deep learning
CN109781003A (en) * 2019-02-11 2019-05-21 华侨大学 Method for determining the next optimal measurement pose of a structured-light vision system
CN110298884A (en) * 2019-05-27 2019-10-01 重庆高开清芯科技产业发展有限公司 Pose estimation method for a monocular vision camera in dynamic environments
CN111325796A (en) * 2020-02-28 2020-06-23 北京百度网讯科技有限公司 Method and apparatus for determining pose of vision device
CN111325796B (en) * 2020-02-28 2023-08-18 北京百度网讯科技有限公司 Method and apparatus for determining pose of vision device
CN114325524B (en) * 2020-09-29 2023-09-01 上海联影医疗科技股份有限公司 Magnetic resonance image reconstruction method, device, system and storage medium
CN114325524A (en) * 2020-09-29 2022-04-12 上海联影医疗科技股份有限公司 Magnetic resonance image reconstruction method, device and system and storage medium
CN112102411A (en) * 2020-11-02 2020-12-18 中国人民解放军国防科技大学 Visual positioning method and device based on semantic error image
US11321937B1 (en) 2020-11-02 2022-05-03 National University Of Defense Technology Visual localization method and apparatus based on semantic error image
CN112669381A (en) * 2020-12-28 2021-04-16 北京达佳互联信息技术有限公司 Pose determination method and device, electronic equipment and storage medium
CN114598809A (en) * 2022-01-18 2022-06-07 影石创新科技股份有限公司 Method for selecting view angle of panoramic video, electronic device, computer program product and readable storage medium
CN114152937B (en) * 2022-02-09 2022-05-17 西南科技大学 External parameter calibration method for rotary laser radar
CN114152937A (en) * 2022-02-09 2022-03-08 西南科技大学 External parameter calibration method for rotary laser radar
CN116051630A (en) * 2023-04-03 2023-05-02 慧医谷中医药科技(天津)股份有限公司 High-frequency 6DoF attitude estimation method and system

Similar Documents

Publication Publication Date Title
CN106803275A (en) 2D panoramic video generation based on camera pose estimation and spatial sampling
CN110288657B (en) Augmented reality three-dimensional registration method based on Kinect
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN107909640B (en) Face relighting method and device based on deep learning
CN107843251B (en) Pose estimation method of mobile robot
CN110245199B (en) Method for fusing large-dip-angle video and 2D map
CN110211169B (en) Reconstruction method of narrow baseline parallax based on multi-scale super-pixel and phase correlation
CN112767467B (en) Double-image depth estimation method based on self-supervision deep learning
CN109785373B (en) Speckle-based six-degree-of-freedom pose estimation system and method
CN109087323A (en) Image-based three-dimensional vehicle attitude estimation method using a fine CAD model
CN113313732A (en) Forward-looking scene depth estimation method based on self-supervision learning
CN110570474B (en) Pose estimation method and system of depth camera
CN111553845B (en) Quick image stitching method based on optimized three-dimensional reconstruction
CN109003307B (en) Underwater binocular vision measurement-based fishing mesh size design method
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN115376024A (en) Semantic segmentation method for power accessory of power transmission line
EP2800055A1 (en) Method and system for generating a 3D model
CN114419246A (en) Space target instant dense reconstruction method
CN110580715A (en) Image alignment method based on illumination constraint and grid deformation
CN116152121B (en) Curved surface screen generating method and correcting method based on distortion parameters
Cheng et al. An integrated approach to 3D face model reconstruction from video
CN114170317B (en) Swimming pool drowning prevention head position judging method and device and computer equipment
CN112767481B (en) High-precision positioning and mapping method based on visual edge features
CN112016568A (en) Method and device for tracking image feature points of target object
CN114549634A (en) Camera pose estimation method and system based on panoramic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2017-06-06