CN110120101A - Cylindrical body augmented reality method, system, device based on 3D vision - Google Patents


Info

Publication number: CN110120101A
Application number: CN201910360629.XA
Authority: CN (China)
Prior art keywords: image, cylindrical body, augmented reality, camera posture, dimensional
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN110120101B (en)
Inventors: 唐付林 (Tang Fulin), 吴毅红 (Wu Yihong)
Current and original assignee: Institute of Automation of Chinese Academy of Science
Application filed by Institute of Automation of Chinese Academy of Science; application granted as CN110120101B.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10021: Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of computer vision, and in particular relates to a cylinder augmented reality method, system, and device based on 3D vision, intended to solve the problem that the prior art has difficulty performing augmented reality on cylinders. The method of the present invention comprises: for each image in a collected multi-view video image set of a cylinder, fitting the contour lines of the cylinder in the image using the Hough transform and establishing a world coordinate system; computing the camera pose of every image based on projective invariance and the imaging principle; reconstructing a 3D model of the cylinder; acquiring a video image based on the reconstructed 3D model and performing frame-to-frame camera pose tracking to obtain the camera pose of each frame; and superimposing a virtual image onto the cylinder video image using the obtained camera poses, thereby realizing cylinder augmented reality. The method achieves high accuracy and speed in both offline reconstruction and online tracking, the superimposed virtual object is stable, and the goal of cylinder augmented reality is achieved.

Description

Cylindrical body augmented reality method, system, device based on 3D vision
Technical field
The invention belongs to the field of computer vision, and in particular relates to a cylinder augmented reality method, system, and device based on 3D vision.
Background technique
In recent years, augmented reality has changed the paradigm of traditional video information interaction and has enormous application prospects in fields such as medicine, the military, education, and entertainment. Augmented reality has received wide attention both in academia and in industry. Initially, many planar square markers were used for augmented reality, such as ARToolKit, ARTag, and AprilTag. ARToolKit is the earliest and most popular; ARTag and AprilTag are both improvements based on the ideas of ARToolKit. Later, planar circular markers gradually came into vogue, such as Mono-spectrum, CCTag, and RUNETag. Whether square or circular, all planar markers realize augmented reality by computing the camera pose and projecting a virtual object onto the real object. These planar markers share a common disadvantage: they must be printed and then placed in the scene before augmented reality can be realized, which is not very convenient. Further work has produced marker-free augmented reality, but it is confined to enhancing planes; augmented reality on a cylindrical surface is very difficult, and related techniques are rare at home and abroad.
Summary of the invention
In order to solve the above problem in the prior art, namely that the prior art has difficulty performing augmented reality on cylinders, the present invention provides a cylinder augmented reality method based on 3D vision, comprising:
Step S10, acquiring a multi-view video image set of a cylinder as an input image set;
Step S20, for each image in the input image set, fitting the contour lines of the cylinder in the image using the Hough transform, and establishing the world coordinate system of the cylinder in the image;
Step S30, based on the cylinder contour lines fitted in each image of the input image set and the corresponding world coordinate system, computing the camera pose of every image using projective invariance and the imaging principle;
Step S40, extracting feature points of each image in the input image set and, based on the camera pose of each image, reconstructing the spatial points corresponding to the feature points to obtain a reconstructed 3D model of the cylinder;
Step S50, based on the reconstructed 3D model of the cylinder, acquiring a corresponding cylinder video image and performing initialization to obtain an initial camera pose and 3D-2D correspondences;
Step S60, performing frame-to-frame camera pose tracking based on the initial camera pose and the 3D-2D correspondences to obtain the camera pose of each frame;
Step S70, superimposing an input virtual image onto the cylinder video image using the camera pose of each frame, thereby realizing cylinder augmented reality.
In some preferred embodiments, step S20, "for each image in the input image set, fitting the contour lines of the cylinder in the image using the Hough transform, and establishing the world coordinate system of the cylinder in the image", is performed as follows:
Step S201, for each image in the input image set, fitting the two edge lines l1 and l2 of the cylinder using the Hough transform, while also fitting the two conic curves c1 and c2 of the cylinder;
Step S202, taking the spatial point corresponding to the center o2 of curve c2 as the origin of the world coordinate system, the line from the center o2 of curve c2 to a point on c2 as the X axis of the world coordinate system, the line from the center o2 of c2 to the center o1 of curve c1 as the Z axis of the world coordinate system, and the space plane corresponding to the conic curve c2 as the X-Y plane of the world coordinate system, completing the establishment of the world coordinate system.
In some preferred embodiments, step S30, "computing the camera pose of every image based on the fitted cylinder contour lines of each image in the input image set and the corresponding world coordinate system, using projective invariance and the imaging principle", is performed as follows:
Based on the two fitted lines, the two fitted curves, and the world coordinate system, the rotation matrix R and translation vector t from the world coordinate system to the camera coordinate system are computed separately for every image; R and t constitute the camera pose of the image.
In some preferred embodiments, in step S40, after "extracting feature points of each image in the input image set and, based on the camera pose of each image, reconstructing the spatial points corresponding to the feature points", a spatial point optimization step is further provided:
Step B10, according to the 3D-2D correspondences, optimizing the pose of each frame image and all spatial points observed by that image by minimizing the reprojection error;
Step B20, based on the optimized pose of each frame image and all spatial points observed by each image, optimizing the spatial points and camera poses of all images using global bundle adjustment.
In some preferred embodiments, step S50, "based on the reconstructed 3D model of the cylinder, acquiring a corresponding cylinder video image and performing initialization", is performed as follows:
Step S501, based on the acquired cylinder video image, processing the images using Linear P3P RANSAC and continuously obtaining the camera poses of a preset number of frames;
Step S502, judging whether the closeness of the camera poses of the preset number of frames exceeds a set threshold; if yes, initialization is complete; if no, executing step S501.
In some preferred embodiments, step S60, "performing frame-to-frame camera pose tracking based on the initial camera pose and the 3D-2D correspondences to obtain the camera pose of each frame", is performed as follows:
Step S601, detecting corners in the region of interest of the current frame image and extracting binary descriptors;
Step S602, matching the binary descriptors of the current frame with the binary descriptors of the previous frame to obtain the 2D-2D relation between the current frame and the previous frame;
Step S603, obtaining the 3D-2D relation of the current frame based on the 3D-2D relation of the previous frame and the 2D-2D relation between the current and previous frames;
Step S604, computing the camera pose of the current frame using the EPnP method, based on the 3D-2D relation of the current frame.
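The matching in step S602 can be sketched with a plain mutual-nearest-neighbour matcher over binary descriptors, assuming 32-byte ORB/BRIEF-style descriptors stored as uint8 rows; this is one reasonable matching scheme, not necessarily the exact one used by the patent:

```python
import numpy as np

# lookup table: number of set bits for each byte value
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

def hamming_matrix(desc_a, desc_b):
    """Pairwise Hamming distances between two sets of binary descriptors."""
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]   # (Na, Nb, n_bytes)
    return POPCOUNT[xor].sum(axis=2)                 # (Na, Nb)

def match_descriptors(desc_cur, desc_prev, max_dist=64):
    """Mutual nearest-neighbour matches, giving the 2D-2D relation
    between the current and previous frame (step S602)."""
    d = hamming_matrix(desc_cur, desc_prev)
    nn_cur = d.argmin(axis=1)     # best previous index for each current
    nn_prev = d.argmin(axis=0)    # best current index for each previous
    return [(i, int(j)) for i, j in enumerate(nn_cur)
            if nn_prev[j] == i and d[i, j] <= max_dist]

# sanity check: identical descriptor sets should match one-to-one
rng = np.random.default_rng(0)
desc = rng.integers(0, 256, size=(10, 32), dtype=np.uint8)
matches = match_descriptors(desc, desc.copy())
```

In a full tracker the resulting matches would feed steps S603-S604, where the pose is solved from the propagated 3D-2D relation (e.g. with an EPnP solver).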
In some preferred embodiments, before step S70, "superimposing an input virtual image onto the cylinder video image using the camera pose of each frame, performing cylinder augmented reality", a camera pose de-jittering step is further provided:
Smoothing the camera poses with an extended Kalman filter to eliminate the instability of the camera pose.
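The patent applies an extended Kalman filter to the full 6-DoF pose; as a toy illustration of the predict/update smoothing cycle, a scalar linear Kalman filter over one jittery translation component looks like this (the noise parameters q and r below are made-up values, not from the patent):

```python
import numpy as np

def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """Scalar constant-position Kalman filter: q is process noise,
    r is measurement noise; returns the filtered sequence."""
    x, p = measurements[0], 1.0       # state estimate and its variance
    out = []
    for z in measurements:
        p = p + q                     # predict (constant-position model)
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# a pose component that is truly constant at 1.0 but measured with jitter
rng = np.random.default_rng(1)
noisy = 1.0 + 0.1 * rng.standard_normal(200)
smooth = kalman_smooth(noisy)
```

The filtered trajectory varies far less than the raw measurements, which is exactly the effect desired before superimposing the virtual object.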
Another aspect of the present invention proposes a cylinder augmented reality system based on 3D vision, comprising an input module, a world coordinate system establishment module, a camera pose computation module, a cylinder 3D reconstruction module, a cylinder video image initialization module, a frame-to-frame camera pose tracking module, an augmented reality module, and an output module;
The input module is configured to acquire and input a video image set of the cylinder at set viewing angles;
The world coordinate system establishment module is configured to fit the cylinder contour lines for each image in the multi-view video image set and establish a world coordinate system;
The camera pose computation module is configured to compute the camera pose of every image from the fitted cylinder contour lines and the world coordinate system, using projective invariance and the imaging principle;
The cylinder 3D reconstruction module is configured to extract feature points of each image in the multi-view video image set and, based on the camera pose of each image, reconstruct the spatial points corresponding to the feature points to obtain a reconstructed 3D model of the cylinder;
The cylinder video image initialization module is configured to acquire a corresponding cylinder video image based on the reconstructed 3D model of the cylinder and perform initialization, obtaining an initial camera pose and 3D-2D correspondences;
The frame-to-frame camera pose tracking module is configured to perform frame-to-frame camera pose tracking based on the initial camera pose and the 3D-2D correspondences, obtaining the camera pose of each frame;
The augmented reality module is configured to superimpose an input virtual image onto the cylinder video image using the camera pose of each frame, performing cylinder augmented reality;
The output module is configured to output the cylinder video image after augmented reality.
A third aspect of the present invention proposes a storage device storing a plurality of programs, the programs being adapted to be loaded and executed by a processor to realize the above cylinder augmented reality method based on 3D vision.
A fourth aspect of the present invention proposes a processing device comprising a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to realize the above cylinder augmented reality method based on 3D vision.
Beneficial effects of the present invention:
The present invention proposes a cylinder augmented reality method based on 3D vision that realizes 3D model reconstruction of the cylinder in an offline phase and online 3D tracking of the cylinder in an online phase. In actual use, both the offline reconstruction and the online tracking achieve very high accuracy. Moreover, the online tracking speed exceeds 50 FPS, and a virtual object superimposed using the camera poses obtained by online tracking is highly stable, achieving the goal of cylinder augmented reality.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the attached drawings:
Fig. 1 is a schematic flow diagram of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 2 is an example diagram of cylindrical objects in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 3 is a schematic diagram of the world coordinate system established in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 4 is an example diagram of reconstructed cylinder models in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 5 is a schematic diagram of the per-frame reprojection error during frame-to-frame camera pose tracking in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 6 is a diagram of the time consumed by each frame during frame-to-frame camera pose tracking of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 7 is a schematic visualization of the running system of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 8 is an example augmented reality result of superimposing a virtual earth in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention;
Fig. 9 is an example augmented reality result of replacing the texture of the cylinder's lateral surface in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention.
Specific embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention and do not restrict the invention. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
A cylinder augmented reality method based on 3D vision of the present invention comprises:
Step S10, acquiring a multi-view video image set of a cylinder as an input image set;
Step S20, for each image in the input image set, fitting the contour lines of the cylinder in the image using the Hough transform, and establishing the world coordinate system of the cylinder in the image;
Step S30, based on the cylinder contour lines fitted in each image of the input image set and the corresponding world coordinate system, computing the camera pose of every image using projective invariance and the imaging principle;
Step S40, extracting feature points of each image in the input image set and, based on the camera pose of each image, reconstructing the spatial points corresponding to the feature points to obtain a reconstructed 3D model of the cylinder;
Step S50, based on the reconstructed 3D model of the cylinder, acquiring a corresponding cylinder video image and performing initialization to obtain an initial camera pose and 3D-2D correspondences;
Step S60, performing frame-to-frame camera pose tracking based on the initial camera pose and the 3D-2D correspondences to obtain the camera pose of each frame;
Step S70, superimposing an input virtual image onto the cylinder video image using the camera pose of each frame, thereby realizing cylinder augmented reality.
In order to explain the cylinder augmented reality method based on 3D vision of the present invention more clearly, each step of an embodiment of the method is described in detail below with reference to Fig. 1.
The cylinder augmented reality method based on 3D vision of one embodiment of the present invention comprises steps S10 to S70, each described in detail as follows:
Step S10, acquiring a multi-view video image set of a cylinder as an input image set.
Before the offline 3D reconstruction of the cylinder, the camera intrinsic matrix K is first calibrated, and the images are normalized using the intrinsic matrix K; then some images containing the cylinder are shot from multiple viewing angles.
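The normalization step amounts to mapping each pixel observation m = (u, v, 1)^T to K^-1 m. A minimal sketch, with made-up intrinsic values for illustration:

```python
import numpy as np

# illustrative intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def normalize(pixel, K):
    """Map a pixel (u, v) to normalized image coordinates K^-1 m."""
    m = np.array([pixel[0], pixel[1], 1.0])
    return np.linalg.inv(K) @ m

m_n = normalize((320.0, 240.0), K)   # the principal point maps to (0, 0, 1)
```

After this normalization, the projection equations in the rest of the description can treat the camera as having identity intrinsics.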
Fig. 2 shows example cylindrical objects of one embodiment of the cylinder augmented reality method based on 3D vision of the present invention; from left to right: can, cola (Coke bottle), sprite (Sprite bottle), and water (mineral water bottle).
Step S20, for each image in the input image set, fitting the contour lines of the cylinder in the image using the Hough transform, and establishing the world coordinate system of the cylinder in the image.
The basic principle of the Hough transform is to use the duality between points and lines: a given curve in the original image space, expressed in parametric form, becomes a single point in parameter space. The problem of detecting a given curve in the original image is thereby converted into the problem of finding peaks in parameter space, i.e. the detection of a global property is converted into the detection of a local property; this applies to lines, ellipses, circles, arcs, and so on.
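The peak-finding idea can be sketched with a minimal line-only Hough transform in NumPy (a toy version for illustration; production code would use an optimized implementation such as OpenCV's HoughLines):

```python
import numpy as np

def hough_lines(binary_img, n_theta=180):
    """Accumulate votes in (rho, theta) space for every edge pixel;
    a line in the image becomes a peak in the accumulator."""
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))      # 0..179 degrees
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary_img)
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# a synthetic vertical line at x = 20 should peak at (rho = 20, theta = 0)
img = np.zeros((50, 50), dtype=np.uint8)
img[:, 20] = 1
acc, rhos, thetas = hough_lines(img)
i, j = np.unravel_index(acc.argmax(), acc.shape)
```

Each of the 50 pixels of the synthetic line votes for the same accumulator cell at theta = 0, so the global detection problem reduces to locating that local peak.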
Step S201, for each image in the input image set, fitting the two edge lines l1 and l2 of the cylinder using the Hough transform, while also fitting the two conic curves c1 and c2 of the cylinder;
Step S202, taking the spatial point corresponding to the center o2 of curve c2 as the origin of the world coordinate system, the line from the center o2 of curve c2 to a point on c2 as the X axis of the world coordinate system, the line from the center o2 of c2 to the center o1 of curve c1 as the Z axis of the world coordinate system, and the space plane corresponding to the conic curve c2 as the X-Y plane of the world coordinate system, establishing the world coordinate system.
Let the center of the fitted conic curve c1 be o1 and the center of the fitted conic curve c2 be o2. The spatial point corresponding to o2 (or o1) is chosen as the origin of the world coordinate system, with homogeneous coordinates (0 0 0 1)^T. An image point M0 is selected on the conic curve c2; the line from o2 to M0 is the X axis of the world coordinate system, so the homogeneous coordinates of the spatial point corresponding to M0 are (r 0 0 1)^T, where r is the radius of the cylinder. The line from o2 to o1 is taken as the Z axis of the world coordinate system, so the homogeneous coordinates of the spatial point corresponding to o1 are (0 0 h 1)^T, where h is the height of the cylinder. The space plane corresponding to c2 is chosen as the X-Y plane of the world coordinate system. The world coordinate system is thus established.
Fig. 3 shows a schematic diagram of the world coordinate system established in one embodiment of the cylinder augmented reality method based on 3D vision of the present invention: o is the origin of the world coordinate system, the line from o to a point M0 on curve c2 is the X axis of the world coordinate system, the line from o to the center o1 of curve c1 is the Z axis of the world coordinate system, and the space plane corresponding to the conic curve c2 is the X-Y plane of the world coordinate system. In practical application, when the world coordinate system is established in this way, other points may be chosen as the origin, and other lines through the origin may be chosen as the X, Y, and Z axes; these variants are not repeated here.
Step S30, based on the cylinder contour lines fitted in each image of the input image set and the corresponding world coordinate system, computing the camera pose of every image using projective invariance and the imaging principle.
Common projective invariants include: the projections of collinear points remain collinear, the projections of parallel lines intersect at a single point, and the cross-ratio of points on a line is preserved under projection.
Based on the fitted cylinder contour lines (the two lines and the two curves) and the world coordinate system, the rotation matrix R and translation vector t from the world coordinate system to the camera coordinate system are computed separately for every image; R and t constitute the camera pose of the image.
The rotation matrix R and translation vector t are written as in formulas (1) and (2):
R = (r1 r2 r3)   formula (1)
t = (t1 t2 t3)^T   formula (2)
where r1 = (r11 r21 r31)^T, r2 = (r12 r22 r32)^T, and r3 = (r13 r23 r33)^T are the three columns of the rotation matrix R.
The point at infinity on the Z axis of the world coordinate system is Vz = (0 0 1 0)^T. The projection vz of Vz in the 2D image is defined as the intersection of the fitted lines l1 and l2. By the imaging principle of the image, vz is computed as shown in formula (3):
vz = l1 × l2 ≈ (r1 r2 r3 t)(0 0 1 0)^T   formula (3)
from which r3 can be computed, as shown in formula (4):
r3 = vz / ||vz||   formula (4)
According to projective invariance, the center point o1 of the fitted curve c1 and the center point o2 of the fitted curve c2 are computed, as shown in formulas (5) and (6):
where u1, v1 are the abscissa and ordinate of o1 in the 2D image, and u2, v2 are the abscissa and ordinate of o2 in the 2D image.
According to the homogeneous coordinates (0 0 h 1)^T of the spatial point corresponding to o1 and the homogeneous coordinates (0 0 0 1)^T of the spatial point corresponding to o2, and based on the imaging principle, the scale factors s1 and s2 are computed as shown in formulas (7) and (8):
from which t can be computed, as shown in formula (9):
t = s1*o1 - h*r3   formula (9)
Under the world coordinate system, the homogeneous coordinates of the point at infinity on the X axis are (1 0 0 0)^T. According to the imaging principle, vx is computed as shown in formula (10):
vx = (o2 × m0) × Vz ≈ (r1 r2 r3 t)(1 0 0 0)^T   formula (10)
from which r1 can be computed, as shown in formula (11):
r1 = vx / ||vx||   formula (11)
r2 is then computed as shown in formula (12):
r2 = r3 × r1   formula (12)
In summary, the rotation matrix R and translation vector t obtained above constitute the camera pose of the image.
Step S40, extracting feature points of each image in the input image set and, based on the camera pose of each image, reconstructing the spatial points corresponding to the feature points to obtain a reconstructed 3D model of the cylinder.
On the first frame image of the input image set, a feature point mi = (ui vi 1)^T is chosen, whose corresponding spatial point has homogeneous coordinates Mi = (Xi Yi Zi 1)^T. According to the imaging process, Mi is reconstructed as shown in formula (13):
(Xi Yi Zi)^T = s*R^T*mi - R^T*t   formula (13)
where s is a scale factor, and the spatial point Mi lies on the cylinder, satisfying Xi^2 + Yi^2 = r^2, where r is the cylinder radius.
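Formula (13) together with the cylinder constraint fixes the scale s: substituting the ray into Xi^2 + Yi^2 = r^2 gives a quadratic in s. A sketch with arbitrary test geometry (radius 1, camera placed outside the cylinder; the pose values are illustrative):

```python
import numpy as np

def backproject_on_cylinder(m, R, t, radius):
    """Intersect the viewing ray of formula (13), M = s*R^T m - R^T t,
    with the cylinder X^2 + Y^2 = radius^2 and return the nearer point."""
    a = R.T @ m
    b = R.T @ t
    # (s*a0 - b0)^2 + (s*a1 - b1)^2 = radius^2  ->  quadratic in s
    A = a[0]**2 + a[1]**2
    B = -2.0 * (a[0]*b[0] + a[1]*b[1])
    C = b[0]**2 + b[1]**2 - radius**2
    disc = B*B - 4*A*C
    if disc < 0:
        return None                       # the ray misses the cylinder
    s = (-B - np.sqrt(disc)) / (2*A)      # smaller root = front surface
    return s * a - b

# camera centered at world (3, 0, 0), looking toward the cylinder axis
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0],
              [-1.0, 0.0, 0.0]])
t = np.array([0.0, 0.0, 3.0])
M = backproject_on_cylinder(np.array([0.0, 0.0, 1.0]), R, t, radius=1.0)
```

The ray through the image center hits the front surface of the unit cylinder at the world point (1, 0, 0), confirming the scale recovered from the quadratic.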
" characteristic point that the input picture concentrates each image is extracted, and corresponding based on described image in step S40 The step of spatial point optimization is additionally provided with after camera posture, the corresponding spatial point of reconstruction features point ", method are as follows:
Step B10 is minimized using re-projection error according to three-dimensional-two-dimensional corresponding relationship and is optimized each frame image All spatial points observed by posture and image, as shown in formula (14):
Step B20, all spatial points observed by posture and image based on each frame image after the optimization, Optimize the spatial point of all images and the camera posture of all images using global bundle adjustment, as shown in formula (15):
Wherein, KlRepresent all images, PlAll spatial points are represented, E (i, j) is represented, calculation method such as formula (16) institute Show:
E (i, j)=| | mi-[Rj,tj]Mi||2Formula (16)
Fig. 4 shows example reconstructed cylinder models of one embodiment of the cylinder augmented reality method based on 3D vision of the present invention: upper left is the can, upper right is the Coke bottle, lower left is the Sprite bottle, and lower right is the mineral water bottle.
Cylinder 3D reconstruction using the method of the present invention achieves high accuracy and low error; the average reconstruction errors are shown in Table 1:

Table 1

Cylinder                              Can         Coke bottle  Sprite bottle  Mineral water bottle
Average reconstruction error (pixel)  1.92×10⁻⁵   1.66×10⁻⁵    1.60×10⁻⁵      1.66×10⁻⁵
Step S50, based on the reconstructed 3D model of the cylinder, acquiring a corresponding cylinder video image and performing initialization to obtain an initial camera pose and 3D-2D correspondences.
Pose estimation is frequently encountered in computer vision. P3P (Perspective-3-Points) provides one solution: it is a 3D-2D pose estimation approach that requires known matched 3D points and their 2D image points.
Step S501, based on the acquired cylinder video image, processing the images using Linear P3P RANSAC and continuously obtaining the camera poses of a preset number of frames.
Step S502, judging whether the closeness of the camera poses of the preset number of frames exceeds a set threshold; if yes, initialization is complete; if no, executing step S501.
In the initialization procedure of the cylinder, Linear P3P RANSAC is used:
Firstly, rejecting the straight line and short straight line close to image border using Hough transformation detection and fitting a straight line, merge Identical straight line, with set Si={ li, i=1,2 ... N } and indicate remaining straight line.In addition, by calculating between two straight lines Angle, finds many parallel lines pair, and note is parallel to straight line liStraight line collection be combined into STi.In set STiIn, selection distance is farthest Two straight lines, reject other straight lines, remember that the intersection point of two farthest straight lines is v, then the third column of spin matrix R can be by It calculates, as shown in formula (17):
Corner points are extracted on the image in the region enclosed by the two farthest lines, and the extracted corners are described with binary descriptors.
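A minimal sketch of brute-force matching of such binary ("two-valued") descriptors by Hamming distance; in practice a detector/descriptor such as FAST+BRIEF or ORB would supply them, and the descriptors below are random synthetic data:

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two packed binary descriptors (uint8 arrays).
    return int(np.unpackbits(a ^ b).sum())

def match(desc_a, desc_b, max_dist=30):
    # Brute-force nearest-neighbour matching of binary descriptors.
    matches = []
    for i, d in enumerate(desc_a):
        dists = [hamming(d, e) for e in desc_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
descs = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)  # 256-bit descriptors
noisy = descs.copy()
noisy[:, 0] ^= 1   # flip one bit per descriptor to simulate image noise
print(match(descs, noisy))  # each descriptor matches its noisy copy
```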
Then, the 3D-2D correspondences of the current frame are found by matching the descriptors of the current frame against the descriptors of the reconstructed spatial points; the set of correspondences is denoted S_i = {(m_j, M_j), j = 1, 2, ..., n}. From the set S_i, three pairs of 3D-2D correspondences are selected arbitrarily, denoted (m_i, M_i), i = 1, 2, 3. According to the imaging process, we obtain formula (18):
s_i m_i = R M_i + t    formula (18)
where s_i is a scale factor and M_i = (X_i, Y_i, Z_i)^T, i = 1, 2, 3.
The scale factor s_1 can be used to express the scale factors s_2 and s_3, as shown in formulas (19) and (20):
A system of equations can further be obtained, as shown in formula (21):
Substituting formulas (19) and (20) into equation system (21) yields a system of equations in s_1, which is solved using SVD decomposition; the solved s_1 is then substituted back into formulas (19) and (20) to calculate s_2 and s_3. Substituting the calculated s_1, s_2 and s_3 into formula (18), r_1 and r_2 can be solved linearly, as shown in formulas (22) and (23):
where A_1 = (s_2 m_2 - s_1 m_1 - (Z_2 - Z_1) r_3)(Y_3 - Y_1), A_2 = (s_3 m_3 - s_1 m_1 - (Z_3 - Z_1) r_3)(Y_2 - Y_1), A_3 = (s_2 m_2 - s_1 m_1 - (Z_2 - Z_1) r_3)(X_3 - X_1), A_4 = (s_3 m_3 - s_1 m_1 - (Z_3 - Z_1) r_3)(X_2 - X_1), and B = (X_2 - X_1)(Y_3 - Y_1) - (X_3 - X_1)(Y_2 - Y_1).
The rotation matrix is thus obtained as R = (r_1 r_2 r_3), and the translation vector t is calculated as shown in formula (24):
t = s_i m_i - R M_i, i = 1, 2, 3    formula (24)
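Formula (24) can be checked numerically: once R and the scale factors are known, any single correspondence yields t. The pose and point below are hypothetical values used only to verify the identity:

```python
import numpy as np

def translation_from_point(s, m, R, M):
    # Formula (24): t = s*m - R*M for any one of the three correspondences.
    return s * m - R @ M

# Synthesize one correspondence from a known (hypothetical) pose.
R = np.eye(3)
t_true = np.array([0.1, -0.2, 2.0])
M = np.array([0.3, 0.4, 0.5])
p = R @ M + t_true        # point in the camera frame, equals s*m
s = p[2]                  # depth, with m normalized so that m[2] = 1
m = p / s
t = translation_from_point(s, m, R, M)
print(t)  # recovers t_true
```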
With the camera pose R and t obtained above, we can count the inliers and save the inlier count. Three different pairs of 3D-2D correspondences are then selected, and the above process is repeated to find and save the inlier count. After all distinct triples of 3D-2D correspondences have been processed, the camera pose corresponding to the combination with the most inliers is selected. To obtain a more accurate camera pose, the three pairs of 3D-2D correspondences are pooled with their inliers and the camera pose is recalculated using EPnP.
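The inlier-counting loop over all triples of 3D-2D correspondences can be sketched as follows. The pose solver is stubbed out (in the patent it is the linear P3P solution of formulas (18)-(24), followed by EPnP refinement on the winning inlier set), and all data below are synthetic:

```python
import numpy as np
from itertools import combinations

def project(R, t, M):
    # Normalized-image-plane projection of a 3D point.
    p = R @ M + t
    return p[:2] / p[2]

def count_inliers(R, t, pts3d, pts2d, thresh=2e-3):
    # Indices of correspondences whose re-projection error is below thresh.
    return [i for i, (M, m) in enumerate(zip(pts3d, pts2d))
            if np.linalg.norm(project(R, t, M) - m) < thresh]

def ransac_p3p(pts3d, pts2d, solver):
    # Enumerate every 3-correspondence sample (as in the patent), solve a
    # candidate pose, and keep the pose with the most inliers.
    best_R, best_t, best_in = None, None, []
    for idx in combinations(range(len(pts3d)), 3):
        R, t = solver([pts3d[i] for i in idx], [pts2d[i] for i in idx])
        inl = count_inliers(R, t, pts3d, pts2d)
        if len(inl) > len(best_in):
            best_R, best_t, best_in = R, t, inl
    return best_R, best_t, best_in

rng = np.random.default_rng(1)
R_true, t_true = np.eye(3), np.array([0.0, 0.0, 5.0])
pts3d = [rng.uniform(-1.0, 1.0, 3) for _ in range(10)]
pts2d = [project(R_true, t_true, M) for M in pts3d]
pts2d[0] = pts2d[0] + 0.1   # two corrupted observations (outliers)
pts2d[1] = pts2d[1] + 0.1
stub = lambda p3, p2: (R_true, t_true)  # stand-in for the linear P3P solver
R, t, inliers = ransac_p3p(pts3d, pts2d, stub)
print(len(inliers))  # the 8 uncorrupted correspondences
```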
Finally, in each set ST_i, the camera pose and the corresponding inliers are calculated according to the above method, and the camera pose corresponding to the set with the most inliers is selected. Then, we further optimize the camera pose according to the corresponding inliers, as shown in formula (25):
We term the above method "Linear P3P RANSAC". "Linear P3P RANSAC" is used to process three consecutive frames; if their camera poses are close, initialization is deemed successful, otherwise initialization fails and three new consecutive frames are selected for initialization until it succeeds.
Step S60: based on the initial camera pose and the 3D-2D correspondences, perform inter-frame camera pose tracking to obtain the camera pose of each frame.
Step S601: detect corners in the region of interest of the current frame and extract binary descriptors.
Step S602: match the feature points of the current frame with the feature points of the previous frame to obtain the 2D-2D relation between the current frame and the previous frame.
Step S603: obtain the 3D-2D relation of the current frame based on the 3D-2D relation of the previous frame and the 2D-2D relation between the current frame and the previous frame.
Step S604: based on the 3D-2D relation of the current frame, calculate the camera pose of the current frame using the EPnP method.
After successful initialization, inter-frame tracking is carried out using the information of the previous frame. Inter-frame tracking includes tracking the previous frame and tracking the model. When tracking the previous frame, corners are first detected in the region of interest of the current frame (the region where the cylinder is located) and descriptors are extracted; then the descriptors of the current frame are matched with those of the previous frame to find the 2D-2D correspondences between the two frames, and from these the 3D-2D correspondences of the current frame; finally, the camera pose of the current frame is calculated with EPnP from its 3D-2D correspondences. If tracking the previous frame fails, the camera pose of the current frame is predicted using a motion model. When tracking the model, the feature points of the current frame not matched with the previous frame are reconstructed through the formula s m_i = R (X_i, Y_i, Z_i)^T + t together with the cylinder surface equation; the newly reconstructed spatial points are further matched with the model to obtain more 3D-2D correspondences, and the camera pose of the current frame is optimized using all of its 3D-2D correspondences, as shown in formula (26):
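The control flow of the previous-frame tracker with its motion-model fallback can be sketched as follows; the pose solver is stubbed (a real implementation would run EPnP on the 3D-2D matches), and the constant-velocity model is one plausible reading of the "motion model" in the text:

```python
import numpy as np

def predict_pose(prev_poses):
    # Constant-velocity motion model on the translation (rotation held fixed).
    (R1, t1), (R2, t2) = prev_poses[-2], prev_poses[-1]
    return R2, t2 + (t2 - t1)

def track_frame(matches_3d2d, prev_poses, solve_pose):
    # With enough 3D-2D matches, solve the pose (EPnP in the patent);
    # otherwise fall back to the motion-model prediction.
    if len(matches_3d2d) >= 4:
        return solve_pose(matches_3d2d)
    return predict_pose(prev_poses)

prev_poses = [(np.eye(3), np.array([0.0, 0.0, 5.0])),
              (np.eye(3), np.array([0.0, 0.0, 5.1]))]
R, t = track_frame([], prev_poses, solve_pose=None)  # matching failed
print(t)  # extrapolated translation near [0, 0, 5.2]
```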
Finally, the region of interest of the next frame is predicted according to the optimized camera pose.
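One way to realize this prediction, sketched here, is to project the cylinder model with the optimized pose and take the padded bounding box of the projections as the next region of interest (K, pose, model points, and margin below are all hypothetical):

```python
import numpy as np

def predict_roi(K, R, t, model_pts, margin=20):
    # Project the model points with the optimized pose; the padded bounding
    # box of the projections is the next frame's region of interest.
    pix = []
    for M in model_pts:
        p = K @ (R @ M + t)
        pix.append(p[:2] / p[2])
    pix = np.array(pix)
    x0, y0 = pix.min(axis=0) - margin
    x1, y1 = pix.max(axis=0) + margin
    return int(x0), int(y0), int(x1), int(y1)

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
model_pts = [np.array([-0.5, -0.5, 0.0]), np.array([0.5, 0.5, 0.0])]
roi = predict_roi(K, np.eye(3), np.array([0.0, 0.0, 5.0]), model_pts)
print(roi)
```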
As shown in Fig. 5, a schematic diagram of the per-frame re-projection error during inter-frame camera pose tracking for one embodiment of the cylinder augmented reality method based on three-dimensional vision of the present invention: from left to right the plots correspond to can, cola (Coke bottle), sprite (Sprite bottle) and water (mineral water bottle); "Frames" denotes all image frames in a video, "reprojection error (pixel)" denotes the re-projection error, and the horizontal line indicates that the re-projection error of most frames in the video is below a set threshold.
As shown in Fig. 6, a schematic diagram of the per-frame time consumption during inter-frame camera pose tracking of the cylinder augmented reality method based on three-dimensional vision of the present invention: from left to right the plots correspond to can, cola (Coke bottle), sprite (Sprite bottle) and water (mineral water bottle); "Frames" denotes all image frames in a video, "times (ms)" denotes the per-frame time consumption during online tracking, and the horizontal line indicates that the online tracking time of most frames is below a set threshold.
With augmented reality performed by the method of the present invention, the frame loss rate during online camera pose tracking is low, the re-projection error is small, and the number of frames tracked online per second (FPS) is high, as shown in Table 2:
Video | Frame number | Frame loss rate | Re-projection error (pixel) | FPS (Hz)
Can | 3356 | 0.70% | 0.90 | 63
Cola | 3507 | 1.64% | 1.26 | 59
Sprite | 3386 | 1.27% | 0.93 | 63
Water | 3449 | 0.19% | 0.95 | 56
Step S70: using the camera pose of each frame, superimpose the input virtual image onto the cylinder video image to realize cylinder augmented reality.
Before step S70 ("using the camera pose of each frame, superimpose the input virtual image onto the cylinder video image to realize cylinder augmented reality"), a step of eliminating camera pose instability is further provided, the method being:
smoothing the camera pose using an extended Kalman filter to eliminate the instability of the camera pose.
Because the device inevitably shakes to some degree while shooting the video, the superimposed virtual object may be unstable to a certain extent, so the camera pose is smoothed with an extended Kalman filter. The virtual object is then projected onto the real cylinder using the smoothed camera pose, ensuring that the virtual object is stably superimposed on the cylinder and achieving the purpose of augmented reality.
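A much-simplified, runnable stand-in for such smoothing: a scalar-gain Kalman filter on the translation only (the patent uses a full extended Kalman filter on the camera pose; the noise parameters and jittery measurements below are hypothetical):

```python
import numpy as np

class PoseSmoother:
    # Scalar-gain Kalman filter on the translation vector: a simplified,
    # linear stand-in for the extended Kalman filter described in the text.
    def __init__(self, q=1e-4, r=1e-2):
        self.q, self.r = q, r   # process / measurement noise variances
        self.t = None           # state: smoothed translation
        self.p = 1.0            # isotropic state covariance

    def update(self, t_meas):
        t_meas = np.asarray(t_meas, dtype=float)
        if self.t is None:
            self.t = t_meas
            return self.t
        self.p += self.q                  # predict
        k = self.p / (self.p + self.r)    # Kalman gain
        self.t = self.t + k * (t_meas - self.t)
        self.p *= 1.0 - k
        return self.t

# Jittery measurements oscillating around z = 5; smoothing damps the jitter.
meas = [np.array([0.0, 0.0, 5.0 + (0.1 if i % 2 == 0 else -0.1)])
        for i in range(20)]
sm = PoseSmoother()
ests = [sm.update(m) for m in meas]
steps = [float(np.linalg.norm(b - a)) for a, b in zip(ests[2:], ests[3:])]
print(max(steps))  # much smaller than the 0.2 swing of the raw measurements
```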
As shown in Fig. 7, an operational visualization of the system of the cylinder augmented reality method based on three-dimensional vision of the present invention: the left image of Fig. 7 shows a virtual earth superimposed on an image containing a mineral water bottle, and the right image shows the offline three-dimensionally reconstructed cylinder model; the pyramid in the upper-left corner of the right image represents the spatial position of the camera.
As shown in Fig. 8, examples of augmented reality results of superimposing a virtual earth for one embodiment of the cylinder augmented reality method based on three-dimensional vision of the present invention: from left to right, the virtual earth is superimposed on the can, Coke bottle, Sprite bottle and mineral water bottle.
As shown in Fig. 9, examples of augmented reality results of replacing the lateral texture of the cylinder for one embodiment of the cylinder augmented reality method based on three-dimensional vision of the present invention: from left to right, the surrounding texture of the can, Coke bottle, Sprite bottle and mineral water bottle is replaced.
The cylinder augmented reality system based on three-dimensional vision of the second embodiment of the present invention comprises an input module, a world coordinate system establishment module, a camera pose calculation module, a cylinder three-dimensional reconstruction module, a cylinder video image initialization module, an inter-frame camera pose tracking module, an augmented reality module and an output module;
the input module is configured to acquire and input a video image set of the cylinder from set multiple views;
the world coordinate system establishment module is configured to fit the cylinder contour lines for each image in the multi-view video image set and establish a world coordinate system;
the camera pose calculation module is configured to calculate the camera pose of each image from the fitted cylinder contour lines and the world coordinate system, using projective invariance and the imaging principle;
the cylinder three-dimensional reconstruction module is configured to extract the feature points of each image in the multi-view video image set and, based on the camera pose of each image, reconstruct the spatial points corresponding to the feature points to obtain a reconstructed cylinder three-dimensional model;
the cylinder video image initialization module is configured to acquire, based on the reconstructed cylinder three-dimensional model, the corresponding cylinder video image and initialize it, obtaining an initial camera pose and 3D-2D correspondences;
the inter-frame camera pose tracking module is configured to perform inter-frame camera pose tracking based on the initial camera pose and the 3D-2D correspondences, obtaining the camera pose of each frame;
the augmented reality module is configured to superimpose the input virtual image onto the cylinder video image using the camera pose of each frame, performing cylinder augmented reality;
the output module is configured to output the cylinder video image after augmented reality.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above and the related explanation may refer to the corresponding process in the foregoing method embodiment, and details are not repeated here.
It should be noted that the cylinder augmented reality system based on three-dimensional vision provided by the above embodiment is illustrated only with the above division of functional modules as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the modules or steps of the embodiments of the present invention may be decomposed or recombined. For example, the modules of the above embodiment may be merged into one module or further split into multiple sub-modules to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps and are not to be regarded as improper limitations of the present invention.
A storage device of the third embodiment of the present invention stores a plurality of programs, the programs being adapted to be loaded and executed by a processor to realize the above cylinder augmented reality method based on three-dimensional vision.
A processing device of the fourth embodiment of the present invention comprises a processor and a storage device; the processor is adapted to execute each program; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to realize the above cylinder augmented reality method based on three-dimensional vision.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working process and related explanation of the storage device and processing device described above may refer to the corresponding process in the foregoing method embodiment, and details are not repeated here.
Those skilled in the art should appreciate that the modules and method steps described in connection with the embodiments disclosed herein can be realized by electronic hardware, computer software, or a combination of the two; the programs corresponding to the software modules and method steps can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROM, or any other form of storage medium well known in the technical field. In order to clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described generally in terms of function in the above description. Whether these functions are executed in electronic hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the present invention.
The term "comprising" or any other similar term is intended to cover non-exclusive inclusion, so that a process, method, article or device/apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements intrinsic to the process, method, article or device/apparatus.
So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art can make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the protection scope of the present invention.

Claims (10)

1. A cylinder augmented reality method based on three-dimensional vision, characterized by comprising:
step S10, acquiring a multi-view video image set of a cylinder as an input image set;
step S20, for each image in the input image set, fitting the contour lines of the cylinder in the image using the Hough transform method, and establishing a world coordinate system of the cylinder in the image;
step S30, based on the contour lines of the cylinder fitted in each image of the input image set and the corresponding world coordinate system, calculating the camera pose of each image based on projective invariance and the imaging principle;
step S40, extracting the feature points of each image in the input image set, and based on the camera pose of each image, reconstructing the spatial points corresponding to the feature points to obtain a reconstructed cylinder three-dimensional model;
step S50, based on the reconstructed cylinder three-dimensional model, acquiring a corresponding cylinder video image and initializing it to obtain an initial camera pose and 3D-2D correspondences;
step S60, based on the initial camera pose and the 3D-2D correspondences, performing inter-frame camera pose tracking to obtain the camera pose of each frame;
step S70, using the camera pose of each frame, superimposing an input virtual image onto the cylinder video image to realize cylinder augmented reality.
2. The cylinder augmented reality method based on three-dimensional vision according to claim 1, characterized in that the method of step S20, "for each image in the input image set, fitting the contour lines of the cylinder in the image using the Hough transform method, and establishing a world coordinate system of the cylinder in the image", is:
step S201, for each image in the input image set, fitting the two edge lines l1 and l2 of the cylinder using the Hough transform method, while fitting the two conic curves c1 and c2 of the cylinder;
step S202, taking the spatial point corresponding to the center point o2 of the curve c2 as the origin of the world coordinate system, the line from the center point o2 of the curve c2 to a point on the curve c2 as the X axis of the world coordinate system, the line from the center point o2 of the curve c2 to the center point o1 of the curve c1 as the Z axis of the world coordinate system, and the space plane corresponding to the conic curve c2 as the X-Y plane of the world coordinate system, completing the establishment of the world coordinate system.
3. The cylinder augmented reality method based on three-dimensional vision according to claim 1, characterized in that the method of step S30, "based on the contour lines of the cylinder fitted in each image of the input image set and the corresponding world coordinate system, calculating the camera pose of each image based on projective invariance and the imaging principle", is:
based on the two fitted lines, the two fitted curves and the world coordinate system, calculating for each image the rotation transformation matrix R and the translation vector t from the world coordinate system to the camera coordinate system; the rotation transformation matrix R and the translation vector t are the camera pose of the image.
4. The cylinder augmented reality method based on three-dimensional vision according to claim 1, characterized in that a spatial point optimization step is further provided after step S40, "extracting the feature points of each image in the input image set, and based on the camera pose of each image, reconstructing the spatial points corresponding to the feature points", the method being:
step B10, according to the 3D-2D correspondences, optimizing the pose of each frame and all the spatial points observed by the image by minimizing the re-projection error;
step B20, based on the optimized pose of each frame and all the spatial points observed by the image, optimizing the spatial points of all images and the camera poses of all images using global bundle adjustment.
5. The cylinder augmented reality method based on three-dimensional vision according to claim 1, characterized in that the method of step S50, "based on the reconstructed cylinder three-dimensional model, acquiring a corresponding cylinder video image and initializing it", is:
step S501, based on the acquired cylinder video image, processing the images using Linear P3P RANSAC to obtain the camera poses of a preset number of consecutive frames;
step S502, judging whether the closeness of the camera poses of the preset number of frames exceeds a set threshold; if the judgment is yes, the initialization is complete; if the judgment is no, executing step S501.
6. The cylinder augmented reality method based on three-dimensional vision according to claim 1, characterized in that the method of step S60, "based on the initial camera pose and the 3D-2D correspondences, performing inter-frame camera pose tracking to obtain the camera pose of each frame", is:
step S601, detecting corners in the region of interest of the current frame and extracting binary descriptors;
step S602, matching the binary descriptors of the current frame with the binary descriptors of the previous frame to obtain the 2D-2D relation between the current frame and the previous frame;
step S603, obtaining the 3D-2D relation of the current frame based on the 3D-2D relation of the previous frame and the 2D-2D relation between the current frame and the previous frame;
step S604, based on the 3D-2D relation of the current frame, calculating the camera pose of the current frame using the EPnP method.
7. The cylinder augmented reality method based on three-dimensional vision according to claim 1, characterized in that a step of eliminating camera pose instability is further provided before step S70, "using the camera pose of each frame, superimposing an input virtual image onto the cylinder video image to perform cylinder augmented reality", the method being:
smoothing the camera pose using an extended Kalman filter to eliminate the instability of the camera pose.
8. A cylinder augmented reality system based on three-dimensional vision, characterized by comprising an input module, a world coordinate system establishment module, a camera pose calculation module, a cylinder three-dimensional reconstruction module, a cylinder video image initialization module, an inter-frame camera pose tracking module, an augmented reality module and an output module;
the input module is configured to acquire and input a video image set of the cylinder from set multiple views;
the world coordinate system establishment module is configured to fit the cylinder contour lines for each image in the multi-view video image set and establish a world coordinate system;
the camera pose calculation module is configured to calculate the camera pose of each image from the fitted cylinder contour lines and the world coordinate system, using projective invariance and the imaging principle;
the cylinder three-dimensional reconstruction module is configured to extract the feature points of each image in the multi-view video image set and, based on the camera pose of each image, reconstruct the spatial points corresponding to the feature points to obtain a reconstructed cylinder three-dimensional model;
the cylinder video image initialization module is configured to acquire, based on the reconstructed cylinder three-dimensional model, the corresponding cylinder video image and initialize it, obtaining an initial camera pose and 3D-2D correspondences;
the inter-frame camera pose tracking module is configured to perform inter-frame camera pose tracking based on the initial camera pose and the 3D-2D correspondences, obtaining the camera pose of each frame;
the augmented reality module is configured to superimpose an input virtual image onto the cylinder video image using the camera pose of each frame, performing cylinder augmented reality;
the output module is configured to output the cylinder video image after augmented reality.
9. A storage device storing a plurality of programs, characterized in that the programs are adapted to be loaded and executed by a processor to realize the cylinder augmented reality method based on three-dimensional vision according to any one of claims 1-8.
10. A processing device, comprising:
a processor adapted to execute each program; and
a storage device adapted to store a plurality of programs;
characterized in that the programs are adapted to be loaded and executed by the processor to realize:
the cylinder augmented reality method based on three-dimensional vision according to any one of claims 1-8.
CN201910360629.XA 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision Active CN110120101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910360629.XA CN110120101B (en) 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910360629.XA CN110120101B (en) 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision

Publications (2)

Publication Number Publication Date
CN110120101A true CN110120101A (en) 2019-08-13
CN110120101B CN110120101B (en) 2021-04-02

Family

ID=67520319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910360629.XA Active CN110120101B (en) 2019-04-30 2019-04-30 Cylinder augmented reality method, system and device based on three-dimensional vision

Country Status (1)

Country Link
CN (1) CN110120101B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288823A (en) * 2020-10-15 2021-01-29 武汉工程大学 Calibration method of standard cylinder curved surface point measuring equipment
CN112734914A (en) * 2021-01-14 2021-04-30 温州大学 Image stereo reconstruction method and device for augmented reality vision
CN113673283A (en) * 2020-05-14 2021-11-19 惟亚(上海)数字科技有限公司 Smooth tracking method based on augmented reality
CN114549660A (en) * 2022-02-23 2022-05-27 北京大学 Multi-camera calibration method, device and equipment based on cylindrical self-identification marker
CN115115708A (en) * 2022-08-22 2022-09-27 荣耀终端有限公司 Image pose calculation method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
KR20170090165A (en) * 2016-01-28 2017-08-07 허상훈 Apparatus for realizing augmented reality using multiple projector and method thereof
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 A kind of three-dimensional rebuilding method, device, equipment and storage medium
US20190026948A1 (en) * 2017-07-24 2019-01-24 Visom Technology, Inc. Markerless augmented reality (ar) system
US20190051054A1 (en) * 2017-08-08 2019-02-14 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
CN109389634A (en) * 2017-08-02 2019-02-26 蒲勇飞 Virtual shopping system based on three-dimensional reconstruction and augmented reality
CN109472873A (en) * 2018-11-02 2019-03-15 北京微播视界科技有限公司 Generation method, device, the hardware device of threedimensional model
CN109685913A (en) * 2018-12-21 2019-04-26 西安电子科技大学 Augmented reality implementation method based on computer vision positioning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ASAHI SUZUKI 等: "Design of an AR Marker for Cylindrical Surface", 《2013 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ISMAR》 *


Also Published As

Publication number Publication date
CN110120101B (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN110120101A (en) Cylindrical body augmented reality method, system, device based on 3D vision
CN109214282B (en) A kind of three-dimension gesture critical point detection method and system neural network based
CN105913489B (en) A kind of indoor three-dimensional scenic reconstructing method using plane characteristic
CN104835144B (en) The method for solving camera intrinsic parameter using the picture and orthogonality of the centre of sphere of a ball
CN104748746B (en) Intelligent machine attitude determination and virtual reality loaming method
Tang et al. 3D mapping and 6D pose computation for real time augmented reality on cylindrical objects
CN102663820B (en) Three-dimensional head model reconstruction method
CN103729885B (en) Various visual angles projection registers united Freehandhand-drawing scene three-dimensional modeling method with three-dimensional
CN109272537A (en) A kind of panorama point cloud registration method based on structure light
CN106503671A (en) The method and apparatus for determining human face posture
CN109035327B (en) Panoramic camera attitude estimation method based on deep learning
CN102750704B (en) Step-by-step video camera self-calibration method
CN109523589A (en) A kind of design method of more robust visual odometry
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN103077546B (en) The three-dimensional perspective transform method of X-Y scheme
CN107886546A (en) Utilize the method for ball picture and public self-polar triangle demarcation parabolic catadioptric video camera
CN107145224B (en) Human eye sight tracking and device based on three-dimensional sphere Taylor expansion
CN106485207A (en) A kind of Fingertip Detection based on binocular vision image and system
Petit et al. A robust model-based tracker combining geometrical and color edge information
CN108492017A (en) A kind of product quality information transmission method based on augmented reality
CN104537705A (en) Augmented reality based mobile platform three-dimensional biomolecule display system and method
CN107194984A (en) Mobile terminal real-time high-precision three-dimensional modeling method
CN110349225A (en) A kind of BIM model exterior contour rapid extracting method
CN101196988B (en) Palm locating and center area extraction method of three-dimensional palm print identity identification system
CN115830135A (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant