CN103948431B - Tracer design method for a surgical navigation marker point error indicator - Google Patents

Tracer design method for a surgical navigation marker point error indicator

Info

Publication number
CN103948431B
CN103948431B (application CN201410149067.1A)
Authority
CN
China
Prior art keywords
point
coordinate
coordinate system
error
axis
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410149067.1A
Other languages
Chinese (zh)
Other versions
CN103948431A (en)
Inventor
杨荣骞
林钦永
吴效明
司璇
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201410149067.1A priority Critical patent/CN103948431B/en
Publication of CN103948431A publication Critical patent/CN103948431A/en
Application granted granted Critical
Publication of CN103948431B publication Critical patent/CN103948431B/en

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a tracer design method applied to a surgical navigation marker point error indicator, comprising: obtaining the intrinsic and extrinsic parameters of a binocular optical navigation system and determining the system coordinate system; defining a set of discrete spatial points; projecting the discrete spatial points onto the imaging planes of the left and right cameras according to the pinhole imaging principle and computing the theoretical sub-pixel coordinates; taking each theoretical sub-pixel point as a center, determining an error range, randomly selecting several points within that range on each of the left and right imaging planes, reconstructing the spatial coordinates of these paired pixels according to the epipolar geometry of binocular vision, computing the distances between the reconstructed points and the original spatial point, and taking the mean of these distances as the spatial relative error of the original point; and visualizing the spatial relative error distribution at the position of the surgical instrument. The invention can guide the surgeon to operate the instrument in regions where the spatial relative error is small, thereby improving the precision of surgical navigation.

Description

Tracer design method for a surgical navigation marker point error indicator
Technical field
The present invention relates to the field of computer-assisted surgery, and in particular to a tracer design method applied to a surgical navigation marker point error indicator.
Background technology
In recent years, optical surgical navigation technology has been promoted rapidly, gradually entering neurosurgery, oral and maxillofacial surgery, orthopedics, and other operations. In an optical surgical navigation system, marker tracking and localization is the core of surgical navigation; the tracking method and its precision directly determine the precision, application range, and practicality of the navigation.
The precision of surgical navigation depends not only on the precision of the marker tracking method but also on the placement of the optical positioning system. Even if the tracking method itself is highly accurate, when the relative position of the surgical instrument and the optical positioning system is not within the best tracking area, the final navigation precision is greatly reduced, the surgical result deviates substantially from the preoperative plan, and the operation may even fail. The reason is that the marker point error is distributed unevenly over the available field of view: positions equidistant from the navigation system can still have different errors, which depends mainly on the direction from which the cameras image the marker point and on the marker point's resolution in the image. It is therefore necessary to analyze the error distribution of the positioning system and make it visible. Before the operation, the position of the optical positioning system can be adjusted according to the error map so that the working area of the surgical tool lies in a region of smaller error, improving the precision of surgical navigation.
Designing a tracer design method applied to a surgical navigation marker point error indicator is therefore highly necessary for improving the precision of navigation.
Summary of the invention
Aiming at the problem that the marker point error distribution in the available field of view is uneven, and that positions equidistant from the navigation system can have different errors, the present invention provides a tracer design method applied to a surgical navigation marker point error indicator. The method computes the error distribution of the positioning system and visualizes it, thereby guiding the doctor to operate the instrument in regions of smaller spatial error and improving the precision of surgical navigation.
The object of the present invention is achieved by the following technical scheme: a tracer design method applied to a surgical navigation marker point error indicator, in which the spatial relative error distribution of the available field of view of the surgical navigation binocular optical navigation system is computed and then visualized at the position of the surgical instrument.
Specifically, the spatial relative error distribution of the available field of view of the binocular optical navigation system is computed as follows:
(1) obtain the intrinsic and extrinsic parameters of the binocular optical navigation system;
(2) determine the system coordinate system of the binocular optical navigation system;
(3) define a set of discrete spatial points distributed within a cuboid;
(4) project the discrete spatial points onto the imaging planes of the left and right cameras according to the pinhole imaging principle, and compute the theoretical sub-pixel coordinates;
(5) taking each theoretical sub-pixel point as a center, determine a circular or square error range; randomly select several points within that range on each of the left and right imaging planes, the same number on each side; reconstruct the spatial coordinates of the paired pixels according to the epipolar geometry of binocular vision; compute the distances between the reconstructed points and the original spatial point; and take the mean of these distances as the spatial relative error of the original point.
Specifically, in step (1), the intrinsic and extrinsic parameters of the binocular optical navigation system are obtained by calibrating the left and right cameras, or are supplied by the manufacturer.
Specifically, in step (2), the system coordinate system of the binocular optical navigation system is defined as follows: the X axis lies along the line through the optical centers of the two cameras, the origin is the midpoint of that line, and the positive X direction points from the origin toward the optical center of the right camera; the Y axis passes through the origin perpendicular to the Z axes of the left and right camera coordinate systems, with the positive direction pointing upward in space; the Z axis is determined by the right-hand rule, with the positive direction pointing along the shooting direction of the cameras.
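As an illustrative sketch (not part of the patent), the system coordinate frame described above can be built from the two camera optical centers; the function name `system_frame`, its arguments, and the sample coordinates are assumptions for demonstration:

```python
import numpy as np

def system_frame(c_left, c_right, z_hint):
    """Build the system coordinate frame described in step (2).
    c_left/c_right are the camera optical centres; z_hint is a vector
    roughly along the cameras' common viewing direction."""
    c_left = np.asarray(c_left, float)
    c_right = np.asarray(c_right, float)
    origin = 0.5 * (c_left + c_right)          # origin: midpoint of the centres
    x = c_right - c_left
    x /= np.linalg.norm(x)                     # +X points toward the right camera
    z = np.asarray(z_hint, float)
    z = z - z.dot(x) * x                       # make Z perpendicular to X
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                         # right-handed frame: X x Y = Z
    return origin, np.column_stack([x, y, z])

# Example: cameras 200 mm apart, both looking along +Z.
origin, R = system_frame([-100, 0, 0], [100, 0, 0], [0, 0, 1])
```

The returned matrix has the frame's axes as columns, so transforming a point into the system frame is `R.T @ (p - origin)`.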
Specifically, in step (3), the width, height, and length of the cuboid are parallel to the X, Y, and Z axes of the system coordinate system respectively, and the system Z axis passes through the center points of the front and rear faces of the cuboid.
Specifically, in step (4), the theoretical sub-pixel coordinates are computed as follows:
(4-1) Let the coordinates of a point P in the system coordinate system be (X_w, Y_w, Z_w), its actual coordinates in the camera coordinate system be (X_c, Y_c, Z_c), and the corresponding sub-pixel coordinates in the image coordinate system be (u, v). P transforms from the system coordinate system to the camera coordinate system by
[X_c, Y_c, Z_c]^T = R [X_w, Y_w, Z_w]^T + T
where R is a 3×3 orthogonal matrix and T is a 3×1 vector; R and T are the extrinsic parameters of the camera.
(4-2) Normalize (X_c, Y_c, Z_c) to (X_c/Z_c, Y_c/Z_c, 1). This coordinate is related to the theoretical linear-model coordinate (X'_c/Z'_c, Y'_c/Z'_c, 1) by
X'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) x + 2 k_3 x y + k_4 (r^2 + 2 x^2)
Y'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) y + k_3 (r^2 + 2 y^2) + 2 k_4 x y
where x = X_c/Z_c, y = Y_c/Z_c, r^2 = x^2 + y^2, and k_1, k_2, k_3, k_4, k_5 are the five components of the lens distortion coefficient vector k.
(4-3) From the pinhole imaging linear model, P transforms from the camera coordinate system to the image coordinate system by
[u, v, 1]^T = A [X'_c/Z'_c, Y'_c/Z'_c, 1]^T
where
A = [[f_x, s, u_0], [0, f_y, v_0], [0, 0, 1]]
is the intrinsic matrix of the camera; s is the skew factor of the camera, i.e. the angle between the two axes of the imaging plane; f_x = f/d_x and f_y = f/d_y, where f is the focal length of the camera and d_x, d_y are the physical sizes of a pixel along the image x and y directions, all known parameters supplied by the camera manufacturer; and (u_0, v_0) are the coordinates of the image coordinate system origin o_i (defined in physical-length units) expressed in the pixel-unit image coordinate system.
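The transform chain of steps (4-1) to (4-3) can be sketched as follows. This is an illustration, not the patent's implementation: `project_point`, the sample intrinsic matrix `A`, and the zero distortion vector are chosen purely for demonstration.

```python
import numpy as np

def project_point(Pw, R, T, A, k):
    """Project a world point to sub-pixel image coordinates following
    steps (4-1)-(4-3): extrinsic transform, normalisation, radial and
    tangential distortion, then the intrinsic matrix. k = (k1..k5)."""
    Xc, Yc, Zc = R @ np.asarray(Pw, float) + T       # (4-1) system -> camera
    x, y = Xc / Zc, Yc / Zc                          # (4-2) normalise
    k1, k2, k3, k4, k5 = k
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k5 * r2**3
    xd = radial * x + 2 * k3 * x * y + k4 * (r2 + 2 * x * x)
    yd = radial * y + k3 * (r2 + 2 * y * y) + 2 * k4 * x * y
    u, v, _ = A @ np.array([xd, yd, 1.0])            # (4-3) intrinsics
    return u, v

# Sample intrinsics: 800 px focal length, principal point (320, 240), no skew.
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
u, v = project_point([0, 0, 1000], np.eye(3), np.zeros(3), A, np.zeros(5))
```

A point on the optical axis of this sample camera projects to the principal point, which is a quick sanity check of the chain.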
Specifically, in step (5), the spatial relative error is computed as follows:
(5-1) Taking the theoretical sub-pixel point as a center, determine a circular or square error range; on the left and right camera imaging planes, randomly select points P_li (i = 1, 2, ..., N) and P_ri (i = 1, 2, ..., N) within that range, and pair each P_li with each P_ri, forming N^2 combinations.
(5-2) Using the formulas of steps (4-2) and (4-3), obtain the actual camera coordinates (X_c/Z_c, Y_c/Z_c, 1) corresponding to each pixel of step (5-1). The vector of the point P_i in the left camera coordinate system satisfies P_li = B_li t_li, where t_li is an arbitrary scalar and B_li = [X_lic/Z_lic, Y_lic/Z_lic, 1]^T; similarly, in the right camera coordinate system, P_ri = B_ri t_ri. In the system coordinate system, the vector from the point O_l to the point P_li is
P_lgi = R_lgi B_li t_li + T_lgi
and similarly the vector from the point O_r to the point P_ri is
P_rgi = R_rgi B_ri t_ri + T_rgi.
Solving the equation system formed by these two relations yields the coordinates of the point P_i in the system coordinate system.
(5-3) After the coordinates of the N^2 spatial points in the system coordinate system have been obtained, compute the mean distance between these N^2 points and the original coordinate point, and take it as the relative error of the original spatial point.
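A common way to solve the step (5-2) equation system for two generally skew rays is the least-squares midpoint method. The sketch below, including the function names and the sample rays, is an assumption rather than the patent's prescribed solver; the ray directions are assumed to be already rotated into the system frame.

```python
import numpy as np

def midpoint_triangulate(o_l, d_l, o_r, d_r):
    """Intersect two (generally skew) rays o + t*d at the midpoint of
    their closest-approach segment, a standard reading of solving the
    two-ray equation system of step (5-2)."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    # Solve [d_l, -d_r] [t_l, t_r]^T ~= o_r - o_l in the least-squares sense.
    M = np.column_stack([d_l, -d_r])
    t, *_ = np.linalg.lstsq(M, o_r - o_l, rcond=None)
    p_l = o_l + t[0] * d_l
    p_r = o_r + t[1] * d_r
    return 0.5 * (p_l + p_r)

def relative_error(P, dirs_l, dirs_r, o_l, o_r):
    """Mean distance between the original point P and the points rebuilt
    from every perturbed pixel pair (N^2 combinations), per step (5-3)."""
    dists = [np.linalg.norm(midpoint_triangulate(o_l, dl, o_r, dr) - P)
             for dl in dirs_l for dr in dirs_r]
    return float(np.mean(dists))

# Sample rays from two optical centres that meet at (0, 0, 1000).
p = midpoint_triangulate(np.array([-100.0, 0, 0]), np.array([100.0, 0, 1000]),
                         np.array([100.0, 0, 0]), np.array([-100.0, 0, 1000]))
```

With unperturbed directions the reconstructed point coincides with the original one and the relative error is zero; perturbing the pixel samples yields the nonzero error values that the map visualizes.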
Specifically, the spatial relative error distribution at the position of the surgical instrument is visualized as follows: define a grayscale or color bar, assign each error value a gray level or color, build a three-dimensional relative-error grayscale or color map, and, according to the distance between the light points and the binocular optical navigation system, display the two-dimensional grayscale or color map together with the positions of the surgical instrument's light points.
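A minimal sketch of the gray-bar assignment described above; the linear mapping and the 256 levels are illustrative choices, not mandated by the patent.

```python
import numpy as np

def error_to_gray(errors, levels=256):
    """Map each error value onto a linear gray bar: the smallest error
    gets level 0, the largest gets the top level."""
    e = np.asarray(errors, float)
    lo, hi = e.min(), e.max()
    if hi == lo:
        # Degenerate case: all errors equal, map everything to level 0.
        return np.zeros_like(e, dtype=int)
    return np.round((e - lo) / (hi - lo) * (levels - 1)).astype(int)
```

The resulting integer levels can be written directly into an image slice at the instrument's mean depth to form the two-dimensional map.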
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The method proposed by the invention for computing the spatial relative error distribution of the available field of view of a binocular vision system can visualize that distribution at the position of the surgical instrument, guiding the surgeon to operate the instrument in regions of smaller spatial relative error and thereby improving the precision of surgical navigation.
Brief description of the drawings
Fig. 1 illustrates the definition of the system coordinate system in the method of the invention and the imaging of a spatial point in the left and right cameras.
Fig. 2 shows the discrete spatial points distributed in a rectangular pattern in the method of the invention.
Fig. 3 is a schematic diagram of the pinhole imaging model.
Fig. 4 is a schematic diagram of a square error range determined with a theoretical sub-pixel point as its center.
Fig. 5 is a schematic diagram of the relative-error distribution map of the discrete spatial points over the field of view.
Fig. 6 shows the tracer displaying the two-dimensional field-of-view relative-error map and the light point positions.
Detailed description of the invention
The present invention is described in further detail below in conjunction with embodiments and the accompanying drawings, but embodiments of the present invention are not limited thereto.
Embodiment 1
This embodiment is a tracer design method applied to a surgical navigation marker point error indicator; a tracer designed with this method can further improve the precision of surgical navigation.
The tracer determines the spatial point range to be examined by defining a set of discrete spatial points distributed within a cuboid, symmetric about the Z axis of the binocular optical navigation system. For each spatial point, the theoretical sub-pixel coordinates on the left and right camera imaging planes are computed, and spatial points whose theoretical sub-pixel falls outside the imaging plane are rejected. The 8 pixels adjacent to each theoretical sub-pixel point are then taken, the corresponding spatial points are reconstructed, and the mean distance between these reconstructed points and the original spatial point is taken as the spatial relative error of the original point. A grayscale or color bar is then defined, each error value is assigned a gray level or color, and, according to the distance between the light points and the binocular optical navigation system, the two-dimensional grayscale or color map is displayed together with the light point positions. The concrete steps are as follows:
First, obtain the intrinsic and extrinsic parameters of the binocular optical navigation system. These parameters are required both for computing the theoretical sub-pixel coordinates of a spatial point on the left and right imaging planes and for reconstructing spatial point coordinates in the reverse direction. For a general binocular optical navigation system they are obtained by calibrating the left and right cameras; for a commercial system they are supplied by the manufacturer. In the present invention, the binocular optical navigation system used is a calibrated near-infrared binocular optical positioning system.
Next, determine the system coordinate system of the binocular optical navigation system. The system coordinate system is defined so that the positions of the discrete spatial points are convenient to specify: the X axis lies along the line through the optical centers of the two cameras, the origin is their midpoint, and the positive X direction points from the origin toward the right camera's optical center; the Y axis passes through the origin perpendicular to the Z axes of the left and right camera coordinate systems, with the positive direction pointing upward in space; the Z axis is determined by the right-hand rule, with the positive direction along the shooting direction of the cameras. As shown in Fig. 1, (X_w, Y_w, Z_w, O_w) is the system coordinate system, (X_l, Y_l, Z_l, O_l) the left camera coordinate system, (X_r, Y_r, Z_r, O_r) the right camera coordinate system, and I_l, I_r the left and right imaging planes. The point p_l in the left camera coordinate system and the point p_r in the right camera coordinate system are the images of the point P_w of the system coordinate system; the epipolar plane Π is determined by the three points O_l, p_l, e_l, where e_l and e_r are the intersections of the optical-center line O_l O_r with I_l and I_r respectively. Then define the set of discrete spatial points distributed within a cuboid. In this embodiment, the first plane of discrete points is defined at z = 300 mm along the positive Z axis of the system coordinate system; the discrete region extends 1200 mm in depth along Z, so the last plane of discrete points lies at z = 1500 mm; each plane is W = 800 mm wide and H = 800 mm high. The width, height, and length of the cuboid formed by the discrete points are parallel to the system X, Y, and Z axes respectively, and the system Z axis passes through the centers of the front and rear faces. The points are spaced at fixed intervals dx = 5 mm, dy = 5 mm, dz = 5 mm along the X, Y, and Z axes and are arranged symmetrically about the Z axis, as shown in Fig. 2.
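The discrete point set of this embodiment can be generated as in the sketch below; the helper name and the coarser demonstration pitch are illustrative, not prescribed by the patent.

```python
import numpy as np

def discrete_points(z0=300.0, depth=1200.0, W=800.0, H=800.0, d=5.0):
    """Grid of discrete points matching the embodiment: planes from
    z = 300 mm to z = 1500 mm, 800 x 800 mm faces, fixed pitch d,
    arranged symmetrically about the system Z axis."""
    xs = np.arange(-W / 2, W / 2 + d, d)
    ys = np.arange(-H / 2, H / 2 + d, d)
    zs = np.arange(z0, z0 + depth + d, d)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    return np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)

# Coarser 100 mm pitch for a quick check; the embodiment uses 5 mm.
pts = discrete_points(d=100.0)
```

At the embodiment's 5 mm pitch the grid contains 161 x 161 x 241 points, so in practice the error map is evaluated plane by plane rather than held in memory all at once.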
According to the pinhole imaging principle, project the discrete spatial points onto the imaging planes of the left and right cameras and compute the theoretical sub-pixel coordinates; the model is shown schematically in Fig. 3. The computation involves three coordinate systems: the system coordinate system, the camera coordinate system, and the image coordinate system. The system coordinate system was defined above; the camera and image coordinate systems are briefly explained here. In Fig. 3, O_c is the optical center of the camera; X_c and Y_c are parallel to the image plane, and Z_c is perpendicular to the image plane and is the optical axis of the camera; O_c together with the X_c, Y_c, Z_c axes forms the camera coordinate system. The image coordinate system is two-dimensional, in pixel units, with its origin at the top-left corner of the image; the coordinates (u, v) of a pixel in a digital image also give the row and column of that pixel in the image array. The computation proceeds as follows. In Fig. 3, the point P has coordinates (X_w, Y_w, Z_w) in the system coordinate system and (X_c, Y_c, Z_c) in the camera coordinate system; a rotation matrix R and a translation vector T relate the two coordinate systems by
[X_c, Y_c, Z_c]^T = R [X_w, Y_w, Z_w]^T + T
where R is a 3×3 orthogonal matrix and T is a 3×1 vector; these are the extrinsic parameters of the camera. The pinhole model is an idealization: a real imaging system has lens distortion and does not satisfy this linear relation exactly, so a nonlinear model is introduced in the transformation to obtain the sub-pixel coordinates accurately. Normalize the linear-model camera coordinates (X_c, Y_c, Z_c) to (X_c/Z_c, Y_c/Z_c, 1). Under the influence of radial and tangential lens distortion, the actual camera coordinates (X_c/Z_c, Y_c/Z_c, 1) relate to the linear-model coordinates (X'_c/Z'_c, Y'_c/Z'_c, 1) by
X'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) x + 2 k_3 x y + k_4 (r^2 + 2 x^2)
Y'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) y + k_3 (r^2 + 2 y^2) + 2 k_4 x y
where x = X_c/Z_c, y = Y_c/Z_c, r^2 = x^2 + y^2, and k_1, k_2, k_3, k_4, k_5 are the five components of the distortion coefficient vector k. Defining the pixel coordinates of the sub-pixel point as (u, v), the pinhole linear model transforms P from the camera coordinate system to the image coordinate system by
[u, v, 1]^T = A [X'_c/Z'_c, Y'_c/Z'_c, 1]^T
where
A = [[f_x, s, u_0], [0, f_y, v_0], [0, 0, 1]]
is the intrinsic matrix of the camera; s is the skew factor of the camera, i.e. the angle between the two axes of the imaging plane; f_x = f/d_x and f_y = f/d_y, with f the focal length of the camera and d_x, d_y the physical sizes of a pixel along the image x and y directions, all known parameters supplied by the camera manufacturer.
Having computed the theoretical sub-pixel coordinates (u, v) of a spatial point, take the smallest square region centered on the theoretical sub-pixel point, i.e. the region formed by the 8 pixels adjacent to it, as shown in Fig. 4. From the 8 pixels in each of the left and right cameras, take one point on each side to form a pair and reconstruct the spatial point that projects onto those two pixels, for a total of 64 combinations; then take the mean distance between these reconstructed points and the original spatial point as the relative error of the original point. The computation is as follows. Suppose the theoretical sub-pixel coordinates of P are (u, v) and the intrinsic matrix of the camera is A; then the normalized camera coordinates (X'_c/Z'_c, Y'_c/Z'_c, 1) of P and the theoretical sub-pixel coordinates satisfy
[X'_c/Z'_c, Y'_c/Z'_c, 1]^T = A^{-1} [u, v, 1]^T.
Taking radial and tangential lens distortion into account, the actual camera coordinates (X_c/Z_c, Y_c/Z_c, 1) and the ideal normalized coordinates (X'_c/Z'_c, Y'_c/Z'_c, 1) correspond by
X'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) x + 2 k_3 x y + k_4 (r^2 + 2 x^2)
Y'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) y + k_3 (r^2 + 2 y^2) + 2 k_4 x y
where x = X_c/Z_c, y = Y_c/Z_c, and r^2 = x^2 + y^2; from these relations the actual camera coordinates (X_c/Z_c, Y_c/Z_c, 1) can be solved. The vector of the point P_i in the left camera coordinate system therefore satisfies P_li = B_li t_li, where t_li is an arbitrary scalar and B_li = [X_lic/Z_lic, Y_lic/Z_lic, 1]^T; similarly, in the right camera coordinate system, P_ri = B_ri t_ri. In the system coordinate system, the vector from the point O_l to the point P_li is
P_lgi = R_lgi B_li t_li + T_lgi
and likewise the vector from the point O_r to the point P_ri is
P_rgi = R_rgi B_ri t_ri + T_rgi.
Solving the equation system formed by these two relations yields the coordinates of the point P_i in the system coordinate system. After the coordinates of all 64 spatial points in the system coordinate system have been obtained, compute the mean distance between these 64 points and the original coordinate point, and take it as the relative error of the original spatial point.
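Recovering the actual normalized coordinates from the ideal ones requires inverting the distortion model. The patent does not prescribe a solver, so the fixed-point iteration below is one possible numerical approach; the function names and sample coefficients are illustrative.

```python
import numpy as np

def distort(x, y, k):
    """Forward model of step (4-2): ideal linear-model coordinates
    computed from the actual normalised coordinates (x, y)."""
    k1, k2, k3, k4, k5 = k
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k5 * r2**3
    xd = radial * x + 2 * k3 * x * y + k4 * (r2 + 2 * x * x)
    yd = radial * y + k3 * (r2 + 2 * y * y) + 2 * k4 * x * y
    return xd, yd

def undistort(xd, yd, k, iters=20):
    """Invert the model by fixed-point iteration: start from the ideal
    coordinates and repeatedly remove the tangential term and divide
    out the radial factor until the estimate settles."""
    k1, k2, k3, k4, k5 = k
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2**2 + k5 * r2**3
        tx = 2 * k3 * x * y + k4 * (r2 + 2 * x * x)
        ty = k3 * (r2 + 2 * y * y) + 2 * k4 * x * y
        x = (xd - tx) / radial
        y = (yd - ty) / radial
    return x, y
```

For the small distortion coefficients typical of calibrated cameras the iteration converges in a handful of steps, so the round trip distort-then-undistort reproduces the original coordinates.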
Then define a grayscale or color bar according to the range of error values, assign each spatial point's error value a gray level or color, and form a three-dimensional grayscale or color map, as shown in Fig. 5. Compute the mean Z coordinate of all light points on the surgical instrument, and display the two-dimensional grayscale or color map of the system XOY plane at that position together with the light point positions, as shown in Fig. 6, where A marks the positions of the three light points and B marks the mean of the three light points' Z coordinates. This helps the doctor optimize the camera placement of the binocular optical navigation system before the operation and, during the operation, keep the instrument navigating in regions of smaller spatial relative error.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited to it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (7)

1. A tracer design method applied to a surgical navigation marker point error indicator, characterized in that the method computes the spatial relative error distribution of the available field of view of the surgical navigation binocular optical navigation system and then visualizes the spatial relative error distribution at the position of the surgical instrument;
the spatial relative error distribution of the available field of view of the binocular optical navigation system is computed as follows:
(1) obtain the intrinsic and extrinsic parameters of the binocular optical navigation system;
(2) determine the system coordinate system of the binocular optical navigation system;
(3) define a set of discrete spatial points distributed within a cuboid;
(4) project the discrete spatial points onto the imaging planes of the left and right cameras according to the pinhole imaging principle, and compute the theoretical sub-pixel coordinates;
(5) taking each theoretical sub-pixel point as a center, determine a circular or square error range; randomly select several points within that range on each of the left and right imaging planes, the same number on each side; reconstruct the spatial coordinates of the paired pixels according to the epipolar geometry of binocular vision; compute the distances between the reconstructed points and the original spatial point; and take the mean of these distances as the spatial relative error of the original point.
2. The tracer design method applied to a surgical navigation marker point error indicator according to claim 1, characterized in that, in step (1), the intrinsic and extrinsic parameters of the binocular optical navigation system are obtained by calibrating the left and right cameras, or are supplied by the manufacturer.
3. The tracer design method applied to a surgical navigation marker point error indicator according to claim 1, characterized in that, in step (2), the system coordinate system of the binocular optical navigation system is defined as follows: the X axis lies along the line through the optical centers of the two cameras, the origin is their midpoint, and the positive X direction points from the origin toward the optical center of the right camera; the Y axis passes through the origin perpendicular to the Z axes of the left and right camera coordinate systems, with the positive direction pointing upward in space; the Z axis is determined by the right-hand rule, with the positive direction pointing along the shooting direction of the cameras.
4. The tracer design method applied to a surgical navigation marker point error indicator according to claim 1, characterized in that, in step (3), the width, height, and length of the cuboid are parallel to the X, Y, and Z axes of the system coordinate system respectively, and the system Z axis passes through the center points of the front and rear faces of the cuboid.
5. The tracer design method applied to a surgical navigation marker point error indicator according to claim 3, characterized in that, in step (4), the theoretical sub-pixel coordinates are computed as follows:
(4-1) let the coordinates of a point P in the system coordinate system be (X_w, Y_w, Z_w), its actual coordinates in the camera coordinate system be (X_c, Y_c, Z_c), and the corresponding sub-pixel coordinates in the image coordinate system be (u, v); P transforms from the system coordinate system to the camera coordinate system by
[X_c, Y_c, Z_c]^T = R [X_w, Y_w, Z_w]^T + T
where R is a 3×3 orthogonal matrix and T is a 3×1 vector; R and T are the extrinsic parameters of the camera;
(4-2) normalize (X_c, Y_c, Z_c) to (X_c/Z_c, Y_c/Z_c, 1); this coordinate is related to the theoretical linear-model coordinate (X'_c/Z'_c, Y'_c/Z'_c, 1) by
X'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) x + 2 k_3 x y + k_4 (r^2 + 2 x^2)
Y'_c/Z'_c = (1 + k_1 r^2 + k_2 r^4 + k_5 r^6) y + k_3 (r^2 + 2 y^2) + 2 k_4 x y
where x = X_c/Z_c, y = Y_c/Z_c, r^2 = x^2 + y^2, and k_1, k_2, k_3, k_4, k_5 are the five components of the lens distortion coefficient vector k;
(4-3) from the pinhole imaging linear model, P transforms from the camera coordinate system to the image coordinate system by
[u, v, 1]^T = A [X'_c/Z'_c, Y'_c/Z'_c, 1]^T
where
A = [[f_x, s, u_0], [0, f_y, v_0], [0, 0, 1]]
is the intrinsic matrix of the camera; s is the skew factor of the camera, i.e. the angle between the two axes of the imaging plane; f_x = f/d_x and f_y = f/d_y, where f is the focal length of the camera and d_x, d_y are the physical sizes of a pixel along the image x and y directions, all known parameters supplied by the camera manufacturer; and (u_0, v_0) are the coordinates of the image coordinate system origin o_i (defined in physical-length units) expressed in the pixel-unit image coordinate system.
6. The tracer design method applied to a surgical navigation marker point error indicator according to claim 5, characterized in that, in step (5), the spatial relative error is computed as follows:
(5-1) taking the theoretical sub-pixel point as a center, determine a circular or square error range; on the left and right camera imaging planes, randomly select points P_li (i = 1, 2, ..., N) and P_ri (i = 1, 2, ..., N) within that range, and pair each P_li with each P_ri, forming N^2 combinations;
(5-2) using the formulas of steps (4-2) and (4-3), obtain the actual camera coordinates (X_c/Z_c, Y_c/Z_c, 1) corresponding to each pixel of step (5-1); the vector of the point P_i in the left camera coordinate system satisfies P_li = B_li t_li, where t_li is an arbitrary scalar and B_li = [X_lic/Z_lic, Y_lic/Z_lic, 1]^T; similarly, in the right camera coordinate system, P_ri = B_ri t_ri; in the system coordinate system, the vector from the point O_l to the point P_li is
P_lgi = R_lgi B_li t_li + T_lgi
and similarly the vector from the point O_r to the point P_ri is
P_rgi = R_rgi B_ri t_ri + T_rgi;
solving the equation system formed by these two relations yields the coordinates of the point P_i in the system coordinate system;
(5-3) after the coordinates of the N^2 spatial points in the system coordinate system have been obtained, compute the mean distance between these N^2 points and the original coordinate point, and take it as the relative error of the original spatial point.
7. The tracer design method applied to a surgical navigation marker point error indicator according to claim 1, characterized in that the spatial relative error distribution at the position of the surgical instrument is visualized as follows: define a grayscale or color bar, assign each error value a gray level or color, build a three-dimensional relative-error grayscale or color map, and, according to the distance between the light points and the binocular optical navigation system, display the two-dimensional grayscale or color map together with the positions of the surgical instrument's light points.
CN201410149067.1A 2014-04-14 2014-04-14 Tracer design method for a surgical navigation marker point error indicator Expired - Fee Related CN103948431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410149067.1A CN103948431B (en) 2014-04-14 2014-04-14 A kind of tracer method for designing being applied to surgical navigational gauge point error indicator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410149067.1A CN103948431B (en) 2014-04-14 2014-04-14 A kind of tracer method for designing being applied to surgical navigational gauge point error indicator

Publications (2)

Publication Number Publication Date
CN103948431A CN103948431A (en) 2014-07-30
CN103948431B true CN103948431B (en) 2016-01-20

Family

ID=51325887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410149067.1A Expired - Fee Related CN103948431B (en) 2014-04-14 2014-04-14 A kind of tracer method for designing being applied to surgical navigational gauge point error indicator

Country Status (1)

Country Link
CN (1) CN103948431B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105919669B (en) * 2016-07-01 2018-07-20 华南理工大学 A method of realizing that optical operation navigation surgical instrument is demarcated using caliberating device
CN106344154B (en) * 2016-09-14 2018-11-09 大连理工大学 A kind of scaling method of the surgical instrument tip point based on maximal correlation entropy
CN107421476A (en) * 2017-05-11 2017-12-01 成都飞机工业(集团)有限责任公司 A kind of spatial hole position Measuring datum error compensation method
CN115778544B (en) * 2022-12-05 2024-02-27 方田医创(成都)科技有限公司 Surgical navigation precision indicating system, method and storage medium based on mixed reality

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6083163A (en) * 1997-01-21 2000-07-04 Computer Aided Surgery, Inc. Surgical navigation system and method using audio feedback
CN101327148A (en) * 2008-07-25 2008-12-24 清华大学 Instrument recognizing method for passive optical operation navigation
US9165114B2 (en) * 2010-03-11 2015-10-20 Koninklijke Philips N.V. Method and system for characterizing and visualizing electromagnetic tracking errors

Also Published As

Publication number Publication date
CN103948431A (en) 2014-07-30

Similar Documents

Publication Publication Date Title
CN107093195B (en) A kind of locating mark points method of laser ranging in conjunction with binocular camera
CN103292710B (en) A kind of distance measurement method applying binocular vision vision range finding principle
CN104376558B (en) Cuboid-based intrinsic parameter calibration method for Kinect depth camera
CN102927908B (en) Robot eye-on-hand system structured light plane parameter calibration device and method
CN103398660B (en) For obtaining the structured light vision sensor parameter calibration method of weld bead height information
CN111192235B (en) Image measurement method based on monocular vision model and perspective transformation
CN103948431B (en) A kind of tracer method for designing being applied to surgical navigational gauge point error indicator
Xie et al. Study on construction of 3D building based on UAV images
CN105043250B (en) A kind of double-visual angle data alignment method based on 1 common indicium points
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN104075688A (en) Distance measurement method of binocular stereoscopic gazing monitoring system
CN104864807A (en) Manipulator hand-eye calibration method based on active binocular vision
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN1971206A (en) Calibration method for binocular vision sensor based on one-dimension target
CN102032878A (en) Accurate on-line measurement method based on binocular stereo vision measurement system
CN1975324A (en) Double-sensor laser visual measuring system calibrating method
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN104567666A (en) Measuring method for roller bearing block spatial position
CN1561502A (en) Strapdown system for three-dimensional reconstruction
CN110136211A (en) A kind of workpiece localization method and system based on active binocular vision technology
CN104034263A (en) Non-contact measurement method for sizes of forged pieces
CN105654476A (en) Binocular calibration method based on chaotic particle swarm optimization algorithm
CN103198481B (en) A kind of camera marking method
CN105091866A (en) Part position and posture identification visual system and calibration method thereof
CN102519434A (en) Test verification method for measuring precision of stereoscopic vision three-dimensional recovery data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160120

CF01 Termination of patent right due to non-payment of annual fee