CN107621226A - Three-dimensional scanning method and system based on multi-view stereo vision - Google Patents

Three-dimensional scanning method and system based on multi-view stereo vision

Info

Publication number
CN107621226A
CN107621226A (application CN201710585970.6A)
Authority
CN
China
Prior art keywords
light stripe
group
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710585970.6A
Other languages
Chinese (zh)
Inventor
徐渊
王亚洲
边育心
周建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201710585970.6A priority Critical patent/CN107621226A/en
Publication of CN107621226A publication Critical patent/CN107621226A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional scanning method and system based on multi-view stereo vision. The method comprises: S1: acquiring light-stripe image sets through at least three groups of vision components arranged around the object to be scanned at the same level height, each light-stripe image set comprising a left light-stripe image and a right light-stripe image; S2: extracting the stripe centers of all left and right light-stripe images; S3: converting the stripe centers into intra-group three-dimensional coordinate data; S4: performing joint calibration and converting all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data; S5: fusing the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned. In the three-dimensional scanning method and system based on multi-view stereo vision of the present invention, at least three groups of vision components perform omnidirectional lateral scanning of the surface of the object to be scanned from different viewpoints, which effectively solves the blind-zone problem in scanning and improves the scanning efficiency and accuracy of the system.

Description

Three-dimensional scanning method and system based on multi-view stereo vision
Technical field
The present invention relates to the field of three-dimensional scanning, and more particularly to a three-dimensional scanning method and system based on multi-view stereo vision.
Background technology
Three-dimensional scanning has been one of the hot technologies in recent years. Owing to its simple structure and relatively high precision, it is widely applied in fields such as 3D scanning and industrial measurement.
At present, three-dimensional point-cloud scanning falls into two major classes, contact and non-contact:
(1) Contact scanning is mainly used for three-dimensional coordinate measurement; its precision is high and its repeatability is strong. However, it resolves only small geometric features, cannot measure parts with large free-form surfaces, takes a long time to scan, and is expensive.
(2) Non-contact scanning mainly uses computer vision techniques and scans with a binocular or multi-view vision system; its precision is adequate and its structure simple. Because traditional binocular stereo vision is a passive measurement technique, it cannot effectively solve the mismatching caused by repeated regions, low-texture regions and similar-texture regions in the image. Structured-light methods solve the mismatching problem well, but objects with complex surfaces (such as prismatic objects or polyhedra with complex curved surfaces) require repeated scans. A three-dimensional laser scanning system using monocular or binocular stereo vision can obtain the longitudinal point-cloud information of the object surface only from one angle; for objects with complex curved surfaces or partially occluded regions, scanning blind zones arise, causing missing point-cloud data and greatly increasing the scanning and data-processing time of the system.
Summary of the invention
The technical problem to be solved by the present invention is to provide an improved three-dimensional scanning method and system based on multi-view stereo vision.
The technical solution adopted by the present invention to solve this problem is as follows: a three-dimensional scanning method based on multi-view stereo vision is provided, comprising the following steps:
S1: acquiring light-stripe image sets through at least three groups of vision components arranged at different positions around the object to be scanned at the same level height, each group of vision components comprising two cameras and a line laser located between the two cameras, and each light-stripe image set comprising a left light-stripe image and a right light-stripe image;
S2: extracting the stripe centers of all the left and right light-stripe images respectively;
S3: converting the stripe centers of the left and right light-stripe images in each light-stripe image set into intra-group three-dimensional coordinate data;
S4: performing joint calibration between the intra-group three-dimensional coordinate data of each group of vision components, and converting all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data;
S5: fusing the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned.
Preferably, in step S2, the stripe centers are extracted using a gradient centroid method.
Preferably, the gradient centroid method comprises:
T1: obtaining the candidate centroid coordinates (x_tn, y_tn) by formula (1):

$$x_{tn} = \frac{\sum_{x_i=L_k}^{R_k} Q(x_i, y_j)\,x_i}{\sum_{x_i=L_k}^{R_k} Q(x_i, y_j)}, \qquad y_{tn} = y_j \qquad \ldots(1)$$

where Q is the pixel value of the light-stripe image, n is the pixel column index, L_k is the minimum column, R_k is the maximum column, and t indexes the candidate centroid points;
T2: calculating the stripe-center coordinates (x_c, y_c) by formula (2):

$$x_c = \operatorname{MINxt}\left(\left|x_{c1}-x_{t0}\right|,\ \left|x_{c1}-x_{t1}\right|,\ \left|x_{c1}-x_{t2}\right|,\ \ldots\right), \qquad y_c = y_t \qquad \ldots(2)$$

where the subscript c denotes the actual centroid point, and MINxt is a function that compares the candidate centroid coordinates and returns the one most similar to the centroid coordinate of the previous row.
Preferably, in step S3, the intra-group three-dimensional coordinate data are calculated by formulas (3) and (4):

$$X_l = f\,\frac{x_P}{z_P}, \qquad X_r = f\,\frac{x_P - B}{z_P}, \qquad Y = f\,\frac{y_P}{z_P} \qquad \ldots(3)$$

$$x_P = \frac{B\,X_l}{D}, \qquad y_P = \frac{B\,Y}{D}, \qquad z_P = \frac{B\,f}{D} \qquad \ldots(4)$$

where (x_P, y_P, z_P) is the three-dimensional coordinate of a point P on the object to be scanned, P_L(x_l, y_l) and P_R(x_r, y_r) are the stripe-image coordinates of P projected on the left and right cameras, f is the camera focal length, B is the binocular baseline distance, and the disparity D = x_l - x_r.
Preferably, in step S4, the calibration method is: performing binocular calibration on the left/right cameras of every two groups of light-stripe image sets to obtain a rotation matrix R and a translation matrix T, and then converting the intra-group three-dimensional coordinate data of the two groups by the formula p = Rq + T, where p and q are respectively the intra-group three-dimensional coordinate data of the two groups.
Preferably, the number of vision components is four groups.
Preferably, in step S4, the world-coordinate data are obtained by formula (5):

No.1: X = X1 - U1, Z = L1 - Z1
No.2: X = L2 - Z2, Z = U2 - X2
No.3: X = U3 - X3, Z = Z3 - L3
No.4: X = Z4 - L4, Z = X4 - U4 … (5)
where No.1, No.2, No.3 and No.4 denote the four groups of vision components, L1-L4 are the distances from the four groups of vision components to the center point of the object under test, U1-U4 are the pixel abscissas of the corresponding image centers, and X1-X4 are the pixel abscissas of the stripe centers.
Preferably, in step S5, the point-cloud fusion comprises:
W1: for every two point clouds M and N, first computing the Euclidean distance from every point in cloud M to cloud N; if the Euclidean distance of a current point pair is less than a threshold, the current pair is defined as an overlapping point pair;
W2: iterating in this way to establish the overlapping point-pair relation between the two point clouds M and N;
W3: finding the corresponding registration points in the overlapping point sets by means of the normal feature of the point cloud;
W4: determining the final fused point between two corresponding registration points by interpolation, completing the point-cloud fusion.
A three-dimensional scanning system based on multi-view stereo vision is also provided; the system performs the above method and comprises:
at least three groups of vision components arranged at different positions around the object to be scanned at the same level height, each group of vision components comprising two cameras and a line laser located between the two cameras, the vision components being used to acquire light-stripe image sets, each set comprising a left light-stripe image and a right light-stripe image;
an extraction unit that extracts the stripe centers of all the left and right light-stripe images respectively;
a converting unit that converts the stripe centers of the left and right light-stripe images in each light-stripe image set into intra-group three-dimensional coordinate data;
a calibration unit that performs joint calibration between the intra-group three-dimensional coordinate data of each group and converts all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data;
a fusion unit that fuses the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned.
Preferably, the system further comprises a lifting platform that carries the vision components and can move up and down.
The beneficial effects of implementing the present invention are: in the three-dimensional scanning method and system based on multi-view stereo vision of the present invention, at least three groups of vision components perform omnidirectional lateral scanning of the surface of the object to be scanned from different viewpoints, which effectively solves the blind-zone problem in scanning; meanwhile the data-processing algorithms are optimized, improving the scanning efficiency and accuracy of the system.
Brief description of the drawings
The invention will be further described below with reference to the accompanying drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the three-dimensional scanning system based on multi-view stereo vision in some embodiments of the invention;
Fig. 2 is a schematic diagram of the positional relationship between the vision components and the scanned object in the three-dimensional scanning system in some embodiments of the invention;
Fig. 3 is the stereo-vision measurement model of the converting unit in Fig. 1;
Fig. 4 is a schematic diagram of the world-coordinate-system model converted by the calibration unit in Fig. 1;
Fig. 5 is a flowchart of the three-dimensional scanning method based on multi-view stereo vision in some embodiments of the invention.
Detailed description of the embodiments
In order that the technical features, objects and effects of the present invention may be more clearly understood, embodiments of the present invention are now described in detail with reference to the accompanying drawings.
Fig. 1 shows the three-dimensional scanning system based on multi-view stereo vision in some embodiments of the invention, which performs omnidirectional lateral scanning of the surface of the object to be scanned from different viewpoints. The system of this embodiment comprises vision components 10, an extraction unit 20, a converting unit 30, a calibration unit 40, a fusion unit 50 and a lifting platform (not shown). The vision components 10 perform three-dimensional scanning of the object to be scanned and acquire light-stripe image sets; the lifting platform carries the vision components 10 and moves them up and down; the extraction unit 20 extracts the stripe centers; the converting unit 30 converts the stripe centers into intra-group three-dimensional coordinate data; the calibration unit 40 performs joint calibration and converts all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data; and the fusion unit 50 fuses the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned.
The three-dimensional scanning system of this embodiment controls a precision lifting platform and uses four groups of binocular cameras arranged in a surrounding configuration to acquire line-laser light-stripe images at different positions; it extracts the stripe centers, performs stereo matching on the acquired light-stripe images, and computes the coordinates of all stripe centers using the disparity principle; it then converts the coordinates of the four channels of stripe centers into a unified world coordinate system, fuses the point-cloud data, and finally obtains the three-dimensional point-cloud data of the object rapidly. The core of the invention has two parts. First, the precision lifting platform is controlled and the four surrounding groups of binocular cameras acquire line-laser light-stripe images at different positions; to obtain the stripe centers of the line laser, a stripe-centroid extraction algorithm extracts the stripe centers, stereo matching is performed on the acquired light-stripe images, and the coordinates of all stripe centers are computed using the disparity principle. Second, the coordinates of the four channels of stripe centers are converted into a unified world coordinate system and the point-cloud data are fused, finally yielding the three-dimensional point-cloud data of the object rapidly.
The number of vision components 10 is at least three groups, which are respectively arranged at different positions around the object to be scanned at the same level height. Each group of vision components 10 comprises two cameras and a line laser located between the two cameras, and is used to acquire a light-stripe image set comprising a left light-stripe image and a right light-stripe image. Preferably, the two cameras are mounted horizontally and have identical parameters; in this embodiment the binocular cameras have a resolution of 640×480 and a binocular baseline of 9 cm. Alternatively, the camera parameters or configuration can be modified, for example changing the camera resolution to 1920×1080, or dynamically adjusting the binocular baseline according to the size of the scanned object.
As shown in Fig. 2, the number of vision components 10 is preferably four groups; that is, four surrounding groups of binocular cameras acquire line-laser light-stripe images at different positions around the object to be scanned in order to obtain the stripe centers of the line laser.
The extraction unit 20 is connected with the vision components 10 and extracts the stripe centers of all left and right light-stripe images respectively. Preferably, the extraction unit 20 uses a gradient centroid method. Specifically, the gradient centroid extraction first computes the gradient of every row of the light-stripe image and locates the rough position of the stripe-center region by gradient comparison; it then computes the centroid of this region, and finally extracts the center position by a neighborhood screening principle.
In the gradient centroid method, the candidate centroid coordinates (x_tn, y_tn) are first obtained by formula (1) above, where Q is the pixel value of the light-stripe image, n is the pixel column index, L_k is the minimum column, R_k is the maximum column, and t indexes the candidate centroid points; the stripe-center coordinates (x_c, y_c) are then calculated by formula (2) above, where the subscript c denotes the actual centroid point and MINxt compares the candidate centroid coordinates and returns the one most similar to the centroid coordinate of the previous row.
It should be noted that, because each row of the light-stripe image may contain several light spots, formula (1) yields several candidate centroids (x_tn, y_tn); these candidates are compared, and the one closest to the centroid of the previous row is taken as the final result. A brief sketch of this procedure is given below.
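By way of illustration only, a minimal NumPy sketch of this per-row gradient/centroid extraction follows; the function name, gradient threshold and region-splitting heuristic are our own assumptions and are not specified in the patent:

```python
import numpy as np

def extract_stripe_centers(gray, grad_thresh=30.0):
    """Per-row light-stripe center extraction by a gradient/centroid scheme.

    gray: 2-D uint8 image containing a bright, roughly vertical laser stripe.
    Returns a list of (row, center_column) pairs.
    """
    img = gray.astype(np.float32)
    centers, prev_xc = [], None
    for y in range(img.shape[0]):
        row = img[y]
        grad = np.abs(np.diff(row))                  # row-wise gradient
        strong = np.flatnonzero(grad > grad_thresh)  # rough stripe region(s)
        if strong.size == 0:
            continue
        # split the strong-gradient columns into contiguous candidate regions
        splits = np.flatnonzero(np.diff(strong) > 5) + 1
        candidates = []
        for seg in np.split(strong, splits):
            lk, rk = seg[0], min(seg[-1] + 1, row.size - 1)  # [L_k, R_k]
            cols = np.arange(lk, rk + 1)
            w = row[lk:rk + 1]
            if w.sum() > 0:
                # formula (1): intensity-weighted centroid of the region
                candidates.append(float((cols * w).sum() / w.sum()))
        if not candidates:
            continue
        # formula (2): keep the candidate closest to the previous row's center
        xc = candidates[0] if prev_xc is None else \
            min(candidates, key=lambda x: abs(x - prev_xc))
        prev_xc = xc
        centers.append((y, xc))
    return centers
```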
The converting unit 30 is connected with the extraction unit 20 and converts the stripe centers of the left and right light-stripe images in each light-stripe image set into intra-group three-dimensional coordinate data.
Specifically, the epipolar lines of the binocular cameras in this embodiment are horizontal, so for any point of the stripe center in the left image, only the stripe-center point on the same row of the corresponding right image needs to be found as its match. Using the binocular disparity principle, the depth distance between the binocular cameras and the measured object is computed, realizing the conversion from two-dimensional to three-dimensional space. Fig. 3 shows the binocular stereo-vision measurement model: let the three-dimensional coordinate of a point P on the measured object be (x_P, y_P, z_P), and let the image coordinates of P projected on the left and right cameras be P_L(x_l, y_l) and P_R(x_r, y_r); the relations of formulas (3) and (4) above then follow from similar triangles, where f is the camera focal length, B is the binocular baseline distance, and the disparity D = x_l - x_r. The three-dimensional coordinates of any point in space can be calculated from formulas (3) and (4). A small numerical sketch follows.
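For illustration, formulas (3) and (4) reduce to the following disparity triangulation, sketched here in NumPy under the assumption that the image coordinates are already expressed relative to the principal point; the focal-length value is a placeholder:

```python
import numpy as np

def triangulate_stripe(xl, yl, xr, f=700.0, B=0.09):
    """Reconstruct 3-D points from stripe centers matched on the same row.

    xl, yl: stripe-center coordinates in the left image (pixels)
    xr:     matching stripe-center abscissas in the right image (pixels)
    f:      focal length in pixels (placeholder value)
    B:      binocular baseline in meters (9 cm in the embodiment)
    Returns an (N, 3) array of points in the left-camera frame.
    """
    xl, yl, xr = map(np.asarray, (xl, yl, xr))
    D = xl - xr                        # disparity D = x_l - x_r
    valid = D > 0                      # keep points in front of the cameras
    z = B * f / D[valid]               # formula (4): z_P = B f / D
    x = xl[valid] * z / f              # x_P = B X_l / D = X_l z / f
    y = yl[valid] * z / f              # y_P = B Y / D   = Y z / f
    return np.column_stack((x, y, z))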
The calibration unit 40 is connected with the converting unit 30; it performs joint calibration between the intra-group three-dimensional coordinate data of each group and converts all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data.
The calibration method of the calibration unit 40 is as follows: binocular calibration is performed on the left/right cameras of every two groups of light-stripe image sets to obtain a rotation matrix R and a translation matrix T, and the intra-group three-dimensional coordinate data of the two groups are then converted by the formula p = Rq + T, where p and q are respectively the intra-group three-dimensional coordinate data of the two groups. Alternatively, the calibration schemes in Matlab or OpenCV, or a calibration toolbox, can be used to obtain the extrinsic and intrinsic parameters of the binocular cameras.
Specifically, the left camera of each binocular pair and the left camera of the adjacent binocular pair are paired into a new binocular pair, and each such pair is calibrated by Zhang Zhengyou's method to obtain a translation matrix and a rotation matrix, with which its coordinate system is transformed into the three-dimensional coordinates of the other coordinate system, thereby unifying the coordinate systems; in effect the independent binocular camera coordinate systems are converted into a global coordinate system, which involves three-dimensional stitching. Because this stitching error is relatively large, some markers are usually placed in the measurement area to reduce it, and the stitching is performed with fixed index points, of which at least three are required. The coordinate-system conversion relation between corresponding three-dimensional point sets is realized by a quaternion method: from two groups of one-to-one corresponding three-dimensional point sets, the rotation matrix R and the translation vector T between the two coordinate systems are computed, after which the relation between the two point sets is p = Rq + T. A compact sketch of recovering R and T from matched point sets is given below.
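The patent cites a quaternion method for this step; an equivalent closed-form SVD (Kabsch) solution is sketched below with names of our own choosing, yielding the same rigid transform:

```python
import numpy as np

def fit_rigid_transform(q_pts, p_pts):
    """Least-squares R, T such that p ≈ R q + T, from matched 3-D point sets.

    q_pts, p_pts: (N, 3) arrays of corresponding points (N >= 3, not collinear).
    Uses the SVD (Kabsch) solution; the quaternion method cited by the patent
    produces the same transform.
    """
    q_pts, p_pts = np.asarray(q_pts, float), np.asarray(p_pts, float)
    q_c, p_c = q_pts.mean(axis=0), p_pts.mean(axis=0)   # centroids
    H = (q_pts - q_c).T @ (p_pts - p_c)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                            # reject a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = p_c - R @ q_c
    return R, T

# usage: (R @ q_pts.T).T + T maps group q into group p's coordinate system
```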
In some embodiments, in order to obtain complete three-dimensional point-cloud data, the point clouds of different viewing angles are established under the same coordinate system through the joint calibration of the binocular camera modules described above. As shown in Fig. 4, which views the whole system from above, the center of the lifting platform is the coordinate origin O. The line-laser binocular modules, i.e. the four groups of vision components 10, are denoted No.1, No.2, No.3 and No.4. No.1 and No.2 can be regarded as the boundaries of the first quadrant, and by analogy the four quadrants of the coordinate system are obtained. The Y axis of the world coordinate system represents the scanned height of the object and is determined by the lifting platform. The world-coordinate data are obtained by formula (5):
No.1: X = X1 - U1, Z = L1 - Z1
No.2: X = L2 - Z2, Z = U2 - X2
No.3: X = U3 - X3, Z = Z3 - L3
No.4: X = Z4 - L4, Z = X4 - U4 … (5)
where No.1, No.2, No.3 and No.4 denote the four groups of vision components 10, L1-L4 are the distances from the four groups of vision components 10 to the center point of the object under test, U1-U4 are the pixel abscissas of the corresponding image centers, and X1-X4 are the pixel abscissas of the stripe centers.
In this way the scanned cross-section point-cloud information from the four directions is mapped onto the world coordinate system, as illustrated in the sketch below.
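By way of illustration, formula (5) can be applied per measured point as in the following sketch; the function and argument names are our own, and the world Y coordinate is supplied separately by the lifting-platform height:

```python
def to_world(module, X_i, Z_i, U_i, L_i):
    """Map a measurement (X_i, Z_i) from vision component No.1..No.4 into the
    world X/Z plane per formula (5); Y comes from the lifting-platform height."""
    if module == 1:
        return (X_i - U_i, L_i - Z_i)
    if module == 2:
        return (L_i - Z_i, U_i - X_i)
    if module == 3:
        return (U_i - X_i, Z_i - L_i)
    if module == 4:
        return (Z_i - L_i, X_i - U_i)
    raise ValueError("module must be 1, 2, 3 or 4")
```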
The fusion unit 50 is connected with the calibration unit 40 and fuses the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned. Specifically, the point clouds from different viewing angles exhibit layering and overlap to different degrees, which inevitably causes redundancy in the point cloud and easily makes the model ambiguous during subsequent three-dimensional reconstruction. The purpose of the three-dimensional point-cloud fusion is to solve these problems and make the later point-cloud model smoother.
The point-cloud fusion comprises: first, for every two point clouds M and N, computing the Euclidean distance from every point in cloud M to cloud N, and defining a point pair as an overlapping point pair if its Euclidean distance is below a threshold; then iterating in this way to establish the overlapping point-pair relation between the two point clouds M and N; next, finding the corresponding registration points in the overlapping point sets by means of the normal feature of the point cloud; finally, determining the final fused point between two corresponding registration points by interpolation, completing the point-cloud fusion. It should be noted that the interpolation is implemented with bilinear interpolation; since bilinear interpolation is prior art, it is not described in detail here, as long as the relevant computation can be realized. A sketch of this fusion step follows.
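The following sketch illustrates steps W1, W2 and W4 with a k-d tree and midpoint interpolation; it deliberately omits the normal-feature screening of W3, and all names and the threshold value are our own assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_point_clouds(M, N, dist_thresh=0.002):
    """Merge two registered point clouds, replacing overlapping point pairs
    by interpolated (midpoint) points; a simplified stand-in for steps W1-W4
    that skips the normal-feature screening of W3.

    M, N: (n, 3) and (m, 3) point arrays already in the world frame.
    dist_thresh: Euclidean distance below which a pair counts as overlapping.
    """
    M, N = np.asarray(M, float), np.asarray(N, float)
    d, idx = cKDTree(N).query(M)       # W1: nearest N-neighbor of each M point
    overlap = d < dist_thresh          # W2: overlapping point pairs
    merged = 0.5 * (M[overlap] + N[idx[overlap]])  # W4: interpolated points
    keep = np.ones(len(N), dtype=bool)
    keep[idx[overlap]] = False         # drop N points consumed by a pair
    return np.vstack((M[~overlap], N[keep], merged))
```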
It should be noted that the extraction unit 20, the converting unit 30, the calibration unit 40 and the fusion unit 50 can each be independent units, or can be combined in pairs or even as a whole. They can be implemented as circuits, an MCU, a single-chip microcomputer, software, or the like.
The operating principle of the three-dimensional scanning system based on multi-view stereo vision in some embodiments of the invention is explained below with reference to Figs. 1-5 and to the three-dimensional scanning method of some embodiments of the invention.
The three-dimensional scanning method of this embodiment performs omnidirectional lateral scanning of the surface of the object to be scanned from different viewpoints, and comprises the following steps S1-S5.
In step S1, light-stripe image sets are acquired through at least three groups of vision components 10 arranged at different positions around the object to be scanned at the same level height; each group of vision components 10 comprises two cameras and a line laser located between the two cameras, and each light-stripe image set comprises a left light-stripe image and a right light-stripe image. Preferably, the two cameras are mounted horizontally and have identical parameters; in this embodiment the binocular cameras have a resolution of 640×480 and a binocular baseline of 9 cm. Alternatively, the camera parameters or configuration can be modified, for example changing the camera resolution to 1920×1080, or dynamically adjusting the binocular baseline according to the size of the scanned object.
The number of vision components 10 is preferably four groups; that is, four surrounding groups of binocular cameras acquire line-laser light-stripe images at different positions in order to obtain the stripe centers of the line laser.
In step S2, the stripe centers of all left and right light-stripe images are extracted respectively. Preferably, a gradient centroid method is used for the extraction. Specifically, the gradient of every row of the light-stripe image is computed first and the rough position of the stripe-center region is located by gradient comparison; the centroid of this region is then computed, and the center position is finally extracted by a neighborhood screening principle.
The gradient centroid method comprises:
T1: obtaining the candidate centroid coordinates (x_tn, y_tn) by formula (1) above, where Q is the pixel value of the light-stripe image, n is the pixel column index, L_k is the minimum column, R_k is the maximum column, and t indexes the candidate centroid points;
T2: calculating the stripe-center coordinates (x_c, y_c) by formula (2) above, where the subscript c denotes the actual centroid point and MINxt compares the candidate centroid coordinates and returns the one most similar to the centroid coordinate of the previous row.
It should be noted that, because each row of the light-stripe image may contain several light spots, formula (1) yields several candidate centroids (x_tn, y_tn); these candidates are compared, and the one closest to the centroid of the previous row is taken as the final result.
In step S3, the stripe centers of the left and right light-stripe images in each light-stripe image set are converted into intra-group three-dimensional coordinate data.
Specifically, the epipolar lines of the binocular cameras in this embodiment are horizontal, so for any point of the stripe center in the left image, only the stripe-center point on the same row of the corresponding right image needs to be found as its match. Using the binocular disparity principle, the depth distance between the binocular cameras and the measured object is computed, realizing the conversion from two-dimensional to three-dimensional space. As shown in the binocular stereo-vision measurement model of Fig. 3, let the three-dimensional coordinate of a point P on the measured object be (x_P, y_P, z_P), and let the image coordinates of P projected on the left and right cameras be P_L(x_l, y_l) and P_R(x_r, y_r); the relations of formulas (3) and (4) above then follow from similar triangles, where f is the camera focal length, B is the binocular baseline distance, and the disparity D = x_l - x_r. The three-dimensional coordinates of any point in space can be calculated from formulas (3) and (4).
In step S4, joint calibration is performed between the intra-group three-dimensional coordinate data of each group of vision components 10, and all intra-group three-dimensional coordinate data are converted into the same coordinate system to obtain world-coordinate data.
The calibration method is: binocular calibration is performed on the left/right cameras of every two groups of light-stripe image sets to obtain a rotation matrix R and a translation matrix T, and the intra-group three-dimensional coordinate data of the two groups are then converted by the formula p = Rq + T, where p and q are respectively the intra-group three-dimensional coordinate data of the two groups. Alternatively, the calibration schemes in Matlab or OpenCV, or a calibration toolbox, can be used to obtain the extrinsic and intrinsic parameters of the binocular cameras.
Specifically, the left camera of each binocular pair and the left camera of the adjacent binocular pair are paired into a new binocular pair, and each such pair is calibrated by Zhang Zhengyou's method to obtain a translation matrix and a rotation matrix, with which its coordinate system is transformed into the three-dimensional coordinates of the other coordinate system, thereby unifying the coordinate systems; in effect the independent binocular camera coordinate systems are converted into a global coordinate system, which involves three-dimensional stitching. Because this stitching error is relatively large, some markers are usually placed in the measurement area to reduce it, and the stitching is performed with fixed index points, of which at least three are required. The coordinate-system conversion relation between corresponding three-dimensional point sets is realized by a quaternion method: from two groups of one-to-one corresponding three-dimensional point sets, the rotation matrix R and the translation vector T between the two coordinate systems are computed, after which the relation between the two point sets is p = Rq + T. A sketch of the OpenCV calibration route mentioned above follows.
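Where the OpenCV calibration scheme mentioned above is used, the rotation matrix R and translation matrix T between two cameras can be estimated roughly as follows; the checkerboard geometry and the assumption of pre-calibrated intrinsics are placeholders of our own:

```python
import cv2
import numpy as np

def stereo_extrinsics(image_pairs, K1, D1, K2, D2,
                      pattern=(9, 6), square=0.025):
    """Estimate R, T between two cameras from checkerboard image pairs.

    image_pairs: list of (left_gray, right_gray) views of the same board.
    K1, D1, K2, D2: per-camera intrinsics, assumed already calibrated.
    pattern/square: inner-corner grid and square size in meters (placeholders).
    """
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, l_pts, r_pts = [], [], []
    for left, right in image_pairs:
        okL, cL = cv2.findChessboardCorners(left, pattern)
        okR, cR = cv2.findChessboardCorners(right, pattern)
        if okL and okR:
            obj_pts.append(objp); l_pts.append(cL); r_pts.append(cR)
    size = image_pairs[0][0].shape[::-1]
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, l_pts, r_pts, K1, D1, K2, D2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T   # then p = R q + T maps frame q into frame p
```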
In this step, in order to obtain complete three-dimensional point-cloud data, the point clouds of different viewing angles are established under the same coordinate system through the joint calibration of the binocular camera modules described above. As shown in Fig. 4, which views the whole system from above, the center of the lifting platform is the coordinate origin O. The line-laser binocular modules, i.e. the four groups of vision components 10, are denoted No.1, No.2, No.3 and No.4. No.1 and No.2 can be regarded as the boundaries of the first quadrant, and by analogy the four quadrants of the coordinate system are obtained. The Y axis of the world coordinate system represents the scanned height of the object and is determined by the lifting platform. The world-coordinate data are obtained by formula (5):
No.1: X = X1 - U1, Z = L1 - Z1
No.2: X = L2 - Z2, Z = U2 - X2
No.3: X = U3 - X3, Z = Z3 - L3
No.4: X = Z4 - L4, Z = X4 - U4 … (5)
where No.1, No.2, No.3 and No.4 denote the four groups of vision components 10, L1-L4 are the distances from the four groups of vision components 10 to the center point of the object under test, U1-U4 are the pixel abscissas of the corresponding image centers, and X1-X4 are the pixel abscissas of the stripe centers.
In this way the scanned cross-section point-cloud information from the four directions is mapped onto the world coordinate system.
In step S5, the world-coordinate data at all level heights are combined and fused into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned. Specifically, the point clouds from different viewing angles exhibit layering and overlap to different degrees, which inevitably causes redundancy in the point cloud and easily makes the model ambiguous during subsequent three-dimensional reconstruction. The purpose of the three-dimensional point-cloud fusion is to solve these problems and make the later point-cloud model smoother.
The point-cloud fusion comprises:
W1: for every two point clouds M and N, first computing the Euclidean distance from every point in cloud M to cloud N; if the Euclidean distance of a current point pair is less than a threshold, the current pair is defined as an overlapping point pair;
W2: iterating in this way to establish the overlapping point-pair relation between the two point clouds M and N;
W3: finding the corresponding registration points in the overlapping point sets by means of the normal feature of the point cloud;
W4: determining the final fused point between two corresponding registration points by interpolation, completing the point-cloud fusion.
With the three-dimensional scanning method and system based on multi-view stereo vision described by the embodiments of the invention, a high-performance three-dimensional reconstruction effect is achieved without adding complex external hardware. Addressing the slow speed and the blind-zone phenomenon of current three-dimensional point-cloud scanning, this embodiment scans the object surface laterally, so that the system performance is greatly improved. The embodiment has a simple structure, adequate precision and a fast scanning speed, and can be used in applications such as workpiece scanning, cultural-relic scanning and large-scene scanning, with good application prospects.
The above is only the preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments; all technical solutions under the concept of the present invention belong to the protection scope of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications made without departing from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A three-dimensional scanning method based on multi-view stereo vision, characterized by comprising the following steps:
S1: acquiring light-stripe image sets through at least three groups of vision components (10) arranged at different positions around the object to be scanned at the same level height, each group of vision components (10) comprising two cameras and a line laser located between the two cameras, and each light-stripe image set comprising a left light-stripe image and a right light-stripe image;
S2: extracting the stripe centers of all the left and right light-stripe images respectively;
S3: converting the stripe centers of the left and right light-stripe images in each light-stripe image set into intra-group three-dimensional coordinate data;
S4: performing joint calibration between the intra-group three-dimensional coordinate data of each group of vision components (10), and converting all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data;
S5: fusing the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned.
2. The method according to claim 1, characterized in that, in step S2, the stripe centers are extracted using a gradient centroid method.
3. The method according to claim 2, characterized in that the gradient centroid method comprises:
T1: obtaining the candidate centroid coordinates (x_tn, y_tn) by formula (1):

$$x_{tn} = \frac{\sum_{x_i=L_k}^{R_k} Q(x_i, y_j)\,x_i}{\sum_{x_i=L_k}^{R_k} Q(x_i, y_j)}, \qquad y_{tn} = y_j \qquad \ldots(1)$$

where Q is the pixel value of the light-stripe image, n is the pixel column index, L_k is the minimum column, R_k is the maximum column, and t indexes the candidate centroid points;
T2: calculating the stripe-center coordinates (x_c, y_c) by formula (2):

$$x_c = \operatorname{MINxt}\left(\left|x_{c1}-x_{t0}\right|,\ \left|x_{c1}-x_{t1}\right|,\ \left|x_{c1}-x_{t2}\right|,\ \ldots\right), \qquad y_c = y_t \qquad \ldots(2)$$

where the subscript c denotes the actual centroid point, and MINxt is a function that compares the candidate centroid coordinates and returns the one most similar to the centroid coordinate of the previous row.
4. The method according to any one of claims 1-3, characterized in that, in step S3, the intra-group three-dimensional coordinate data are calculated by formulas (3) and (4):

$$X_l = f\,\frac{x_P}{z_P}, \qquad X_r = f\,\frac{x_P - B}{z_P}, \qquad Y = f\,\frac{y_P}{z_P} \qquad \ldots(3)$$

$$x_P = \frac{B\,X_l}{D}, \qquad y_P = \frac{B\,Y}{D}, \qquad z_P = \frac{B\,f}{D} \qquad \ldots(4)$$

where (x_P, y_P, z_P) is the three-dimensional coordinate of a point P on the object to be scanned, P_L(x_l, y_l) and P_R(x_r, y_r) are the stripe-image coordinates of P projected on the left and right cameras, f is the camera focal length, B is the binocular baseline distance, and the disparity D = x_l - x_r.
5. The method according to any one of claims 1-3, characterized in that, in step S4, the calibration method is: performing binocular calibration on the left/right cameras of every two groups of light-stripe image sets to obtain a rotation matrix R and a translation matrix T, and then converting the intra-group three-dimensional coordinate data of the two groups by the formula p = Rq + T, where p and q are respectively the intra-group three-dimensional coordinate data of the two groups.
6. The method according to any one of claims 1-3, characterized in that the number of the vision components (10) is four groups.
7. The method according to claim 6, characterized in that, in step S4, the world-coordinate data are obtained by formula (5):

No.1: X = X1 - U1, Z = L1 - Z1
No.2: X = L2 - Z2, Z = U2 - X2
No.3: X = U3 - X3, Z = Z3 - L3
No.4: X = Z4 - L4, Z = X4 - U4 … (5)

where No.1, No.2, No.3 and No.4 denote the four groups of vision components (10), L1-L4 are the distances from the four groups of vision components (10) to the center point of the object under test, U1-U4 are the pixel abscissas of the corresponding image centers, and X1-X4 are the pixel abscissas of the stripe centers.
8. The method according to claim 1, characterized in that, in step S5, the point-cloud fusion comprises:
W1: for every two point clouds M and N, first computing the Euclidean distance from every point in cloud M to cloud N; if the Euclidean distance of a current point pair is less than a threshold, defining the current pair as an overlapping point pair;
W2: iterating in this way to establish the overlapping point-pair relation between the two point clouds M and N;
W3: finding the corresponding registration points in the overlapping point sets by means of the normal feature of the point cloud;
W4: determining the final fused point between two corresponding registration points by interpolation, completing the point-cloud fusion.
9. A three-dimensional scanning system based on multi-view stereo vision, characterized in that the system performs the method according to any one of claims 1-8, the system comprising:
at least three groups of vision components (10) arranged at different positions around the object to be scanned at the same level height, each group of vision components (10) comprising two cameras and a line laser located between the two cameras, the vision components (10) being used to acquire light-stripe image sets, each set comprising a left light-stripe image and a right light-stripe image;
an extraction unit (20) that extracts the stripe centers of all the left and right light-stripe images respectively;
a converting unit (30) that converts the stripe centers of the left and right light-stripe images in each light-stripe image set into intra-group three-dimensional coordinate data;
a calibration unit (40) that performs joint calibration between the intra-group three-dimensional coordinate data of each group and converts all intra-group three-dimensional coordinate data into the same coordinate system to obtain world-coordinate data;
a fusion unit (50) that fuses the world-coordinate data at all level heights into a point cloud to obtain the three-dimensional point-cloud data of the object to be scanned.
10. The system according to claim 9, characterized in that the system further comprises a lifting platform that carries the vision components (10) and can move up and down.
CN201710585970.6A 2017-07-18 2017-07-18 Three-dimensional scanning method and system based on multi-view stereo vision Pending CN107621226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710585970.6A CN107621226A (en) 2017-07-18 2017-07-18 The 3-D scanning method and system of multi-view stereo vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710585970.6A CN107621226A (en) 2017-07-18 2017-07-18 The 3-D scanning method and system of multi-view stereo vision

Publications (1)

Publication Number Publication Date
CN107621226A true CN107621226A (en) 2018-01-23

Family

ID=61088865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710585970.6A Pending CN107621226A (en) 2017-07-18 2017-07-18 The 3-D scanning method and system of multi-view stereo vision

Country Status (1)

Country Link
CN (1) CN107621226A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717728A (en) * 2018-07-19 2018-10-30 安徽中科智链信息科技有限公司 A kind of three-dimensional reconstruction apparatus and method based on various visual angles depth camera
CN109035330A (en) * 2018-08-17 2018-12-18 深圳蓝胖子机器人有限公司 Cabinet approximating method, equipment and computer readable storage medium
CN109523633A (en) * 2018-09-30 2019-03-26 先临三维科技股份有限公司 Model scanning method, apparatus, equipment, storage medium and processor
CN109727277A (en) * 2018-12-28 2019-05-07 江苏瑞尔医疗科技有限公司 The body surface of multi-view stereo vision puts position tracking
CN109938841A (en) * 2019-04-11 2019-06-28 哈尔滨理工大学 A kind of surgical instrument navigation system based on the fusion of more mesh camera coordinates
CN110044300A (en) * 2019-01-22 2019-07-23 中国海洋大学 Amphibious 3D vision detection device and detection method based on laser
CN110599546A (en) * 2019-08-28 2019-12-20 贝壳技术有限公司 Method, system, device and storage medium for acquiring three-dimensional space data
CN110779933A (en) * 2019-11-12 2020-02-11 广东省智能机器人研究院 Surface point cloud data acquisition method and system based on 3D visual sensing array
CN110907457A (en) * 2019-12-19 2020-03-24 长安大学 Aggregate morphological feature detection system and method based on 3D point cloud data
CN111721194A (en) * 2019-03-19 2020-09-29 北京伟景智能科技有限公司 Multi-laser-line rapid detection method
CN111738971A (en) * 2019-03-19 2020-10-02 北京伟景智能科技有限公司 Circuit board stereo scanning detection method based on line laser binocular stereo vision
CN111768448A (en) * 2019-03-30 2020-10-13 北京伟景智能科技有限公司 Spatial coordinate system calibration method based on multi-camera detection
CN111829435A (en) * 2019-08-27 2020-10-27 北京伟景智能科技有限公司 Multi-binocular camera and line laser cooperative detection method
CN113074633A (en) * 2021-03-22 2021-07-06 西安工业大学 Automatic detection system and detection method for overall dimension of material
CN113103228A (en) * 2021-03-29 2021-07-13 航天时代电子技术股份有限公司 Teleoperation robot
CN113222891A (en) * 2021-04-01 2021-08-06 东方电气集团东方锅炉股份有限公司 Line laser-based binocular vision three-dimensional measurement method for rotating object
CN113538547A (en) * 2021-06-03 2021-10-22 苏州小蜂视觉科技有限公司 Depth processing method of 3D line laser sensor and dispensing equipment
CN113813170A (en) * 2021-08-30 2021-12-21 中科尚易健康科技(北京)有限公司 Target point conversion method between cameras of multi-camera physiotherapy system
CN115047472A (en) * 2022-03-30 2022-09-13 北京一径科技有限公司 Method, device and equipment for determining laser radar point cloud layering and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595054A (en) * 2004-07-14 2005-03-16 天津大学 Compatible and accurate calibration method for double eye line structure photo-sensor and implementing apparatus
CN102003938A (en) * 2010-10-11 2011-04-06 中国人民解放军信息工程大学 Thermal state on-site detection method for large high-temperature forging
CN102364299A (en) * 2011-08-30 2012-02-29 刘桂华 Calibration technology for multiple structured light projected three-dimensional profile measuring heads
CN102768728A (en) * 2012-06-27 2012-11-07 山东大学 Scanning galvanometer-based stereo character image collecting and processing method
CN102878925A (en) * 2012-09-18 2013-01-16 天津工业大学 Synchronous calibration method for binocular video cameras and single projection light source
CN103542981A (en) * 2013-09-28 2014-01-29 大连理工大学 Method for measuring rotary inertia through binocular vision
CN103940369A (en) * 2014-04-09 2014-07-23 大连理工大学 Quick morphology vision measuring method in multi-laser synergic scanning mode
CN103971353A (en) * 2014-05-14 2014-08-06 大连理工大学 Splicing method for measuring image data with large forgings assisted by lasers
CN104183010A (en) * 2013-05-22 2014-12-03 上海迪谱工业检测技术有限公司 Multi-view three-dimensional online reconstruction method
CN104457574A (en) * 2014-12-11 2015-03-25 天津大学 Device for measuring volume of irregular object in non-contact measurement mode and method
CN104933718A (en) * 2015-06-23 2015-09-23 广东省自动化研究所 Physical coordinate positioning method based on binocular vision
CN104930985A (en) * 2015-06-16 2015-09-23 大连理工大学 Binocular vision three-dimensional morphology measurement method based on time and space constraints
CN105716542A (en) * 2016-04-07 2016-06-29 大连理工大学 Method for three-dimensional data registration based on flexible feature points
CN105741304A (en) * 2016-03-02 2016-07-06 南昌航空大学 Laser stripe center extraction algorithm
CN106949845A (en) * 2017-01-19 2017-07-14 南京航空航天大学 Two-dimensional laser galvanometer scanning system and scaling method based on binocular stereo vision

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595054A (en) * 2004-07-14 2005-03-16 天津大学 Compatible and accurate calibration method for double eye line structure photo-sensor and implementing apparatus
CN102003938A (en) * 2010-10-11 2011-04-06 中国人民解放军信息工程大学 Thermal state on-site detection method for large high-temperature forging
CN102364299A (en) * 2011-08-30 2012-02-29 刘桂华 Calibration technology for multiple structured light projected three-dimensional profile measuring heads
CN102768728A (en) * 2012-06-27 2012-11-07 山东大学 Scanning galvanometer-based stereo character image collecting and processing method
CN102878925A (en) * 2012-09-18 2013-01-16 天津工业大学 Synchronous calibration method for binocular video cameras and single projection light source
CN104183010A (en) * 2013-05-22 2014-12-03 上海迪谱工业检测技术有限公司 Multi-view three-dimensional online reconstruction method
CN103542981A (en) * 2013-09-28 2014-01-29 大连理工大学 Method for measuring rotary inertia through binocular vision
CN103940369A (en) * 2014-04-09 2014-07-23 大连理工大学 Quick morphology vision measuring method in multi-laser synergic scanning mode
CN103971353A (en) * 2014-05-14 2014-08-06 大连理工大学 Splicing method for measuring image data with large forgings assisted by lasers
CN104457574A (en) * 2014-12-11 2015-03-25 天津大学 Device for measuring volume of irregular object in non-contact measurement mode and method
CN104930985A (en) * 2015-06-16 2015-09-23 大连理工大学 Binocular vision three-dimensional morphology measurement method based on time and space constraints
CN104933718A (en) * 2015-06-23 2015-09-23 广东省自动化研究所 Physical coordinate positioning method based on binocular vision
CN105741304A (en) * 2016-03-02 2016-07-06 南昌航空大学 Laser stripe center extraction algorithm
CN105716542A (en) * 2016-04-07 2016-06-29 大连理工大学 Method for three-dimensional data registration based on flexible feature points
CN106949845A (en) * 2017-01-19 2017-07-14 南京航空航天大学 Two-dimensional laser galvanometer scanning system and scaling method based on binocular stereo vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周建华: "多组双目激光融合的三维扫描***研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
罗庆生等: "《仿生四足机器人技术》", 30 April 2016, 北京理工大学出版社 *
隋修武: "《机械电子工程原理与***设计》", 31 January 2014, 国防工业出版社 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717728A (en) * 2018-07-19 2018-10-30 安徽中科智链信息科技有限公司 A kind of three-dimensional reconstruction apparatus and method based on various visual angles depth camera
CN109035330A (en) * 2018-08-17 2018-12-18 深圳蓝胖子机器人有限公司 Cabinet approximating method, equipment and computer readable storage medium
CN109523633A (en) * 2018-09-30 2019-03-26 先临三维科技股份有限公司 Model scanning method, apparatus, equipment, storage medium and processor
CN109523633B (en) * 2018-09-30 2023-06-02 先临三维科技股份有限公司 Model scanning method, device, equipment, storage medium and processor
CN109727277A (en) * 2018-12-28 2019-05-07 江苏瑞尔医疗科技有限公司 The body surface of multi-view stereo vision puts position tracking
CN110044300A (en) * 2019-01-22 2019-07-23 中国海洋大学 Amphibious 3D vision detection device and detection method based on laser
CN110044300B (en) * 2019-01-22 2024-04-09 中国海洋大学 Amphibious three-dimensional vision detection device and detection method based on laser
CN111738971B (en) * 2019-03-19 2024-02-27 北京伟景智能科技有限公司 Circuit board stereoscopic scanning detection method based on line laser binocular stereoscopic vision
CN111721194A (en) * 2019-03-19 2020-09-29 北京伟景智能科技有限公司 Multi-laser-line rapid detection method
CN111738971A (en) * 2019-03-19 2020-10-02 北京伟景智能科技有限公司 Circuit board stereo scanning detection method based on line laser binocular stereo vision
CN111768448A (en) * 2019-03-30 2020-10-13 北京伟景智能科技有限公司 Spatial coordinate system calibration method based on multi-camera detection
CN109938841A (en) * 2019-04-11 2019-06-28 哈尔滨理工大学 A kind of surgical instrument navigation system based on the fusion of more mesh camera coordinates
CN111829435A (en) * 2019-08-27 2020-10-27 北京伟景智能科技有限公司 Multi-binocular camera and line laser cooperative detection method
CN110599546A (en) * 2019-08-28 2019-12-20 贝壳技术有限公司 Method, system, device and storage medium for acquiring three-dimensional space data
CN110779933A (en) * 2019-11-12 2020-02-11 广东省智能机器人研究院 Surface point cloud data acquisition method and system based on 3D visual sensing array
CN110907457A (en) * 2019-12-19 2020-03-24 长安大学 Aggregate morphological feature detection system and method based on 3D point cloud data
CN113074633B (en) * 2021-03-22 2023-01-31 西安工业大学 Automatic detection system and detection method for overall dimension of material
CN113074633A (en) * 2021-03-22 2021-07-06 西安工业大学 Automatic detection system and detection method for overall dimension of material
CN113103228A (en) * 2021-03-29 2021-07-13 航天时代电子技术股份有限公司 Teleoperation robot
CN113103228B (en) * 2021-03-29 2023-08-15 航天时代电子技术股份有限公司 Teleoperation robot
CN113222891B (en) * 2021-04-01 2023-12-22 东方电气集团东方锅炉股份有限公司 Line laser-based binocular vision three-dimensional measurement method for rotating object
CN113222891A (en) * 2021-04-01 2021-08-06 东方电气集团东方锅炉股份有限公司 Line laser-based binocular vision three-dimensional measurement method for rotating object
CN113538547A (en) * 2021-06-03 2021-10-22 苏州小蜂视觉科技有限公司 Depth processing method of 3D line laser sensor and dispensing equipment
CN113813170A (en) * 2021-08-30 2021-12-21 中科尚易健康科技(北京)有限公司 Target point conversion method between cameras of multi-camera physiotherapy system
CN113813170B (en) * 2021-08-30 2023-11-24 中科尚易健康科技(北京)有限公司 Method for converting target points among cameras of multi-camera physiotherapy system
WO2023185943A1 (en) * 2022-03-30 2023-10-05 北京一径科技有限公司 Method, apparatus and device for determining lidar point cloud layering, and storage medium
CN115047472A (en) * 2022-03-30 2022-09-13 北京一径科技有限公司 Method, device and equipment for determining laser radar point cloud layering and storage medium

Similar Documents

Publication Publication Date Title
CN107621226A (en) Three-dimensional scanning method and system based on multi-view stereo vision
CN108269279B (en) Three-dimensional reconstruction method and device based on monocular 3 D scanning system
CN107945268B (en) A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN106097348B (en) A kind of fusion method of three-dimensional laser point cloud and two dimensional image
CN105160702B (en) The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud
CN102692214B (en) Narrow space binocular vision measuring and positioning device and method
CN101393012B (en) Novel binocular stereo vision measuring device
CN106023303B (en) A method of Three-dimensional Gravity is improved based on profile validity and is laid foundations the dense degree of cloud
CN107833270A (en) Real-time object dimensional method for reconstructing based on depth camera
CN107767442A (en) A kind of foot type three-dimensional reconstruction and measuring method based on Kinect and binocular vision
CN108038902A (en) A kind of high-precision three-dimensional method for reconstructing and system towards depth camera
CN104574432B (en) Three-dimensional face reconstruction method and three-dimensional face reconstruction system for automatic multi-view-angle face auto-shooting image
CN104156972A (en) Perspective imaging method based on laser scanning distance measuring instrument and multiple cameras
CN104408732A (en) Large-view-field depth measuring system and method based on omni-directional structured light
CN109727277B (en) Body surface positioning tracking method for multi-eye stereo vision
CN106127745A (en) The combined calibrating method of structure light 3 D visual system and line-scan digital camera and device
CN102938142A (en) Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN106780573B (en) A kind of method and system of panorama sketch characteristic matching precision optimizing
CN104318616A (en) Colored point cloud system and colored point cloud generation method based on same
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
CN109146769A (en) Image processing method and device, image processing equipment and storage medium
CN113362457A (en) Stereoscopic vision measurement method and system based on speckle structured light
CN107374638A (en) A kind of height measuring system and method based on binocular vision module

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180123

RJ01 Rejection of invention patent application after publication