CN105225269A - Motion-mechanism-based object modeling system - Google Patents

Motion-mechanism-based object modeling system

Info

Publication number
CN105225269A
CN105225269A (application no. CN201510609138.6A; granted as CN105225269B)
Authority
CN
China
Prior art keywords
motion
dimensional
camera
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510609138.6A
Other languages
Chinese (zh)
Other versions
CN105225269B (en)
Inventor
熊蓉
陈颖
章逸丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510609138.6A priority Critical patent/CN105225269B/en
Publication of CN105225269A publication Critical patent/CN105225269A/en
Application granted granted Critical
Publication of CN105225269B publication Critical patent/CN105225269B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a motion-mechanism-based object modeling system comprising a color-depth (RGB-D) vision camera, a motion mechanism with at least three degrees of freedom, an image data acquisition and transmission device, a master control computer running the vision-software system environment, and an external display. The motion mechanism is fixed on a workbench and can move the camera around the object. The key feature of the invention is continuous sampling and analysis of the target object: from several RGB and depth images and the corresponding camera poses, the object's three-dimensional model is obtained. Because the motion mechanism moves the camera to specified positions, the pose-calculation error is reduced and the modeling accuracy is effectively improved, so the method can be extended more rigorously to similar industrial, civilian, and military scenarios.

Description

Motion-mechanism-based object modeling system
Technical field
The present invention relates to a system for three-dimensional modeling of three-dimensional objects, and in particular to a motion-mechanism-based object modeling system.
Background technology
In recent years, vision systems have found growing application in 3D object modeling. For example, the Bundler system developed by Noah Snavely has been applied to large-scale 3D scene reconstruction. For object modeling, UC Berkeley developed a system for 3D reconstruction of single objects: on a purpose-built hardware platform, multiple cameras acquire images of the object from different angles, the resulting point clouds are registered with ICP, and TSDF fusion then yields a 3D mesh model of the object, providing 3D information and corresponding features for a manipulator to grasp the object.
Summary of the invention
The object of the invention is to improve on the problems of the Berkeley three-dimensional reconstruction system by proposing a 3D modeling system that is lower in cost and whose observation is more continuous and complete. It performs accurate observation and analysis of the object's pose, thereby assisting a manipulator in grasping and completing assigned tasks. In scenarios such as robot vision, robotic grasping, and 3D printing, it realizes object recognition and pose estimation, so that the complete 3D information and feature texture information of the object to be grasped can be obtained, laying a better foundation for the robot's motion and corresponding operations.
To achieve the above object, the present invention proposes a vision scheme combining a motion mechanism with a camera: a color-depth vision camera fixed at the end of the motion mechanism moves around the target object and collects its RGB and depth images; combining the motion information with an image-processing fusion algorithm, the target object is reconstructed in three dimensions to obtain its 3D model.
Concrete technical scheme of the present invention is as follows:
The invention discloses a motion-mechanism-based object modeling system comprising a color-depth vision camera, a motion mechanism with at least three degrees of freedom, an image data acquisition and transmission device, a master control computer running the vision-software system environment, and an external display. The motion mechanism is fixed on a workbench and can move around the object; the color-depth vision camera is fixed at the end of the motion mechanism, so that the mechanism's motion carries it around the target object to collect color-depth images. The camera is connected to the image data acquisition card by a data line; the acquisition card sends the collected data over the master control computer's bus to the computer for processing, and the results are shown on the computer's external display.
As a further improvement, the system of the invention comprises the following modules: an online trajectory planning and control module for the motion mechanism, an image capture module, an image processing module, a 3D data fusion module, and a display module. The online planning and control module plans and controls the trajectory of the motion mechanism online according to the model accuracy; the image capture module collects RGB and depth images of the object; the image processing module matches feature points across RGB images, computes the 3D coordinates of corresponding points, and converts depth information into 3D point clouds; the 3D data fusion module unifies all 3D points into one coordinate frame according to the camera poses and constructs the 3D model of the target object; the display module realizes the 3D display of the object.
As a further improvement, in the motion-mechanism-based object modeling system of the invention, the 3D object information is divided into 3D point-cloud information and SIFT feature information: the point cloud describes the object's overall contour and is relatively dense, while the SIFT features describe the object's fine details in 3D and are relatively sparse. Combining the global information with the detail information yields the complete 3D information of the target object.
The invention also discloses an operating method for the motion-mechanism-based object modeling system, with the following concrete steps:
(1) Build the hardware system and semi-automatically calibrate the intrinsic and extrinsic parameters of each camera;
(2) The motion mechanism moves the camera to specified positions along the planned trajectory, and images of the target object are acquired;
(3) The color-depth vision camera carried at the end of the mechanism captures RGB and depth images of the target object; combined with the mechanism's motion information and the camera pose parameters, the object's 3D spatial pose information is recovered;
(4) Multi-frame information is stitched: the obtained 3D point clouds and object poses are combined to fuse the point clouds until the trajectory sequence is completed or a complete model is obtained, giving the 3D model of the object;
(5) The resulting SIFT 3D point cloud and dense 3D depth point cloud of the object are output to the external display.
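The point-unification in steps (2)-(4) can be sketched minimally in Python. This is a hypothetical illustration, not the patented implementation: camera poses are assumed to be 4x4 homogeneous matrices in the workbench frame, the data are synthetic, and the function names are invented for this sketch.

```python
import numpy as np

def to_world(points_cam, T_wc):
    """Transform an (N,3) camera-frame point cloud into the world
    (workbench) frame using the 4x4 camera pose T_wc."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homo @ T_wc.T)[:, :3]

def fuse_frames(frames):
    """Steps (2)-(4): each frame is (camera_pose, camera-frame points);
    all points are unified under the same workbench coordinates."""
    return np.vstack([to_world(pts, T) for T, pts in frames])

# Two synthetic frames observing the same surface points from different
# camera positions (poses chosen so the unified clouds coincide).
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
T0 = np.eye(4)
T1 = np.eye(4); T1[:3, 3] = [0.0, 0.0, 0.5]   # camera moved 0.5 m back
pts1 = pts.copy(); pts1[:, 2] -= 0.5          # same surface, new frame
model = fuse_frames([(T0, pts), (T1, pts1)])
```

With correct poses, both frames land on the same world-frame points, which is the precondition for the point-cloud fusion of step (4).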
The key feature of the invention is continuous sampling and analysis of the target object: from several RGB and depth images and the corresponding camera poses, the object's 3D model is obtained. Because the motion mechanism moves the camera to specified positions, the pose-calculation error is reduced and the modeling accuracy is effectively improved, so the method can be extended more rigorously to similar industrial, civilian, and military scenarios.
The beneficial effects of the invention are as follows:
(1) The invention proposes a system and method that accurately complete 3D object modeling on a motion mechanism: while the mechanism moves along a specified path, the pose transformation between two adjacent images is obtained accurately, which speeds up the algorithm and provides more accurate modeling results for users or automated systems;
(2) The RGB and depth images captured by the color-depth vision camera are combined, so both the object's contour and its SIFT feature information can be recovered;
(3) Compared with the traditional method of multiple cameras at fixed positions, cost is greatly reduced and the camera images without blind spots;
(4) The method and system can also be applied in scenarios such as 3D printing and domestic-robot grasping. The invention is therefore a very practical and effective object modeling system with good application prospects.
Brief description of the drawings
Fig. 1 is a block diagram of the system hardware structure;
Fig. 2 is a flow block diagram of the system framework;
Fig. 3 is a schematic diagram of corresponding-point matching between adjacent point clouds and the camera pose transformation between them;
Fig. 4 is a flow chart of point cloud registration;
Fig. 5 is a schematic diagram of the space-based point cloud fusion technique;
In Fig. 1, 1 is the color-depth vision camera, 2 the motion mechanism, 3 the image data acquisition and transmission device, 4 the master control computer running the vision-software system environment, 5 the vision software, and 6 the external display.
Embodiment
The object modeling system provided by the invention consists of a color-depth vision camera 1, a motion mechanism 2 with at least three degrees of freedom, an image data acquisition and transmission device 3, a master control computer 4 running the vision software, and an external display 6. The color-depth vision camera 1 is fixed at the end of motion mechanism 2, with its field of view covering the effective region where the object lies, and acquires images; the acquisition card of device 3 sends the collected data over the PCI bus of master control computer 4 to its processor for processing; the results are output to the external display 6 of master control computer 4.
The master control computer 4 running the vision-software system environment comprises the following modules: a motion planning and control module for mechanism 2, an image capture module, and a 3D modeling module for the target object. The system workflow is: the online planning and control module of mechanism 2 plans the movement trajectory online according to the current modeling result and the required accuracy, and moves the color-depth vision camera 1 fixed at the end of mechanism 2 to the specified positions to collect RGB and depth images of the target object; the image capture module acquires images of the region covered by the camera's field of view and feeds them to the 3D modeling module for online modeling; when the model converges, mechanism 2 stops, and the object's SIFT feature point cloud together with the TSDF-fused point cloud is output to the external display.
The online planning and control module of mechanism 2 performs online trajectory planning according to the required modeling accuracy and the object size, and moves the camera fixed at the end of mechanism 2 to specified positions. The camera pose transformation between two adjacent frames is therefore known in advance, which saves the time otherwise spent computing it from the two images and reduces the error caused by extrinsic camera calibration. This module directly yields the pose relation of the cameras corresponding to two adjacent frames, increasing speed and improving accuracy.
The 3D modeling module for the target object takes the multiple images collected by the camera, applies the pose transformations to obtain the object's 3D point-cloud information, and on this basis fuses the point cloud using TSDF. The RGB images collected by color-depth vision camera 1 yield the object's SIFT 3D point-cloud information, and the depth images yield the object's dense 3D point-cloud information.
The 3D virtual scene rendering module of the invention is based on OpenGL: the object's 3D model can be viewed from any angle in the virtual scene and output to the external display 6.
The operating steps of the 3D object modeling method provided by the invention are as follows:
(1) Build the hardware system and calibrate the intrinsics of color-depth vision camera 1;
(2) Mechanism 2 plans its trajectory online according to the modeling accuracy; when the end of mechanism 2 reaches a specified position, the color-depth vision camera 1 fixed there samples the object;
(3) After sampling, the images are processed to obtain the object's 3D point cloud. With each additional image, the model is recomputed and updated; as the model becomes more complete and accurate, each new image contributes less new 3D information. Once the object model converges, mechanism 2 stops and modeling ends;
(4) The result data are input to the 3D virtual scene rendering module and shown on external display 6.
In the above method, the motion of mechanism 2 is controlled as follows:
(1) Off-line calibrate the zero position of mechanism 2 and the intrinsics of color-depth vision camera 1;
(2) Perform online trajectory planning according to the size of the object and the modeling accuracy, evaluating the accuracy of the model built so far;
(3) When the model converges to the required accuracy or mechanism 2 completes the trajectory sequence, mechanism 2 stops.
In the above method, 3D object modeling proceeds as follows:
(1) From the trajectory of mechanism 2, obtain the camera pose transformation corresponding to two adjacent frames;
(2) Acquire the image of the object at each pose, and fuse the images using the camera poses;
(3) Iteratively fuse the fusion results until mechanism 2 completes the trajectory sequence or a complete model is obtained.
In the above method, the 3D virtual scene is reproduced as follows:
(1) Input the computed 3D object model to the 3D virtual scene display module;
(2) Scale the model to the physical size of the object.
The technical scheme of the invention is illustrated below through specific embodiments with reference to the drawings, describing in detail how the invention uses color-depth vision camera 1, the motion of mechanism 2, and the camera pose transformations to realize 3D modeling and analysis of the object and to reproduce it in a virtual 3D scene.
Fig. 1 is a block diagram of the system hardware structure: the whole system consists of color-depth vision camera 1, motion mechanism 2 with at least three degrees of freedom, image data acquisition and transmission device 3, master control computer 4 running the vision-software system environment, and external display 6. The system environment in which vision software 5 runs comprises all the core software hidden beneath the vision-software user interface, such as the camera driver, image capture software, and image processing software for the operating system in use; it is the bridge through which the hardware system communicates with the user. The vision runtime environment and the hardware system together constitute the functional modules of the whole vision system.
Color-depth vision camera 1 is fixed on mechanism 2, with its field of view covering the effective region where the target object lies; by controlling the motion of mechanism 2 online, the camera samples around the object. The data collected by camera 1 are sent via image data acquisition and transmission device 3 to master control computer 4 and processed by vision software 5; the results are shown on external display 6 of master control computer 4.
Fig. 2 is a flow block diagram of the system framework. Master control computer 4, running the vision-software system environment, comprises the following modules: the online trajectory planning and control module for mechanism 2, the 3D object reconstruction module, and the 3D virtual scene rendering module.
First, the zero position of the mechanism and the camera parameters are calibrated off-line. The online trajectory planning module of mechanism 2 then, according to the required accuracy, moves the color-depth vision camera 1 fixed on mechanism 2 around the target object to the specified sampling positions. This avoids the error introduced when the pose transformation between the cameras of two adjacent frames is solved from extrinsic camera calibration, and the online planning allows the planning strategy and termination condition to be better defined: when the model converges or mechanism 2 completes the trajectory sequence, the mechanism stops and modeling terminates automatically.
The 3D object reconstruction module works on the multiple images sampled by color-depth camera 1, for which the correspondence between 2D and 3D information is directly available. Through the online trajectory planning of mechanism 2, a rough pose transformation for each frame is obtained; the point clouds are then precisely registered, and the sampled images are fused with TSDF, completing the 3D modeling of the object. The RGB images sampled by camera 1 yield the object's SIFT feature point cloud, and the depth maps yield the object's dense point-cloud information. Through the 3D virtual scene rendering module, the modeling result can then be viewed from any angle, i.e. in full 3D.
Fig. 3 is a schematic diagram of corresponding-point matching between adjacent point clouds and the camera pose transformation between them. Since the SIFT feature points correspond one-to-one with the 3D points in the depth image, the SIFT points need not be matched separately, and no separate solution of the feature points' 3D coordinates is required. Let the camera pose transformation between adjacent point clouds $C_i$ and $C_{i+1}$ be $[R_{c_i}, t_{c_i}]$, the pose transformation between the end of mechanism 2 and the camera be $[R_e, t_e]$, the pose transformation between the mechanism end and the workbench for cloud $i$ be $[R_{e_i}, t_{e_i}]$, and the pose transformation between the camera and the workbench for cloud $i$ be $[R_{ec_i}, t_{ec_i}]$. These transformations satisfy:
$[R_{ec_i}, t_{ec_i}] = [R_e, t_e][R_{e_i}, t_{e_i}]$ (1)
$[R_{ec_{i+1}}, t_{ec_{i+1}}] = [R_e, t_e][R_{e_{i+1}}, t_{e_{i+1}}]$ (2)
Then the camera pose transformation between adjacent point clouds $C_i$ and $C_{i+1}$ satisfies:
$[R_{c_i}, t_{c_i}] = [R_{ec_i}, t_{ec_i}][R_{ec_{i+1}}, t_{ec_{i+1}}]^{-1} = [R_e, t_e][R_{e_i}, t_{e_i}][R_{e_{i+1}}, t_{e_{i+1}}]^{-1}[R_e, t_e]^{-1}$ (3)
Here $[R_{e_i}, t_{e_i}]$ and $[R_{e_{i+1}}, t_{e_{i+1}}]$ are obtained from the trajectory of mechanism 2, but the transformation $[R_e, t_e]$ between the camera and the end of mechanism 2 is difficult to obtain accurately; only an estimate is available, so point-cloud matching must be carried out on the basis of this initial estimate.
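The pose chain of eqs. (1)-(3) can be checked numerically with 4x4 homogeneous matrices. This is an illustrative sketch with invented example poses; the composition order follows the equations as reconstructed above.

```python
import numpy as np

def T(R, t):
    """Pack rotation R (3x3) and translation t into a 4x4 homogeneous matrix."""
    M = np.eye(4); M[:3, :3] = R; M[:3, 3] = t
    return M

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hand-eye transform [R_e, t_e] and two mechanism-end poses [R_ei, t_ei]
# (all values are arbitrary examples).
T_e  = T(rot_z(0.1), [0.02, 0.00, 0.10])
T_e1 = T(rot_z(0.5), [0.30, 0.10, 0.40])   # pose for frame i
T_e2 = T(rot_z(0.8), [0.25, 0.20, 0.40])   # pose for frame i+1

# Eqs. (1)-(2): camera pose w.r.t. the workbench for each frame.
T_ec1 = T_e @ T_e1
T_ec2 = T_e @ T_e2

# Eq. (3): relative camera transform between the adjacent frames,
# computed two ways -- directly and via the mechanism trajectory.
T_rel = T_ec1 @ np.linalg.inv(T_ec2)
T_rel_chain = T_e @ T_e1 @ np.linalg.inv(T_e2) @ np.linalg.inv(T_e)
```

The two computations agree for any hand-eye transform, which is why an error in the estimate of $[R_e, t_e]$ propagates into the relative pose and must be corrected by point-cloud registration.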
Fig. 4 is the point cloud registration flow chart. After the initial registration estimate is obtained, precise registration is required. Taking the first point cloud as the template cloud $C_{scene}$, the $i$-th initially registered cloud $C_i$ is matched against the template: a correspondence set between $C_{scene}$ and $C_i$ is established according to a matching criterion. If there are $n$ corresponding point pairs $(P_i, Q_i)$, the mean pose-transformation error between the two clouds satisfies:
$e(R, t) = \frac{1}{n}\sum_{i=1}^{n}\left\|P_i - (R_e Q_i + t_e)\right\|$ (4)
Because mechanism 2 drives the camera, it provides an initial alignment for the point-cloud matching, which greatly reduces the matching error and improves matching precision. The corresponding point set then satisfies:
$Q_i' = R_e Q_i + t_e$ (5)
Matching point set $C_{scene}$ against the transformed set $C_i'$ reduces the error that a rough initial match would introduce into the iteration and saves algorithm time, improving the running speed. After the rough match, an exact match is performed on this basis; to make the matching more accurate, abnormal correspondences must be rejected and the matching constraints strengthened. Outliers introduce large errors into the subsequent solution of the transformation parameters; a correct correspondence must have a small distance between the two points and a small angle between their normal vectors. The point-to-point distance uses the Euclidean metric:
$\|Q_i' - P_j\| < m + std \cdot \gamma$ (6)
where $m$ is the mean distance between matched points of $C_{scene}$ and $C_i'$, $std$ is the corresponding standard deviation, and $\gamma$ is a constant. A k-d tree is built over the point cloud, and a local surface analysis is performed on each point's k-neighborhood to estimate its normal:
$C = \frac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^T$ (7)
where $\bar{p}$ is the centroid of the k-neighborhood of point $p_i$. For the non-singular matrix $C$ the eigenvalues $\lambda_0, \lambda_1, \lambda_2$ are computed, and the eigenvector of the smallest eigenvalue is taken as the normal of point $p_i$. Then, for two matched points $P_i, Q_i$, the cosine similarity of their normals $F_i^P, F_i^Q$ must satisfy:
$sim(F_i^P, F_i^Q) = \frac{F_i^P \cdot F_i^Q}{\|F_i^P\|\,\|F_i^Q\|} > \tau$ (8)
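The covariance-based normal estimation of eq. (7) and the cosine test of eq. (8) can be sketched as follows. This is an assumed illustration: the neighborhood is given directly rather than found via a k-d tree, and the absolute value of the cosine is used to ignore the sign ambiguity of an eigenvector normal.

```python
import numpy as np

def estimate_normal(neighborhood):
    """Eq. (7): covariance of a point's k-neighborhood; the eigenvector
    of the smallest eigenvalue is taken as the surface normal."""
    centered = neighborhood - neighborhood.mean(axis=0)
    C = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, 0]                   # smallest-eigenvalue eigenvector

def normals_agree(n1, n2, tau=0.9):
    """Eq. (8): cosine similarity of two normals must exceed tau
    (absolute value taken: an eigenvector normal has arbitrary sign)."""
    sim = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return sim > tau

# Points lying exactly on the z = 0 plane: the normal must be +/- z.
plane = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                  [0.5, 0.3, 0]], dtype=float)
n = estimate_normal(plane)
```

For planar neighborhoods the smallest eigenvalue is (near) zero and its eigenvector is perpendicular to the plane, which is exactly the property the matching constraint of eq. (8) exploits.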
where $\tau$ is a constant; the closer the cosine similarity is to 1, the smaller the angle between the two corresponding normal vectors. After abnormal matches are rejected, the matching constraint is strengthened: if points $P_i, P_{i+1}$ are neighbors in $C_{scene}$, then their respective matches $Q_j', Q_l'$ in $C_i'$ must also be neighbors, satisfying:
$\left|\dfrac{\|P_i - P_{i+1}\| - \|Q_j' - Q_l'\|}{\|P_i - P_{i+1}\| + \|Q_j' - Q_l'\|}\right| < \zeta$ (9)
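The two rejection rules, the statistical distance gate of eq. (6) and the rigidity constraint of eq. (9), can be sketched on synthetic correspondences. The threshold constants $\gamma$ and $\zeta$ below are illustrative choices, not values from the patent.

```python
import numpy as np

def distance_gate(P, Q, gamma=1.0):
    """Eq. (6): keep a correspondence only if its distance is below the
    mean matched distance plus gamma standard deviations."""
    d = np.linalg.norm(P - Q, axis=1)
    return d < d.mean() + gamma * d.std()

def rigidity_ok(P_i, P_i1, Q_j, Q_l, zeta=0.1):
    """Eq. (9): neighboring points must stay neighbors after matching;
    the relative change of the pairwise distance must be below zeta."""
    a = np.linalg.norm(P_i - P_i1)
    b = np.linalg.norm(Q_j - Q_l)
    return abs(a - b) / (a + b) < zeta

# Synthetic matches: four good pairs plus one gross outlier.
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 2, 0]], float)
Q = P + 0.01                      # nearly perfect matches...
Q[4] = [9, 9, 9]                  # ...except the last pair, an outlier
keep = distance_gate(P, Q)
```

The gate keeps the four consistent pairs and drops the outlier; the rigidity test then accepts pairs whose mutual distances are preserved and rejects those distorted by the outlier match.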
Fig. 5 is a schematic diagram of the space-based point cloud fusion technique. Color-depth vision camera 1 samples depth maps; the raw depth frame data obtained from camera 1 are converted by the SDK into floating-point data in units of meters, and these data are then refined. Using the camera's pose, the floating-point data are converted into point-cloud data consistent with the orientation of camera 1, and the resulting clouds are fused with TSDF. A TSDF cubic grid represents the 3D space: each voxel in the cube stores its signed distance to the object model surface, with opposite signs on the occluded side and the visible side of the surface, and the zero crossing is exactly a point on the surface. The left side of Fig. 5 shows an object model in the cube; when new data are added to the model:
$D_{i+1}(x) = \dfrac{W_i(x)D_i(x) + w_{i+1}(x)d_{i+1}(x)}{W_i(x) + w_{i+1}(x)}$ (10)
$W_{i+1}(x) = W_i(x) + w_{i+1}(x)$ (11)
where $D_i(x)$ and $W_i(x)$ are the accumulated voxel distance and weight of the previous $i$ images, and $d_{i+1}(x)$ and $w_{i+1}(x)$ are the voxel distance and weight of the new image. The distance values of the same voxel are merged by weight, and the new weight is the sum of the two, giving the result shown on the right side of Fig. 5 after fusion.
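The per-voxel update of eqs. (10)-(11) is a weighted running average, which can be sketched in a few lines (a minimal illustration on a tiny 1-D "grid", not the full volumetric implementation):

```python
import numpy as np

def tsdf_update(D, W, d_new, w_new):
    """Eqs. (10)-(11): weighted running average of per-voxel signed
    distances; the new weight is the sum of old and new weights."""
    D_next = (W * D + w_new * d_new) / (W + w_new)
    W_next = W + w_new
    return D_next, W_next

# Two voxels with equal weights: the update reduces to a plain average.
D = np.array([0.4, -0.2]); W = np.array([1.0, 1.0])
d = np.array([0.2, -0.4]); w = np.array([1.0, 1.0])
D1, W1 = tsdf_update(D, W, d, w)
```

Repeated application averages the noisy per-frame distances, so the zero crossing of the accumulated field converges to the object surface.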
The above is only a preferred embodiment of the invention; the invention is not limited to the above embodiment, and any other improvements and changes that those skilled in the art directly derive or conceive without departing from the spirit and concept of the invention shall be deemed to be included within the scope of protection of the invention.

Claims (4)

1. A motion-mechanism-based object modeling system, characterized in that it comprises a color-depth vision camera (1), a motion mechanism (2) with at least three degrees of freedom, an image data acquisition and transmission device (3), a master control computer (4) running the vision-software system environment, and an external display (6); said motion mechanism (2) is fixed on a workbench and can move around the object; said color-depth vision camera (1) is fixed at the end of motion mechanism (2), so that the mechanism's motion carries it around the target object to collect color-depth images; said color-depth vision camera (1) is connected to the image data acquisition card by a data line; the acquisition card of said image data acquisition and transmission device (3) sends the collected data over the bus of master control computer (4), which processes them; the results are shown on the external display (6) of master control computer (4).
2. The motion-mechanism-based object modeling system according to claim 1, characterized in that said master control computer (4) comprises the following modules: an online trajectory planning and control module for motion mechanism (2), an image capture module, an image processing module, a 3D data fusion module, and a display module; said online planning and control module plans and controls the trajectory of motion mechanism (2) online according to the model accuracy; said image capture module collects RGB and depth images of the object; said image processing module matches feature points across RGB images, computes the 3D coordinates of corresponding points, and converts depth information into a 3D point cloud; said 3D data fusion module unifies all 3D points in one coordinate frame according to the camera poses and constructs the 3D model of the target object; said display module realizes the 3D display of the object.
3. The motion-mechanism-based object modeling system according to claim 1, characterized in that the 3D object information is divided into 3D point-cloud information and SIFT feature information: the point-cloud information gives a 3D description of the object from its contour, the SIFT feature information gives a 3D description from its fine details, and combining the global information with the detail information yields the complete 3D information of the object to be modeled.
4. An operating method of the motion-mechanism-based object modeling system according to any one of claims 1 to 3, characterized by the following concrete steps:
1) build the hardware system and semi-automatically calibrate the intrinsic and extrinsic parameters of color-depth vision camera (1);
2) motion mechanism (2) moves the camera to specified positions along the planned trajectory, and the target object is photographed;
3) the color-depth vision camera (1) carried at the end of motion mechanism (2) captures RGB and depth images of the object to be modeled; combined with the motion information and camera pose parameters of mechanism (2), the object's 3D spatial pose information is recovered;
4) multi-frame information is stitched: the obtained 3D point clouds and object poses are combined to fuse the point clouds until the trajectory sequence is completed or a complete model is obtained, giving the 3D model of the object;
5) the resulting SIFT 3D point cloud and dense 3D depth point cloud of the object are output to the external display (6).
CN201510609138.6A 2015-09-22 2015-09-22 Motion-mechanism-based object modeling system Active CN105225269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510609138.6A CN105225269B (en) 2015-09-22 2015-09-22 Motion-mechanism-based object modeling system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510609138.6A CN105225269B (en) 2015-09-22 2015-09-22 Motion-mechanism-based object modeling system

Publications (2)

Publication Number Publication Date
CN105225269A true CN105225269A (en) 2016-01-06
CN105225269B CN105225269B (en) 2018-08-17

Family

ID=54994216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510609138.6A Active CN105225269B (en) Motion-mechanism-based object modeling system

Country Status (1)

Country Link
CN (1) CN105225269B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825442A (en) * 2010-04-30 2010-09-08 北京理工大学 Mobile platform-based color laser point cloud imaging system
CN103019024A (en) * 2012-11-29 2013-04-03 浙江大学 System for realtime and accurate observation and analysis of table tennis rotating and system operating method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG Hong et al.: "Self-calibration of a pan-tilt camera based on an accurate model", Robot *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105965891A (en) * 2016-05-13 2016-09-28 佛山市云端容灾信息技术有限公司 Semi-stereo scanning and sampling method and system used for 3D printing
CN105971583A (en) * 2016-06-27 2016-09-28 中国矿业大学(北京) Equipment and method for acquiring holographic model of drilled hole
CN105971583B (en) * 2016-06-27 2023-12-15 中国矿业大学(北京) Device and method for acquiring drilling holographic model
CN112132881A (en) * 2016-12-12 2020-12-25 华为技术有限公司 Method and equipment for acquiring dynamic three-dimensional image
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 Hand-held rapid three-dimensional scanning system and method
CN106997614A (en) * 2017-03-17 2017-08-01 杭州光珀智能科技有限公司 Large-scale scene 3D modeling method and device based on a depth camera
CN107610212A (en) * 2017-07-25 2018-01-19 深圳大学 Scene reconstruction method, device, computer equipment and computer-readable storage medium
CN107610212B (en) * 2017-07-25 2020-05-12 深圳大学 Scene reconstruction method and device, computer equipment and computer storage medium
CN107671857A (en) * 2017-10-11 2018-02-09 上海交通大学 Three-dimensional simulation platform for service robot operation demonstration and algorithm verification
CN110340883A (en) * 2018-04-05 2019-10-18 欧姆龙株式会社 Information processing unit, information processing method and computer readable storage medium
US11426876B2 (en) 2018-04-05 2022-08-30 Omron Corporation Information processing apparatus, information processing method, and program
CN108748137A (en) * 2018-04-11 2018-11-06 陈小龙 Physical object scanning and modeling method and application thereof
CN109003325A (en) * 2018-06-01 2018-12-14 网易(杭州)网络有限公司 Three-dimensional reconstruction method, medium, apparatus, and computing device
CN109003325B (en) * 2018-06-01 2023-08-04 杭州易现先进科技有限公司 Three-dimensional reconstruction method, medium, device and computing equipment
US11436802B2 (en) 2018-06-21 2022-09-06 Huawei Technologies Co., Ltd. Object modeling and movement method and apparatus, and device
CN111640175A (en) * 2018-06-21 2020-09-08 华为技术有限公司 Object modeling movement method, device and equipment
CN109978931A (en) * 2019-04-04 2019-07-05 北京悉见科技有限公司 Three-dimensional scene reconstruction method, device, and storage medium
CN109978931B (en) * 2019-04-04 2021-12-31 中科海微(北京)科技有限公司 Three-dimensional scene reconstruction method and device and storage medium
CN112147637A (en) * 2019-06-28 2020-12-29 杭州海康机器人技术有限公司 Robot repositioning method and device
CN110377033A (en) * 2019-07-08 2019-10-25 浙江大学 Soccer robot recognition, tracking, and grasping method based on RGBD information
CN110340891A (en) * 2019-07-11 2019-10-18 河海大学常州校区 Mechanical arm positioning and grasping system and method based on point cloud template matching technology
CN110340891B (en) * 2019-07-11 2022-05-24 河海大学常州校区 Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
CN111015655B (en) * 2019-12-18 2022-02-22 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
CN111015655A (en) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
CN113119099A (en) * 2019-12-30 2021-07-16 深圳富泰宏精密工业有限公司 Computer device and method for controlling mechanical arm to clamp and place object
CN111360851A (en) * 2020-02-19 2020-07-03 哈尔滨工业大学 Hybrid servo control device and method for robot integrating touch and vision
CN111415388B (en) * 2020-03-17 2023-10-24 Oppo广东移动通信有限公司 Visual positioning method and terminal
CN111415388A (en) * 2020-03-17 2020-07-14 Oppo广东移动通信有限公司 Visual positioning method and terminal
WO2021212844A1 (en) * 2020-04-21 2021-10-28 广东博智林机器人有限公司 Point cloud stitching method and apparatus, and device and storage device
CN113709441A (en) * 2020-05-22 2021-11-26 杭州海康威视数字技术股份有限公司 Scanning device, camera pose determining method and device and electronic device
CN111917978A (en) * 2020-07-21 2020-11-10 北京全路通信信号研究设计院集团有限公司 Adjusting device and method of industrial camera and shooting device
CN111882661A (en) * 2020-07-23 2020-11-03 清华大学 Method for reconstructing three-dimensional scene of video
WO2022040920A1 (en) * 2020-08-25 2022-03-03 南京翱翔智能制造科技有限公司 Digital-twin-based ar interactive system and method
CN112051777B (en) * 2020-09-14 2022-12-30 南京凯正电子有限公司 Intelligent control alternating current servo system
CN112051777A (en) * 2020-09-14 2020-12-08 南京凯正电子有限公司 Intelligent control alternating current servo system

Also Published As

Publication number Publication date
CN105225269B (en) 2018-08-17

Similar Documents

Publication Publication Date Title
CN105225269A (en) Object modelling system based on motion
CN111968129B (en) Simultaneous localization and mapping system and method with semantic perception
CN109461180B (en) Three-dimensional scene reconstruction method based on deep learning
CN110189399B (en) Indoor three-dimensional layout reconstruction method and system
CN108537876A (en) Three-dimensional reconstruction method, device, equipment, and storage medium based on a depth camera
CN101581575B (en) Three-dimensional reconstruction method based on laser and camera data fusion
WO2019219013A1 (en) Three-dimensional reconstruction method and system for joint optimization of human body posture model and appearance model
CN111045017A (en) Method for constructing transformer substation map of inspection robot by fusing laser and vision
Se et al. Vision based modeling and localization for planetary exploration rovers
CN110992487B (en) Rapid three-dimensional map reconstruction device and reconstruction method for hand-held airplane fuel tank
CN105844696A (en) Image positioning method and device based on ray model three-dimensional reconstruction
Shi et al. Calibrcnn: Calibrating camera and lidar by recurrent convolutional neural network and geometric constraints
CN103247075A (en) Variational mechanism-based indoor scene three-dimensional reconstruction method
CN112907631B (en) Multi-RGB camera real-time human body motion capture system introducing feedback mechanism
CN107767424A (en) Calibration method of a multi-camera system, multi-camera system, and terminal device
Ruf et al. Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
CN106203429A (en) Occluded target detection method under a complex background based on binocular stereo vision
Tykkälä et al. A dense structure model for image based stereo SLAM
Kurz et al. Bundle adjustment for stereoscopic 3d
Hou et al. Octree-based approach for real-time 3d indoor mapping using rgb-d video data
CN117152829A (en) Industrial boxing action recognition method of multi-view self-adaptive skeleton network
Wang et al. Automated mosaicking of UAV images based on SFM method
Yang et al. A review of visual odometry in SLAM techniques
Zhang et al. Point cloud registration with 2D and 3D fusion information on mobile robot integrated vision system
Chen et al. Multi-robot point cloud map fusion algorithm based on visual SLAM

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant