CN205451195U - Real-time three-dimensional point cloud reconstruction system based on multiple cameras - Google Patents

Real-time three-dimensional point cloud reconstruction system based on multiple cameras

Info

Publication number
CN205451195U
CN205451195U (application CN201620172757.3U)
Authority
CN
China
Prior art keywords
dimensional
camera
video camera
module
reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201620172757.3U
Other languages
Chinese (zh)
Inventor
丁晓华
张攭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Eagle Eye Online Electronics Technology Co ltd
Original Assignee
Shenzhen Eagle Eye Online Electronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Eagle Eye Online Electronics Technology Co ltd filed Critical Shenzhen Eagle Eye Online Electronics Technology Co ltd
Priority to CN201620172757.3U priority Critical patent/CN205451195U/en
Application granted granted Critical
Publication of CN205451195U publication Critical patent/CN205451195U/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The utility model provides a real-time three-dimensional point cloud reconstruction system based on multiple cameras. It comprises a closed three-dimensional cubic space with a static, solid-color background; a plurality of cameras and a plurality of lamp sources fixedly mounted around the cubic space; and a computer connected to the cameras. The computer includes: a camera calibration module for determining the spatial orientation and intrinsic parameters of each camera; a projection look-up table calculation module for building the projection look-up table; an object contour segmentation module for segmenting the object contour from the images collected by the cameras; and a three-dimensional reconstruction module for obtaining dynamic object reconstruction data from the camera calibration information, the three-dimensional positions of the captured grain blocks, and the projection look-up table. The utility model achieves higher reconstruction accuracy and efficiency, is generally applicable, and is computationally simpler.

Description

A real-time three-dimensional point cloud reconstruction system based on multiple cameras
Technical field
This utility model relates to the field of three-dimensional information technology, and in particular to a real-time three-dimensional point cloud reconstruction system based on multiple cameras.
Background technology
Three-dimensional reconstruction is a common problem in fields such as computer vision and computer graphics. Three-dimensional object reconstruction refers to acquiring 2D images or 2.5D depth data of a spatial three-dimensional object with sensing equipment and recovering the three-dimensional structure of the object from them. Sensing types include optical imaging, laser scanning, thermal imaging, and so on. Three-dimensional reconstruction has many applications in computer-aided design, 3D animation/games, scientific computing, virtual reality, industrial measurement, and other areas.
At present, three kinds of three-dimensional reconstruction equipment or methods are common on the market. The first is based on laser scanning, which relies on the principle of laser ranging; such equipment typically comprises a high-speed precision laser range finder, a digital camera, and auxiliary equipment that guides the laser motion. Laser scanning offers high precision and speed, but the equipment is expensive and bulky. The second is based on stereoscopic vision: two or more cameras photograph the object from different angles, the acquired image data is analyzed, and the spatial position of each three-dimensional point of the object is calculated, yielding the reconstruction data. The key difficulty of this kind of method is solving computational problems such as high-precision camera calibration, feature extraction, and stereo matching; only with accurate matching results can the object's three-dimensional point cloud be recovered accurately. The third is based on a single camera plus auxiliary information: auxiliary information such as active light spots is projected onto the object, and the object's three-dimensional position is calculated by analyzing changes such as the deformation of the texture spots; some equipment, for example, uses an infrared projector to cast structured spot information onto the object. The disadvantage of this kind of method is that it requires extra auxiliary equipment and has difficulty with rapidly moving objects. In addition, some three-dimensional reconstruction methods do not segment the object contour cleanly, obtain the object contour and the object's appearance color information inaccurately, are computationally intensive, and involve complicated steps.
Utility model content
To address the above technical problems, this utility model provides a real-time three-dimensional point cloud reconstruction system based on multiple cameras that is more accurate, more efficient, generally applicable, and computationally simpler.
The technical solution adopted by the utility model is as follows: a real-time three-dimensional point cloud reconstruction system based on multiple cameras, comprising a closed three-dimensional space with a static, solid-color background, several cameras and several lamp sources fixedly mounted around the space, and a computer, wherein the cameras are connected to the computer, and the computer includes:
a camera calibration module, for determining the spatial orientation and intrinsic parameters of each camera;
a projection look-up table calculation module, for building the projection look-up table;
an object contour segmentation module, for segmenting the object contour from the images collected by the cameras;
a three-dimensional reconstruction module, for obtaining dynamic object reconstruction data from the camera calibration information, the three-dimensional positions of the capture-space grain blocks, and the projection look-up table.
Preferably, the system also includes a color reconstruction module for rebuilding the object's appearance color from the color information of the object contour after the object reconstruction data is obtained.
Preferably, there are at least 10 cameras, which together can cover the capture space for imaging and synchronously acquire color image data at a resolution of no less than 640*480 and a frame rate of no less than 24 frames per second.
The reconstruction accuracy of the utility model can be controlled autonomously, the reconstruction efficiency is high, the computation is simpler and well suited to parallel computing, and the reconstruction is more faithful.
Brief description of the drawings
Fig. 1 is a flow chart of the operating steps of the utility model.
Fig. 2 is a schematic diagram of the multi-camera installation and the three-dimensional space structure.
Fig. 3 is a schematic diagram of the single-camera calibration board and its image acquisition.
Fig. 4 is a schematic diagram of the multi-camera system calibration board and its image acquisition.
Fig. 5 is a schematic diagram of dividing the three-dimensional space into grain blocks.
Fig. 6 is a schematic diagram of object projection and projection look-up table creation.
Fig. 7 shows example three-dimensional object reconstruction results.
Fig. 8 is the system structure diagram of the utility model.
Detailed description of the invention
The utility model is described in detail below with reference to the accompanying drawings. It should be understood that what follows only introduces specific embodiments of the utility model and does not limit its scope of protection.
A real-time three-dimensional point cloud reconstruction system based on multiple cameras, as shown in Fig. 8, comprises a closed three-dimensional space 1 with a static, solid-color background, several cameras 2 and several lamp sources 3 fixedly mounted around the space 1, and a computer 4, wherein the cameras 2 are connected to the computer 4. The computer 4 includes: a camera calibration module 41, for determining the spatial orientation and intrinsic parameters of each camera 2; a projection look-up table calculation module 42, for building the projection look-up table; an object contour segmentation module 43, for segmenting the object contour from the images collected by the cameras 2; a three-dimensional reconstruction module 44, for obtaining dynamic object reconstruction data from the camera calibration information, the three-dimensional positions of the capture-space grain blocks, and the projection look-up table; and a color reconstruction module 45, for rebuilding the object's appearance color from the color information of the object contour after the object reconstruction data is obtained.
The operating method is as follows, with steps as shown in Fig. 1:
1. Build the multi-camera synchronous acquisition system: as shown in Fig. 2, mark off a suitably sized three-dimensional space with equal length, width, and height in a sufficiently large room. This space is where the subject is photographed; that is, the subject may only move within it. For convenience of explanation, it is called the capture space. Fix a number of cameras at appropriate positions around the capture space, preferably no fewer than 10. All cameras together can cover the capture space for imaging and synchronously acquire color image data at a resolution of no less than 640*480 and a frame rate of no less than 24 frames per second. So that the object reconstruction is not disturbed by other dynamic objects, the background must be static: the capture space can be surrounded with a light-permeable, solid-color covering, laying out a closed "performance stage". Lamp sources are arranged at appropriate positions and orientations around the "stage" and controlled so that the illumination on the stage is uniform. All cameras are connected to one computer through USB 3.0 or FireWire interfaces.
2. Calibrate the multi-camera system with the camera calibration module: as shown in Figs. 3-4, multi-camera calibration determines the spatial orientation and intrinsic parameters of each camera. Once the calibration information of each camera is known, the relation between a spatial point and its two-dimensional pixel coordinates on each camera's image plane can be determined. The multi-camera system is calibrated in the following three steps: (1) calibrate the intrinsic parameters of each camera with a conventional camera calibration method; the intrinsic parameters include the camera focal length, the camera principal point coordinates, and the image distortion parameters; (2) tile a calibration grid board of suitable size on the floor of the capture space so that every camera can image it; (3) define a world coordinate system C_w at one corner of the calibration grid and have each camera photograph the board. Given the calibrated intrinsic parameters of each camera, the position coordinates of each grid point of the board in the world coordinate system C_w, and the pixel coordinates of each grid point in the board image acquired by each camera, the extrinsic parameters of each camera, that is, its orientation information, can be computed by pose estimation; any existing pose estimation method can be used.
3. Build the projection look-up table with the projection look-up table calculation module: as shown in Fig. 5, evenly divide the capture space into small stereo blocks of identical size along each dimension. If the side length of the capture space is L and each dimension is evenly divided into A sections, the number N_v of small stereo blocks is N_v = A × A × A, and each small stereo block has size (L/A) × (L/A) × (L/A); for ease of exact division, A can be taken as a number that divides L evenly. For convenience of the following explanation, such a small stereo block is called a grain block, the constant N_v is called the division resolution, and d = L/A is called the reconstruction resolution. The center position coordinates of each grain block are obtained in the world coordinate system C_w; with the intrinsic and extrinsic parameters of the previously calibrated cameras, the pixel coordinates of each grain block center on each camera's image plane can then be obtained from the perspective projection imaging model. Suppose P_i = (X_i, Y_i, Z_i) is a three-dimensional point in space and p_i^k is the imaging point of P_i on camera k; they satisfy the following perspective projection relation:

λ · p_i^k = K_k · [R_k | t_k] · (X_i, Y_i, Z_i, 1)^T,
where p_i^k is written in homogeneous pixel coordinates, λ is a scale factor, and K_k, R_k, t_k are the intrinsic matrix and the spatial pose obtained for camera k by calibration. If the number of cameras is N_c, each of the N_v grain block centers yields N_c corresponding projected pixel coordinates. Conversely, the set of projection points of all grain block centers of the capture space on each camera's image plane can be obtained from the perspective projection imaging model, and for each projection point in the set on each image plane, the corresponding set of capture-space grain blocks can be found, namely the blocks whose centers all project to that same point. For example, as shown in Fig. 6, for a given image plane I_i (i = 1, 2, ..., N_c), the perspective projection imaging model yields all projection pixels of the N_v grain blocks on I_i. Let these projection points form a set whose elements p_ij denote pixel coordinates; then for each p_ij the corresponding grain block set can be found, each block in the set having projection coordinate p_ij on I_i. To save memory, the set is stored as the sequence numbers of the corresponding grain blocks among all N_v grain blocks of the capture space. Thus an invariable projection look-up table can be built for each image plane I_i; through this table, the set of grain block sequence numbers corresponding to each image point p_ij, that is, the blocks whose projection is p_ij, can be found quickly.
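The table construction above can be sketched in a few lines. This is a minimal illustration rather than the patent's implementation: the intrinsic matrix K, pose (R, t), cube size, and image size below are made-up toy values, and lens distortion is ignored.

```python
# Divide an L x L x L capture space into A^3 grain blocks, project each
# block centre into one camera via lambda*p = K*(R*P + t), and invert the
# mapping so each pixel lists the blocks that project onto it.
import numpy as np

def build_lookup(K, R, t, L=2.0, A=4, width=64, height=64):
    d = L / A                      # reconstruction resolution (block edge)
    # block-centre coordinates in the world coordinate system C_w
    centers = np.array([[(i + 0.5) * d, (j + 0.5) * d, (k + 0.5) * d]
                        for i in range(A) for j in range(A) for k in range(A)])
    cam = centers @ R.T + t        # world frame -> camera frame
    proj = cam @ K.T               # apply intrinsics
    px = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)  # perspective divide
    table = {}                     # pixel (u, v) -> list of block numbers
    for n, (u, v) in enumerate(px):
        if 0 <= u < width and 0 <= v < height:
            table.setdefault((u, v), []).append(n)
    return table

K = np.array([[50.0, 0.0, 32.0], [0.0, 50.0, 32.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([-1.0, -1.0, 3.0])   # camera looking at the cube
table = build_lookup(K, R, t)
```

One such table is built per camera and never changes, which is what makes the per-frame reconstruction cheap.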
4. Segment the object contour with the object contour segmentation module: under the shooting environment above, the foreground of each acquired image is the subject. To segment the object contour from the images, each camera first shoots a group of sequential images with no target object present, and the average image of this sequence is taken as the background image. Suppose at time t the multi-camera synchronous acquisition yields N_c images; for convenience of explanation, these N_c images corresponding to the same time t are called the t multi-view, with the image corresponding to the i-th camera denoted I^i_t. If the background image corresponding to the i-th camera is B^i, the object contour image F^i_t can be obtained by a simple differencing technique. Concretely, suppose the RGB channel images of B^i are B^i_R, B^i_G, B^i_B; the RGB channel images of I^i_t are I^i_R, I^i_G, I^i_B; and the RGB channel images of the segmented object contour image F^i_t are F^i_R, F^i_G, F^i_B. With pixel coordinates denoted (x, y), the segmented image is obtained by the following rule: for each channel c in {R, G, B}, F^i_c(x, y) = I^i_c(x, y) if |I^i_c(x, y) − B^i_c(x, y)| exceeds a given threshold, and F^i_c(x, y) = −1 otherwise.
If F^i_R(x, y), F^i_G(x, y), and F^i_B(x, y) are not simultaneously −1, the pixel (x, y) is judged to belong to the foreground object contour. The above method yields the object contour pixel set SP^i = {sp^i_j}, where each set element sp^i_j is a vector containing the pixel coordinate (x_j, y_j) and the corresponding pixel color value (r_j, g_j, b_j). With this method, the area covered by the object can be cleanly segmented out of I^i_t, together with the color information of the object area. Because a static, simple background and uniform illumination are used, the method can acquire an accurate object contour and the object's appearance color information in real time. Besides the above method, other more complex foreground segmentation methods, such as background subtraction, can also be adopted.
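A minimal sketch of the differencing rule, assuming a fixed per-channel threshold T (the patent does not specify the threshold mechanism, so T and the array shapes here are illustrative):

```python
# A pixel is kept as foreground if it differs from the static background by
# more than T in at least one RGB channel, i.e. its channel values are not
# all the sentinel -1 after differencing.
import numpy as np

def segment(frame, background, T=30):
    """frame, background: (H, W, 3) uint8 arrays. Returns the contour pixel
    set SP as a list of (x, y, r, g, b) tuples, as described above."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    fg = (diff > T).any(axis=2)            # not all channels are background
    ys, xs = np.nonzero(fg)
    return [(x, y, *frame[y, x]) for y, x in zip(ys, xs)]

bg = np.full((4, 4, 3), 100, np.uint8)     # uniform solid-color background
img = bg.copy()
img[1, 2] = (200, 90, 100)                 # one pixel differs strongly in R
pixels = segment(img, bg)
```

With a static solid-color background, this per-pixel test is enough; a textured background would need the more complex methods mentioned above.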
5. Perform three-dimensional reconstruction with the three-dimensional reconstruction module: after the object contour image data of the t multi-view is obtained, the camera calibration information, the three-dimensional positions of the capture-space grain blocks, and the projection look-up table can be used to reconstruct the subject at time t quickly and accurately. Performing the reconstruction at each moment yields dynamic object reconstruction data. The concrete steps of the three-dimensional reconstruction are as follows:
1) For each of the N_v grain blocks contained in the capture space, define an N_c-bit binary string state variable and initialize every bit of each state variable to the 0 state. For example, assuming N_c = 10, the initial value of each block's state variable is defined as 0000000000.
2) Scan each segmented object contour image F^i_t as follows: through the projection look-up table corresponding to this camera, for each pixel p_ij in the set of pixels belonging to the foreground object contour area, find the corresponding set of grain block sequence numbers SV_ij = {v_ijk | k = 1, 2, ..., K_ij}, where K_ij is the number of set elements and v_ijk is the sequence number of the k-th grain block among all grain blocks of the capture space, i.e. v_ijk ∈ {1, 2, ..., N_v}. According to SV_ij, update the state variable of the grain block corresponding to each sequence number v_ijk by setting its i-th binary bit to 1, indicating that the projection of this block on the image plane of the i-th camera lies within the corresponding object contour region.
3) After all object contour images of the t multi-view have been scanned by the method of 2), only the grain blocks whose state variable has every binary bit equal to 1 are grain blocks on the object. For example, for N_c = 10, only the blocks with state variable 1111111111 are on the object. For convenience of explanation, such a grain block is called an object grain block. Representing each reconstruction point by the center three-dimensional point of the corresponding object grain block yields the object's reconstructed three-dimensional point cloud data; for convenience of explanation, a three-dimensional point of this cloud is called a body point.
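Sub-steps 1)-3) amount to silhouette carving with per-block bitmasks, which can be sketched as follows; the camera count, block count, look-up tables, and silhouette pixels here are toy stand-ins, not values from the patent:

```python
# Each grain block carries an N_c-bit state variable. Scanning camera i sets
# bit i for every block whose projection falls inside that camera's
# silhouette; blocks with all bits set are kept as object grain blocks.
N_c, N_v = 3, 5                          # 3 cameras, 5 grain blocks (toy)
state = [0] * N_v                        # binary-string state variables, all 0

# tables[i]: silhouette pixel of camera i -> its grain-block sequence numbers
tables = [
    {(0, 0): [0, 1], (1, 0): [2]},
    {(2, 2): [0, 2], (3, 2): [1]},
    {(5, 1): [0, 1, 2], (6, 1): [4]},
]
for i, table in enumerate(tables):
    for pixel, blocks in table.items():  # every pixel here is foreground
        for n in blocks:
            state[n] |= 1 << i           # set bit i of block n

full = (1 << N_c) - 1                    # e.g. 111 for N_c = 3
object_blocks = [n for n in range(N_v) if state[n] == full]
```

Because each camera contributes one independent bit, the scan parallelizes naturally across cameras, which matches the parallel-computation claim above.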
4) Since the three-dimensional reconstruction only needs to rebuild the points on the object surface, the points contained inside the object are meaningless; to improve display and processing speed, the internal points can be removed. The following method removes them quickly: for each reconstruction point, judge whether the points in its 6-neighborhood (up, down, left, right, front, back) are all reconstruction points. If all neighborhood points are reconstruction points, this point is an internal point and can be rejected from the reconstruction point set. This finally gives the subject's three-dimensional reconstruction point cloud at time t; a reconstruction result is shown in Fig. 7.
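The 6-neighborhood interior test can be sketched on a set of voxel indices; a solid 3x3x3 cube of grain blocks serves as a check, since only its center block is interior:

```python
# A reconstructed block is interior (and removable) iff all six axis
# neighbours are also reconstructed blocks; everything else is surface.
def remove_interior(points):
    pts = set(points)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surface = {p for p in pts
               if any((p[0] + dx, p[1] + dy, p[2] + dz) not in pts
                      for dx, dy, dz in offsets)}
    return surface

cube = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
shell = remove_interior(cube)              # 27 blocks -> 26 surface blocks
```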
6. Finally, color reconstruction: after the object's three-dimensional reconstruction point cloud data is obtained, the color information of the object contour can be used to rebuild the object's appearance. Appearance reconstruction here means recovering the color information of the object's reconstruction point cloud; the concrete method is as follows:
First, define a four-dimensional vector C_n = (r, g, b, m) for each reconstructed grain block (n = 1, 2, ..., N_shape, where N_shape is the number of object grain blocks), and initialize the vector elements to 0;
Secondly, scan the pixel set of each foreground object contour area in turn: through the projection look-up table, find the grain block set SV_ij = {v_ijk | k = 1, 2, ..., K_ij} corresponding to each pixel p_ij, and pick out the blocks that belong to the reconstructed object grain blocks. The depth value of each such object grain block on the current camera's image plane can be calculated from the perspective projection model, and the reconstructed object grain block nearest to the camera image plane is determined. The color information (r_ij, g_ij, b_ij) of the pixel p_ij is then recorded into the four-dimensional vector C_n = (r, g, b, m) corresponding to that nearest block by updating C_n to (r + r_ij, g + g_ij, b + b_ij, m + 1);
Finally, compute the color value of each object grain block: if the m value in the block's corresponding four-dimensional vector is not 0, the color data of the block is computed as (r/m, g/m, b/m); if m is 0, the color is obtained by interpolation from the color values of the surrounding adjacent grain blocks. The result is color three-dimensional point cloud data containing the color information corresponding to the object's appearance.
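The accumulate-then-average color rule can be sketched as follows; the pixel-to-nearest-block assignments are given directly rather than computed from depths, so the data here is illustrative:

```python
# Each object grain block accumulates a four-vector C_n = (r, g, b, m);
# every contour pixel adds its colour to the nearest block along its ray,
# and the final colour is the mean (r/m, g/m, b/m). Blocks with m == 0 are
# left for neighbour interpolation.
C = {n: [0, 0, 0, 0] for n in range(3)}      # C_n = (r, g, b, m)

# (nearest object block, pixel colour) pairs, as produced by the depth test
observations = [(0, (200, 10, 10)), (0, (100, 20, 10)), (2, (0, 0, 250))]
for n, (r, g, b) in observations:
    C[n][0] += r; C[n][1] += g; C[n][2] += b; C[n][3] += 1

colors = {n: (v[0] / v[3], v[1] / v[3], v[2] / v[3]) if v[3] else None
          for n, v in C.items()}             # None -> interpolate later
```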
To improve reconstruction efficiency, the depth information of each block can be recorded when the projection look-up table is created, and the grain block set corresponding to each contour pixel can then be stored sorted by depth, avoiding repeating the depth calculation in the step above.
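Under the assumption that each block's camera-frame depth is recorded at table-creation time, the per-pixel block lists can be pre-sorted once, for example:

```python
# Sort each pixel's grain-block list by camera-frame depth when the look-up
# table is built, so the nearest visible block can be read off at index 0
# without recomputing depths per frame. All values here are toy inputs.
blocks_depth = {0: 4.2, 1: 3.1, 2: 5.0}          # block -> depth (one camera)
pixel_blocks = {(10, 7): [0, 1, 2], (11, 7): [2, 0]}
sorted_table = {p: sorted(ns, key=blocks_depth.get)
                for p, ns in pixel_blocks.items()}
```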
The embodiment above only introduces a specific implementation of the utility model and does not limit its scope of protection. Those skilled in the art may make some modifications under the inspiration of this embodiment; therefore, all equivalent changes or modifications made according to the claims of the utility model fall within the scope of its patent claims.

Claims (3)

1. A real-time three-dimensional point cloud reconstruction system based on multiple cameras, characterized in that it comprises a closed three-dimensional space with a static, solid-color background, several cameras and several lamp sources fixedly mounted around the space, and a computer, wherein the cameras are connected to the computer, and the computer includes: a camera calibration module, for determining the spatial orientation and intrinsic parameters of each camera;
a projection look-up table calculation module, for building the projection look-up table;
an object contour segmentation module, for segmenting the object contour from the images collected by the cameras;
a three-dimensional reconstruction module, for obtaining dynamic object reconstruction data from the camera calibration information, the three-dimensional positions of the capture-space grain blocks, and the projection look-up table.
2. The real-time three-dimensional point cloud reconstruction system based on multiple cameras according to claim 1, characterized in that it also includes a color reconstruction module for rebuilding the object's appearance color from the color information of the object contour after the object reconstruction data is obtained.
3. The real-time three-dimensional point cloud reconstruction system based on multiple cameras according to claim 1, characterized in that there are at least 10 cameras, which together can cover the capture space for imaging and synchronously acquire color image data at a resolution of no less than 640*480 and a frame rate of no less than 24 frames per second.
CN201620172757.3U 2016-03-07 2016-03-07 Real-time three-dimensional point cloud reconstruction system based on multiple cameras Expired - Fee Related CN205451195U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201620172757.3U CN205451195U (en) 2016-03-07 2016-03-07 Real-time three-dimensional point cloud reconstruction system based on multiple cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201620172757.3U CN205451195U (en) 2016-03-07 2016-03-07 Real-time three-dimensional point cloud reconstruction system based on multiple cameras

Publications (1)

Publication Number Publication Date
CN205451195U true CN205451195U (en) 2016-08-10

Family

ID=56605935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201620172757.3U Expired - Fee Related CN205451195U (en) 2016-03-07 2016-03-07 Real-time three-dimensional point cloud reconstruction system based on multiple cameras

Country Status (1)

Country Link
CN (1) CN205451195U (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107170037A (en) * 2016-03-07 2017-09-15 深圳市鹰眼在线电子科技有限公司 Real-time three-dimensional point cloud reconstruction method and system based on multiple cameras
WO2018045532A1 (en) * 2016-09-08 2018-03-15 深圳市大富网络技术有限公司 Method for generating square animation and related device
CN108140252A (en) * 2016-09-08 2018-06-08 深圳市大富网络技术有限公司 Square animation generation method and related device
CN106998430A (en) * 2017-04-28 2017-08-01 北京瑞盖科技股份有限公司 Multi-camera-based 360-degree video playback method
CN106998430B (en) * 2017-04-28 2020-07-21 北京瑞盖科技股份有限公司 Multi-camera-based 360-degree video playback method
CN111915671A (en) * 2020-07-15 2020-11-10 安徽清新互联信息科技有限公司 Personnel trajectory tracking method and system for working area
CN113177949A (en) * 2021-04-16 2021-07-27 中南大学 Large-size rock particle feature identification method and device
CN113177949B (en) * 2021-04-16 2023-09-01 中南大学 Large-size rock particle feature recognition method and device
CN113376953A (en) * 2021-05-20 2021-09-10 达闼机器人有限公司 Object projection reconstruction system
CN113376953B (en) * 2021-05-20 2022-09-27 达闼机器人股份有限公司 Object projection reconstruction system


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160810

Termination date: 20210307
