CN106210700A - Three-dimensional image acquisition system, display system, and applicable intelligent terminal - Google Patents

Three-dimensional image acquisition system, display system, and applicable intelligent terminal

Info

Publication number
CN106210700A
CN106210700A (application CN201610552813.0A)
Authority
CN
China
Prior art keywords
camera head
image
pixel
view
voxel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610552813.0A
Other languages
Chinese (zh)
Other versions
CN106210700B (en)
Inventor
于炀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhangjiagang Kangdexin Optronics Material Co Ltd
Original Assignee
SHANGHAI WEI ZHOU MICROELECTRONICS TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI WEI ZHOU MICROELECTRONICS TECHNOLOGY Co Ltd
Priority application: CN201610552813.0A
Publication of application CN106210700A
Application granted
Publication of granted patent CN106210700B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N13/207 Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/296 Synchronisation thereof; Control thereof
    • H04N13/30 Image reproducers
    • H04N13/363 Image reproducers using image projection screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional image acquisition system, a display system, and an intelligent terminal to which they are applicable. The acquisition system includes: at least two first camera devices arranged centrally on a common straight line, and at least one second camera device. The optical axes of the first camera devices point in the same direction and are each perpendicular to the straight line, so that the images they capture contain a common image region. A second camera device, placed on at least one side of the first camera devices, supplements the image regions that the first camera devices fail to capture jointly because of occlusion. An image preprocessing device is connected to every camera device (whether a first or a second camera device) and performs preprocessing oriented to three-dimensional reconstruction on the connected camera devices and/or the captured images. The invention solves the problem that no three-dimensional image can be built for occluded regions in the images captured by the two first camera devices.

Description

Three-dimensional image acquisition system, display system, and applicable intelligent terminal
Technical field
Embodiments of the present invention relate to image processing technology, and in particular to a three-dimensional image acquisition system, a display system, and an intelligent terminal to which they are applicable.
Background technology
A three-dimensional image is obtained by performing three-dimensional reconstruction on two views that share an overlapping image region. During reconstruction, constrained by that overlap, the reconstruction system can build a three-dimensional image only for the overlapping region; for the non-overlapping regions, reconstruction is impossible because the image data is missing. As a result, the middle of the three-dimensional image is stereoscopic while the surrounding area remains two-dimensional, and the stereoscopic effect is poor.
To solve this problem, the prior art shoots multiple images around the object to be displayed in three dimensions and then has a three-dimensional reconstruction system reconstruct from those images. However, with the rise of augmented-reality technology, this shoot-around-then-reconstruct approach cannot meet the image acquisition demands of building three-dimensional images in real time.
There is therefore a need to improve on the existing approach of acquiring views from different angles with two camera devices.
Summary of the invention
The present invention provides a three-dimensional image acquisition system, a display system, and an intelligent terminal to which they are applicable, so that the marginal area of a three-dimensional image is also supplied with image data for stereoscopic display.
In a first aspect, an embodiment of the present invention provides a three-dimensional image acquisition system, including: at least two first camera devices arranged centrally on a common straight line, and at least one second camera device, the optical axes of the first camera devices pointing in the same direction and each perpendicular to the straight line so that the captured images contain a common image region; a second camera device placed on at least one side of the first camera devices, for supplementing the image regions that the first camera devices fail to capture jointly; and an image preprocessing device connected to every camera device (first or second), for performing preprocessing oriented to three-dimensional reconstruction on the connected camera devices and/or the captured images.
In a second aspect, an embodiment of the present invention further provides a three-dimensional image display system, including: an initialization module, for determining the pixel size on a preset screen and the voxel size of the display space in front of the screen, based on the size of the two-dimensional images provided by the acquisition system and the size of the screen; a preprocessing module, for performing left/right-image matching on each pre-paired two-dimensional image pair per viewpoint; an estimation module, for estimating three-dimensional image material from the two paired two-dimensional images supplied by the image preprocessing device, where each piece of three-dimensional image material contains multiple parameter groups and each parameter group includes the projected position on the screen of a physical-space point appearing in one of the two-dimensional images together with the corresponding disparity information; a space reconstruction and processing module, for filling the spatial pixel values of the voxels in the display space according to the spatial information of each camera pair and the corresponding three-dimensional image material; a viewpoint projection processing module, for determining each viewpoint's sub-pixel position within the corresponding pixel based on the given viewpoints; and an interleaving module, for weighting, voxel by voxel, the pixel values of the unoccluded voxels at the same sub-pixel position according to the proportion of the pixel region covered by each unoccluded voxel's projection when the viewpoint's ray is cast onto the pixel region containing the corresponding sub-pixel position, and assigning the result to that sub-pixel position in the pixel region.
In a third aspect, an embodiment of the present invention further provides a three-dimensional image display system, including: an initialization module, for determining the pixel size on a preset screen and the voxel size of the display space in front of the screen, based on the size of the two-dimensional images provided by the acquisition system and the size of the screen; a preprocessing module, for performing left/right-image matching on each pre-paired two-dimensional image pair per viewpoint, and for mapping the two-dimensional images captured by each second camera device onto the screen based on the preset deflection angle of that device as provided by the acquisition system, thereby aligning images obtained along different optical axes; an estimation module, for estimating three-dimensional image material from every matched pair of two-dimensional images (including the mapped images), where each piece of three-dimensional image material contains multiple parameter groups and each parameter group includes the projected position on the screen of a physical-space point together with that point's disparity on the screen; a space reconstruction and processing module, for filling the spatial pixel values of the voxels in the display space according to the spatial information of each camera pair and the corresponding three-dimensional image material, and for performing spatial filtering and related processing on the reconstructed three-dimensional space; a viewpoint projection processing module, for determining each viewpoint's sub-pixel position within the corresponding pixel based on the given viewpoints; and an interleaving module, for weighting, voxel by voxel, the pixel values of the unoccluded voxels at the same sub-pixel position according to the proportion of the pixel region covered by each unoccluded voxel's projection when the viewpoint's ray is cast onto the pixel region containing the corresponding sub-pixel position, and assigning the result to that sub-pixel position in the pixel region.
In a fourth aspect, an embodiment of the present invention further provides an intelligent terminal, including the three-dimensional image acquisition system described above and a matching three-dimensional image display system.
By placing a second camera device beside the first camera devices, the present invention solves the problem that no three-dimensional image can be built for the non-overlapping regions in the images captured by the two first camera devices. At the same time, this scheme not only supplements the overlapping image region captured by the two first camera devices but can also extend the overall image range they capture. In addition, placing all camera devices on a common straight line makes the acquisition system easy to mount on an intelligent terminal (such as a handheld device) and supplies ample three-dimensional image material to a stereoscopic processing device (such as a glasses-free 3D display).
Accompanying drawing explanation
Fig. 1 is a structural diagram of a three-dimensional image acquisition system in Embodiment One of the present invention;
Fig. 2 is a structural diagram of another three-dimensional image acquisition system in Embodiment One of the present invention;
Fig. 3 is a structural diagram of yet another three-dimensional image acquisition system in Embodiment One of the present invention;
Fig. 4 is a structural diagram of the display system in Embodiment Two of the present invention;
Fig. 5 is a diagram of the disparity formed by one pixel in the common image region of two two-dimensional images in Embodiment Two of the present invention;
Fig. 6 is a diagram of the intersection region of two viewpoints in the display space in Embodiment Two of the present invention;
Fig. 7 is a diagram of the intersection region in the display space when two viewpoints project to one pixel region of the screen in Embodiment Two of the present invention;
Fig. 8 is another diagram of the intersection region in the display space when two viewpoints project to one pixel region of the screen in Embodiment Two of the present invention;
Fig. 9 is a diagram of the correspondence between sub-pixel positions and viewpoints in Embodiment Two of the present invention;
Fig. 10 is a projection diagram of the unoccluded voxels and the corresponding pixel region when a viewpoint projects to a screen pixel region in Embodiment Two of the present invention;
Fig. 11 is a diagram of the mapping of a two-dimensional image with a preset deflection angle onto the screen in Embodiment Two of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the present invention rather than the entire structure.
Embodiment one
Figs. 1 and 2 are structural diagrams of the three-dimensional image acquisition systems provided by Embodiment One of the present invention. This embodiment is suited to capturing two-dimensional images for stereoscopic imaging, and the acquisition system may be installed on an intelligent terminal. Specifically, the acquisition system 1 comprises several camera devices, namely first camera devices 11 and second camera devices 12, together with an image preprocessing device 13 connected to each camera device.
There are at least two first camera devices 11; their optical axes point in the same direction and are each perpendicular to the mounting line. Relative to the second camera devices 12, all first camera devices 11 are placed in the middle. For example, with four camera devices in total, the middle two are first camera devices 11 and the outer two are second camera devices 12; with three in total, two adjacent ones are first camera devices 11 and the remaining one is a second camera device 12; with five in total, the middle three, or two of the middle three, are first camera devices 11 and the rest are second camera devices 12.
Preferably, the number of camera devices is a multiple of two and all camera devices are arranged symmetrically, so that symmetric acquisition supplements the image regions that the first camera devices 11 fail to capture jointly.
The two-dimensional images captured by every two first camera devices 11 share a common image region. The second camera device 12, placed on at least one side of the first camera devices 11, supplements the image regions that the first camera devices 11 fail to capture jointly.
When shooting, the first camera devices 11 photograph the common image region within the area their viewing angles jointly cover; the area their viewing angles cannot jointly cover must be supplemented by the viewing angle of a second camera device 12. The downstream image preprocessing device 13 can then perform reconstruction-oriented preprocessing on the connected camera devices and/or the captured images, so that a three-dimensional image processing device connected to the image preprocessing device 13 can build a three-dimensional image from the common image regions captured by the camera devices.
To guarantee viewing-angle overlap between the camera devices, each first camera device 11 is fitted with a standard lens and each second camera device 12 with a wide-angle lens; alternatively, all camera devices are fitted with standard lenses.
The optical axes of all camera devices may point in the same direction, each perpendicular to the mounting line, as shown in Fig. 1; alternatively, the second camera devices 12 are set at a preset deflection angle toward the centre, as shown in Fig. 2.
Depending on the lens characteristics of each camera device, the centre-to-centre distances between all adjacent camera devices may be equal, or the distance between the centres of the first camera devices 11 may differ from the other adjacent centre-to-centre distances. For example, the acquisition system 1 includes first camera devices A1 and A2 with standard lenses, and second camera devices B1 (adjacent to A1) and B2 (adjacent to A2), also with standard lenses, where B1 and B2 are each turned by an angle α toward the axis of symmetry between A1 and A2. The distance between the centres of A1 and A2 is d1, while the centre distance between A1 and B1 and that between A2 and B2 are both d2.
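A minimal sketch of this four-camera layout, under the assumption that positions are expressed in millimetres along the mounting line and deflection angles in degrees; the `Camera` type and the function name are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    x: float        # position along the mounting line (mm), 0 = axis of symmetry
    yaw_deg: float  # optical-axis deflection; 0 = perpendicular to the line

def symmetric_rig(d1: float, d2: float, alpha_deg: float):
    """Layout from the example: A1/A2 centred d1 apart, B1/B2 outside them
    at distance d2, each turned by alpha toward the centre."""
    return [
        Camera("B1", -d1 / 2 - d2, alpha_deg),
        Camera("A1", -d1 / 2, 0.0),
        Camera("A2", d1 / 2, 0.0),
        Camera("B2", d1 / 2 + d2, -alpha_deg),
    ]
```

With d1 = 60, d2 = 40 and α = 10°, the cameras sit at −70, −30, 30 and 70 mm, and only the outer pair is deflected.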
The image preprocessing device 13 performs reconstruction-oriented preprocessing on the connected camera devices and/or the captured images.
To make subsequent pairwise disparity estimation easier, the image preprocessing device 13 may adjust the parameters of each camera device, for example the automatic exposure control, the automatic focus control, and the automatic white balance control. Alternatively, the image preprocessing device 13 filters and white-balances each received image, and then forwards the preprocessed images to the three-dimensional image processing device.
In one alternative, the image preprocessing device 13 includes a synchronization module, image signal processing (ISP) modules, and image cropping modules.
The synchronization module is connected to every ISP module, configures frame synchronization and parameter synchronization for them, and sends synchronization commands to each ISP module. A synchronization command includes, without limitation, a synchronized trigger command and at least one of: unified capture parameters for all camera devices, filtering parameters for each image, and target filtering parameters for each image.
There are as many ISP modules as camera devices; each ISP module connects to one camera device, configures the parameters of that camera device based on the received synchronization command, and/or performs signal processing on the captured image data.
In one case, if all camera devices are of the same model, each ISP module, as instructed by the synchronization command, sends the unified capture parameters to its camera device and obtains the image it captures. If the camera devices are of different models, each ISP module sends the capture parameters assigned to it in the synchronization command to its connected camera device and obtains the captured image. And/or, in yet another case, whether or not the connected camera models are identical, each ISP module may apply signal processing, including denoising and white balancing, to the received image according to the filtering parameters or target filtering parameters given in the synchronization command.
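The branching logic of parameter distribution can be illustrated as follows; the dictionary keys (`model`, `unified`, `per_model`, `params`) are hypothetical stand-ins for the patent's capture parameters, not its actual data structures:

```python
def distribute_params(sync_cmd: dict, cameras: list) -> list:
    """Same camera model everywhere: every camera receives the unified
    capture parameters; mixed models: each camera receives the parameter
    set assigned to its model in the synchronization command."""
    models = {c["model"] for c in cameras}
    if len(models) == 1:
        for c in cameras:
            c["params"] = sync_cmd["unified"]
    else:
        for c in cameras:
            c["params"] = sync_cmd["per_model"][c["model"]]
    return cameras
```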
Each image cropping module is connected to two camera devices whose shots share a common image region, and crops the two captured images based on that common image region.
As shown in Fig. 3, in an acquisition system comprising two first camera devices and two second camera devices, image cropping module R12 connects ISP modules ISP1 and ISP2, module R23 connects ISP2 and ISP3, module R34 connects ISP3 and ISP4, and module R41 connects ISP4 and ISP1. The synchronization module connects ISP1, ISP2, ISP3, and ISP4.
Each image cropping module crops the two received images according to their common image region. Here, the image cropping module can locate the common image region by matching based on contours, image block features, and the like, and then crop the located common image region.
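As a toy stand-in for the contour/feature matching described above, the common region of a horizontally displaced image pair can be found by scanning candidate shifts for the smallest mean absolute difference; a real system would use feature matching, and all names here are illustrative assumptions:

```python
import numpy as np

def common_region_columns(left: np.ndarray, right: np.ndarray, max_shift: int):
    """Find the horizontal shift that best aligns the two single-channel
    images (smallest mean absolute difference over the overlapping columns),
    then return the column ranges of the shared region in each image."""
    h, w = left.shape
    best_shift, best_cost = 0, float("inf")
    for s in range(max_shift + 1):
        a = left[:, s:].astype(int)        # left image minus its first s columns
        b = right[:, :w - s].astype(int)   # right image minus its last s columns
        cost = np.abs(a - b).mean()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    # left keeps columns [best_shift, w); right keeps [0, w - best_shift)
    return (best_shift, w), (0, w - best_shift)
```

For a pair cropped from one wider scene with a 4-column offset, the function recovers that offset and the matching column ranges.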
The three-dimensional image processing device, connected to the image preprocessing device 13, can build the corresponding stereoscopic projection from the common image regions captured by each pair of adjacent camera devices and thereby obtain a three-dimensional image.
The technical scheme of this embodiment solves the problem that no three-dimensional image can be built for the non-overlapping regions in the images captured by two camera devices. Placing all camera devices on a common straight line also makes the acquisition system easy to mount on an intelligent terminal (such as an augmented-reality or virtual-reality helmet or glasses) and supplies ample three-dimensional image material to the stereoscopic processing device.
In addition, by selecting the inter-camera distances, the camera lenses, and the number of camera devices according to the actual design requirements, the system can provide image information better suited to the display system.
Embodiment two
As shown in Fig. 4, this embodiment provides a three-dimensional image display system. The display system may be installed in a separate electronic device and connected to the acquisition system above through an external interface, or integrated with the acquisition system in a single intelligent terminal. This display system addresses the case in which the optical axes of all camera devices are parallel. It composes a three-dimensional image from the common image regions of the received two-dimensional images; an example of such an image is a glasses-free (autostereoscopic) three-dimensional image.
The display system 2 includes: an initialization module 20, a preprocessing module 21, an estimation module 22, a space reconstruction and processing module 23, a viewpoint projection processing module 24, and an interleaving module 25.
The initialization module 20 determines the pixel size on the preset screen and the voxel size of the display space in front of the screen, based on the size of the two-dimensional images provided by the image preprocessing device of the acquisition system of any of the embodiments above and the size of the preset screen.
Here, the sizes of the two-dimensional image and of the screen may be expressed in millimetres, inches, and so on, and the preset screen size may be chosen to suit the design of the intelligent terminal. The size of a pixel region on the screen is p = l/n, where l is the size of the two-dimensional image and n is the screen size in pixels. From the pixel-region size, the initialization module 20 determines the voxel size of the display space in front of the screen; the length and width of a voxel may equal those of a pixel region or be a preset fraction of them. A voxel is the smallest unit making up the display space: the analogue of a pixel on the screen, it may in this embodiment be a unit cube, or be reduced in dimension to a unit rectangle or a unit segment as the computation requires.
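The p = l/n computation and the voxel sizing can be sketched as follows, assuming l is given in millimetres and n is the screen's pixel count; the function name and the `voxel_ratio` parameter are assumptions for illustration:

```python
def init_display_space(image_size_mm: float, screen_pixels: int,
                       voxel_ratio: float = 1.0):
    """Return (pixel_pitch, voxel_edge): pixel pitch p = l / n as in the
    text, and a voxel edge equal to the pitch or a preset fraction of it."""
    pixel_pitch = image_size_mm / screen_pixels   # p = l / n
    voxel_edge = pixel_pitch * voxel_ratio        # voxel matches or scales the pixel
    return pixel_pitch, voxel_edge
```

For instance, a 154 mm image spread over 1540 screen pixels gives a 0.1 mm pixel pitch, and the default ratio makes the voxel edge identical.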
The preprocessing module 21 performs left/right-image matching on each pre-paired two-dimensional image pair per viewpoint.
Specifically, the preprocessing module 21 preprocesses the left and right views of a viewpoint so that they match and become better suited to disparity estimation. One such preprocessing algorithm is histogram matching, whose purpose is to bring the luminance and chrominance of the left and right views into agreement.
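A minimal single-channel histogram-matching sketch of the kind referred to above (NumPy-based; a real system would also match chrominance, channel by channel):

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap the grey levels of `source` so that its cumulative histogram
    matches that of `reference`."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # for each source level, pick the reference level at the same CDF height
    mapped = np.interp(s_cdf, r_cdf, r_vals.astype(float))
    lut = {v: m for v, m in zip(s_vals, mapped)}
    out = np.array([lut[v] for v in source.ravel()]).reshape(source.shape)
    return out.astype(reference.dtype)
```

Matching an image against itself is the identity, and a uniformly darkened left view is pulled back onto the right view's brightness distribution.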
The estimation module 22 estimates three-dimensional image material from every matched pair of two-dimensional images. Each piece of three-dimensional image material contains multiple parameter groups, and each parameter group includes the pixel region onto which a physical-space point projects on the screen and that point's disparity on the screen.
Here, the estimation module 22 takes the two two-dimensional images supplied by each image cropping module of the acquisition system as a matched pair, then uses an estimation algorithm such as the 3DRS algorithm or the Lucas-Kanade algorithm to estimate, for every image pair, the projected positions on the screen (i.e. the pixel regions containing the projection points) and the disparity information. As shown in Fig. 5, for a scene point in the common image region of the two images, the projected positions on the screen are c_r and c_l, and the distance between the two positions is the disparity. The estimation module 22 obtains multiple parameter groups through the estimation algorithm.
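The patent names 3DRS and Lucas-Kanade; as a simpler, hedged illustration of how a disparity |c_l − c_r| can be estimated for one pixel, here is a sum-of-absolute-differences block match along a scanline (not the patent's algorithm):

```python
import numpy as np

def block_match_disparity(left_row: np.ndarray, right_row: np.ndarray,
                          x: int, patch: int = 3, max_d: int = 10) -> int:
    """For the pixel at column x of the left scanline, find the horizontal
    shift d into the right scanline that minimises the sum of absolute
    differences over a (2*patch+1)-wide window; d is the disparity."""
    best_d, best_cost = 0, float("inf")
    target = left_row[x - patch: x + patch + 1]
    for d in range(max_d + 1):
        lo = x - d - patch
        if lo < 0 or x - d + patch + 1 > len(right_row):
            continue                       # window falls outside the image
        cand = right_row[lo: x - d + patch + 1]
        cost = int(np.abs(target.astype(int) - cand.astype(int)).sum())
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

A feature that appears 4 columns earlier in the right view yields a disparity of 4, from which depth can then be triangulated.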
The space reconstruction and processing module 23 fills the spatial pixel values of the voxels in the display space according to the spatial information of each camera pair and the corresponding three-dimensional image material, and filters the reconstructed three-dimensional space.
Here, the space reconstruction and processing module 23 uses the angle relations of similar triangles to compute the three-dimensional model built in the display space when the common image region of the two two-dimensional images is projected onto the screen, finds the voxels that overlap this model, and assigns the pixel value of each pixel of the common image region in one of the two images to the overlapping voxels. The module 23 then filters and adjusts the reconstructed three-dimensional space based on colour, texture, illumination, and the like.
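The similar-triangles relation underlying this back-projection can be sketched for the parallel-axis case, assuming a focal length expressed in pixels and a baseline in millimetres; the function is an illustration of standard stereo triangulation, not a formula quoted from the patent:

```python
def depth_from_disparity(focal_px: float, baseline: float,
                         disparity_px: float) -> float:
    """Similar triangles for parallel optical axes: a point at depth Z
    projects with disparity d = f * B / Z, hence Z = f * B / d.
    Baseline in mm gives depth in mm."""
    if disparity_px <= 0:
        return float("inf")   # zero disparity: point at infinity
    return focal_px * baseline / disparity_px
```

For example, a 1000 px focal length, a 60 mm baseline, and a 10 px disparity place the point 6 m in front of the cameras.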
Preferably, the space reconstruction and processing module 23 includes a spatial modeling submodule and an assignment submodule (not shown in the figures).
The spatial modeling submodule takes each camera device that shot a two-dimensional image as a viewpoint and, using the spatial information of the camera devices, computes the intersection region of the two viewpoints' rays in the display space when they project through a pixel region on the screen.
As shown in Fig. 6, the spatial modeling submodule treats the two pre-paired camera devices as viewpoints projecting through the determined pixel region containing the projection point on the screen; where the rays intersect in the display space in front of the screen, the corresponding intersection region S is obtained. Using the spatial information of the two camera devices, the parameters of the corresponding projection point, and the distance between the screen and the viewpoints, the spatial modeling submodule computes the position of the intersection region S in the display space and hands it to the assignment submodule.
The assignment submodule assigns the pixel value of the pixel in the relevant parameter group to at least one voxel overlapping the intersection region, according to how the intersection region overlaps the voxels.
Here, from the preset positions and sizes of the voxels making up the display space, the assignment submodule determines all voxels that partially or fully overlap the intersection region and then, following a preset overlap-to-assignment correspondence, assigns the pixel value of the pixel in the relevant parameter group to at least one of those overlapping voxels.
Specifically, according to the preset overlap-to-assignment correspondence, the assignment submodule assigns the pixel value of the pixel in the relevant parameter group to at least one voxel overlapping the intersection region in either of the following ways:
1) According to the spatial information of the two camera devices and the corresponding three-dimensional image material, determine at least one key point of the intersection region corresponding to each parameter group, and assign the pixel value of the pixel in the relevant parameter group to every voxel into which a key point falls.
The spatial information includes the spacing between the centre points of the pre-paired camera devices and, optionally, the actual shooting distance and the like. The key points include, without limitation, the centre point of the intersection region S and points on its boundary, for example the four corners of S and the midpoints of its four edges. The assignment submodule assigns the pixel value of the pixel in the parameter group corresponding to the intersection region S to the voxels into which the determined key points fall.
For example, as shown in Fig. 7, if the assignment submodule determines from the spatial information of the two camera devices and the corresponding three-dimensional image material that the four corners s1, s2, s3, s4 of the intersection region and the midpoints of its four edges fall into voxels t1, t2, t3, and t4, it assigns the pixel value of the pixel in the parameter group of this intersection region to voxels t1, t2, t3, and t4 simultaneously.
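Assignment mode 1) can be sketched with the intersection region reduced to a 2-D rectangle and the display space to a grid of square voxels of edge v (a simplifying assumption; the patent's voxels may be cubes):

```python
def key_points(rect):
    """Corners and edge midpoints of the intersection region S = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = rect
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [(x0, y0), (x1, y0), (x0, y1), (x1, y1),   # four corners
            (mx, y0), (mx, y1), (x0, my), (x1, my)]   # four edge midpoints

def assign_by_key_points(rect, v, pixel_value, volume):
    """Write pixel_value into every grid cell (voxel of edge v) that a key
    point of the intersection region falls into."""
    for (x, y) in key_points(rect):
        cell = (int(x // v), int(y // v))
        volume[cell] = pixel_value
    return volume
```

A flat rectangle straddling two unit cells assigns its pixel value to exactly those two voxels.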
2) according to spatial information and the corresponding three-dimensional picture material of each two camera head, determine corresponding to each group of parameter The overlap proportion of intersectional region and at least one voxel;According to described overlap proportion, by the pixel of pixel in relevant parameter Value is assigned to corresponding voxel.
Here, from one parameter group in the spatial information of each pair of camera devices and the corresponding three-dimensional image material, the assignment submodule calculates the length and width of the pixel region containing the projection point corresponding to the light intersection region S, and then calculates the area of region S (the partial-overlap formula appears as an image in the original and is not reproduced here). Using the proportional relationship between the obtained area and the area of each overlapped voxel, the assignment submodule assigns the pixel value of the pixel in this parameter group to the voxel with the largest proportion. Here v is the side length of a voxel (voxels are taken to be cubes, or squares after dimension reduction), wsj is the width occupied by intersection region S within the voxel, and lsj is the height occupied by intersection region S within the voxel. For the partial overlap of region S with voxel t2 shown in Fig. 7, the area is computed with the partial-overlap formula; for the overlap shown in Fig. 8, the area is lsj·wsj.
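The overlap-proportion rule can likewise be sketched. This sketch assumes axis-aligned square voxels of side `v` and an axis-aligned intersection region, so the overlap area reduces to a product of clipped extents; all function names are hypothetical.

```python
def overlap_area(region, voxel_idx, v):
    """Area of the intersection of region S with one square voxel of side v."""
    x0, y0, x1, y1 = region
    vx, vy = voxel_idx
    # clip the region against the voxel along each axis
    ox = max(0.0, min(x1, (vx + 1) * v) - max(x0, vx * v))
    oy = max(0.0, min(y1, (vy + 1) * v) - max(y0, vy * v))
    return ox * oy

def assign_by_overlap(volume, region, pixel_value, v):
    """Assign pixel_value to the voxel with the largest overlap proportion."""
    x0, y0, x1, y1 = region
    candidates = [(vx, vy)
                  for vx in range(int(x0 // v), int(x1 // v) + 1)
                  for vy in range(int(y0 // v), int(y1 // v) + 1)]
    best = max(candidates, key=lambda idx: overlap_area(region, idx, v))
    volume[best] = pixel_value
    return volume

# The region overlaps voxel (0, 0) with area 0.48 and voxel (1, 0) with 0.24,
# so the value goes to voxel (0, 0).
vol = assign_by_overlap({}, (0.2, 0.2, 1.4, 0.8), 200, 1.0)
```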
Since the number of pixels available for voxel assignment is itself limited, the assigned voxels remain sparse even when all parameter groups are used for assignment. In a preferred mode, to improve computational efficiency, after each voxel assignment the assignment submodule judges whether the coverage of the assigned voxels among all voxels in the display space has reached a preset range threshold, terminating voxel assignment once it has and otherwise continuing to assign new voxels. The assignment submodule may measure coverage simply by the proportion of all voxels that have been assigned, or by counting the distribution of the assigned voxels among all voxels. The range threshold may be a fixed value, or may depend on the estimated number of parameter groups.
It should be noted that, in an optional mode, only unassigned voxels are assigned: if a voxel to be assigned has already been assigned, it is not assigned again.
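The early-exit behavior can be sketched as follows. Because the translated continue/exit condition is ambiguous, this sketch assumes assignment stops once the coverage ratio reaches the threshold, which matches the stated goal of improving computational efficiency; the names and the flat voxel indexing are assumptions.

```python
def fill_until_covered(volume, assignments, total_voxels, range_threshold):
    """Assign voxels one by one; stop once the coverage ratio
    (assigned voxels / all voxels in the display space) reaches
    range_threshold. Already-assigned voxels are never overwritten."""
    for voxel_idx, value in assignments:
        if voxel_idx not in volume:          # no repeated assignment
            volume[voxel_idx] = value
        if len(volume) / total_voxels >= range_threshold:
            break                            # coverage reached: exit early
    return volume

# With 4 voxels and a 0.5 threshold, assignment stops after two voxels.
vol = fill_until_covered({}, [(0, 1), (1, 2), (2, 3)], 4, 0.5)
```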
To reduce computational complexity, the space reconstruction and processing module 23 takes the voxels along the dimension perpendicular to the screen as a row unit, and uses the spatial information of each pair of camera devices and the pixel values of the pixels in each three-dimensional image material to fill the spatial pixel values of the voxels plane by plane, row by row.
Specifically, by taking the voxels along the dimension perpendicular to the screen as a row unit, the space reconstruction and processing module 23 reduces the three-dimensional voxels to two-dimensional voxels (e.g., square voxels), and then assigns values to the two-dimensional voxels in the manner described above.
After voxel assignment is completed, the display system 2 can determine, according to the grating structure of the display screen, the viewpoint corresponding to each sub-pixel position of each pixel region in the screen, and project the voxels in the display space into the corresponding pixel regions according to the viewpoints represented by the camera devices, or viewpoints expanded from them, to obtain a three-dimensional image.
To reduce the amount of calculation, the projection in the display system 2 can be performed by the viewpoint projection processing module 24 and the interleaving module 25.
The viewpoint projection processing module 24 is used to determine, based on the reconstructed space, the sub-pixel position of each viewpoint in the corresponding pixel; it also performs filtering and similar processing on the viewpoints after projection.
Here, each viewpoint may be a camera device, or new viewpoints may be inserted between the camera devices, with the camera devices and the newly inserted viewpoints together serving as the predetermined viewpoints. The inserted viewpoints may equally divide the distance between two adjacent camera devices, or the spacing between an inserted viewpoint and an adjacent viewpoint may be the product of a corresponding interpolation coefficient and the camera spacing. The interpolated viewpoints lie on the same line as the camera devices. For an inserted viewpoint, the viewpoint projection processing module 24 can determine the image at that viewpoint from the projection, onto the given viewpoint, of the images captured by at least one adjacent camera device. At the same time, the images of all viewpoints are filtered and otherwise processed so as to provide color-consistent images for the subsequent interleaving processing.
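A minimal sketch of the equal-division viewpoint insertion described above, assuming the camera devices are collinear and parameterized by their x coordinates; the function name and the return format are assumptions for illustration.

```python
def interpolated_viewpoints(camera_positions, n_between):
    """Insert n_between equally spaced viewpoints between each pair of
    adjacent camera positions (cameras lie on one line, here the x-axis).
    Returns all viewpoints, cameras included, in left-to-right order."""
    views = []
    for a, b in zip(camera_positions, camera_positions[1:]):
        views.append(a)
        step = (b - a) / (n_between + 1)
        views.extend(a + step * k for k in range(1, n_between + 1))
    views.append(camera_positions[-1])
    return views

# Two cameras at x = 0 and x = 6 with two inserted viewpoints between them.
print(interpolated_viewpoints([0.0, 6.0], 2))  # [0.0, 2.0, 4.0, 6.0]
```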
The viewpoint projection processing module 24 calculates, according to the grating arrangement of the display screen, the sub-pixel positions in the screen pixel regions corresponding to each obtained viewpoint. For example, as shown in Fig. 9, each pixel region consists of three RGB sub-pixel positions; the viewpoint projection processing module 24 obtains the viewpoint number corresponding to each sub-pixel position and starts the interleaving module 25.
The interleaving module 25 is used to weight, one by one, the pixel values of the same sub-pixel position in the unoccluded voxels, based on the proportion of the pixel region occupied by the projections of the unoccluded voxels traversed by the rays cast from each viewpoint onto the pixel region containing the corresponding sub-pixel position, and to assign the result to the corresponding sub-pixel position in that pixel region.
Here, with the direction perpendicular to the screen as the projection direction, the interleaving module 25 abbreviates each voxel to its axis line segment parallel to the screen, or to the axis line segment of the voxel surface. For a given viewpoint along the projection direction, the interleaving module 25 calculates the projection of at least part of the line segment of each unoccluded voxel into each pixel region of the screen, and takes the ratio of the projected line segment to the pixel region width as the weight of that voxel's sub-pixel value; then, according to the sub-pixel position of the pixel region (the R, G or B sub-pixel position), the corresponding sub-pixel values in the voxels are weighted, and the weighted value is assigned to the corresponding sub-pixel position in the pixel region.
For example, as shown in Fig. 10, the pixel region p in the screen is represented by line segment ab, and voxels 1, 2, 3, 4, 5 are all the voxels in the projection of viewpoint view into pixel region p. Taking the length covered by the projection of each voxel's axis as the benchmark, voxels 1, 2 and 3 are determined to be unoccluded and voxels 4 and 5 occluded. The interleaving module 25 takes, for each of voxels 1, 2 and 3, the ratio of the length of its unoccluded part projected into pixel region p to the length of segment ab as that voxel's weight; then, since the sub-pixel position of pixel region p for viewpoint view is the R sub-pixel position, the R pixel values of voxels 1, 2 and 3 are each multiplied by their weights and summed, giving the sub-pixel value of the R sub-pixel position in pixel region p.
Using the projection method of the above example, the interleaving module 25 assigns values to all pixel regions on the screen to obtain the three-dimensional image.
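The weighted summation in this example can be sketched as follows, assuming the unoccluded projected segment lengths and the R values of the voxels are already known; the function name and input format are assumptions.

```python
def r_subpixel_value(projections, region_len):
    """Weighted R sub-pixel value for one pixel region.
    projections: list of (unoccluded_projected_length, r_value) pairs for
    the voxels that are not occluded along this viewpoint's rays.
    Each weight is the projected length divided by the region width."""
    return sum((seg / region_len) * r for seg, r in projections)

# Three unoccluded voxels projecting onto halves/quarters of a unit-width
# region ab: weights 0.5, 0.25, 0.25 applied to their R values.
val = r_subpixel_value([(0.5, 100), (0.25, 200), (0.25, 60)], 1.0)
print(val)  # 115.0
```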
Because the technical solution of this embodiment incorporates the spatial information of the actual camera devices, reversely reconstructs the three-dimensional model in front of the assumed screen, and then uses the projection of that three-dimensional model onto the screen, it can enhance the visual stereoscopic effect of the three-dimensional image.
Embodiment three
Unlike embodiment two, specifically for the case where a second camera device is tilted by a predetermined angle toward the center, the preprocessing module 21 is further used to map the two-dimensional images captured by each second camera device onto the screen, based on the preset tilt angle of each second camera device provided by the image preprocessing device, thereby completing the optical-axis matching of images obtained with different optical axes.
Specifically, as shown in Fig. 11, the preprocessing module 21 projects the image captured by the second camera device 12 onto the plane that is perpendicular to the optical axis of that device and intersects the screen. For example, the projected position of a point pt in the display space on this plane is pt1; the point where the ray through the second camera device 12, pt and pt1 intersects the screen is pt2. The preprocessing module 21 takes point pt2 as part of the on-screen two-dimensional image mapped from the image captured by the second camera device 12, and this mapped two-dimensional image serves as the two-dimensional image of the corresponding camera device in the subsequent processing by the estimation module 22, the space reconstruction and processing module 23, and so on.
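A simplified one-dimensional sketch of this tilted-axis mapping, assuming the camera devices lie on a line parallel to the screen at distance `d` and the tilt is a rotation of the optical axis by `theta` within that plane; the tangent-plane parameterization `u` and all names are assumptions, not the patent's construction.

```python
import math

def map_tilted_ray_to_screen(cam_x, theta_deg, u, d):
    """Map a ray from a tilted camera onto the screen plane.
    cam_x: camera x position on the camera line.
    theta_deg: preset tilt of the optical axis toward the center.
    u: image coordinate on the plane perpendicular to the optical axis
       (tangent of the ray's angle from the optical axis).
    d: distance from the camera line to the screen plane.
    Returns the x coordinate where the ray meets the screen."""
    theta = math.radians(theta_deg)
    ray_angle = theta + math.atan(u)   # total angle of the ray from normal
    return cam_x + d * math.tan(ray_angle)

# An untilted, on-axis ray hits the screen directly in front of the camera.
print(map_tilted_ray_to_screen(0.0, 0.0, 0.0, 1.0))  # 0.0
```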
The modules of this embodiment are exemplified as follows:
The initialization module is used to determine, based on the size of the two-dimensional images provided by the above acquisition system and the size of the preset screen, the pixel size in the screen and the voxel size of the display space in front of the screen.
The preprocessing module 21 is used to perform left-right image matching on the pre-paired two-dimensional images at each viewpoint, and, based on the preset tilt angle of each second camera device provided by the image preprocessing device, to map the two-dimensional images captured by each second camera device onto the screen.
The estimation module 22 is used to estimate three-dimensional image material for every two matched two-dimensional images, where the two-dimensional images include the mapped images. Each three-dimensional image material contains multiple parameter groups, and each parameter group includes the projection point position on the screen of a physical space point in one of the images, together with the on-screen parallax information of the same physical space point in the two images.
The space reconstruction and processing module 23 is used to fill the spatial pixel values of the voxels in the display space according to the spatial information of each pair of camera devices and the corresponding three-dimensional image material, and to perform filtering and similar processing on the reconstructed real space.
The viewpoint projection processing module 24 is used to determine, based on the reconstructed space, the sub-pixel position of each viewpoint in the corresponding pixel.
The interleaving module 25 is used to weight, one by one, the pixel values of the same sub-pixel position in the unoccluded voxels, based on the proportion of the pixel region occupied by the projections of the unoccluded voxels traversed by the rays cast from each viewpoint onto the pixel region containing the corresponding sub-pixel position, and to assign the result to the corresponding sub-pixel position in that pixel region.
It should be noted that each functional module in this embodiment, on the basis of the same-named module mentioned in the foregoing embodiments, performs three-dimensional image processing on the mapped images of this embodiment.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described here; various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments and may, without departing from the inventive concept, include other equivalent embodiments; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An acquisition system for three-dimensional images, characterized by comprising:
at least two first camera devices placed centrally on the same straight line, and at least one second camera device;
wherein the optical axis directions of the first camera devices are consistent and each perpendicular to the straight line, and the images they capture contain a common image region;
the second camera device is arranged on at least one side of the first camera devices and is used to supplement the image regions that the first camera devices fail to capture in common;
and an image preprocessing device connected to all of the camera devices, wherein a camera device is a first camera device or a second camera device;
the image preprocessing device is used to perform preprocessing, oriented toward three-dimensional reconstruction, on the connected camera devices and/or the captured images.
2. The acquisition system for three-dimensional images according to claim 1, characterized in that the number of all camera devices is an integral multiple of 2, and all camera devices are arranged symmetrically.
3. The acquisition system for three-dimensional images according to claim 1, characterized in that the distances between the center points of all adjacent camera devices are equal; or the distance between the center points of the two first camera devices differs from the distances between the center points of the other adjacent camera devices.
4. The acquisition system for three-dimensional images according to claim 1, characterized in that the first camera devices have standard lenses and the second camera device is configured with a wide-angle lens; or all camera devices have standard lenses.
5. The acquisition system for three-dimensional images according to claim 1, characterized in that the image preprocessing device comprises:
image signal processing modules equal in number to the camera devices, each image signal processing module being connected to one camera device and used to configure the parameters of the connected camera device based on a received synchronization instruction, and/or to perform signal processing on the captured image data;
a synchronization module, used for frame synchronization and synchronized parameter setting of each image signal processing module, and to send the synchronization instruction to each image signal processing module;
an image cropping module connected to the two camera devices having the common image region, used to crop, based on the common image region, the images respectively captured by the two camera devices.
6. The acquisition system for three-dimensional images according to any one of claims 1-5, characterized in that the optical axis directions of all camera devices are consistent and each perpendicular to the straight line.
7. The acquisition system for three-dimensional images according to any one of claims 1-5, characterized in that the second camera device is set with a preset tilt angle toward the center.
8. A display system for three-dimensional images, characterized by comprising:
an initialization module, used to determine the pixel size in the screen and the voxel size of the display space in front of the screen, based on the size of the two-dimensional images provided by the acquisition system according to claim 6 and the size of a preset screen;
a preprocessing module, used to perform left-right image matching on the pre-paired two-dimensional images at each viewpoint;
an estimation module, used to estimate three-dimensional image material for every pair of matched two-dimensional images, wherein each three-dimensional image material contains multiple parameter groups, and each parameter group includes the projection point position on the screen of a physical space point in one of the two-dimensional images and the corresponding parallax information;
a space reconstruction and processing module, used to fill the spatial pixel values of the voxels in the display space according to the spatial information of each pair of camera devices and the corresponding three-dimensional image material;
a viewpoint projection processing module, used to determine, based on the reconstructed space, the sub-pixel position of each viewpoint in the corresponding pixel;
an interleaving module, used to weight, one by one, the pixel values of the same sub-pixel position in the unoccluded voxels, based on the proportion of the pixel region occupied by the projections of the unoccluded voxels traversed by the rays cast from each viewpoint onto the pixel region containing the corresponding sub-pixel position, and to assign the result to the corresponding sub-pixel position in that pixel region.
9. A display system for three-dimensional images, characterized by comprising:
an initialization module, used to determine the pixel size in the screen and the voxel size of the display space in front of the screen, based on the size of the two-dimensional images provided by the acquisition system according to claim 7 and the size of a preset screen;
a preprocessing module, used to perform left-right image matching on the pre-paired two-dimensional images at each viewpoint, and, based on the preset tilt angle of each second camera device provided by the acquisition system, to map the two-dimensional images captured by each second camera device onto the screen, completing the optical-axis matching of images obtained with different optical axes;
an estimation module, used to estimate three-dimensional image material for every pair of matched two-dimensional images, wherein the two-dimensional images include the mapped images, each three-dimensional image material contains multiple parameter groups, and each parameter group includes a projection point position of a physical space point on the screen and the on-screen parallax information of that physical space point;
a space reconstruction and processing module, used to fill the spatial pixel values of the voxels in the display space according to the spatial information of each pair of camera devices and the corresponding three-dimensional image material;
a viewpoint projection processing module, used to determine, based on the reconstructed space, the sub-pixel position of each viewpoint in the corresponding pixel;
an interleaving module, used to weight, one by one, the pixel values of the same sub-pixel position in the unoccluded voxels, based on the proportion of the pixel region occupied by the projections of the unoccluded voxels traversed by the rays cast from each viewpoint onto the pixel region containing the corresponding sub-pixel position, and to assign the result to the corresponding sub-pixel position in that pixel region.
10. An intelligent terminal, characterized by comprising:
the acquisition system for three-dimensional images according to any one of claims 6-7;
and a display system for three-dimensional images according to any one of claims 8-9, matched with the acquisition system.
CN201610552813.0A 2016-07-14 2016-07-14 Acquisition system, display system and the intelligent terminal being applicable in of 3-D image Active CN106210700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610552813.0A CN106210700B (en) 2016-07-14 2016-07-14 Acquisition system, display system and the intelligent terminal being applicable in of 3-D image

Publications (2)

Publication Number Publication Date
CN106210700A true CN106210700A (en) 2016-12-07
CN106210700B CN106210700B (en) 2019-03-05

Family

ID=57477298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610552813.0A Active CN106210700B (en) 2016-07-14 2016-07-14 Acquisition system, display system and the intelligent terminal being applicable in of 3-D image

Country Status (1)

Country Link
CN (1) CN106210700B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021097843A1 (en) * 2019-11-22 2021-05-27 驭势科技(南京)有限公司 Three-dimensional reconstruction method and device, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096901A (en) * 2009-11-17 2011-06-15 精工爱普生株式会社 Context constrained novel view interpolation
US20130070048A1 (en) * 2011-09-21 2013-03-21 National Applied Research Laboratories Formation Apparatus Using Digital Image Correlation
CN104717481A (en) * 2013-12-13 2015-06-17 松下知识产权经营株式会社 Image capturing apparatus, monitoring system, image processing apparatus, and image capturing method
CN105659592A (en) * 2014-09-22 2016-06-08 三星电子株式会社 Camera system for three-dimensional video


Also Published As

Publication number Publication date
CN106210700B (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN108307675B (en) Multi-baseline camera array system architecture for depth enhancement in VR/AR applications
US9407904B2 (en) Method for creating 3D virtual reality from 2D images
US7983477B2 (en) Method and apparatus for generating a stereoscopic image
US20150002636A1 (en) Capturing Full Motion Live Events Using Spatially Distributed Depth Sensing Cameras
US20160284132A1 (en) Apparatus and method for providing augmented reality-based realistic experience
JP2014522591A (en) Alignment, calibration, and rendering systems and methods for square slice real-image 3D displays
TWI497980B (en) System and method of processing 3d stereoscopic images
WO2011052064A1 (en) Information processing device and method
KR20160135660A (en) Method and apparatus for providing 3-dimension image to head mount display
CN211128024U (en) 3D display device
TWI788739B (en) 3D display device, 3D image display method
US9933626B2 (en) Stereoscopic image
CN102903090A (en) Method, device and system for synthesizing panoramic stereograms, and browsing device for panoramic stereograms
CN109379578A (en) Omnidirectional three-dimensional video-splicing method, apparatus, equipment and storage medium
CN105530503A (en) Depth map creating method and multi-lens camera system
WO2018187635A1 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
WO2019054304A1 (en) Imaging device
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
JPH11175762A (en) Light environment measuring instrument and device and method for shading virtual image using same
CN107545537A (en) A kind of method from dense point cloud generation 3D panoramic pictures
CN106210700B (en) Acquisition system, display system and the intelligent terminal being applicable in of 3-D image
CN106231284B (en) The imaging method and system of 3-D image
CN106210694B (en) The imaging method and system of 3-D view
Tan et al. Multiview panoramic cameras using a mirror pyramid
CN110264406B (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200330

Address after: 215634 north side of Chengang road and west side of Ganghua Road, Jiangsu environmental protection new material industrial park, Zhangjiagang City, Suzhou City, Jiangsu Province

Patentee after: ZHANGJIAGANG KANGDE XIN OPTRONICS MATERIAL Co.,Ltd.

Address before: 201203, room 5, building 690, No. 202 blue wave road, Zhangjiang hi tech park, Shanghai, Pudong New Area

Patentee before: WZ TECHNOLOGY Inc.