CN103188424B - 3D imaging modules and 3D imaging methods - Google Patents

3D imaging modules and 3D imaging methods

Info

Publication number
CN103188424B
CN103188424B (Application CN201110444722.2A)
Authority
CN
China
Prior art keywords
image
unit
imaging
object distance
sensed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110444722.2A
Other languages
Chinese (zh)
Other versions
CN103188424A (en)
Inventor
吴定远
郭章纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fortune Culture Industry Group (Shenzhen) Ltd.
Original Assignee
Fortune Culture Industry Group (Shenzhen) Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fortune Culture Industry Group (shenzhen) Ltd filed Critical Fortune Culture Industry Group (shenzhen) Ltd
Priority to CN201110444722.2A
Publication of CN103188424A
Application granted
Publication of CN103188424B

Landscapes

  • Studio Devices (AREA)
  • Adjustment Of Camera Lenses (AREA)

Abstract

A 3D imaging module includes a first imaging unit, a second imaging unit, a memory, a color separation unit, a processor, an image processing unit, a focus driving unit, optical image stabilization units, and an image composing unit. The first and second imaging units capture images of the same scene simultaneously from different angles. The color separation unit represents the images captured by the first imaging unit and the second imaging unit in the red, green, and blue primary colors. The processor determines the current shooting mode. The optical image stabilization units correct shake before capture. The image processing unit corrects out-of-focus blur by means of software simulation. The focus driving unit focuses the first imaging unit and the second imaging unit. The invention further relates to a 3D imaging method.

Description

3D imaging modules and 3D imaging methods
Technical field
The present invention relates to imaging modules, and more particularly to a 3D imaging module with an autofocus function and a 3D imaging method.
Background technology
With the development of science and technology, 3D (three-dimensional) imaging modules have been used in an increasing number of fields, and to achieve good imaging results, 3D imaging modules also need an autofocus function.
Autofocus technologies for imaging modules fall into mechanical autofocus and digital autofocus. Mechanical autofocus moves lens elements with a mechanical structure to focus. Because mechanical focusing requires a complex mechanical structure, autofocus lenses built this way are more expensive and bulkier.
Digital autofocus processes the image sensed by the image sensor through software simulation and computation, so that pixels on the image sensor that are blurred by defocus become sharp. For example, extended depth of field (EDoF) technology exploits the fact that each of the three primary colors of light (red, green, blue) has its best MTF curve at a different distance: for an object at a given distance, an algorithm digitally simulates the other two primary colors from the primary color that is sharpest at that distance, so as to obtain a sharp full-color image. However, digital focusing is weak at close range; in general, if the object distance is within 40 cm, the focusing result of digital autofocus is often unsatisfactory. In addition, if shake occurs during shooting, the EDoF technique may fail to work at all, degrading the imaging result of the module.
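The idea behind the EDoF technique described above can be sketched as follows (Python with NumPy/SciPy). This is only an illustration under assumptions: the Gaussian detail layer and the per-pixel selection of the sharpest color channel are stand-ins chosen here, not the actual algorithm of any particular EDoF implementation.

```python
import numpy as np
from scipy import ndimage

def edof_sharpness_transfer(img_rgb, sigma=2.0):
    """Toy EDoF-style correction: for every pixel, borrow the high-frequency
    detail of the locally sharpest color channel and add it back to all three
    channels.  img_rgb is an HxWx3 float array with values in [0, 1]."""
    low = np.stack([ndimage.gaussian_filter(img_rgb[..., c], sigma)
                    for c in range(3)], axis=-1)            # blurred copy per channel
    high = img_rgb - low                                     # per-channel detail layer
    # local sharpness proxy: smoothed magnitude of the detail layer
    sharp = np.stack([ndimage.gaussian_filter(np.abs(high[..., c]), sigma)
                      for c in range(3)], axis=-1)
    best = np.argmax(sharp, axis=-1)                         # sharpest channel per pixel
    best_detail = np.take_along_axis(high, best[..., None], axis=-1)  # (H, W, 1)
    return np.clip(low + best_detail, 0.0, 1.0)              # rebuild all three channels
```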
Summary of the invention
In view of this, it is necessary to provide a 3D imaging module and a 3D imaging method that avoid the above problems.
A 3D imaging module includes a first imaging unit, a second imaging unit, a memory connected to the first imaging unit and the second imaging unit, a color separation unit connected to the memory, a processor connected to the color separation unit, an image processing unit connected to the processor, a focus driving unit, two optical image stabilization units corresponding to the first imaging unit and the second imaging unit respectively, and an image composing unit. The first imaging unit and the second imaging unit capture images of the same scene simultaneously from different angles. The memory stores the images captured by the first imaging unit and the second imaging unit. The color separation unit represents the images captured by the first imaging unit and the second imaging unit in the red, green, and blue primary colors. The processor performs MTF operations on the images captured by the first imaging unit and the second imaging unit, determines the current shooting mode according to the operation results, and, depending on the determined shooting mode, selects and controls the image processing unit or the focus driving unit. The image processing unit corrects out-of-focus blur in the images captured by the first imaging unit and the second imaging unit by image processing. The focus driving unit focuses the first imaging unit and the second imaging unit. The optical image stabilization units detect shake of the first imaging unit and the second imaging unit before shooting and, according to the detected shake, apply shake compensation to the first imaging unit and the second imaging unit. The image composing unit composes the images captured by the first imaging unit and the second imaging unit after the image processing or after focusing to obtain a 3D image.
A 3D imaging method includes the following steps:
shooting the same scene simultaneously from different angles with two imaging units;
detecting shake of the two imaging units before shooting, and determining whether shake is present;
if shake is present, determining the amount of shake and performing shake compensation according to the amount of shake;
performing color separation on the images sensed by the image sensors of the two imaging units;
performing MTF operations on the image regions sensed by each pixel unit of the image sensors of the two imaging units to obtain the MTF value corresponding to the image region sensed by each pixel unit;
determining, from the MTF value of the image sensed by each pixel unit, the object distance of the image sensed by that pixel unit;
determining the current shooting mode according to the object distances of the images sensed by the pixel units;
if the current shooting mode is the near-focus mode, performing the following steps:
determining, from the object distances of the images sensed by the pixel units, the best focusing positions of the imaging lenses of the two imaging units;
determining the focus drive amounts of the imaging lenses of the two imaging units according to the best focusing positions;
driving the imaging lenses of the two imaging units to their best focusing positions according to the focus drive amounts;
composing the focused images captured by the two imaging units to obtain a 3D image;
if the current shooting mode is the far-focus mode, performing the following steps:
determining, from the MTF value corresponding to the image region sensed by each pixel unit, the blur amount of the image sensed by that pixel unit;
determining, from the blur amount of the image sensed by each pixel unit, the blur correction amount for the image sensed by that pixel unit;
correcting blur in the image sensed by each pixel unit according to the blur correction amount;
composing the blur-corrected images captured by the two imaging units to obtain a 3D image.
Compared with the prior art, the 3D imaging module and the 3D imaging method determine the object distance of the subject by means of software simulation, determine the current shooting mode from the object distance, and select, according to the shooting mode, either software computation or driving of the imaging lenses for focusing; thus a sharply focused image can be obtained in both the near-focus and the far-focus shooting modes. The images of the same scene captured simultaneously by the two imaging units are composed, so a relatively sharp 3D image can be obtained. In addition, by performing shake detection and shake compensation through the OIS units before shooting, image blur caused by shake during focusing can be prevented from affecting the computation accuracy of the blur correction amount, so the imaging result can be improved.
Description of the drawings
Fig. 1 is a schematic diagram of the 3D imaging module of an embodiment of the present invention.
Fig. 2 is an exploded view of the first imaging unit of the 3D imaging module of an embodiment of the present invention.
Fig. 3 is an exploded view of the OIS unit of the first imaging unit of Fig. 2.
Fig. 4 is another angular view of the OIS unit of Fig. 3.
Fig. 5A and Fig. 5B are flow charts of the 3D imaging method of an embodiment of the present invention.
Main element symbol description
3D imaging modules 100
First imaging unit A
Second imaging unit B
First imaging lens A1
First image sensor A2
Second imaging lens B1
Second image sensor B2
Lens element 101, 102
Pedestal 11
Light hole 111
Accommodating space 112
Protrusion 113
Mounting hole 114
Fixed frame 12
First receiving space 120
First accepting hole 121
Second accepting hole 122
Flange 123
Connecting hole 124
Movable frame 13
Second receiving space 131
Through-hole 132
Notch 133
Internal thread 134
Follower lever 14
First guide rod 15
Second guide rod 16
Memory 20
Color separation unit 30
Processor 40
MTF computing module 41
Object distance computing module 42
Object distance judgment module 43
Blur amount computing module 44
Blur correction amount computing module 45
Focusing position computing module 46
Drive amount computing module 47
Image processing unit 50
Focus driving unit 60
First focus driver 61
Circuit board 611
Piezo-electric motor 612
Driving chip 613
Second focus driver 62
OIS units 70
Movable support portion 71
First direction support element 711
First direction guide rod 7111
First half-length guide rod 7112
First direction sliding block 7113
First shaft sleeve portion 7113a
First half-shaft sleeve portion 7113b
First mounting groove 7113c
Second direction support element 712
Second direction guide rod 7121
Second direction sliding block 7123
Second shaft sleeve portion 7123a
Second mounting groove 7123b
Lens element mounting hole H
Shake detection portion 72
First Hall element 721
Second Hall element 722
Shake compensation calculating portion 73
Shake compensation driving portion 74
First direction driving unit 741
First magnet 7411
First coil 7412
Second direction driving unit 742
Second magnet 7421
Second coil 7422
Fixed cylinder 75
External screw thread 751
Image composing unit 80
The present invention will be further described in the following detailed description in conjunction with the above drawings.
Detailed description
The present invention is described in detail below with reference to the drawings.
Referring to Fig. 1, a schematic diagram of the 3D imaging module 100 of an embodiment of the present invention is shown. The 3D imaging module 100 includes a first imaging unit A and a second imaging unit B arranged side by side with the first imaging unit A. The first imaging unit A and the second imaging unit B shoot the same scene simultaneously at slightly different angles.
The first imaging unit A includes a first imaging lens A1 and a first image sensor A2 aligned with the optical axis of the first imaging lens A1. The second imaging unit B includes a second imaging lens B1 and a second image sensor B2 aligned with the optical axis of the second imaging lens B1.
The first imaging lens A1 and the second imaging lens B1 capture images of an object and focus the captured images onto the sensing regions of the first image sensor A2 and the second image sensor B2, respectively. The first imaging lens A1 and the second imaging lens B1 each include at least one lens element 101, 102 with positive refractive power; the lens elements 101, 102 are aspheric lens elements.
The first image sensor A2 and the second image sensor B2 sense the images captured by the first imaging lens A1 and the second imaging lens B1, respectively. Each of the first image sensor A2 and the second image sensor B2 includes a plurality of pixel units (not shown) distributed in an array over the effective sensing region of the corresponding image sensor. Each pixel unit includes pixels of the three primary colors (red, green, blue). Preferably, the first image sensor A2 and the second image sensor B2 each include at least 2048 x 1536 pixel units. In the present embodiment, the first image sensor A2 and the second image sensor B2 may be charge-coupled device (CCD) sensors or complementary metal-oxide-semiconductor (CMOS) sensors.
The 3D imaging module 100 further includes a memory 20, a color separation unit 30, a processor 40, an image processing unit 50, a focus driving unit 60, and two optical image stabilization (OIS) units 70 corresponding to the first imaging lens A1 and the second imaging lens B1, respectively. The memory 20 is connected to the first image sensor A2 and the second image sensor B2, the color separation unit 30 is connected to the memory, the processor 40 is connected to the color separation unit 30, and the image processing unit 50 and the focus driving unit 60 are connected to the processor 40.
The memory 20 stores the images sensed by the first image sensor A2 and the second image sensor B2.
The color separation unit 30 separates the images sensed by the first image sensor A2 and the second image sensor B2 into images represented by the three primary colors.
The processor 40 includes a modulation transfer function (MTF) computing module 41, an object distance computing module 42, an object distance judgment module 43, a blur amount computing module 44, a blur correction amount computing module 45, a focusing position computing module 46, and a drive amount computing module 47. The MTF computing module 41 is connected to the color separation unit 30, the object distance computing module 42 is connected to the MTF computing module 41, the object distance judgment module 43 is connected to the object distance computing module 42, the focusing position computing module 46 and the blur amount computing module 44 are each connected to the object distance judgment module 43, and the drive amount computing module 47 is connected to the focusing position computing module 46, the first focus driver 61, and the second focus driver 62; the blur correction amount computing module 45 is connected to the blur amount computing module 44 and the image processing unit 50.
The MTF computing module 41 performs an MTF operation on the image region sensed by each pixel unit of the first image sensor A2 and the second image sensor B2 to obtain the MTF value of the corresponding region. In the present embodiment, the MTF computing module 41 computes MTF values separately for the three primary-color images corresponding to each pixel unit.
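For illustration only, the per-region, per-channel MTF operation could be approximated as below (a sketch in Python/NumPy). Using the Michelson contrast of fixed-size tiles as an MTF stand-in, and the tile size of 16, are assumptions; the patent does not specify how the MTF value of a region is computed.

```python
import numpy as np

def mtf_proxy_per_tile(channel, tile=16, eps=1e-6):
    """Crude MTF stand-in: the Michelson contrast of each tile-sized region of
    one primary-color channel.  channel is an HxW float array in [0, 1]."""
    h, w = channel.shape
    h, w = h - h % tile, w - w % tile                     # crop to whole tiles
    tiles = channel[:h, :w].reshape(h // tile, tile, w // tile, tile)
    hi = tiles.max(axis=(1, 3))
    lo = tiles.min(axis=(1, 3))
    return (hi - lo) / (hi + lo + eps)                    # one value per region

# one "MTF" map per primary color of a sensed HxWx3 image img:
# mtf_maps = {name: mtf_proxy_per_tile(img[..., i]) for i, name in enumerate("rgb")}
```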
The object distance computing module 42 determines the object distance of the image sensed by each pixel unit according to the operation result of the MTF computing module.
The object distance judgment module 43 determines the current shooting mode according to the operation result of the object distance computing module 42. Specifically, the object distance judgment module 43 performs an aggregate operation on the results of the object distance computing module, compares the result of the aggregate operation with a preset standard value, and determines the current shooting mode according to the comparison result. In the present embodiment, the aggregate operation samples the object distances of the images sensed by the pixel units obtained by the object distance computing module 42 and, from the sampled data, computes an object distance characteristic value representing the distance of the main subject of the current shot. The preset standard value distinguishes whether the current shooting mode is the near-focus mode or the far-focus mode; in the present embodiment, the standard value is 40 cm. If the object distance characteristic value is greater than 40 cm, the current shooting mode is the far-focus mode; if the object distance characteristic value is less than or equal to 40 cm, the current shooting mode is the near-focus mode.
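A minimal sketch of this decision follows (Python/NumPy). The 40 cm standard value is taken from the description; the particular aggregate operation, a median over a sparse sample of the object distance map, is only an assumed form, since the patent does not fix it.

```python
import numpy as np

NEAR_FAR_THRESHOLD_CM = 40.0          # preset standard value from the description

def shooting_mode(object_distance_map_cm, sample_step=8):
    """Decide the current shooting mode from a per-pixel-unit object distance
    map (in cm): sample the map, derive a characteristic distance of the main
    subject, and compare it with the 40 cm threshold."""
    sample = object_distance_map_cm[::sample_step, ::sample_step]
    token = float(np.median(sample))  # assumed "object distance characteristic value"
    return "near_focus" if token <= NEAR_FAR_THRESHOLD_CM else "far_focus"
```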
The blur amount computing module 44 determines, according to the operation result of the MTF computing module 41, the difference between the MTF value computed for each pixel unit and the standard MTF value for the corresponding object distance, and determines the blur amount of the image sensed by each pixel unit according to the difference. The standard MTF value is the MTF value of the sharpest image region that a pixel unit can sense at the corresponding object distance; therefore, the difference between the MTF value computed by the MTF computing module 41 for each pixel unit and the corresponding standard MTF value characterizes the blur amount of the image sensed by that pixel unit. In the present embodiment, the blur amount computing module 44 computes blur amounts separately for the three primary-color images of each pixel unit. The blur amount computing module 44 is enabled or disabled according to the shooting mode determined by the object distance judgment module 43: in the present embodiment, when the object distance judgment module 43 determines that the current shooting mode is the far-focus mode, the blur amount computing module 44 is enabled; when the object distance judgment module 43 determines that the current shooting mode is the near-focus mode, the blur amount computing module 44 is disabled.
The blur correction amount computing module 45 determines, according to the blur amount obtained by the blur amount computing module 44, the correction amount with which blur correction is applied to the image sensed by each pixel unit. In the present embodiment, the blur correction amount computing module 45 computes the blur correction amounts of the three primary colors separately for the image of each pixel unit.
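The relation between the measured MTF values, the standard MTF values, the blur amount, and the correction amount can be sketched as below (Python/NumPy). The linear mapping from blur amount to correction amount is an assumption; the patent only states that the correction amount is determined from the blur amount.

```python
import numpy as np

def blur_amount(mtf_measured, mtf_standard):
    """Blur amount per region and per primary color: the shortfall of the
    measured MTF relative to the standard (sharpest achievable) MTF at the
    same object distance.  Both inputs are arrays of the same shape."""
    return np.clip(mtf_standard - mtf_measured, 0.0, None)

def blur_correction_amount(blur, gain=1.0):
    """Assumed mapping from blur amount to the correction strength handed to
    the image processing unit (a simple proportional rule)."""
    return gain * blur
```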
The focusing position computing module 46 determines the best focusing positions of the first imaging lens A1 and the second imaging lens B1 according to the operation result of the object distance computing module 42. The focusing position computing module 46 is enabled or disabled according to the shooting mode determined by the object distance judgment module 43: in the present embodiment, when the object distance judgment module 43 determines that the current shooting mode is the near-focus mode, the focusing position computing module 46 is enabled; when the object distance judgment module 43 determines that the current shooting mode is the far-focus mode, the focusing position computing module 46 is disabled.
The drive amount computing module 47 determines the focus drive amounts of the first imaging lens A1 and the second imaging lens B1 according to the best focusing positions of the imaging lenses determined from the operation result of the object distance computing module 42.
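As an illustration of how a best focusing position and a drive amount might follow from an object distance, a thin-lens sketch is given below (Python). The thin-lens relation, the motor step size, and the parameter names are all assumptions; the patent does not state how the drive amount is derived.

```python
def lens_image_distance_mm(object_distance_mm, focal_length_mm):
    """Thin-lens estimate (1/f = 1/u + 1/v) of where the imaging lens must sit
    to focus an object at the given distance."""
    return (object_distance_mm * focal_length_mm) / (object_distance_mm - focal_length_mm)

def focus_drive_amount(object_distance_mm, focal_length_mm, current_pos_mm,
                       um_per_step=0.5):
    """Drive amount, in motor steps, needed to move the imaging lens from its
    current position to the estimated best focusing position."""
    target_mm = lens_image_distance_mm(object_distance_mm, focal_length_mm)
    return round((target_mm - current_pos_mm) * 1000.0 / um_per_step)
```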
The image processing unit 50 applies blur correction to the image sensed by each pixel unit according to the correction amounts obtained by the blur correction amount computing module 45, so as to obtain a sharp image. In the present embodiment, the image processing unit 50 corrects the three primary colors of the image of each pixel unit. The images sensed by the first image sensor A2 and the second image sensor B2 and corrected for blur are stored in the memory 20.
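One assumed realization of this per-channel blur correction is an unsharp-mask step whose strength follows the per-region correction amount, sketched below (Python with NumPy/SciPy; the region map is assumed to divide the channel size evenly). It is not claimed to be the filter actually used by the image processing unit.

```python
import numpy as np
from scipy import ndimage

def deblur_channel(channel, correction, sigma=1.5):
    """Unsharp-mask style correction of one primary-color channel: boost the
    high-frequency content of each region in proportion to its correction
    amount.  channel is HxW in [0, 1]; correction is a per-region map whose
    shape divides the channel shape evenly."""
    low = ndimage.gaussian_filter(channel, sigma)
    block = (channel.shape[0] // correction.shape[0],
             channel.shape[1] // correction.shape[1])
    strength = np.kron(correction, np.ones(block))      # upsample to per-pixel
    return np.clip(channel + strength * (channel - low), 0.0, 1.0)
```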
The focus driving unit 60 includes a first focus driver 61 corresponding to the first imaging lens A1 and a second focus driver 62 corresponding to the second imaging lens B1. The first focus driver 61 and the second focus driver 62 drive the first imaging lens A1 and the second imaging lens B1 to their best focusing positions according to the focus drive amounts of the first imaging lens A1 and the second imaging lens B1 obtained by the drive amount computing module 47. In the present embodiment, the first focus driver 61 and the second focus driver 62 are piezoelectric motors; of course, the first focus driver 61 and the second focus driver 62 may also be other types of drive components, such as voice coil motors. After the first imaging lens A1 and the second imaging lens B1 are driven to their best focusing positions by the first focus driver 61 and the second focus driver 62, the captured images are stored in the memory 20.
Each OIS unit 70 includes a movable support portion 71, a shake detection portion 72, a shake compensation calculating portion 73, and a shake compensation driving portion 74. The movable support portion 71 supports the lens element 101 (102) and is movable relative to the optical axis of the imaging lens, so that the lens element 101 (102) moves together with the movable support portion 71. The shake detection portion 72 detects the offset of the lens element 101 (102) caused by shake and transmits the detection result to the shake compensation calculating portion 73. The shake compensation calculating portion 73 calculates, from the detection result of the shake detection portion 72, the shake amount of the lens element 101 (102) and the shake compensation amount for the shake compensation driving portion 74. The shake compensation driving portion 74 drives the lens element 101 (102) to perform shake compensation according to the shake compensation amount calculated by the shake compensation calculating portion 73.
During shooting, the OIS unit 70 and the focus driving unit 60 act together on the lens element 101 (102), so shake during shooting can be prevented from affecting focusing.
Referring to Figs. 2 to 4, the first imaging lens A1, the first focus driver 61, and the OIS unit 70 may be assembled into an integrated structure as follows:
The first imaging lens A1 includes a base 11, a fixed frame 12, a movable frame 13, a follower lever 14, a first guide rod 15, and a second guide rod 16.
The base 11 is generally rectangular and defines a round light hole 111 and an arc-shaped accommodating space 112. The accommodating space 112 communicates with the light hole 111. The four corners of the surface of the base 11 each form a protrusion 113 extending toward the fixed frame 12. Two adjacent corners of the surface of the base 11 each define a mounting hole 114. The two mounting holes 114 are located between the light hole 111 and the protrusions 113 at the corresponding corners of the base 11.
The fixed frame 12 is a generally rectangular frame structure and forms a rectangular first receiving space 120 inside it to accommodate the movable frame 13. One side wall of the fixed frame 12 defines a first accepting hole 121 and a second accepting hole 122, the first accepting hole 121 being spaced apart from the second accepting hole 122. At two corners of the upper surface of the fixed frame 12, a flange 123 extends toward the inside of the first receiving space 120, and each flange 123 defines a connecting hole 124; the two connecting holes 124 correspond to the two mounting holes 114 of the base 11, respectively.
The movable frame 13 is a generally rectangular frame and defines a circular second receiving space 131. An internal thread 134 is formed on the inner side wall of the movable frame 13. One corner of the movable frame 13 defines a through-hole 132, and another adjacent corner defines a notch 133. The through-hole 132 corresponds to one of the connecting holes 124 and one of the mounting holes 114, and the notch 133 corresponds to the other connecting hole 124 and the other mounting hole 114.
When assembled, the movable frame 13 is received in the first receiving space 120 of the fixed frame 12. The first guide rod 15 passes through the through-hole 132, and one end of the first guide rod 15 is fixed in one connecting hole 124 of the fixed frame 12; the second guide rod 16 passes through the notch 133, and one end of the second guide rod 16 is fixed in the other connecting hole 124. The follower lever 14 is fixed at the corner of the movable frame 13 close to the second guide rod 16. The base 11 is connected to the lower surface of the fixed frame 12, and the four protrusions 113 are respectively embedded in the lower surface of the fixed frame 12. The other end of the first guide rod 15 is fixed in one mounting hole 114 of the base 11, and the other end of the second guide rod 16 is fixed in the other mounting hole 114 of the base 11. The end of the follower lever 14 facing the base 11 is received in the accommodating space 112.
The first focus driver 61 is fixed on one side wall of the fixed frame 12. In the present embodiment, the first focus driver 61 includes a circuit board 611, a piezoelectric motor 612, and a driving chip 613. The piezoelectric motor 612 and the driving chip 613 are each fixed on the surface of the circuit board 611 and electrically connected to the circuit board 611, and the driving chip 613 and the piezoelectric motor 612 are electrically connected to each other through the circuit board 611. The circuit board 611 is fixedly attached to the side wall of the fixed frame 12, the piezoelectric motor 612 passes through the first accepting hole 121 and contacts the follower lever 14, and the driving chip 613 is received in the second accepting hole 122.
The OIS unit 70 corresponding to the first imaging lens A1 is disposed in the second receiving space 131 of the movable frame 13. In the present embodiment, the OIS unit 70 includes a round fixed cylinder 75 with a central opening. An external thread 751 that matches the internal thread 134 of the movable frame 13 is formed on the outer side wall of the fixed cylinder 75.
The movable support portion 71 includes a first direction support element 711 and a second direction support element 712.
The first direction support element 711 includes a first direction guide rod 7111, a first half-length guide rod 7112, and a first direction sliding block 7113. The first half-length guide rod 7112 is parallel to the first direction guide rod 7111. The first direction sliding block 7113 is generally in the shape of a square frame; one outer side wall of the first direction sliding block 7113 forms a first shaft sleeve portion 7113a, and the opposite outer side wall forms a first half-shaft sleeve portion 7113b. One side wall of the first direction sliding block 7113 also forms a first mounting groove 7113c.
The second direction support element 712 includes a second direction guide rod 7121 and a second direction sliding block 7123. The second direction sliding block 7123 is generally in the shape of a square frame, and one outer side wall of the second direction sliding block 7123 forms a second shaft sleeve portion 7123a. One side wall of the second direction sliding block 7123 also forms a second mounting groove 7123b. A lens element mounting hole H is formed inside the second direction sliding block 7123, and the lens element 101 is fixed in the lens element mounting hole H.
The shake detection portion 72 includes a first Hall element 721 and a second Hall element 722. The first Hall element 721 is fixedly disposed in the fixed cylinder 75 and detects the offset of the first direction sliding block 7113 caused by shake. The second Hall element 722 is fixedly disposed on the first direction sliding block 7113 and detects the offset of the second direction sliding block 7123 caused by shake.
The shake compensation driving portion 74 includes a first direction driving unit 741 and a second direction driving unit 742. The first direction driving unit 741 includes a first magnet 7411 and a first coil 7412; the first coil 7412 is fixedly disposed in the fixed cylinder 75 close to the first Hall element 721. The second direction driving unit 742 includes a second magnet 7421 and a second coil 7422; the second coil 7422 is fixedly disposed on the first direction sliding block 7113 close to the second Hall element 722.
When assembled, the first magnet 7411 is fixed in the first mounting groove 7113c, the first shaft sleeve portion 7113a of the first direction sliding block 7113 is slidably sleeved on the first direction guide rod 7111, and the first half-shaft sleeve portion 7113b is slidably sleeved on the first half-length guide rod 7112; the first direction guide rod 7111 and the first half-length guide rod 7112 are each fixed in the fixed cylinder 75, and the position of the first magnet 7411 corresponds to the first Hall element 721 and the first coil 7412. The second magnet 7421 is fixed in the second mounting groove 7123b, and the second shaft sleeve portion 7123a of the second direction sliding block 7123 is slidably sleeved on the second direction guide rod 7121. The second direction guide rod 7121 is fixedly disposed on the first direction sliding block 7113. The position of the second magnet 7421 corresponds to the second Hall element 722 and the second coil 7422, and the optical axis of the lens element 101 is aligned with the center of the central opening of the fixed cylinder 75.
The assembled OIS unit 70 is received in the second receiving space 131 of the movable frame 13, and the fixed cylinder 75 and the movable frame 13 are threaded to each other through the external thread 751 and the internal thread 134.
During shooting, if the lens element 101 is offset by shake, the first Hall element 721 senses the offset of the lens element 101 in the first direction, and the second Hall element 722 senses the offset of the lens element 101 in the second direction. According to the Hall effect, when a current passes through a conductor perpendicular to an external magnetic field, a potential difference appears between the two faces of the conductor perpendicular to both the magnetic field and the current direction. In the present embodiment, the first magnet 7411 forms a magnetic field around the first Hall element 721 and the second magnet 7421 forms a magnetic field around the second Hall element 722. When the first direction sliding block 7113 is offset by shake, the magnetic field strength at the position of the first Hall element 721 changes, so the potential difference sensed by the first Hall element 721 changes; from the change of this potential difference, the offset of the first direction sliding block 7113 can be calculated. Likewise, the offset of the second direction sliding block 7123 can be calculated from the change of the potential difference sensed by the second Hall element 722. From the offsets of the first direction sliding block 7113 and the second direction sliding block 7123, the required shake compensation amount is obtained. According to the required shake compensation amount, corresponding currents are passed through the first coil 7412 and/or the second coil 7422 to drive the first direction sliding block 7113 and/or the second direction sliding block 7123 to move, thereby achieving optical image stabilization.
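A sketch of this measurement-and-drive loop is given below (Python). The linear Hall sensitivity, the proportional control law, and the current limit are assumed calibration values; the patent only describes that the offsets are obtained from the Hall voltage changes and that compensating currents are passed through the coils.

```python
def hall_offset_um(hall_voltage_v, rest_voltage_v, sensitivity_v_per_um):
    """Offset of a sliding block estimated from the change of the Hall
    element's output voltage relative to its rest value."""
    return (hall_voltage_v - rest_voltage_v) / sensitivity_v_per_um

def compensation_current_ma(offset_um, gain_ma_per_um=2.0, limit_ma=120.0):
    """Coil current that pushes the sliding block back toward its rest
    position (simple proportional control, clipped to a safe range)."""
    current = -gain_ma_per_um * offset_um
    return max(-limit_ma, min(limit_ma, current))

# first direction: current for coil 7412 from Hall element 721
# i_x = compensation_current_ma(hall_offset_um(v721, v721_rest, k721))
# second direction: current for coil 7422 from Hall element 722
# i_y = compensation_current_ma(hall_offset_um(v722, v722_rest, k722))
```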
In addition, according to the focus drive amount obtained by the drive amount computing module 47, the driving chip 613 can control the piezoelectric motor 612 to move the movable frame 13, driving the lens element 101 to the best focusing position and thereby achieving accurate focusing.
The second imaging lens B1 has a structure similar to that of the first imaging lens, the second focus driver has a structure similar to that of the first focus driver, and the OIS unit 70 corresponding to the second imaging lens B1 has a structure similar to that corresponding to the first imaging lens A1 described above.
The 3D imaging module 100 includes an image composing unit 80. The image composing unit 80 reads from the memory 20 the images focused through the first focus driver 61 and the second focus driver 62, or the images blur-corrected by the image processing unit 50, and composes these images to obtain a 3D image. Specifically, each time, the image composing unit 80 reads the images of the same scene captured simultaneously from different angles by the first imaging unit A and the second imaging unit B after focusing or blur correction, and recovers the depth and distance information of objects in the scene from the different shooting angles of the first imaging unit A and the second imaging unit B toward the same scene, obtaining an image with a sense of depth and distance.
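The depth recovery that underlies this composition can be illustrated with the standard stereo triangulation below (Python). The baseline between the two imaging units and the focal length in pixels are calibration values not given in the patent, so the numbers in the usage comment are placeholders.

```python
def depth_from_disparity_mm(disparity_px, focal_length_px, baseline_mm):
    """Depth of a scene point from the disparity of its projections in the
    images of the first and second imaging units (pinhole stereo model)."""
    if disparity_px <= 0:
        return float("inf")               # point at (or beyond) infinity
    return focal_length_px * baseline_mm / disparity_px

# e.g. a feature shifted 12 px between unit A and unit B, with a 20 mm
# baseline and a 1400 px focal length, lies at roughly 2.3 m:
# depth_from_disparity_mm(12, focal_length_px=1400, baseline_mm=20)
```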
Referring to Figs. 5A and 5B, the 3D imaging method of an embodiment of the present invention is applied to the 3D imaging module described in the above embodiments and includes the following steps:
shooting the same scene simultaneously from different angles with two imaging units;
detecting shake of the two imaging units before shooting, and determining whether shake is present;
if shake is present, determining the amount of shake and performing shake compensation according to the amount of shake;
performing color separation on the images sensed by the image sensors of the two imaging units, the images being represented as red, green, and blue primary-color images;
performing MTF operations on the image regions sensed by each pixel unit of the image sensors of the two imaging units to obtain the MTF value corresponding to the image region sensed by each pixel unit;
determining, from the MTF value of the image sensed by each pixel unit, the object distance of the image sensed by that pixel unit;
determining the current shooting mode according to the object distances of the images sensed by the pixel units;
if the current shooting mode is the near-focus mode, performing the following steps:
determining, from the object distances of the images sensed by the pixel units, the best focusing positions of the imaging lenses of the two imaging units;
determining the focus drive amounts of the imaging lenses of the two imaging units according to the best focusing positions;
driving the imaging lenses of the two imaging units to their best focusing positions according to the focus drive amounts;
composing the focused images captured by the two imaging units to obtain a 3D image;
if the current shooting mode is the far-focus mode, performing the following steps:
determining, from the MTF value corresponding to the image region sensed by each pixel unit, the blur amount of the image sensed by that pixel unit;
determining, from the blur amount of the image sensed by each pixel unit, the blur correction amount for the image sensed by that pixel unit;
correcting blur in the image sensed by each pixel unit according to the blur correction amount;
composing the blur-corrected images captured by the two imaging units to obtain a 3D image.
The 3D imaging module and the 3D imaging method determine the object distance of the subject by means of software simulation, determine the current shooting mode from the object distance, and select, according to the shooting mode, either software computation or driving of the imaging lenses for focusing, so that a sharply focused image can be obtained in both the near-focus and the far-focus shooting modes; the images of the same scene captured simultaneously by the two imaging units are composed, so a relatively sharp 3D image can be obtained. In addition, by performing shake detection and shake compensation through the OIS units before shooting, image blur caused by shake during focusing can be prevented from affecting the computation accuracy of the blur correction amount, so the imaging result can be improved.
In addition, those skilled in the art may make other variations within the spirit of the present invention; such variations shall all fall within the scope of the present invention.

Claims (7)

1. A 3D imaging module, comprising a first imaging unit and a second imaging unit, a memory connected to the first imaging unit and the second imaging unit, a color separation unit connected to the memory, a processor connected to the color separation unit, an image processing unit connected to the processor, a focus driving unit, two optical image stabilization units corresponding to the first imaging unit and the second imaging unit respectively, and an image composing unit; the first imaging unit and the second imaging unit capture images of the same scene simultaneously from different angles; the memory stores the images captured by the first imaging unit and the second imaging unit; the color separation unit represents the images captured by the first imaging unit and the second imaging unit in the red, green, and blue primary colors; the processor performs MTF operations on the images captured by the first imaging unit and the second imaging unit respectively, determines a current shooting mode according to the operation results, and, according to the determined current shooting mode, selects and controls the image processing unit or the focus driving unit; the image processing unit corrects out-of-focus blur in the images captured by the first imaging unit and the second imaging unit by image processing; the focus driving unit focuses the first imaging unit and the second imaging unit; the optical image stabilization units detect shake of the first imaging unit and the second imaging unit before shooting and, according to the detected shake, apply shake compensation to the first imaging unit and the second imaging unit; the image composing unit composes the images captured by the first imaging unit and the second imaging unit after the image processing or after focusing to obtain a 3D image;
the first imaging unit includes a first imaging lens and a first image sensor aligned with the optical axis of the first imaging lens, the second imaging unit includes a second imaging lens and a second image sensor aligned with the optical axis of the second imaging lens, and each of the first imaging lens and the second imaging lens includes at least one aspheric lens element with positive refractive power; the processor includes an MTF computing module, an object distance computing module, an object distance judgment module, a blur amount computing module, a blur correction amount computing module, a focusing position computing module, and a drive amount computing module; the MTF computing module performs MTF operations on the image regions sensed by each pixel unit of the first image sensor and the second image sensor to obtain the MTF values of the corresponding regions; the object distance computing module determines the object distance of the image sensed by each pixel unit according to the operation result of the MTF computing module; the object distance judgment module determines the current shooting mode according to the operation result of the object distance computing module; the blur amount computing module determines, according to the operation result of the MTF computing module, the difference between the MTF value computed for each pixel unit and the standard MTF value for the corresponding object distance, and determines the blur amount of the image sensed by each pixel unit according to the difference; the blur correction amount computing module determines, according to the blur amount obtained by the blur amount computing module, the correction amount with which blur correction is applied to the image sensed by each pixel unit; the focusing position computing module determines the best focusing positions of the first imaging lens and the second imaging lens according to the operation result of the object distance computing module; the drive amount computing module determines the focus drive amounts of the first imaging lens and the second imaging lens according to the best focusing positions of the first imaging lens and the second imaging lens determined from the operation result of the object distance computing module.
2. The 3D imaging module of claim 1, characterized in that the MTF computing module computes MTF values separately for the three primary-color images corresponding to each pixel unit.
3. The 3D imaging module of claim 1, characterized in that the object distance judgment module performs an aggregate operation on the operation results of the object distance computing module, compares the result of the aggregate operation with a preset standard value, and determines the current shooting mode according to the comparison result.
4. The 3D imaging module of claim 3, characterized in that the aggregate operation samples the object distances of the images sensed by the pixel units obtained by the object distance computing module and computes, from the sampled data, an object distance characteristic value representing the distance of the main subject of the current shot.
5. The 3D imaging module of claim 4, characterized in that the preset standard value is 40 cm; if the object distance characteristic value is greater than 40 cm, the object distance judgment module determines that the current shooting mode is a far-focus mode, and if the object distance characteristic value is less than or equal to 40 cm, the object distance judgment module determines that the current shooting mode is a near-focus mode.
6. The 3D imaging module of claim 5, characterized in that the blur amount computing module is enabled when the current shooting mode is the far-focus mode, and the focusing position computing module is enabled when the current shooting mode is the near-focus mode.
7. A 3D imaging method, comprising the following steps:
shooting the same scene simultaneously from different angles with two imaging units;
detecting shake of the two imaging units before shooting, and determining whether shake is present;
if shake is present, determining the amount of shake and performing shake compensation according to the amount of shake;
performing color separation on the images sensed by the image sensors of the two imaging units;
performing MTF operations on the image regions sensed by each pixel unit of the image sensors of the two imaging units to obtain the MTF value corresponding to the image region sensed by each pixel unit;
determining, from the MTF value of the image sensed by each pixel unit, the object distance of the image sensed by that pixel unit;
determining the current shooting mode according to the object distances of the images sensed by the pixel units;
if the current shooting mode is the near-focus mode, performing the following steps:
determining, from the object distances of the images sensed by the pixel units, the best focusing positions of the imaging lenses of the two imaging units;
determining the focus drive amounts of the imaging lenses of the two imaging units according to the best focusing positions;
driving the imaging lenses of the two imaging units to their best focusing positions according to the focus drive amounts;
composing the focused images captured by the two imaging units to obtain a 3D image;
if the current shooting mode is the far-focus mode, performing the following steps:
determining, from the MTF value corresponding to the image region sensed by each pixel unit, the blur amount of the image sensed by that pixel unit;
determining, from the blur amount of the image sensed by each pixel unit, the blur correction amount for the image sensed by that pixel unit;
correcting blur in the image sensed by each pixel unit according to the blur correction amount;
composing the blur-corrected images captured by the two imaging units to obtain a 3D image.
CN201110444722.2A 2011-12-27 2011-12-27 3D imaging modules and 3D imaging methods Expired - Fee Related CN103188424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110444722.2A CN103188424B (en) 2011-12-27 2011-12-27 3D imaging modules and 3D imaging methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110444722.2A CN103188424B (en) 2011-12-27 2011-12-27 3D imaging modules and 3D imaging methods

Publications (2)

Publication Number Publication Date
CN103188424A CN103188424A (en) 2013-07-03
CN103188424B 2018-10-09

Family

ID=48679376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110444722.2A Expired - Fee Related CN103188424B (en) 2011-12-27 2011-12-27 3D imaging modules and 3D imaging methods

Country Status (1)

Country Link
CN (1) CN103188424B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841404A (en) * 2014-03-18 2014-06-04 江西省一元数码科技有限公司 Novel three-dimensional image shooting module
DE102016208210A1 2016-05-12 2017-11-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. 3D multi-aperture imaging devices, multi-aperture imaging device, method for providing an output signal of a 3D multi-aperture imaging device, and method for capturing a total field of view
US10313654B1 (en) * 2018-03-19 2019-06-04 Htc Corporation Image processing method, electronic device, and non-transitory computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010071001A1 (en) * 2008-12-15 2010-06-24 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN102262331A (en) * 2010-05-25 2011-11-30 鸿富锦精密工业(深圳)有限公司 Image acquisition module and image acquisition method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005189654A (en) * 2003-12-26 2005-07-14 Konica Minolta Photo Imaging Inc Camera equipped with camera-shake correction mechanism
JP4859625B2 (en) * 2006-10-27 2012-01-25 Hoya株式会社 Camera with image stabilization device
JP4692770B2 (en) * 2006-12-27 2011-06-01 富士フイルム株式会社 Compound eye digital camera
CN102103320A (en) * 2009-12-22 2011-06-22 鸿富锦精密工业(深圳)有限公司 Three-dimensional imaging camera module

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010071001A1 (en) * 2008-12-15 2010-06-24 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN102262331A (en) * 2010-05-25 2011-11-30 鸿富锦精密工业(深圳)有限公司 Image acquisition module and image acquisition method thereof

Also Published As

Publication number Publication date
CN103188424A (en) 2013-07-03

Similar Documents

Publication Publication Date Title
CN103748493B (en) Camera head and its control method
CN101324709B (en) Imaging apparatus
CN103782586B (en) Imaging device
CN108737734A (en) Image compensation method and device, computer readable storage medium and electronic equipment
CN107465867A (en) Camera device and image capture method
US9113155B2 (en) 3D camera module and 3D imaging method using same
US20120026297A1 (en) Imaging apparatus and imaging method
CN105407271B (en) Image processing equipment and method, picture pick-up device and image forming apparatus
CN101390381B (en) Blur detecting device, blur correcting device, imaging device, and blur detecting method
TWI514869B (en) Autofocus imaging module and autofocus method
CN108769528A (en) Image compensation method and device, computer readable storage medium and electronic equipment
CN108024053A (en) Camera device, focus adjusting method and recording medium
CN106488116B (en) Photographic device
CN105593738B (en) Focus-regulating device, camera and focus adjusting method
CN106954007A (en) Camera device and image capture method
CN104243863B (en) Filming apparatus, image pickup method
CN107135338A (en) Camera system and its control method, picture pick-up device and lens assembly
CN101324708B (en) Imaging apparatus
CN102318331A (en) Stereoscopic image pick-up apparatus
CN107135349A (en) Picture pick-up device, lens unit, camera system and its control method
CN108322650A (en) Video capture method and apparatus, electronic equipment, computer readable storage medium
CN102572235A (en) Imaging device, image processing method and computer program
CN101350888B (en) Device for viewfinding image and automatic focusing method thereof
Ben-Ezra A digital gigapixel large-format tile-scan camera
CN103118226A (en) Light source estimation device, light source estimation method, light source estimation program, and imaging apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170922

Address after: Guangdong province Shenzhen city Longhua District Dalang street community of Longsheng gold dragon road e-commerce incubator exhibition Tao Commercial Plaza E block 706

Applicant after: Shenzhen step Technology Transfer Center Co., Ltd.

Address before: 518109 Guangdong city of Shenzhen province Baoan District Longhua Town Industrial Zone tabulaeformis tenth East Ring Road No. 2 two

Applicant before: Hongfujin Precise Industry (Shenzhen) Co., Ltd.

Applicant before: Hon Hai Precision Industry Co., Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20180809

Address after: 518000 12 floor, Tianjian business building, Hongli West Road, Futian District, Shenzhen, Guangdong.

Applicant after: Fortune culture industry group (Shenzhen) Limited

Address before: 518000 Guangdong Shenzhen Longhua New District big wave street Longsheng community Tenglong road gold rush e-commerce incubation base exhibition hall E commercial block 706

Applicant before: Shenzhen step Technology Transfer Center Co., Ltd.

GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181009

Termination date: 20201227