WO2018233373A1 - Image processing method and apparatus, and device - Google Patents

Image processing method and apparatus, and device

Info

Publication number
WO2018233373A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
sub-image
region
preset
Prior art date
Application number
PCT/CN2018/084518
Other languages
French (fr)
Chinese (zh)
Inventor
李安
王庆平
王提政
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Priority claimed from CN201710488848.7A (CN107295256A)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP18820572.8A (EP3629569A4)
Publication of WO2018233373A1
Priority to US16/723,554 (US11095812B2)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules

Definitions

  • the present invention relates to the field of terminal technologies, and in particular, to an image processing method, apparatus, and device.
  • aperture value (FNO) = focal length / entrance pupil diameter
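  • As a worked example using the parameters quoted later for the example cameras (focal length 3.95 mm, FNO 1.4), the definition above implies an entrance pupil diameter of:

```latex
D = \frac{f}{\mathrm{FNO}} = \frac{3.95\ \text{mm}}{1.4} \approx 2.82\ \text{mm}
```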
  • a conventional photographing lens has a structure of six lenses, which ensures ideal imaging in the entire field of view (FOV) of the lens.
  • the lens design system determines the number of lenses from the user's required aperture value based on empirical values, then calculates the positions, focal lengths, shapes, and other parameters of the lenses from physical and mathematical principles, and the lens is then manufactured and assembled into a finished product.
  • the embodiments of the invention provide a camera module and a terminal that realize a large aperture (FNO < 1.6) with a dual-camera or multi-camera arrangement while remaining within the capability of current production processes, providing a simple and effective way to achieve a large aperture.
  • an embodiment of the present invention provides an image processing method, which is applied to a photographing apparatus including a first camera and a second camera.
  • the optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6;
  • the method includes: acquiring a first image of the object to be photographed captured by the first camera; acquiring a second image of the object to be photographed captured by the second camera; acquiring a first sub-image of the first image according to a first preset rule, where the first sub-image corresponds to a field of view range [0, θ1] of the first camera; acquiring a second sub-image of the second image according to a second preset rule, where the second sub-image corresponds to a field of view range [θ2, θ3] of the second camera, wherein 0 < θ2 < θ1 < θ3 and the first sub-image and the second sub-image have overlapping images; and obtaining a target image from the first sub-image and the second sub-image according to a preset splicing algorithm.
  • an embodiment of the present invention provides an image processing apparatus, where the apparatus is applied to a photographing apparatus including a first camera and a second camera, wherein the optical axes of the first camera and the second camera are parallel to each other, the spacing between the first camera and the second camera is less than a preset distance, and the aperture values of the first camera and the second camera are both less than 1.6;
  • the device includes: a first acquiring module, configured to acquire a first image of the object to be photographed captured by the first camera; and a second acquiring module, configured to acquire a second image of the object to be photographed captured by the second camera;
  • a third acquiring module, configured to acquire the first sub-image of the first image according to the first preset rule, where the first sub-image corresponds to a field of view range [0, θ1] of the first camera;
  • a fourth acquiring module, configured to acquire the second sub-image of the second image according to the second preset rule, where the second sub-image corresponds to a field of view range [θ2, θ3] of the second camera; wherein 0 < θ2 < θ1 < θ3, the first sub-image and the second sub-image have overlapping images, and an image splicing module is configured to obtain the target image from the first sub-image and the second sub-image according to a preset splicing algorithm.
  • in this way, the task of imaging with an oversized aperture can be shared by the two cameras: the high-definition area under the super-large-aperture condition is obtained through the first camera according to a certain algorithm, that is, the first sub-image; then the high-definition area under the super-large-aperture condition is obtained through the second camera according to a certain algorithm, that is, the second sub-image; splicing and merging the first sub-image and the second sub-image yields a target image whose entire field of view satisfies the high-definition requirement.
  • the difficulty in designing and manufacturing the camera is reduced, and the design cost and the manufacturing cost are saved.
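  • As an illustration of this division of labour, the following is a minimal sketch, assuming the two captured images have already been registered to a common coordinate frame; the sensor size and region radii are illustrative stand-ins, not values from the patent.

```python
import numpy as np

# Illustrative stand-ins for the first and second images (already registered).
H, W = 480, 640
img1 = np.random.randint(0, 256, (H, W), dtype=np.uint8)   # sharp only near the centre
img2 = np.random.randint(0, 256, (H, W), dtype=np.uint8)   # sharp only in an outer ring

yy, xx = np.mgrid[0:H, 0:W]
r = np.hypot(yy - (H - 1) / 2, xx - (W - 1) / 2)            # distance from the image centre

sub1 = r <= 260                    # first sub-image region, FOV [0, theta1]
sub2 = (r >= 220) & (r <= 420)     # second sub-image region, FOV [theta2, theta3]

# Simple splice: take the first camera where it is sharp, the second camera elsewhere.
target = np.where(sub1, img1, np.where(sub2, img2, 0)).astype(np.uint8)
assert (sub1 | sub2).all()         # together the two sharp regions cover the whole sensor
```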
  • acquiring the first sub-image of the first image according to the first preset rule comprises: acquiring first parameter information of the first camera;
  • the first parameter information expresses that, for the image captured by the first camera within the field of view range [0, θ1], the modulation transfer function MTF value at a preset spatial frequency is greater than a first preset threshold, where θ1 is smaller than 1/2 of the angle of view of the first camera; acquiring an image receiving area P of the image sensor in the first camera; and determining the image of the intersection of the region of the first image corresponding to the field of view range [0, θ1] with P as the first sub-image.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory to carry out the corresponding operations.
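  • For illustration only, the sketch below shows how θ1 could be read off from stored parameter information of this kind: the tabulated MTF values are invented, but their shape mirrors the behaviour described later (sharp up to roughly 32°, degraded beyond), while the 0.25 threshold and the 500 line pairs/mm frequency are the values mentioned in this description.

```python
# Illustrative MTF values at the preset spatial frequency (e.g. 500 line pairs/mm),
# indexed by half field angle in degrees. Real values would come from the stored
# first parameter information of the camera; these numbers are made up.
MTF_AT_PRESET_FREQUENCY = {
    0: 0.55, 8: 0.52, 16: 0.47, 24: 0.38, 32: 0.30, 36: 0.12, 37.5: 0.08,
}
FIRST_PRESET_THRESHOLD = 0.25

def find_theta1(mtf_by_angle, threshold):
    """Largest half field angle whose MTF still exceeds the threshold.

    Assumes the MTF decreases monotonically with field angle, as in Figure 6.
    """
    passing = [angle for angle, mtf in mtf_by_angle.items() if mtf > threshold]
    return max(passing) if passing else 0.0

theta1 = find_theta1(MTF_AT_PRESET_FREQUENCY, FIRST_PRESET_THRESHOLD)
print(theta1)   # 32 -> the first sub-image covers the field of view range [0, 32 degrees]
```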
  • acquiring the second sub-image of the second image according to the second preset rule comprises: acquiring second parameter information of the second camera;
  • the second parameter information expresses that, for the image captured by the second camera within the field of view range [θ2, θ3], the modulation transfer function MTF value at the preset spatial frequency is greater than a second preset threshold;
  • where 0 < θ2 < θ1, and θ3 is less than or equal to 1/2 of the angle of view of the second camera; acquiring an image receiving area Q of the image sensor in the second camera; and determining the image of the intersection of the region of the second image corresponding to the field of view range [θ2, θ3] with Q as the second sub-image.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory to carry out the corresponding operations.
  • obtaining the target image from the first sub-image and the second sub-image according to the preset splicing algorithm comprises: determining the image of the intersection region S3 of S1 and S2; determining the image of the complement region S32 of S3 in S2; and splicing the image of S1 with the image of S32 according to a first preset stitching algorithm to obtain the target image (here S1 denotes the first sub-image and S2 denotes the second sub-image).
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory to carry out the corresponding operations.
  • obtaining the target image from the first sub-image and the second sub-image according to the preset splicing algorithm comprises: determining the image of the intersection region S3 of S1 and S2; determining the image of the complement region S31 of S3 in S1; and splicing the image of S31 with the image of S2 according to a second preset stitching algorithm to obtain the target image.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory to carry out the corresponding operations.
  • obtaining the target image from the first sub-image and the second sub-image according to the preset splicing algorithm comprises: determining the image of the intersection region S3 of S1 and S2; determining the image of the complement region S31 of S3 in S1; determining the image of the complement region S32 of S3 in S2; performing enhancement processing on the image of S3 according to a preset enhancement algorithm to obtain an image of S4; and splicing the image of S31, the image of S32, and the image of S4 according to a third preset stitching algorithm to obtain the target image.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory to carry out the corresponding operations.
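  • The three splicing options above all operate on the same three regions. The following sketch shows how those regions relate when S1 and S2 are represented as boolean masks over the sensor grid; the array shape and radii are illustrative assumptions, not values from the patent.

```python
import numpy as np

yy, xx = np.mgrid[0:480, 0:640]
r = np.hypot(yy - 239.5, xx - 319.5)

S1 = r <= 260                   # first sub-image: central disc, FOV [0, theta1]
S2 = (r >= 220) & (r <= 420)    # second sub-image: ring, FOV [theta2, theta3]

S3 = S1 & S2                    # intersection region of S1 and S2 (the overlap ring)
S31 = S1 & ~S3                  # complement of S3 within S1 (inner disc only)
S32 = S2 & ~S3                  # complement of S3 within S2 (outer ring only)

# S31, S3 and S32 are pairwise disjoint and together make up the union of S1 and S2.
assert not (S31 & S3).any() and not (S32 & S3).any() and not (S31 & S32).any()
assert ((S31 | S3 | S32) == (S1 | S2)).all()
```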
  • the first camera comprises a first imaging lens and the second camera comprises a second imaging lens; the first imaging lens is designed according to a first preset requirement and the second imaging lens is designed according to a second preset requirement; the first preset requirement corresponds to the first parameter information, and the second preset requirement corresponds to the second parameter information.
  • these design requirements are intrinsic properties of the imaging lenses; they are stored in advance in the photographing device or on a server, and can be retrieved by the processor during subsequent image processing so that the first sub-image can be determined from the first image and the second sub-image can be determined from the second image.
  • the imaging lenses of the first camera and the second camera each comprise 4, 5, or 6 lenses.
  • the aperture values of the first camera and the second camera are equal.
  • the image sensors of the first camera and the second camera are the same. Therefore, the above P and Q are also the same.
  • the focal length and the (maximum) field of view of the first camera and the second camera are the same.
  • the first preset threshold and the second preset threshold are greater than or equal to 0.25.
  • the preset spatial frequency is greater than 400 line pairs/mm.
  • the larger the spatial frequency, the finer the detail represented in the corresponding image.
  • the higher the spatial frequency at which the MTF remains above the preset threshold, the better the resolution of the image.
  • S1 is a circular area.
  • for the above-mentioned field of view ranges [0, θ1], [θ2, θ3], and so on, the corresponding image area is not necessarily a regular circle or ring; it may be an approximate circle or ring, or some irregular pattern. As long as all of the high-resolution sub-images finally obtained by the cameras, taken with the same image sensor as the reference, cover the area where the image sensor is located, seamless stitching is possible and a target image that satisfies the high-definition requirement of the super-large aperture can be formed.
  • in the process of determining the sub-images, the image processing program may also take, within the above field of view ranges [0, θ1] and [θ2, θ3], non-circular regions such as squares or ellipses, and perform the corresponding stitching, as long as the union of these sub-images covers the area where the image sensor is located.
  • the photographing apparatus further includes an adjusting device, and the method further comprises: controlling the adjusting device to adjust the spacing between the first camera and the second camera. If the object to be photographed is closer to the lens, the spacing between the two cameras needs to be smaller to ensure that the acquired sub-image areas still overlap. If the object to be photographed is farther from the lens, the spacing between the two cameras can be slightly larger, so that the acquired sub-images overlap without the overlapping area becoming excessive.
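  • The patent does not give a quantitative rule for this adjustment; as a hedged illustration of why a closer subject calls for a smaller spacing, the sketch below uses the thin-lens stereo parallax relation (an assumption for illustration, not something stated in the patent), with the 3.95 mm focal length quoted later and made-up baselines and distances.

```python
def sensor_shift_mm(baseline_mm, focal_length_mm, subject_distance_mm):
    """Approximate lateral shift between the two cameras' images of the same subject.

    Thin-lens stereo approximation, shift ~ f * B / Z; illustrative only.
    """
    return focal_length_mm * baseline_mm / subject_distance_mm

# A subject at 0.5 m shifts twice as much as one at 1 m, so the closer the subject,
# the smaller the baseline must be to keep the two sub-image areas overlapping.
print(sensor_shift_mm(10.0, 3.95, 1000.0))   # ~0.040 mm
print(sensor_shift_mm(10.0, 3.95, 500.0))    # ~0.079 mm
print(sensor_shift_mm(5.0, 3.95, 500.0))     # ~0.040 mm (halving the spacing compensates)
```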
  • the photographing apparatus further includes a third camera, the optical axis of the third camera and the optical axis of the first camera are parallel to each other;
  • the distance between the third camera and the first camera is less than a preset distance;
  • the distance between the third camera and the second camera is less than a preset distance;
  • the method further includes: acquiring a third image of the object to be photographed captured by the third camera; and acquiring a third sub-image of the third image according to a third preset rule, the third sub-image corresponding to a field of view range [θ4, θ5] of the third camera; wherein θ2 < θ4 < θ3 < θ5, the second sub-image and the third sub-image have overlapping images, and θ5 is smaller than 1/2 of the angle of view of the third camera; and obtaining the target image according to the preset splicing algorithm includes splicing the first sub-image, the second sub-image, and the third sub-image according to the preset splicing algorithm to obtain the target image.
  • acquiring the third sub-image of the third image according to the third preset rule comprises: acquiring third parameter information of the third camera;
  • the third parameter information expresses that, for the image captured by the third camera within the field of view range [θ4, θ5], the modulation transfer function MTF value at the preset spatial frequency is greater than a third preset threshold; wherein θ5 is less than 1/2 of the angle of view of the third camera;
  • the image of the intersection of the region of the third image corresponding to the field of view range [θ4, θ5] with the image receiving area R of the image sensor in the third camera is determined as the third sub-image.
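  • For illustration, the earlier mask sketch extends naturally to three cameras, with the third sub-image an outer ring for [θ4, θ5] that overlaps the second one; all radii below are illustrative assumptions.

```python
import numpy as np

yy, xx = np.mgrid[0:480, 0:640]
r = np.hypot(yy - 239.5, xx - 319.5)

sub1 = r <= 260                   # first camera:  FOV [0, theta1]
sub2 = (r >= 220) & (r <= 330)    # second camera: FOV [theta2, theta3]
sub3 = (r >= 300) & (r <= 420)    # third camera:  FOV [theta4, theta5], overlapping sub2

# theta2 < theta4 < theta3 < theta5 translates into nested, overlapping rings,
# and together the three sharp regions still cover the whole sensor area.
assert (sub1 | sub2 | sub3).all()
```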
  • an embodiment of the present invention provides a terminal device, where the terminal device includes a first camera, a second camera, a memory, a processor, and a bus; the first camera, the second camera, the memory, and the processor are connected by the bus; the optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6; the cameras are configured to acquire image signals under the control of the processor; the memory is configured to store computer programs and instructions; and the processor is configured to call the computer programs and instructions stored in the memory to perform any of the possible implementations described above.
  • the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with the mobile communication network;
  • the mobile communication network includes one or more of the following: GSM, CDMA, 3G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TD-SCDMA, WiFi, and LTE networks.
  • an embodiment of the present invention provides an image processing method, where the method is applied to a photographing device including a first camera and a second camera, and optical axes of the first camera and the second camera are parallel to each other.
  • the spacing between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6, and the number of lenses of each of the first camera and the second camera is not more than 6;
  • the method includes: acquiring a first image of the object to be photographed captured by the first camera; acquiring a second image of the object to be photographed captured by the second camera; acquiring a first sub-image of the first image, wherein the definition of the first sub-image satisfies a preset definition standard; acquiring a second sub-image of the second image, wherein the definition of the second sub-image satisfies the preset definition standard; the first sub-image and the second sub-image have an image intersection; and performing fusion processing on the first sub-image and the second sub-image to obtain a target image.
  • an embodiment of the present invention provides an image processing apparatus, where the apparatus is applied to a photographing apparatus including a first camera and a second camera, wherein the optical axes of the first camera and the second camera are parallel to each other, the distance between the first camera and the second camera is less than a preset distance, the aperture values of the first camera and the second camera are both less than 1.6, and the number of lenses of each of the first camera and the second camera is not more than 6;
  • the device includes: a first acquiring module, configured to acquire a first image of the object to be photographed captured by the first camera; a second acquiring module, configured to acquire a second image of the object to be photographed captured by the second camera; a third acquiring module, configured to acquire a first sub-image of the first image, wherein the definition of the first sub-image satisfies a preset definition standard; a fourth acquiring module, configured to acquire a second sub-image of the second image, wherein the definition of the second sub-image satisfies the preset definition standard; and, the first sub-image and the second sub-image having an image intersection, an image splicing module configured to perform fusion processing on the first sub-image and the second sub-image to obtain a target image.
  • acquiring the first sub-image of the first image comprises: acquiring a first physical design parameter of the first camera, wherein the first physical design parameter expresses that, in any image captured by the first camera, the sharpness of the image of a first region is higher than the sharpness of the image of a second region and satisfies the preset sharpness criterion, the second region being the complement of the first region in any image captured by the first camera; acquiring the first region of the first image according to the first physical design parameter; acquiring an image receiving area P of the image sensor in the first camera; and determining the image of the intersection area S1 of the first region of the first image and P as the first sub-image.
  • This technical feature can be implemented by a third acquisition module.
  • the first region and the second region may be of any shape, which is not limited in the embodiments of the present invention.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory or in the cloud to carry out the corresponding operations.
  • acquiring the second sub-image of the second image comprises: acquiring a second physical design parameter of the second camera, wherein the second physical design parameter expresses that, in any image captured by the second camera, the sharpness of the image of a third region is higher than the sharpness of the image of a fourth region and satisfies the preset sharpness criterion, the fourth region being the complement of the third region in any image captured by the second camera; acquiring the third region of the second image according to the second physical design parameter; acquiring an image receiving area Q of the image sensor in the second camera; and determining the image of the intersection area S2 of the third region of the second image and Q as the second sub-image.
  • This technical feature can be implemented by the fourth acquisition module.
  • the third region and the fourth region may be of any shape, which is not limited in the embodiments of the present invention.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory or in the cloud to carry out the corresponding operations.
  • the first physical design parameter comprises: for the image captured by the first camera within the field of view range [0, θ1], the modulation transfer function MTF value at a preset spatial frequency is greater than the first preset threshold, wherein θ1 is smaller than 1/2 of the angle of view of the first camera; and for the image captured by the first camera within the other field of view ranges, the MTF value at the preset spatial frequency is not greater than the first preset threshold.
  • the second physical design parameter comprises: for the image captured by the second camera within the field of view range [θ2, θ3], the modulation transfer function MTF value at the preset spatial frequency is greater than a second preset threshold, wherein θ3 is less than 1/2 of the angle of view of the second camera and 0 < θ2 < θ1 < θ3; and for the image captured by the second camera within the other field of view ranges, the MTF value at the preset spatial frequency is not greater than the second preset threshold.
  • This information can be stored in memory or in the network cloud.
  • performing the fusion processing on the first sub-image and the second sub-image to obtain the target image includes any one of the following three manners, which can be implemented by the image stitching module:
  • Manner 1: determining the image of the intersection region S3 of S1 and S2; determining the image of the complement region S32 of S3 in S2; and fusing the image of S1 and the image of S32 to obtain the target image; or,
  • Manner 2: determining the image of the intersection region S3 of S1 and S2; determining the image of the complement region S31 of S3 in S1; and fusing the image of S31 and the image of S2 to obtain the target image; or,
  • Manner 3: determining the image of the intersection region S3 of S1 and S2; determining the image of the complement region S31 of S3 in S1; determining the image of the complement region S32 of S3 in S2; performing enhancement processing on the image of S3 according to a preset enhancement algorithm to obtain an image of S4; and fusing the image of S31, the image of S32, and the image of S4 to obtain the target image.
  • this technical implementation can be performed by the processor invoking programs and instructions in the memory to carry out the corresponding operations.
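  • A sketch of the three manners as array operations, assuming img1 and img2 are already registered single-channel images and S1, S2, S3, S31, S32 are boolean masks like those in the earlier sketch; simple averaging stands in for the unspecified preset enhancement algorithm, and all function names are illustrative, not from the patent.

```python
import numpy as np

def fuse_manner1(img1, img2, S1, S32):
    """Manner 1: keep the image of S1, fill the rest from S32 of the second image."""
    out = np.zeros_like(img1)
    out[S1] = img1[S1]
    out[S32] = img2[S32]
    return out

def fuse_manner2(img1, img2, S31, S2):
    """Manner 2: keep the image of S2, fill the rest from S31 of the first image."""
    out = np.zeros_like(img1)
    out[S2] = img2[S2]
    out[S31] = img1[S31]
    return out

def fuse_manner3(img1, img2, S31, S32, S3):
    """Manner 3: keep S31 and S32, and enhance the overlap S3 (here: averaged) into S4."""
    out = np.zeros_like(img1)
    out[S31] = img1[S31]
    out[S32] = img2[S32]
    s4 = (img1[S3].astype(np.float32) + img2[S3].astype(np.float32)) / 2.0
    out[S3] = s4.astype(img1.dtype)
    return out
```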
  • an adjustment module is further included, configured to adjust the spacing between the first camera and the second camera.
  • the photographing apparatus further includes a third camera, an optical axis of the third camera and an optical axis of the first camera are parallel to each other;
  • the distance between the third camera and the first camera is less than a preset distance;
  • the distance between the third camera and the second camera is less than a preset distance;
  • the method further includes: acquiring a third image of the object to be photographed captured by the third camera; and acquiring third parameter information of the third camera, wherein the third camera is designed according to the third parameter information;
  • the third parameter information expresses that, for the image captured by the third camera within the field of view range [θ4, θ5], the modulation transfer function MTF value at the preset spatial frequency is greater than a third preset threshold; wherein θ2 < θ4 < θ3 < θ5 and θ5 is smaller than 1/2 of the angle of view of the third camera; acquiring a third sub-image of the third image according to the third parameter information; wherein the third sub-image has a higher definition than the third complementary image, the second sub-image and the third sub-image have an image intersection, and the first sub-image, the second sub-image, and the third sub-image are fused to obtain the target image.
  • the device further includes: a fifth acquiring module, configured to acquire a third image of the object to be photographed captured by the third camera; and a sixth acquiring module, configured to acquire third parameter information of the third camera, wherein the third camera is designed according to the third parameter information; the third parameter information indicates that, for the image captured by the third camera within the field of view range [θ4, θ5], the modulation transfer function MTF value at the preset spatial frequency is greater than a third preset threshold; wherein θ2 < θ4 < θ3 < θ5 and θ5 is less than 1/2 of the angle of view of the third camera;
  • the sixth acquiring module is further configured to acquire a third sub-image of the third image according to the third parameter information; wherein the third sub-image has a higher definition than the third complementary image, the third complementary image being the complement of the third sub-image in the third image; the second sub-image and the third sub-image have an image intersection; and the image of the first sub-image, the image of the second sub-image, and the image of the third sub-image are fused to obtain the target image.
  • the imaging lenses of the first camera and the second camera each comprise 4, 5, or 6 lenses.
  • the aperture values of the first camera and the second camera are equal.
  • the image sensors of the first camera and the second camera are the same. Therefore, the above P and Q are also the same.
  • the focal length and the (maximum) field of view of the first camera and the second camera are the same.
  • the first preset threshold and the second preset threshold are greater than or equal to 0.25.
  • the preset spatial frequency is greater than 400 line pairs/mm.
  • the larger the spatial frequency, the finer the detail represented in the corresponding image.
  • the higher the spatial frequency at which the MTF remains above the preset threshold, the better the resolution of the image.
  • S1 is a circular area.
  • for the above-mentioned field of view ranges [0, θ1], [θ2, θ3], and so on, the corresponding image area is not necessarily a regular circle or ring; it may be an approximate circle or ring, or some irregular pattern. As long as all of the high-resolution sub-images finally obtained by the cameras, taken with the same image sensor as the reference, cover the area where the image sensor is located, seamless stitching is possible and a target image that satisfies the high-definition requirement of the super-large aperture can be formed.
  • in the process of determining the sub-images, the image processing program may also take, within the above field of view ranges [0, θ1] and [θ2, θ3], non-circular regions such as squares or ellipses, and perform the corresponding stitching; as long as the union of these sub-images covers the area where the image sensor is located, a high-definition subject can be expressed.
  • the shapes of the first region, the second region, the third region, and the fourth region are not limited.
  • an embodiment of the present invention provides a terminal device, where the terminal device includes a first camera, a second camera, a memory, a processor, and a bus; the first camera, the second camera, the memory, and the processor are connected by the bus; the optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6, and the number of lenses of each of the first camera and the second camera is not more than 6; the cameras are configured to acquire image signals under the control of the processor; the memory is configured to store computer programs and instructions; and the processor is configured to call the computer programs and instructions stored in the memory to perform any of the possible implementation methods described above.
  • the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with the mobile communication network;
  • the mobile communication network includes one or more of the following: GSM, CDMA, 3G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TD-SCDMA, WiFi, and LTE networks.
  • the above method, apparatus, and device can be applied to scenarios in which the camera software provided with the terminal is used for shooting, or to scenarios in which third-party camera software installed in the terminal is used for shooting; the shooting includes normal shooting, selfies, video telephony, video conferencing, VR shooting, aerial photography, and other shooting modes.
  • Figure 1 is a schematic view showing the structure of a lens
  • FIG. 2 is a schematic structural view of a terminal
  • FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of hardware of a camera according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a first camera in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of image quality evaluation of a first camera in an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a second camera according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of image quality evaluation of a second camera according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of acquiring an image by a dual lens module according to an embodiment of the present invention.
  • FIG. 10 is another schematic diagram of acquiring an image according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the terminal may be a device that provides photographing and/or data connectivity to the user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a digital camera, an SLR camera, a mobile phone (or "cellular" phone), which may be portable, pocket-sized, handheld, or wearable (such as a smart watch), a tablet, a personal computer (PC), a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, an on-board computer, a drone, an aerial camera, and so on.
  • FIG. 2 shows an alternative hardware structure diagram of the terminal 100.
  • the terminal 100 may include a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180, a power supply 190, and the like.
  • there are at least two cameras 150.
  • the camera 150 is used for capturing images or videos, and can be triggered by an application instruction to realize a photographing or video-recording function.
  • the camera includes an imaging lens, a filter, an image sensor, a focus anti-shake motor and the like.
  • the light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
  • the imaging lens is mainly used for collecting and reflecting light emitted or reflected by all objects in the photographing angle of view;
  • the filter is mainly used for filtering out unwanted light waves (for example, light waves other than visible light, such as infrared light); the image sensor is mainly used for performing photoelectric conversion on the received optical signal, converting it into an electrical signal, and inputting it to the processor 170 for subsequent processing.
  • FIG. 2 is merely an example of a portable multi-function device and does not constitute a limitation of the portable multi-function device; the device may include more or fewer components than those illustrated, may combine some components, or may use different components.
  • the input unit 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the portable multifunction device.
  • input unit 130 can include touch screen 131 as well as other input devices 132.
  • the touch screen 131 can collect touch operations performed by the user on or near it (such as operations on or near the touch screen using any suitable object such as a finger, a knuckle, or a stylus) and drive the corresponding connection device according to a preset program.
  • the touch screen can detect the user's touch action on the touch screen, convert the touch action into a touch signal, send the signal to the processor 170, and receive and execute commands sent by the processor 170; the touch signal includes at least touch point coordinate information.
  • the touch screen 131 can provide an input interface and an output interface between the terminal 100 and a user.
  • touch screens can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 130 may also include other input devices.
  • other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control button 132, switch button 133, etc.), trackball, mouse, joystick, and the like.
  • the display unit 140 can be used to display information input by a user or information provided to a user and various menus of the terminal 100.
  • the display unit is further configured to display an image acquired by the device using the camera 150, including a preview image, an initial image captured, and a target image processed by a certain algorithm after the shooting.
  • the touch screen 131 may cover the display panel 141.
  • after the touch screen 131 detects a touch operation on or near it, it transmits the operation to the processor 170 to determine the type of touch event, and the processor 170 then provides a corresponding visual output on the display panel 141 according to the type of touch event.
  • the touch screen and the display unit can be integrated into one component to implement the input, output, and display functions of the terminal 100.
  • the touch display screen represents the function set of the touch screen and the display unit; In some embodiments, the touch screen and the display unit can also function as two separate components.
  • the memory 120 can be used to store instructions and data; the memory 120 can mainly include a storage instruction area and a storage data area, where the storage data area can store the association between a joint touch gesture and an application function, and the storage instruction area can store software units such as an operating system, applications, and the instructions required for at least one function, or subsets and extended sets thereof.
  • a non-volatile random access memory can also be included, providing the processor 170 with the hardware, software, and data resources for managing the computing device and supporting the control software and applications; the memory is also used for storing multimedia files and for storing running programs and applications.
  • the processor 170 is the control center of the terminal 100; it connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing instructions stored in the memory 120 and calling data stored in the memory 120, thereby monitoring the phone as a whole.
  • the processor 170 may include one or more processing units; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 170.
  • the processors, memories can be implemented on a single chip, and in some embodiments, they can also be implemented separately on separate chips.
  • the processor 170 can also be configured to generate corresponding operation control signals, send them to the corresponding components of the computing and processing device, and read and process data in software, in particular the data and programs in the memory 120, so that each functional module performs its corresponding function, thereby controlling the corresponding components to act as required by the instructions.
  • the radio frequency unit 110 can be used for receiving and transmitting signals during the transmission and reception of information or during a call; specifically, downlink information from the base station is received and then handed to the processor 170 for processing, and uplink data is sent to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • the radio unit 110 can also communicate with network devices and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code). Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), etc.
  • the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the terminal 100.
  • the audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 collects sound signals and converts them into electrical signals, which are received by the audio circuit 160 and converted into audio data; after the audio data is processed by the audio data output processor 170, it is transmitted, for example via the radio frequency unit 110, to another terminal, or output to the memory 120 for further processing.
  • the audio circuit can also include a headphone jack 163 for providing a connection interface between the audio circuit and the earphone.
  • the terminal 100 also includes a power source 190 (such as a battery) for powering various components.
  • the power source can be logically coupled to the processor 170 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the terminal 100 further includes an external interface 180, which may be a standard Micro USB interface, or a multi-pin connector, which may be used to connect the terminal 100 to communicate with other devices, or may be used to connect the charger to the terminal 100. Charging.
  • the terminal 100 may further include a flash, a wireless fidelity (WiFi) module, a Bluetooth module, various sensors, and the like, and details are not described herein.
  • an embodiment of the present invention provides an image processing method, which may be applied to a terminal having at least two cameras.
  • the two cameras are respectively referred to as a first camera and a second camera;
  • the terms "first", "second", and the like used in the present application are used only for distinction, and imply no order or performance limitation;
  • the first camera and the second camera are positioned such that the optical axes of the two are parallel to each other;
  • the aperture values of both cameras are less than 1.6 (the super-large aperture referred to in this application means an aperture value less than 1.6); the lower limit of the aperture value can be infinitely close to 0;
  • the terminal can be the terminal 100 shown in FIG. 2, and can also be a simple camera device or the like as shown in the figures.
  • the specific processing method process includes the following steps:
  • Step 31: Acquire a first image of an object to be photographed captured by the first camera
  • Step 32: Acquire a second image of the object to be photographed captured by the second camera
  • the object to be photographed can be understood as the object that the user expects to photograph; it can also be understood as the imaged object displayed on the screen when the user adjusts the shooting position of the terminal, for example the common image portion framed by the two cameras. It should be understood that the first camera and the second camera are not in the same position, so the image content obtained by the two cameras when shooting the object to be photographed is not completely the same: most of the image area is the same, with some differences at the edges.
  • the theoretical approximation is that the imaging of the two cameras is the same.
  • existing correction techniques can be used to correct the images captured by the two cameras, such as correcting for the positional offset, to obtain the first image and the second image so that they are approximately the same; alternatively, the common image area of the two captured images may be taken as the first image and the second image, making them approximately the same.
  • the geometric center of the first image and the geometric center of the second image can be corrected so that they coincide; that is, when the two images are compared by content with their geometric centers coincident, the identical parts of the two images overlap.
  • Step 33: Acquire a first sub-image of the first image according to a first preset rule, where the first sub-image corresponds to a field of view range [0, θ1] of the first camera
  • Step 34: Acquire a second sub-image of the second image according to a second preset rule, where the second sub-image corresponds to a field of view range [θ2, θ3] of the second camera; wherein 0 < θ2 < θ1 < θ3, and the first sub-image and the second sub-image have overlapping images
  • these field of view ranges can be specified in advance, or they can be determined from the camera parameters after those parameters are obtained.
  • intersection of images refers to the same area of content of the two images.
  • the first sub-image and the second sub-image are aligned so that identical content overlaps in the same area; if the geometric centers of the first sub-image and the second sub-image coincide, the intersection of the first sub-image and the second sub-image is an annular area whose outer ring lies completely within the second sub-image and whose inner ring lies completely within the first sub-image, so that the first sub-image and the second sub-image together can constitute a complete image of the subject.
  • the first sub-image and the second sub-image are aligned so that identical content overlaps in the same area; if the geometric centers of the first sub-image and the second sub-image do not coincide, the intersection of the first sub-image and the second sub-image is no longer a regular ring and may be a closed area bounded by an inner closed curve and an outer closed curve; the outer curve of the closed area lies completely within the second sub-image, and the inner curve lies completely within the first sub-image.
  • Step 35: The first sub-image and the second sub-image are spliced according to a preset splicing algorithm to obtain the target image.
  • the first sub-image and the second sub-image are aligned so that identical content overlaps in the same area; if the geometric centers of the first sub-image and the second sub-image do not coincide, the intersection of the first sub-image and the second sub-image is no longer a ring, and there may be a non-closed area between the inner closed curve and the outer closed curve; if the image content of the non-closed area does not affect the expression of the object to be photographed, or if the image quality of the non-closed area in the first image or the second image meets a certain image quality standard, then the first sub-image, the second sub-image, and the image of the non-closed area taken from the first image or the second image may be fused to obtain the target image.
  • the imaging lenses of the first camera and the second camera are specially manufactured according to certain special requirements in advance, that is, according to certain physical design parameters.
  • the lens manufacturing system can determine the number of lenses from the user's target parameter requirements based on empirical values, and formulate the corresponding specific hardware configuration parameters for that number of lenses, such as the focal length of each lens and the relative positions between the lenses. Because a large aperture is difficult to realize, a large-aperture lens cannot achieve a clear image across the entire field of view without increasing the number of lenses. Therefore, in a specific design there is a trade-off between the aperture value and the size of the field of view over which the image quality is high: the smaller the aperture value, the smaller the range of field angles that satisfies the required image clarity.
  • the imaging lens 201 in the first camera is composed of five lenses, but the number of lenses is not limited to five, and may be four to six.
  • the aperture value is designed to be small, for example FNO1 (FNO1 is less than 1.6), and the design weight placed on the imaging quality within the field of view range [0, θ1] is increased;
  • the imaging quality within the field of view range [0, θ1] meets expectations; that is, the quality of the image region corresponding to the field of view portion [0, θ1] is consistent with FNO1;
  • the image quality for the range of field angles greater than θ1 is not of concern, even if it is poor.
  • the existing process can therefore produce a corresponding imaging lens without added difficulty; it can be seen that the first camera achieves a smaller FNO1 at the expense of the image quality corresponding to the field angles outside [0, θ1].
  • owing to the above special design requirements, in the image of the object to be photographed obtained by the first camera, the image quality corresponding to the field of view range [0, θ1] can meet the requirements of FNO1.
  • whether the requirements of FNO1 are met can be measured by the MTF.
  • within this field of view range, the value of the MTF can still reach the preset standard.
  • θ1 is not more than 1/2 of the angle of view of the first camera.
  • the field of view is an inherent property of the camera after it is fixed at the terminal, and is the maximum field of view that the camera can image when the terminal is at a fixed position. It is well known in the industry that the angle at which the object image of the object to be measured can pass through the two edges of the largest range of the lens is called the angle of view.
  • for an evaluation standard, refer to FIG. 6.
  • the specific parameters of the first camera are: FNO 1.4; focal length 3.95 mm; FOV 75°, which can also be expressed as [0°, 37.5°]. (Note: in the present application, a field of view range [0, θ] denotes the region of the cone that takes the lens as its starting point, is centered on the optical axis, and is formed by all rays making an angle of up to θ with the optical axis; projected onto the image plane, this corresponds to the stated angle range.) The MTF performance is shown in Figure 6, which shows the MTF of the imaging lens in the sagittal direction. (The contrast of the image output through an optical system is always lower than the contrast of the input image, and the amount by which the contrast changes is closely related to spatial frequency. The ratio of the contrast of the output image to that of the input image can be defined as the modulation transfer function, MTF.) The angles annotated in the figure are the corresponding half angles of view.
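  • Written in standard optics notation (not notation used in the patent itself), the definition in the parenthetical above is:

```latex
M(\nu) = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad
\mathrm{MTF}(\nu) = \frac{M_{\text{image}}(\nu)}{M_{\text{object}}(\nu)} \le 1
```

  where ν is the spatial frequency; an MTF of 0.25 at 500 line pairs/mm, as cited below, therefore means that a pattern at that frequency retains a quarter of its original contrast.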
  • the different lines represent the different 1/2 FOV MTF curves, and the abscissa represents the spatial frequency. The larger the value, the finer the resolution of the image.
  • the ordinate represents the MTF, which characterizes the contrast of the image, and the greater the contrast, the clearer the image.
  • the dashed line in the figure represents the limit of the contrast of the system; the closer a curve is to this limit, the better the image quality.
  • the MTF of the imaging lens in the meridional direction behaves like the sagittal direction.
  • within the central FOV range of about [0°, 32°], the MTF of the obtained image is still maintained above the first preset threshold at a spatial frequency of 500 line pairs/mm.
  • the image quality of this region can reach a very good level under the condition that the FNO is 1.4.
  • for the image acquired by the first camera within the FOV range of about [32°, 37.5°], the corresponding MTF value is relatively poor, and high-quality imaging of this field of view area is borne by the second camera. It should be understood that, owing to manufacturing-process factors, the boundary between the high-quality and low-quality image areas is not necessarily a strict circle, and the actual boundary may be an irregular curve.
  • the imaging lens 204 in the second camera is composed of five lenses, but the number of lenses is not limited to five, and may be four to six.
  • the aperture value is designed to be small, for example FNO2 (FNO2 is less than 1.6), and the design weight placed on the imaging quality within the field of view range [θ2, θ3] is increased;
  • the imaging quality within the field of view range [θ2, θ3] meets expectations; that is, the quality of the image region corresponding to the field of view portion [θ2, θ3] meets the requirements of FNO2, while the image quality for field angles smaller than θ2 is not of concern, even if it is poor.
  • the existing process can therefore produce a corresponding imaging lens without added difficulty; it can be seen that the second camera achieves a smaller FNO2 at the expense of the image quality corresponding to the field of view range [0, θ2].
  • owing to the above special design requirements, in the image of the object to be photographed obtained by the second camera, the image quality corresponding to the field of view range [θ2, θ3] can meet the requirements of FNO2.
  • whether the requirements of FNO2 are met can be measured by the MTF.
  • θ3 is not more than 1/2 of the angle of view of the second camera.
  • for an evaluation standard, refer to FIG. 8.
  • the specific parameters of the second camera are: FNO 1.4; focal length 3.95 mm; FOV 75°, which can also be expressed as [0°, 37.5°]. The MTF performance is shown in FIG. 8, which shows the MTF of the imaging lens in the sagittal direction. (The contrast of the image output through an optical system is always lower than the contrast of the input image, and the amount by which the contrast changes is closely related to spatial frequency. The ratio of the contrast of the output image to that of the input image can be defined as the modulation transfer function, MTF.) The angles annotated in the figure are the corresponding half angles of view.
  • the different lines represent the different 1/2 FOV MTF curves, and the abscissa represents the spatial frequency. The larger the value, the finer the resolution of the image.
  • the ordinate represents the MTF, which characterizes the contrast of the image, and the greater the contrast, the clearer the image.
  • the dashed line in the figure represents the limit of the contrast of the system; the closer a curve is to this limit, the better the image quality.
  • the MTF of the imaging lens in the meridional direction behaves like the sagittal direction.
  • the image has high quality over the FOV range [28°, 37.5°]: taking the lens as the starting point and the optical axis as the center, the cone region C1 is formed by all rays making an angle of up to 28° with the optical axis, and the cone region C2 is formed by all rays making an angle of up to 37.5° with the optical axis; the MTF value within the field of view range between C1 and C2 is high.
  • for the image within this field of view range, the MTF can still be maintained at 0.25 at a spatial frequency of 500 line pairs/mm.
  • the image quality of the region can reach a good level under the condition that the FNO is 1.4.
  • for the image acquired by the second camera within the FOV range of about [0°, 28°], the corresponding MTF value is relatively poor.
  • the parameter information of the lenses is pre-stored locally in the photographing device or on a cloud server; therefore, during the subsequent image processing, the processor can, according to this parameter information, take from the image acquired by each camera the portion whose sharpness satisfies the super-large-aperture requirement, and that area is used for the subsequent splicing and fusion processing (in the present invention, splicing and merging both refer to image stitching under different names, that is, the prior art of combining a plurality of partial pictures into one complete picture).
  • the first camera includes an imaging lens 201 , a filter 202 , and an image sensor 203 ; and the second camera includes an imaging lens 204 , a filter 205 , and an image sensor 206 .
  • the optical axes of the first camera and the second camera are parallel to each other, and the optical axis spacing is a preset distance. Owing to the special design of the first camera and the second camera, the first camera can obtain a sharp image within the field of view range [0, θ1], and the second camera can obtain a sharp image within the field of view range [θ2, θ3].
  • the image that can be acquired by the imaging lens should theoretically be a circular area if projected onto the plane of the image sensor.
  • the size of the circle depends on the complete field of view of the imaging lens.
  • the image sensor is designed to be square, so that the image finally acquired by the first camera and the second camera, that is, the image finally received by the image sensor, is square.
  • if the difference between the images obtained by the two sensors is within the tolerance range of the subsequent processing algorithm, they may be used directly as the first image and the second image; if the difference exceeds the allowable range of the subsequent processing algorithm, the first image and the second image need to be obtained by using a prior-art correction technique or by cropping to the common content region. How the subsequent processing, that is, steps 33, 34, and 35, is performed on the basis of the square images acquired by the two cameras is therefore particularly important.
  • Step 33 Acquire first parameter information of the first camera; the first parameter information includes: manufacturing parameters, performance parameters, and the like of the first camera, such as, in what angle of view the first camera can obtain the resolution under a large aperture.
  • the requested image For example, the image captured by the first camera in the field of view angle range [0, ⁇ 1 ] may be obtained, and the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the first preset threshold; wherein ⁇ 1 is smaller than 1/2 of the angle of view of the first camera.
  • Acquiring an image receiving area P of the image sensor in the first camera, that is, an image received by the image sensor in the first camera may be acquired
  • an image of the first image in the region of the angle of view angle [0, ⁇ 1 ] and the intersection S1 of the P is determined as the first sub-image.
  • The area where square 203 is located represents the image-receiving area of the image sensor in the first camera; the different circles represent different field angles. For example, the area where circle 301 is located corresponds to the image area of the field-of-view range [0, θ 1 ].
  • The area where circle 302 is located corresponds to the image area of the full field of view of the first camera. Therefore, in this example, the intersection of 301 and 203 is the first sub-image.
  • The first camera is specifically designed so that 301 and 203 have the same geometric center, and the diameter of 301 is smaller than the diameter of the circumscribed circle of square 203. Due to the special design of the first camera and the above-described method of obtaining the first sub-image, the sharpness of the obtained first sub-image satisfies the super-large aperture FNO 1 .
  • Step 34: Acquire second parameter information of the second camera. The second parameter information includes manufacturing parameters, performance parameters, and the like of the second camera, for example, within what range of field angles the second camera can obtain an image whose resolution meets the large-aperture requirement. For example, it may state that an image captured by the second camera within the field-of-view range [θ 2 , θ 3 ] has a modulation transfer function (MTF) value, at the preset spatial frequency, greater than the second preset threshold, where 0 < θ 2 < θ 1 and θ 3 is less than or equal to 1/2 of the field of view of the second camera.
  • Acquire the image receiving area Q of the image sensor in the second camera, that is, the region over which the image sensor in the second camera receives the image.
  • The area where square 206 is located represents the image-receiving area of the image sensor in the second camera; the different circles represent different field angles. For example, the area where circle 303 is located corresponds to the image area of the field-of-view range [0, θ 2 ], and the area where circle 304 is located corresponds to the image area of the field-of-view range [0, θ 3 ]; the annular area 306 sandwiched between circle 303 and circle 304 corresponds to the image area of the field-of-view range [θ 2 , θ 3 ]. Therefore, in this example, the intersection of ring 306 and square 206 is the second sub-image.
  • The geometric centers of circle 303, circle 304, and square 206 are the same; the diameter of circle 303 is smaller than the diameter of the circumscribed circle of square 206 and smaller than the diameter of circle 301; and the diameter of circle 304 is larger than the diameter of circle 301 described above, so as to achieve seamless stitching of the images. In general, the diameter of 304 can also be larger than the diameter of the circumscribed circle of square 206 to ensure that a complete image can subsequently be formed. Due to the special design of the second camera and the above-described method of obtaining the second sub-image, the sharpness of the obtained second sub-image satisfies the super-large aperture FNO 2 .
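  • These geometric conditions can be checked mechanically; the helper below is a sketch (the radius names r_301, r_303, r_304 refer to the circles in the figure and are illustrative) that verifies the overlap and full-coverage conditions on a common sensor pixel grid, without accounting for any optical-axis misalignment between the two cameras:

```python
import math

def covers_sensor_seamlessly(r_301, r_303, r_304, sensor_w, sensor_h):
    """True if the inner sharp circle 301 and the sharp ring between 303 and 304
    overlap (r_303 < r_301 < r_304) and the outer ring boundary reaches the
    sensor corners (r_304 >= half diagonal). All radii and sizes in pixels."""
    half_diag = math.hypot(sensor_w, sensor_h) / 2.0
    overlaps = r_303 < r_301 < r_304
    reaches_corners = r_304 >= half_diag
    return overlaps and reaches_corners
```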
  • In this way the overlapping area 305 in FIG. 10 is formed, and the area where 301 is located and the area where 306 is located together satisfy the condition for being spliced into one complete image.
  • The overlapping portion is the area where ring 305 is located.
  • The first sub-image is denoted S1, and the second sub-image is denoted S2.
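  • Continuing the numpy sketch above, the ring-shaped region of S2 and the overlap S3 (ring 305) can be expressed as boolean masks on a registered pixel grid; the pixel radii and sensor size used here are hypothetical values for illustration only:

```python
import numpy as np

def radial_dist2(height, width, center=None):
    """Squared distance of each pixel from the optical-axis projection."""
    cy, cx = center if center is not None else ((height - 1) / 2.0, (width - 1) / 2.0)
    yy, xx = np.mgrid[0:height, 0:width]
    return (yy - cy) ** 2 + (xx - cx) ** 2

# Hypothetical pixel radii for theta_1, theta_2, theta_3 on a 4000x3000 sensor grid.
h, w = 3000, 4000
r1, r2_in, r2_out = 1900, 1700, 2600
d2 = radial_dist2(h, w)
mask_s1 = d2 <= r1 ** 2                              # circle 301: region of S1
mask_s2 = (d2 >= r2_in ** 2) & (d2 <= r2_out ** 2)   # ring 306: region of S2
mask_s3 = mask_s1 & mask_s2                          # ring 305: overlap S3
```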
  • Step 35 may take any of the following forms (a minimal sketch of these compositions follows this list):
  • Form 1: determine the image of the intersection region S3 of S1 and S2; determine the image of the region S32 that is the complement of S3 in S2; stitch the image of S1 and the image of S32 according to a first preset stitching algorithm to obtain the target image.
  • Form 2: determine the image of the intersection region S3 of S1 and S2; determine the image of the region S31 that is the complement of S3 in S1; stitch the image of S31 and the image of S2 according to a second preset stitching algorithm to obtain the target image.
  • Form 3: determine S3, S31, and S32 as above; enhance the image of S3 using the images of S1 and S2 according to a preset enhancement algorithm to obtain an image S4; stitch the image of S31, the image of S32, and the image of S4 according to a third preset stitching algorithm to obtain the target image.
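  • The following sketch composes the target image for the three forms on a registered pixel grid; the "enhancement" of the overlap in Form 3 is approximated by a plain average purely for illustration, and any preset enhancement or blending algorithm could be substituted:

```python
import numpy as np

def compose_target(img1, img2, mask_s1, mask_s2, mode=1):
    """Compose the target image from the two sub-images (already registered to
    one pixel grid), following the three forms described above."""
    mask_s3 = mask_s1 & mask_s2
    target = np.zeros_like(img1, dtype=np.float64)
    if mode == 1:                          # Form 1: S1 plus S32 (= S2 minus overlap)
        target[mask_s1] = img1[mask_s1]
        target[mask_s2 & ~mask_s3] = img2[mask_s2 & ~mask_s3]
    elif mode == 2:                        # Form 2: S31 (= S1 minus overlap) plus S2
        target[mask_s2] = img2[mask_s2]
        target[mask_s1 & ~mask_s3] = img1[mask_s1 & ~mask_s3]
    else:                                  # Form 3: S31, S32 and an "enhanced" S4
        target[mask_s1 & ~mask_s3] = img1[mask_s1 & ~mask_s3]
        target[mask_s2 & ~mask_s3] = img2[mask_s2 & ~mask_s3]
        target[mask_s3] = 0.5 * (img1[mask_s3].astype(np.float64)
                                 + img2[mask_s3].astype(np.float64))
    return target.astype(img1.dtype)
```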
  • The image quality of the area inside circle 301 satisfies the high definition required under the large aperture, and the image quality of the area within ring 306 also satisfies it. Therefore, the image quality of the target image formed by splicing likewise satisfies the high definition required under the large aperture.
  • The main parameters of the second camera should approximate those of the first camera, including but not limited to the aperture value, the overall field-of-view range, the number of lenses, the imaging focal length, the overall size of the imaging lens, and the performance and size of the sensor. It should be understood that exactly identical results are difficult to obtain with any manufacturing method, and some error in the actual parameters is allowed; as long as the error is not large enough to change the essence of the technical realization, it falls within the scope of protection of the present invention.
  • Acquiring the first image of the object to be photographed by the first camera and acquiring the second image of the object to be photographed by the second camera may be triggered by the same trigger signal, or by two different trigger signals.
  • The spacing between the first camera and the second camera is less than a preset distance, to ensure that the pictures taken by the two cameras when shooting the same object are as identical as possible. It should be understood that, in a dual-camera scene, the spacing between the two cameras is set in relation to the image area to be obtained and to the subsequent image-processing algorithm.
  • In the present invention the images obtained by the two cameras are to be spliced subsequently, so the larger the overlapping area of the images obtained by the two cameras, the better. Optionally, the spacing of the two cameras is less than 1.5 cm; in some designs it can also be less than or equal to 1 cm.
  • The distance between the cameras and the object to be photographed also influences the acquired field of view: the closer the object to be photographed is to the cameras, the larger the field-of-view deviation between them; the farther the object, the smaller the deviation.
  • Optionally, the photographing device may further comprise an adjusting device that adjusts the spacing between the first camera and the second camera, flexibly adapting the spacing to different distances of the object to be photographed. This ensures that, for objects at different distances, the two cameras obtain images that are as close to identical as possible (for example, content similarity greater than 90%, or the ratio of the common image area to a single image greater than 90%), and that the first sub-image of the first camera and the second sub-image of the second camera have an overlapping area.
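  • A simple pinhole-model sketch of this spacing/distance trade-off is given below; the 1-degree deviation tolerance is an assumed number used only to illustrate how a maximum baseline could be chosen, and is not specified in the original disclosure:

```python
import math

def fov_deviation_deg(baseline_m, object_distance_m):
    """Angular deviation between the two cameras' views of the same object point
    under a simple pinhole model: the closer the object, the larger the deviation."""
    return math.degrees(math.atan2(baseline_m, object_distance_m))

def max_baseline_for_overlap(object_distance_m, max_deviation_deg=1.0):
    """Largest camera spacing that keeps the deviation below a hypothetical
    tolerance chosen so the two sub-images still overlap and their contents
    stay nearly identical (e.g. >90% similar)."""
    return object_distance_m * math.tan(math.radians(max_deviation_deg))

# e.g. at 1 m, an assumed 1-degree tolerance allows roughly a 1.7 cm baseline,
# of the same order as the spacings below 1.5 cm or 1 cm mentioned above.
```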
  • In some cases the two cameras cannot obtain a sharp super-large-aperture image over the entire viewing angle; for example, in the above embodiment, if the diameter of 304 is not larger than the diameter of the circumscribed circle of square 206, some partial image areas fail to meet the sharpness of the large aperture.
  • Optionally, the photographing device may further comprise a third camera. The optical axis of the third camera and the optical axis of the first camera are parallel to each other; the distance between the third camera and the first camera is less than a preset distance; and the distance between the third camera and the second camera is less than a preset distance. A third image of the object to be photographed is acquired by the third camera, and a third sub-image of the third image is acquired according to a third preset rule, where the third sub-image corresponds to the field-of-view range [θ 4 , θ 5 ] of the third camera; θ 2 < θ 4 < θ 3 < θ 5 , so that the second sub-image and the third sub-image have an overlapping image, and θ 5 is smaller than 1/2 of the field of view of the third camera.
  • The first sub-image, the second sub-image, and the third sub-image are stitched according to a fourth preset splicing algorithm to obtain the target image.
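  • A sketch of how such a composition generalises to three (or more) cameras with nested sharp zones is shown below; it simply lets each registered image contribute only its own sharp-zone mask (circle for the first camera, rings for the others), which is one possible reading of the fourth preset splicing algorithm rather than a definitive implementation:

```python
import numpy as np

def compose_from_rings(images, ring_masks):
    """images[i] contributes only its sharp zone ring_masks[i]; later zones
    overwrite earlier ones in their overlap bands. Images and masks are assumed
    registered to one common pixel grid."""
    target = np.zeros_like(images[0])
    for img, mask in zip(images, ring_masks):
        target[mask] = img[mask]
    return target
```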
  • Because the acquired first sub-image and second sub-image themselves satisfy the splicing conditions, the first preset splicing algorithm, the second preset splicing algorithm, the third preset splicing algorithm, and the fourth preset splicing algorithm mentioned in the above embodiments can all be implemented by using existing technologies, and are not described in detail herein.
  • The present invention provides an image processing method, which is applied to a photographing device including a first camera and a second camera. The optical axes of the first camera and the second camera are parallel to each other, with a spacing smaller than a preset distance, and their aperture values are smaller than 1.6. The method includes: acquiring, by the first camera, a first image of the object to be photographed; acquiring, by the second camera, a second image of the object to be photographed; acquiring a first sub-image of the first image according to a first preset rule, the first sub-image corresponding to the field-of-view range [0, θ 1 ] of the first camera;
  • acquiring a second sub-image of the second image according to a second preset rule, the second sub-image corresponding to the field-of-view range [θ 2 , θ 3 ] of the second camera, where 0 < θ 2 < θ 1 < θ 3 and the first sub-image and the second sub-image have an overlapping image; and stitching the first sub-image and the second sub-image according to a preset splicing algorithm to obtain the target image.
  • An embodiment of the present invention provides an image processing apparatus 700. The apparatus 700 is applied to a photographing device including a first camera and a second camera; the optical axes of the first camera and the second camera are parallel to each other, the distance between the first camera and the second camera is less than a preset distance, and the aperture values of the first camera and the second camera are both less than 1.6.
  • the apparatus 700 includes a first obtaining module 701, a second obtaining module 702, a third obtaining module 703, a fourth obtaining module 704, and an image stitching module 705, where:
  • The first obtaining module 701 is configured to acquire the first image of the object to be photographed captured by the first camera.
  • the first obtaining module 701 can be implemented by the processor by calling the first camera to acquire an image.
  • The second obtaining module 702 is configured to acquire the second image of the object to be photographed captured by the second camera.
  • The second obtaining module 702 can be implemented by the processor by calling the second camera to acquire an image.
  • The third obtaining module 703 is configured to obtain the first sub-image of the first image according to the first preset rule, where the first sub-image corresponds to the field-of-view range [0, θ 1 ] of the first camera.
  • the module 703 can be implemented by a processor, and can perform a corresponding calculation by calling data and an algorithm in a local storage or a cloud server to obtain a first sub-image from the first image.
  • The fourth obtaining module 704 is configured to acquire the second sub-image of the second image according to the second preset rule, where the second sub-image corresponds to the field-of-view range [θ 2 , θ 3 ] of the second camera, with 0 < θ 2 < θ 1 < θ 3 . The fourth obtaining module 704 can be implemented by a processor, which performs the corresponding calculation by calling data and an algorithm in local storage or a cloud server to obtain the second sub-image from the second image.
  • the image splicing module 705 is configured to obtain the target image according to a preset splicing algorithm by using the first sub-image and the second sub-image.
  • the image splicing module 705 can be implemented by a processor, and can perform corresponding calculation by calling data in a local memory or a cloud server and a splicing fusion algorithm, and splicing the first sub image and the second sub image into a complete target image.
  • the target image still has high definition under a large aperture.
  • The first obtaining module 701 is specifically configured to perform the method mentioned in step 31 and any method that can equivalently replace it; the second obtaining module 702 is specifically configured to perform the method mentioned in step 32 and any method that can equivalently replace it; the third obtaining module 703 is specifically configured to perform the method mentioned in step 33 and any method that can equivalently replace it; the fourth obtaining module 704 is specifically configured to perform the method mentioned in step 34 and any method that can equivalently replace it;
  • the image splicing module 705 is specifically configured to perform the method mentioned in step 35 and any method that can equivalently replace it.
  • Optionally, the photographing device may further include a third camera, the optical axis of the third camera and the optical axis of the first camera being parallel to each other, the spacing between the third camera and the first camera being less than a preset distance, and
  • the distance between the third camera and the second camera being less than the preset distance. The apparatus then further includes: a fifth acquisition module 706 (not shown), configured to acquire the third image of the object to be photographed captured by the third camera; and a sixth acquisition module 707 (not shown), configured to acquire the third sub-image of the third image according to the third preset rule, where the third sub-image corresponds to the field-of-view range [θ 4 , θ 5 ] of the third camera, θ 2 < θ 4 < θ 3 < θ 5 , the second sub-image and the third sub-image have an overlapping image, and θ 5 is smaller than 1/2 of the field of view of the third camera.
  • the image splicing module 705 is specifically configured to obtain the target image according to the fourth preset splicing algorithm by using the first sub image, the second sub image, and the third sub image.
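  • For illustration only, the sketch below wires modules 701-705 together as plain Python callables; the class name and constructor signature are assumptions and do not form part of the claimed apparatus:

```python
class ImageProcessingPipeline:
    """Illustrative wiring of modules 701-705 as callables; on a real device each
    would be a processor routine calling the cameras and the stored lens parameters."""
    def __init__(self, get_img1, get_img2, get_sub1, get_sub2, stitch):
        self.get_img1, self.get_img2 = get_img1, get_img2   # modules 701, 702
        self.get_sub1, self.get_sub2 = get_sub1, get_sub2   # modules 703, 704
        self.stitch = stitch                                 # module 705

    def run(self):
        first_image = self.get_img1()
        second_image = self.get_img2()
        s1, mask1 = self.get_sub1(first_image)
        s2, mask2 = self.get_sub2(second_image)
        return self.stitch(s1, s2, mask1, mask2)
```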
  • The present invention provides an image processing apparatus, which is applied to a photographing device including a first camera and a second camera. The optical axes of the first camera and the second camera are parallel to each other, with a spacing smaller than a preset distance, and their aperture values are smaller than 1.6. The apparatus acquires, by the first camera, a first image of the object to be photographed; acquires, by the second camera, a second image of the object to be photographed; acquires a first sub-image of the first image according to the first preset rule, the first sub-image corresponding to the field-of-view range [0, θ 1 ] of the first camera;
  • acquires a second sub-image of the second image according to the second preset rule, the second sub-image corresponding to the field-of-view range [θ 2 , θ 3 ] of the second camera; and stitches the first sub-image and the second sub-image according to a preset splicing algorithm to obtain the target image.
  • each module in the above device 700 is only a division of a logical function, and the actual implementation may be integrated into one physical entity in whole or in part, or may be physically separated.
  • Each of the above modules may be a separately arranged processing element, or may be integrated in a chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with one of the processing elements of the processor calling and executing the functions of each of the above modules.
  • the individual modules can be integrated or implemented independently.
  • the processing elements described herein can be an integrated circuit chip with signal processing capabilities.
  • each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in the processor element or an instruction in a form of software.
  • The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed in the present invention are an image processing method and apparatus, and a device. The method is applied to a terminal having two specially manufactured cameras, the first camera and the second camera being specifically customized, for realizing a super large aperture under the existing process at the cost of the definition of an image acquired within a certain range of viewing angle, so that some photographed area images can satisfy the quality requirements of the super large aperture. The method comprises: acquiring a first sub-image of an object to be photographed by a first camera, having a corresponding range of viewing angle of [0, θ1]; acquiring a second sub-image of an object to be photographed by a second camera, having a corresponding range of viewing angle of [θ2, θ3], the quality of the first sub-image and the quality of the second sub-image satisfying the definition requirements of a super large aperture; and stitching and fusing the first sub-image and the second sub-image, so as to obtain a target image which satisfies the requirements of the super large aperture within a larger range of viewing angle.

Description

一种图像处理方法、装置与设备 Image processing method, apparatus and device
技术领域 Technical Field
本发明涉及终端技术领域,尤其涉及一种图像处理方法、装置与设备。The present invention relates to the field of terminal technologies, and in particular, to an image processing method, apparatus, and device.
背景技术Background technique
光圈值(FNO)=焦距/入瞳直径,是拍摄镜头的***参数,决定镜头的进光量,FNO数值越小,光圈越大,而进光量也就越多;反之,FNO数值越大,光圈越小,而进光量也就越少。同时,光圈也决定了镜头的极限分辨率,光圈越大,分辨率越高,反之,光圈越小,分辨率越低。镜头的光圈值直接关系着拍照的质量,例如照度、解析力和暗环境成像能力等。Aperture value (FNO) = focal length / entrance pupil diameter, is the system parameter of the shooting lens, determines the amount of light entering the lens, the smaller the FNO value, the larger the aperture, and the more the amount of light entering; otherwise, the larger the FNO value, the aperture The smaller, the less the amount of light entering. At the same time, the aperture also determines the limit resolution of the lens. The larger the aperture, the higher the resolution. Conversely, the smaller the aperture, the lower the resolution. The aperture value of the lens is directly related to the quality of the photograph, such as illumination, resolution and dark environment imaging capabilities.
因此,为了获得更好的拍照质量和用户体验,超大光圈镜头在手机和平板电脑等便携式终端设备中的需求越来越强烈。业界所提到的超大光圈是指光圈值尽可能地做小。当前高端的手机镜头一般采用约6片透镜的设计,而且光圈值一般FNO>1.7。如图1所示,一个传统的拍照镜头,具有6片透镜的结构,它保证了在镜头的整个视场角FOV(field of view)内都能比较理想地成像。镜头设计***根据用户的光圈值需求按照经验值得到镜片的数量,再根据物理原理和数学原理计算出来镜片的位置关系、焦距、形状等参数,从而进行工艺生产配置成成品。Therefore, in order to obtain better picture quality and user experience, the demand for large aperture lenses in portable terminal devices such as mobile phones and tablet computers is increasing. The large aperture mentioned in the industry means that the aperture value is as small as possible. Currently, high-end mobile phone lenses generally use about 6 lenses, and the aperture value is generally FNO>1.7. As shown in Fig. 1, a conventional photographing lens has a structure of six lenses, which ensures ideal imaging in the entire field of view (FOV) of the lens. The lens design system obtains the number of lenses according to the user's aperture value according to the empirical value, and then calculates the positional relationship, focal length, shape and other parameters of the lens according to the physical principle and mathematical principle, thereby performing process production and configuration into finished products.
按照现有技术的经验,光圈越大,成像光束的像差就越大,需要更多的镜片来校正像差,镜片数增多,装配难度带来的光学偏差的程度也就越大,光学***会变得非常敏感,进而无法保证整个视场角范围内都能拍到清晰的图像。因此,为了保证整个视场角内范围都能拍到清晰的图像,多个镜片之间的光学设计难度和硬件装配难度都会成指数型地增长,也就越难制造,这就对生产工艺的管控要求非常高。因此为了获得大光圈,同时为了使单颗镜头在拍摄镜头的整个视场角内都成像清晰,当前工艺能够实现的镜片数一般不超过6。目前业界工艺水平可以达到FNO大于1.6。对于大光圈(FNO小于等于1.6)镜头,以当前技术水平来看,制造的良品率非常低,成本很高,不利于其集成于手机和平板电脑等便携式终端设备中。According to the experience of the prior art, the larger the aperture, the larger the aberration of the imaging beam, the more lenses are needed to correct the aberrations, the number of lenses is increased, and the degree of optical deviation caused by the assembly difficulty is greater, the optical system It becomes very sensitive and there is no guarantee that a clear image will be taken across the entire field of view. Therefore, in order to ensure a clear image within the entire field of view, the optical design difficulty and hardware assembly difficulty between multiple lenses will grow exponentially, and the more difficult it is to manufacture, this is the production process. The control requirements are very high. Therefore, in order to obtain a large aperture, and in order to make a single lens clear in the entire field of view of the lens, the number of lenses that can be achieved by the current process generally does not exceed 6. At present, the industry's technological level can reach FNO greater than 1.6. For large apertures (FNO less than or equal to 1.6), at the current state of the art, the manufacturing yield is very low and the cost is high, which is not conducive to its integration in portable terminal devices such as mobile phones and tablets.
因此,如何在兼顾生产工艺的前提下,还能够使得镜头实现超大光圈,是当前亟需解决的问题。Therefore, how to achieve a large aperture of the lens under the premise of taking into account the production process is an urgent problem to be solved.
发明内容Summary of the invention
本发明实施例提供一种拍照模组以及终端,通过双摄像头或者多摄像头来实现大光圈FNO<1.6,同时兼顾生产工艺的水平,使之成为实现大光圈的一种简单有效的实现方式。The embodiment of the invention provides a camera module and a terminal, which realizes a large aperture FNO<1.6 by a dual camera or a multi-camera, and simultaneously takes into consideration the level of the production process, so that it becomes a simple and effective implementation method for realizing a large aperture.
本发明实施例提供的具体技术方案如下:The specific technical solutions provided by the embodiments of the present invention are as follows:
第一方面,本发明实施例提供一种图像处理方法,该方法应用于包含第一摄像头 和第二摄像头的拍照设备,第一摄像头、第二摄像头的光轴互相平行,第一摄像头和第二摄像头之间的间距小于预设距离;第一摄像头、第二摄像头的光圈值均小于1.6;方法包括:获取第一摄像头拍摄待拍摄对象的第一图像;获取第二摄像头拍摄所述待拍摄对象的第二图像;根据第一预设规则,获取所述第一图像的第一子图像,所述第一子图像对应所述第一摄像头的视场角范围为[0,θ 1],根据第二预设规则,获取所述第二图像的第二子图像,所述第二子图像对应所述第二摄像头的视场角范围为[θ 2,θ 3];其中,0<θ 213,所述第一子图像和所述第二子图像存在重叠图像;将所述第一子图像、所述第二子图像,按照预设拼接算法得到目标图像。 In a first aspect, an embodiment of the present invention provides an image processing method, which is applied to a photographing apparatus including a first camera and a second camera. The optical axes of the first camera and the second camera are parallel to each other, and the first camera and the second The distance between the cameras is less than the preset distance; the aperture values of the first camera and the second camera are all less than 1.6; the method includes: acquiring a first camera to capture a first image of the object to be photographed; and acquiring a second camera to capture the object to be photographed And acquiring a first sub-image of the first image according to a first preset rule, where a first field image corresponds to an angle of view of the first camera as [0, θ 1 ], according to a second preset rule, acquiring a second sub-image of the second image, where the second sub-image corresponds to a field of view of the second camera as [θ 2 , θ 3 ]; wherein, 0<θ 213 , the first sub-image and the second sub-image have overlapping images; and the first sub-image and the second sub-image are obtained according to a preset splicing algorithm.
第二方面,本发明实施例提供一种图像处理装置,装置应用于包含第一摄像头和第二摄像头的拍照设备,第一摄像头、第二摄像头的光轴互相平行,第一摄像头和第二摄像头之间的间距小于预设距离;第一摄像头、第二摄像头的光圈值均小于1.6;装置包括:第一获取模块,用于获取第一摄像头拍摄待拍摄对象的第一图像;第二获取模块,用于获取第二摄像头拍摄待拍摄对象的第二图像;第三获取模块,用于根据第一预设规则,获取第一图像的第一子图像,第一子图像对应所述第一摄像头的视场角范围为[0,θ 1],第四获取模块,用于根据第二预设规则,获取第二图像的第二子图像,第二子图像对应第二摄像头的视场角范围为[θ 2,θ 3];其中,0<θ 213,第一子图像和第二子图像存在重叠图像;图像拼接模块,用于将第一子图像、第二子图像,按照预设拼接算法得到目标图像。 In a second aspect, an embodiment of the present invention provides an image processing apparatus, where the apparatus is applied to a photographing apparatus including a first camera and a second camera, wherein optical axes of the first camera and the second camera are parallel to each other, and the first camera and the second camera are The spacing between the first camera and the second camera is less than 1.6; the device includes: a first acquiring module, configured to acquire a first image of the first camera to capture the object to be photographed; and a second acquiring module And acquiring a second image of the second image capturing the object to be photographed; the third acquiring module is configured to acquire the first sub image of the first image according to the first preset rule, where the first sub image corresponds to the first camera The range of the field of view is [0, θ 1 ], the fourth acquiring module is configured to acquire the second sub-image of the second image according to the second preset rule, and the second sub-image corresponds to the range of the field of view of the second camera [θ 2 , θ 3 ]; wherein, 0<θ 213 , the first sub-image and the second sub-image have overlapping images; the image splicing module is configured to use the first sub-image and the second sub- image According to a preset target image stitching algorithm.
（注：本申请中，若视场角范围为[0, θ]（θ可以为任意角度），表示以镜头为起始点、以光轴为中心、与光轴夹角不超过θ的所有射线所形成的圆锥体区域；该区域投影到平面上对应的角度范围为[-θ, θ]。）(Note: In this application, a field-of-view range of [0, θ] (where θ may be any angle) denotes the cone region formed by all rays that start at the lens, are centred on the optical axis, and make an angle of at most θ with the optical axis; projected onto a plane, this corresponds to the angular range [-θ, θ].)
根据本发明实施例提供的上述方法和装置的技术方案,可以将一个超大光圈成像的任务由两个摄像头共同承担,通过第一摄像头并按照一定的算法获取在超大光圈条件下的高清晰度区域,即第一子图像;再通过第二摄像头并按照一定的算法获取在超大光圈条件下的高清晰度区域,即第二子图像;将第一子图像和第二子图像进行拼接融合,得到整个视场区域都满足高清晰度的目标图像。如此一来,在实现超大光圈的前提下,减小了摄像头设计和制造的难度,节约设计成本和制造成本。According to the technical solution of the above method and apparatus provided by the embodiments of the present invention, the task of imaging an oversized aperture can be shared by two cameras, and the high-definition area under the condition of super-large aperture is obtained by the first camera and according to a certain algorithm. , that is, the first sub-image; then, through the second camera and acquiring a high-definition region under the condition of super-large aperture, that is, the second sub-image according to a certain algorithm; splicing and merging the first sub-image and the second sub-image to obtain The entire field of view area satisfies the high definition target image. In this way, under the premise of achieving a large aperture, the difficulty in designing and manufacturing the camera is reduced, and the design cost and the manufacturing cost are saved.
根据第一方面或者第二方面,在一种可能的设计中,根据第一预设规则,获取所述第一图像的第一子图像包括:获取所述第一摄像头的第一参数信息;所述第一参数信息表达了:所述第一摄像头在视场角范围为[0,θ 1]内拍摄的图像,在预设空间频率对应的调制传递函数MTF值大于第一预设阈值;其中,θ 1小于所述第一摄像头的视场角的1/2;获取所述第一摄像头中图像传感器的图像接收区域P;确定出所述第一图像在视场角范围为[0,θ 1]的区域与所述P的交集区域S1的图像,作为第一子图像。 According to the first aspect or the second aspect, in a possible design, acquiring the first sub-image of the first image according to the first preset rule comprises: acquiring first parameter information of the first camera; The first parameter information expresses: the image captured by the first camera in the field of view angle range [0, θ 1 ], and the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the first preset threshold; , θ 1 is smaller than 1/2 of the angle of view of the first camera; acquiring an image receiving area P of the image sensor in the first camera; determining that the first image has a field of view angle range of [0, θ The image of the intersection area of 1 ] and the intersection area of the P is taken as the first sub-image.
更具体地,这个技术实现可以由处理器调用存储器中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by a processor to call a program in memory and an instruction to perform a corresponding operation.
根据第一方面或者第二方面,在一种可能的设计中,根据第二预设规则,获取所述第二图像的第二子图像包括:获取所述第二摄像头的第二参数信息;所述第二参数信息 表达了:所述第二摄像头在视场角范围为[θ 2,θ 3]内拍摄的图像,在预设空间频率对应的调制传递函数MTF值大于第二预设阈值;其中,0<θ 21,θ 3小于等于所述第二摄像头的视场角的1/2;获取所述第二摄像头中图像传感器的第二图像接收区域Q;确定出所述第二图像在视场角范围为[θ 2,θ 3]的区域与所述Q的交集区域S2的图像,作为第二子图像。 According to the first aspect or the second aspect, in a possible design, acquiring the second sub-image of the second image according to the second preset rule comprises: acquiring second parameter information of the second camera; The second parameter information expresses: the image captured by the second camera in the field of view angle range [θ 2 , θ 3 ], and the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the second preset threshold; Wherein, 0<θ 21 , θ 3 is less than or equal to 1/2 of the angle of view of the second camera; acquiring a second image receiving area Q of the image sensor in the second camera; determining the first The image of the second image in the intersection of the region having the angle of view of [θ 2 , θ 3 ] and the intersection of the Q is the second sub-image.
更具体地,这个技术实现可以由处理器调用存储器中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by a processor to call a program in memory and an instruction to perform a corresponding operation.
根据第一方面或者第二方面,在一种可能的设计中,所述将所述第一子图像、所述第二子图像,按照预设拼接算法得到目标图像包括:确定出所述S1与所述S2的交集区域S3的图像;确定所述S3在所述S2中的补集区域S32的图像;将所述S1的图像和所述S32的图像按照第一预设拼接算法得到目标图像。According to the first aspect or the second aspect, in a possible design, the obtaining the target image by using the first sub-image and the second sub-image according to a preset splicing algorithm comprises: determining the S1 and An image of the intersection region S3 of the S2; an image of the complement region S32 of the S3 in the S2; an image of the S1 and the image of the S32 are obtained according to a first preset stitching algorithm.
更具体地,这个技术实现可以由处理器调用存储器中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by a processor to call a program in memory and an instruction to perform a corresponding operation.
根据第一方面或者第二方面,在一种可能的设计中,所述将所述第一子图像、所述第二子图像,按照预设拼接算法得到目标图像包括:确定出所述S1与所述S2的交集区域S3的图像;确定所述S3在所述S1中的补集区域S31的图像;根据所述S31的图像和所述S2的图像按照第二预设拼接算法得到目标图像。According to the first aspect or the second aspect, in a possible design, the obtaining the target image by using the first sub-image and the second sub-image according to a preset splicing algorithm comprises: determining the S1 and An image of the intersection region S3 of the S2; an image of the complement region S31 of the S3 in the S1; an image of the image of the S31 and the image of the S2 according to a second preset stitching algorithm.
更具体地,这个技术实现可以由处理器调用存储器中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by a processor to call a program in memory and an instruction to perform a corresponding operation.
根据第一方面或者第二方面,在一种可能的设计中,所述将所述第一子图像、所述第二子图像,按照预设拼接算法得到目标图像包括:确定出所述S1与所述S2的交集区域S3的图像;确定所述S3在所述S1中的补集区域S31的图像;确定所述S3在所述S2中的补集区域S32的图像;将所述S1的图像和所述S2的图像对所述S3的图像按照预设增强算法进行增强处理,得到S4的图像;将所述S31的图像、所述S32的图像以及所述S4的图像按照第三预设拼接算法得到目标图像。According to the first aspect or the second aspect, in a possible design, the obtaining the target image by using the first sub-image and the second sub-image according to a preset splicing algorithm comprises: determining the S1 and An image of the intersection region S3 of the S2; an image of the complement region S31 of the S3 in the S1; an image of the complement region S32 of the S3 in the S2; an image of the S1 And performing an enhancement process on the image of the S3 according to a preset enhancement algorithm to obtain an image of S4; and splicing the image of the S31, the image of the S32, and the image of the S4 according to a third preset The algorithm gets the target image.
更具体地,这个技术实现可以由处理器调用存储器中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by a processor to call a program in memory and an instruction to perform a corresponding operation.
根据第一方面或者第二方面,在一种可能的设计中,所述第一摄像头包括第一成像镜头;所述第二摄像头包括第二成像镜头;所述第一成像镜头是按照第一预设需求设计得到的;所述第二成像镜头是按照第二预设需求设计得到的;所述第一预设需求对应于所述第一参数信息,所述第二预设需求对应于所述第二参数信息。According to the first aspect or the second aspect, in a possible design, the first camera comprises a first imaging lens; the second camera comprises a second imaging lens; the first imaging lens is according to the first imaging The second imaging lens is designed according to the second preset requirement; the first preset requirement corresponds to the first parameter information, and the second preset requirement corresponds to the Second parameter information.
更具体地,这些设计需求所制造出来的成像镜头的自身属性,这些数据会预先存储在拍照设备中或者服务器中,后续处理器进行图像处理时能够调用该数据,以便能够从第一图像中确定出第一子图像,以及从第二图像中确定出第二子图像。More specifically, these design requirements are self-property of the imaging lens, which is stored in advance in the photographing device or in the server, and can be called by the subsequent processor for image processing so as to be able to determine from the first image. A first sub-image is derived, and a second sub-image is determined from the second image.
根据第一方面或者第二方面,在一种可能的设计中,所述获取第一摄像头拍摄待拍摄对象的第一图像,与所述获取第二摄像头拍摄所述待拍摄对象的第二图像;由同一个触发信号触发。或者,由两个不同的触发信号分别触发。According to the first aspect or the second aspect, in a possible design, the acquiring the first camera to capture a first image of the object to be photographed, and the acquiring the second camera to capture the second image of the object to be photographed; Triggered by the same trigger signal. Alternatively, it is triggered by two different trigger signals.
根据第一方面或者第二方面,在一种可能的设计中,所述第一摄像头和所述第二摄像头的成像镜头所包含的透镜片数为4、5或6。According to the first aspect or the second aspect, in one possible design, the imaging lenses of the first camera and the second camera comprise 4, 5 or 6 lens segments.
根据第一方面或者第二方面,在一种可能的设计中,所述第一摄像头和所述第二摄像头的光圈值相等。According to the first aspect or the second aspect, in one possible design, the aperture values of the first camera and the second camera are equal.
根据第一方面或者第二方面,在一种可能的设计中,所述第一摄像头和所述第二摄像头的图像传感器相同。因此上述P、Q也对应相同。According to the first aspect or the second aspect, in one possible design, the image sensors of the first camera and the second camera are the same. Therefore, the above P and Q are also the same.
根据第一方面或者第二方面,在一种可能的设计中,所述第一摄像头和所述第二摄像头的焦距和(最大)视场角相同。According to the first aspect or the second aspect, in one possible design, the focal length and the (maximum) field of view of the first camera and the second camera are the same.
根据第一方面或者第二方面,在一种可能的设计中,所述第一预设阈值和所述第二预设阈值大于等于0.25。According to the first aspect or the second aspect, in a possible design, the first preset threshold and the second preset threshold are greater than or equal to 0.25.
根据第一方面或者第二方面,在一种可能的设计中,所述预设空间频率大于400对线/mm。通常来讲,对于同一个图像来说,空间频率越大,对应的图像的精细程度越大。空间频率越大,同时还能满足MTF大于预设阈值,表示该图像的清晰度越好。According to the first or second aspect, in one possible design, the predetermined spatial frequency is greater than 400 pairs of lines/mm. Generally speaking, for the same image, the larger the spatial frequency, the more refined the corresponding image. The larger the spatial frequency, the more the MTF is greater than the preset threshold, indicating that the resolution of the image is better.
根据第一方面或者第二方面,在一种可能的设计中,S1为圆形区域。According to the first or second aspect, in one possible design, S1 is a circular area.
更具体地,在本发明具体的实现过程中,由于器件制造和摄像头所处环境带来的误差,上述提到的视场角范围[0,θ 1]、[θ 2,θ 3]等,所对应的图像区域并不一定是规则的圆或着圆环,有可能是近似的圆形或圆环,也有可能是一些不规则图形;但是只要摄像头最终获取到的所有高清晰度的子图像,在以同一个图像传感器为参考的前提下,能够覆盖到图像传感器所在的区域即可。就可以实现无缝拼接。形成满足超大光圈的高清晰度的目标图像。 More specifically, in the specific implementation process of the present invention, due to the error caused by the device manufacturing and the environment in which the camera is located, the above-mentioned field of view angle ranges [0, θ 1 ], [θ 2 , θ 3 ], etc. The corresponding image area is not necessarily a regular circle or a circle, and may be an approximate circle or a ring, or some irregular pattern; but as long as all the high-resolution sub-images finally obtained by the camera are obtained. Under the premise of taking the same image sensor as a reference, it can cover the area where the image sensor is located. Seamless stitching is possible. A target image that satisfies the high definition of the super large aperture is formed.
更具体地,在本发明具体的实现过程中,确定子图像的过程中,在上述提到的视场角范围[0,θ 1]、[θ 2,θ 3]中,图像处理程序还可以取以上两个区域中的局部,比如方形或者椭圆等非圆形区域的图像,进行相应的拼接。只要这些子图像的并集能够覆盖图像传感器所在的区域即可。 More specifically, in the specific implementation process of the present invention, in the process of determining the sub-image, in the above-mentioned field of view angle range [0, θ 1 ], [θ 2 , θ 3 ], the image processing program can also Take the image of the above two areas, such as a square or ellipse, such as a non-circular area, and perform the corresponding stitching. As long as the union of these sub-images can cover the area where the image sensor is located.
根据第一方面或者第二方面,在一种可能的设计中,所述拍照设备还包含调节装置,所述方法还包括:控制所述调节装置,调整所述第一摄像头和所述第二摄像头的间距。如果待拍摄对象距离镜头越近,则需要两个摄像头的间距变得越小,以保证获取的到的子图像区域能够实现重叠。如果待拍摄对象距离镜头越远,则需要两个摄像头的间距变得稍大,使得获取的子图像实现重叠且重叠区域不会过大。According to the first aspect or the second aspect, in a possible design, the photographing apparatus further includes an adjusting device, the method further comprising: controlling the adjusting device, adjusting the first camera and the second camera Pitch. If the object to be photographed is closer to the lens, the pitch of the two cameras needs to be smaller to ensure that the acquired sub-image areas can be overlapped. If the object to be photographed is farther from the lens, the pitch of the two cameras needs to be slightly larger, so that the acquired sub-images are overlapped and the overlapping area is not excessive.
根据第一方面或者第二方面,在一种可能的设计中,所述拍照设备还包含第三摄像头,所述第三摄像头的光轴与所述第一摄像头的光轴互相平行;所述第三摄像头与所述第一摄像头之间的间距小于预设距离;所述第三摄像头与所述第二摄像头之间的间距小于预设距离;所述方法还包括:获取第三摄像头拍摄所述待拍摄对象的第三图像;根据第三预设规则,获取所述第三图像的第三子图像,所述第三子图像对应所述第三摄像头的视场角范围为[θ 4,θ 5];其中,θ 2435,所述第二子图像和所述第三子图像存在重叠图像;θ 5小于所述第三摄像头的视场角的1/2;所述将所述第一子图像、所述第二子图像,按照预设拼接算法得到目标图像包括:将所述第一子图像、所述第二子图像、所述第三子图像,按照第四预设拼接算法得到目标图像。 According to the first aspect or the second aspect, in a possible design, the photographing apparatus further includes a third camera, the optical axis of the third camera and the optical axis of the first camera are parallel to each other; The distance between the three cameras and the first camera is less than a preset distance; the distance between the third camera and the second camera is less than a preset distance; the method further includes: acquiring the third camera to capture the a third image of the object to be photographed; acquiring a third sub-image of the third image according to a third preset rule, the third sub-image corresponding to the third camera having a field of view angle range of [θ 4 , θ 5 ]; wherein θ 2435 , the second sub-image and the third sub-image have overlapping images; θ 5 is smaller than 1/2 of the angle of view of the third camera And obtaining, by the first sub-image and the second sub-image, the target image according to the preset splicing algorithm, including: the first sub-image, the second sub-image, and the third sub-image, The target image is obtained according to the fourth preset splicing algorithm.
根据第一方面或者第二方面,在一种可能的设计中,根据第三预设规则,获取所述第三图像的第三子图像包括:According to the first aspect or the second aspect, in a possible design, acquiring the third sub-image of the third image according to the third preset rule comprises:
获取所述第三摄像头的第三参数信息；所述第三参数信息表达了：所述第三摄像头在视场角范围为[θ 4,θ 5]内拍摄的图像，在预设空间频率对应的调制传递函数MTF值大于第三预设阈值；其中，θ 5小于所述第三摄像头的视场角的1/2；Obtaining third parameter information of the third camera; the third parameter information expresses: an image captured by the third camera in the field-of-view range [θ 4 , θ 5 ] has a modulation transfer function MTF value, at the preset spatial frequency, greater than a third preset threshold; wherein θ 5 is less than 1/2 of the field of view of the third camera;
获取所述第三摄像头中图像传感器的图像接收区域R;Obtaining an image receiving area R of the image sensor in the third camera;
确定出所述第三图像在视场角范围为[θ 4,θ 5]的区域与所述R的交集区域的图像，作为第三子图像。The image of the intersection between the region of the third image within the field-of-view range [θ 4 , θ 5 ] and R is determined as the third sub-image.
第三方面,本发明实施例提供一种终端设备,所述终端设备包含第一摄像头和第二摄像头,存储器、处理器、总线;第一摄像头、第二摄像头、存储器以及处理器通过总线相连;其中,所述第一摄像头、所述第二摄像头的光轴互相平行,所述第一摄像头和所述第二摄像头之间的间距小于预设距离;所述第一摄像头、所述第二摄像头的光圈值均小于1.6;所述摄像头用于在所述处理器的控制下采集图像信号;所述存储器用于存储计算机程序和指令;所述处理器用于调用所述存储器中存储的所述计算机程序和指令,执行上述任一一种可能的实现方法。In a third aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a first camera and a second camera, a memory, a processor, and a bus; the first camera, the second camera, the memory, and the processor are connected by a bus; The optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the first camera and the second camera The aperture values are each less than 1.6; the camera is for acquiring image signals under the control of the processor; the memory is for storing computer programs and instructions; the processor is for calling the computer stored in the memory Programs and instructions that perform any of the possible implementations described above.
根据第三方面,在一种可能的设计中,终端设备还包括天线***、天线***在处理器的控制下,收发无线通信信号实现与移动通信网络的无线通信;移动通信网络包括以下的一种或多种:GSM网络、CDMA网络、3G网络、FDMA、TDMA、PDC、TACS、AMPS、WCDMA、TDSCDMA、WIFI以及LTE网络。According to the third aspect, in a possible design, the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with the mobile communication network; the mobile communication network includes the following one Or multiple: GSM network, CDMA network, 3G network, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TDSCDMA, WIFI and LTE networks.
第四方面,本发明实施例提供一种图像处理方法,所述方法应用于包含第一摄像头和第二摄像头的拍照设备,所述第一摄像头、所述第二摄像头的光轴互相平行,所述第一摄像头和所述第二摄像头之间的间距小于预设距离;所述第一摄像头、所述第二摄像头的光圈值均小于1.6,且所述第一摄像头、所述第二摄像头的镜片数均不大于6;所述方法包括:获取第一摄像头拍摄待拍摄对象的第一图像;获取第二摄像头拍摄所述待拍摄对象的第二图像;获取所述第一图像的第一子图像;其中,所述第一子图像的清晰度满足预设清晰度标准;获取所述第二图像的第二子图像;其中,所述第二子图像的清晰度满足所述预设清晰度标准;且所述第一子图像和所述第二子图像存在图像交集,所述第一子图像和所述第二子图像的图像并集能够表达所述待拍摄对象;对所述第一子图像、所述第二子图像进行融合处理,得到目标图像。In a fourth aspect, an embodiment of the present invention provides an image processing method, where the method is applied to a photographing device including a first camera and a second camera, and optical axes of the first camera and the second camera are parallel to each other. The spacing between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6, and the first camera and the second camera are The number of lenses is not more than 6; the method includes: acquiring a first image of the first camera to capture the object to be photographed; acquiring a second image of the object to be photographed by the second camera; acquiring the first sub of the first image An image; wherein a resolution of the first sub-image satisfies a preset definition standard; acquiring a second sub-image of the second image; wherein a resolution of the second sub-image satisfies the preset definition a standard; and the first sub-image and the second sub-image have an image intersection, and the image integration of the first sub-image and the second sub-image can express the object to be photographed; A first sub-image, the second sub-image are fused to give a target image.
第五方面,本发明实施例提供一种图像处理装置,该装置应用于包含第一摄像头和第二摄像头的拍照设备,所述第一摄像头、所述第二摄像头的光轴互相平行,所述第一摄像头和所述第二摄像头之间的间距小于预设距离;所述第一摄像头、所述第二摄像头的光圈值均小于1.6,且所述第一摄像头、所述第二摄像头的镜片数均不大于6;所述装置包括:第一获取模块,用于获取第一摄像头拍摄待拍摄对象的第一图像;第二获取模块,用于获取第二摄像头拍摄所述待拍摄对象的第二图像;第三获取模块,用于获取所述第一图像的第一子图像;其中,所述第一子图像的清晰度满足预设清晰度标准;第四获取模块,用于获取所述第二图像的第二子图像;其中,所述第二子图像的清晰度满足所述预设清晰度标准;且所述第一子图像和所述第二子图像存在图像交集,且所述第一子图像和所述第二子图像的图像并集能够表达所述待拍摄对象;图像拼接模块,用于对所述第一子图像、所述第二子图像进行融合处理,得到目标图像。In a fifth aspect, an embodiment of the present invention provides an image processing apparatus, where the apparatus is applied to a photographing apparatus including a first camera and a second camera, wherein optical axes of the first camera and the second camera are parallel to each other, The distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are all less than 1.6, and the lenses of the first camera and the second camera The device includes: a first acquiring module, configured to acquire a first image of the first image to be photographed, and a second acquiring module, configured to acquire a second camera to capture the object to be photographed a second acquisition module, configured to acquire a first sub-image of the first image, wherein a resolution of the first sub-image satisfies a preset definition standard; and a fourth acquisition module, configured to acquire the image a second sub-image of the second image; wherein a resolution of the second sub-image satisfies the preset definition standard; and the first sub-image and the second sub-image have an image intersection And the image of the first sub-image and the second sub-image is capable of expressing the object to be photographed; the image splicing module is configured to perform fusion processing on the first sub-image and the second sub-image , get the target image.
根据第四方面或第五方面,在一种可能的设计中,获取所述第一图像的第一子图像包括:获取第一摄像头的第一物理设计参数;其中,所述第一物理设计参数表达了在所述第一摄像头拍摄得到的任一图像中,第一区域图像的清晰度高于第二区域图像的清晰度,且满足预设清晰度标准,所述第二区域为所述第一区域在所述第一摄像头拍摄的任一图像中的补集;根据所述第一物理设计参数获取所述第一图像的第一区域;获取所述第一摄像头中图像传感器的图像接收区域P;确定出所述第一图像的第一区域与所述 P的交集区域S1的图像,作为第一子图像。这一技术特征可以由第三获取模块实施。本发明实施例中,第一区域和第二区域可以是任意图形,本发明实施例中不予以限定。According to the fourth or fifth aspect, in a possible design, acquiring the first sub-image of the first image comprises: acquiring a first physical design parameter of the first camera; wherein the first physical design parameter Expressing that in any of the images captured by the first camera, the sharpness of the image of the first region is higher than the sharpness of the image of the second region, and the preset sharpness criterion is satisfied, and the second region is the first Complementing an area in any image captured by the first camera; acquiring a first area of the first image according to the first physical design parameter; acquiring an image receiving area of the image sensor in the first camera P: determining an image of the intersection area S1 of the first area of the first image and the P as the first sub-image. This technical feature can be implemented by a third acquisition module. In the embodiment of the present invention, the first area and the second area may be any graphics, which is not limited in the embodiment of the present invention.
更具体地,这个技术实现可以由处理器调用存储器或云端中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by the processor to call the memory or the program in the cloud and the corresponding operation.
根据第四方面或第五方面,在一种可能的设计中,所述获取所述第二图像的第二子图像包括:获取第二摄像头的第二物理设计参数;其中,所述第二物理设计参数表达了在所述第二摄像头拍摄得到的任一图像中,第三区域图像的清晰度高于第四区域图像的清晰度,且满足预设清晰度标准,所述第四区域为所述第三区域在所述第二摄像头拍摄的任一图像中的补集;根据所述第二物理设计参数获取所述第二图像的第三区域;获取所述第二摄像头中图像传感器的图像接收区域Q;确定出所述第二图像的第三区域与所述Q的交集区域S2的图像,作为第二子图像。这一技术特征可以由第四获取模块实施。本发明实施例中,第三区域和第四区域可以是任意图形,本发明实施例中不予以限定。According to the fourth aspect or the fifth aspect, in a possible design, the acquiring the second sub-image of the second image comprises: acquiring a second physical design parameter of the second camera; wherein the second physics The design parameter expresses that in any of the images captured by the second camera, the sharpness of the image of the third region is higher than the sharpness of the image of the fourth region, and the predetermined sharpness criterion is met, and the fourth region is Complementing a third region in any image captured by the second camera; acquiring a third region of the second image according to the second physical design parameter; acquiring an image of the image sensor in the second camera Receiving region Q; determining an image of the third region of the second image and the intersection region S2 of the Q as a second sub-image. This technical feature can be implemented by the fourth acquisition module. In the embodiment of the present invention, the third area and the fourth area may be any graphics, which is not limited in the embodiment of the present invention.
更具体地,这个技术实现可以由处理器调用存储器或云端中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by the processor to call the memory or the program in the cloud and the corresponding operation.
根据第四方面或第五方面,在一种可能的设计中,第一物理设计参数包括:所述第一摄像头在视场角范围为[0,θ 1]内拍摄的图像,在预设空间频率对应的调制传递函数MTF值大于第一预设阈值;其中,θ 1小于所述第一摄像头的视场角的1/2,所述第一摄像头在其它视场角范围内拍摄的图像,在预设空间频率对应的调制传递函数MTF值不大于第一预设阈值。该信息可以存储在存储器中或网络云端。 According to the fourth aspect or the fifth aspect, in a possible design, the first physical design parameter comprises: the image captured by the first camera in a field of view angle range [0, θ 1 ], in a preset space The modulation transfer function MTF value corresponding to the frequency is greater than the first preset threshold; wherein θ 1 is smaller than 1/2 of the field of view of the first camera, and the image captured by the first camera in other field of view angles, The modulation transfer function MTF value corresponding to the preset spatial frequency is not greater than the first preset threshold. This information can be stored in memory or in the network cloud.
根据第四方面或第五方面,在一种可能的设计中,所述第二物理设计参数包括:所述第二摄像头在视场角范围为[θ 2,θ 3]内拍摄的图像,在预设空间频率对应的调制传递函数MTF值大于第二预设阈值;其中,θ 3小于所述第二摄像头的视场角的1/2,且0<θ 213,所述第二摄像头在其它视场角范围内拍摄的图像,在预设空间频率对应的调制传递函数MTF值不大于第二预设阈值。该信息可以存储在存储器中或网络云端。 According to the fourth aspect or the fifth aspect, in a possible design, the second physical design parameter comprises: an image captured by the second camera within a viewing angle range of [θ 2 , θ 3 ], The modulation transfer function MTF value corresponding to the preset spatial frequency is greater than a second preset threshold; wherein θ 3 is less than 1/2 of the field angle of the second camera, and 0<θ 213 The image captured by the second camera in the range of other viewing angles has a modulation transfer function MTF value corresponding to the preset spatial frequency that is not greater than the second predetermined threshold. This information can be stored in memory or in the network cloud.
根据第四方面或第五方面,在一种可能的设计中,所述对所述第一子图像、所述第二子图像进行融合处理,得到目标图像包括以下三种方式中的任意一种,并可以由图像拼接模块实现:According to the fourth aspect or the fifth aspect, in a possible design, the performing the fusion processing on the first sub-image and the second sub-image to obtain the target image includes any one of the following three manners; And can be implemented by the image stitching module:
方式1:确定出所述S1与所述S2的交集区域S3的图像;确定所述S3在所述S2中的补集区域S32的图像;对所述S1的图像和所述S32的图像进行融合处理,得到目标图像;或,Method 1: determining an image of the intersection region S3 of the S1 and the S2; determining an image of the complement region S32 of the S3 in the S2; and fusing the image of the S1 and the image of the S32 Process to get the target image; or,
方式2:确定出所述S1与所述S2的交集区域S3的图像;确定所述S3在所述S1中的补集区域S31的图像;对所述S31的图像和所述S2的图像进行融合处理,得到目标图像;或,Mode 2: determining an image of the intersection region S3 of the S1 and the S2; determining an image of the complement region S31 of the S3 in the S1; and fusing the image of the S31 and the image of the S2 Process to get the target image; or,
方式3:确定出所述S1与所述S2的交集区域S3的图像;确定所述S3在所述S1中的补集区域S31的图像;确定所述S3在所述S2中的补集区域S32的图像;将所述S1和所述S2对所述S3按照预设增强算法进行增强处理,得到S4的图像;对所述S31的图像、所述S32的图像以及所述S4的图像进行融合处理,得到目标图像。Mode 3: determining an image of the intersection region S3 of the S1 and the S2; determining an image of the complement region S31 of the S3 in the S1; determining a complement region S32 of the S3 in the S2 And performing an enhancement process on the S3 according to a preset enhancement algorithm to obtain an image of S4; and performing fusion processing on the image of the S31, the image of the S32, and the image of the S4; , get the target image.
更具体地,这个技术实现可以由处理器调用存储器中的程序与指令进行相应的运算。More specifically, this technical implementation can be invoked by a processor to call a program in memory and an instruction to perform a corresponding operation.
根据第四方面或第五方面,在一种可能的设计中,还包含调节模块/模块,用于调整所述第一摄像头和所述第二摄像头的间距。According to the fourth or fifth aspect, in a possible design, an adjustment module/module is further included for adjusting the spacing between the first camera and the second camera.
According to the fourth or fifth aspect, in a possible design, the photographing device further includes a third camera, an optical axis of the third camera being parallel to the optical axis of the first camera; the spacing between the third camera and the first camera is less than a preset distance, and the spacing between the third camera and the second camera is less than a preset distance. The method further includes: acquiring a third image of the object to be photographed captured by the third camera; acquiring third parameter information of the third camera, where the third camera is designed according to the third parameter information, and the third parameter information expresses that, for an image captured by the third camera within the field-of-view range [θ4, θ5], the modulation transfer function (MTF) value at a preset spatial frequency is greater than a third preset threshold, where θ2 < θ4 < θ3 < θ5 and θ5 is less than 1/2 of the field of view of the third camera; acquiring a third sub-image of the third image according to the third parameter information, where the definition of the third sub-image is higher than that of a third complement image, the third complement image being the complement of the third sub-image within the third image; the second sub-image and the third sub-image have an image intersection, and the union of the first sub-image, the second sub-image, and the third sub-image is able to express the object to be photographed; and performing fusion processing on the target image and the third sub-image to obtain a new target image.
For the apparatus, the apparatus further includes: a fifth acquiring module, configured to acquire a third image of the object to be photographed captured by the third camera; and a sixth acquiring module, configured to acquire third parameter information of the third camera, where the third camera is designed according to the third parameter information, and the third parameter information expresses that, for an image captured by the third camera within the field-of-view range [θ4, θ5], the MTF value at a preset spatial frequency is greater than a third preset threshold, where θ2 < θ4 < θ3 < θ5 and θ5 is less than 1/2 of the field of view of the third camera. The sixth acquiring module is further configured to acquire a third sub-image of the third image according to the third parameter information, where the definition of the third sub-image is higher than that of the third complement image, the third complement image being the complement of the third sub-image within the third image; the second sub-image and the third sub-image have an image intersection, and the union of the first sub-image, the second sub-image, and the third sub-image is able to express the object to be photographed. The image stitching module is further configured to perform fusion processing on the target image and the third sub-image to obtain a new target image.
The related features described above for the first camera and the second camera likewise apply to the corresponding features of the third camera.
According to the fourth or fifth aspect, in a possible design, the acquiring of the first image of the object to be photographed by the first camera and the acquiring of the second image of the object to be photographed by the second camera are triggered by the same trigger signal, or are triggered separately by two different trigger signals.
According to the fourth or fifth aspect, in a possible design, the imaging lenses of the first camera and the second camera each contain 4, 5, or 6 lens elements.
According to the fourth or fifth aspect, in a possible design, the aperture values of the first camera and the second camera are equal.
According to the fourth or fifth aspect, in a possible design, the image sensors of the first camera and the second camera are identical; the regions P and Q mentioned above are therefore also identical.
According to the fourth or fifth aspect, in a possible design, the focal lengths and the (maximum) fields of view of the first camera and the second camera are the same.
According to the fourth or fifth aspect, in a possible design, the first preset threshold and the second preset threshold are greater than or equal to 0.25.
According to the fourth or fifth aspect, in a possible design, the preset spatial frequency is greater than 400 line pairs/mm. Generally, for a given image, a larger spatial frequency corresponds to a finer level of detail; if the MTF still exceeds the preset threshold at a larger spatial frequency, the image has better definition.
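As a small illustration of this criterion, the sketch below checks whether a measured MTF curve stays above a threshold at a given spatial frequency. The helper name `mtf_meets_threshold` and the sample curve values are hypothetical stand-ins for whatever lens characterization data is actually available.

```python
import numpy as np

def mtf_meets_threshold(freqs_lp_mm, mtf_values, target_freq_lp_mm, threshold):
    """Interpolate the MTF curve at target_freq_lp_mm and compare with the threshold."""
    mtf_at_target = np.interp(target_freq_lp_mm, freqs_lp_mm, mtf_values)
    return mtf_at_target > threshold

# Hypothetical sagittal MTF samples for one field angle of a candidate lens design.
freqs = np.array([0, 100, 200, 300, 400, 500])        # line pairs per mm
mtf = np.array([1.0, 0.82, 0.63, 0.47, 0.35, 0.28])   # made-up values
print(mtf_meets_threshold(freqs, mtf, 400, 0.25))     # True for these sample values
```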
According to the fourth or fifth aspect, in a possible design, S1 is a circular region.
More specifically, in a concrete implementation of the present invention, because of errors introduced by device manufacturing and by the environment in which the cameras operate, the image regions corresponding to the field-of-view ranges [0, θ1], [θ2, θ3], and so on mentioned above are not necessarily regular circles or rings; they may be approximate circles or rings, or even irregular shapes. As long as all of the high-definition sub-images finally obtained by the cameras, taken with the same image sensor as reference, can cover the region occupied by the image sensor, seamless stitching can be achieved, forming a high-definition target image that satisfies the super-large aperture.
More specifically, in a concrete implementation of the present invention, when determining the sub-images within the above-mentioned field-of-view ranges [0, θ1] and [θ2, θ3], the image processing program may also take local portions of these two regions, for example non-circular regions such as squares or ellipses, and perform the corresponding stitching, as long as the union of these sub-images can cover the region occupied by the image sensor and can express a high-definition subject.
It is emphasized again that the shapes of the first region, the second region, the third region, and the fourth region are not limited.
According to a sixth aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a first camera, a second camera, a memory, a processor, and a bus, and the first camera, the second camera, the memory, and the processor are connected by the bus. The optical axes of the first camera and the second camera are parallel to each other, and the spacing between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6, and the numbers of lens elements of the first camera and the second camera are both not greater than 6. The cameras are configured to capture image signals under the control of the processor; the memory is configured to store computer programs and instructions; and the processor is configured to invoke the computer programs and instructions stored in the memory to perform any one of the possible implementation methods described above.
According to the sixth aspect, in a possible design, the terminal device further includes an antenna system; under the control of the processor, the antenna system transmits and receives wireless communication signals to implement wireless communication with a mobile communication network, where the mobile communication network includes one or more of the following: a GSM network, a CDMA network, a 3G network, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TDSCDMA, WiFi, and LTE networks.
In addition, the above methods, apparatuses, and devices may also be applied to scenarios with more than two cameras.
The above methods, apparatuses, and devices may be applied both to scenarios in which the terminal's built-in photographing software performs the shooting and to scenarios in which third-party photographing software running on the terminal performs the shooting; the shooting includes ordinary shooting, selfie shooting, video telephony, video conferencing, VR shooting, aerial photography, and other shooting modes.
With the technical solutions of the present invention, image capture with a super-large aperture and high definition can be achieved without increasing the complexity of the production process.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic structural diagram of a lens;
FIG. 2 is a schematic structural diagram of a terminal;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a hardware structure of a camera according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a first camera according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of image quality evaluation of the first camera according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a second camera according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of image quality evaluation of the second camera according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the principle by which a dual-lens module acquires an image according to an embodiment of the present invention;
FIG. 10 is another schematic diagram of the principle of acquiring an image according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the embodiments of the present invention, a terminal may be a device that provides photographing and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, for example a digital camera, a single-lens reflex camera, or a mobile phone (or "cellular" phone); it may also be a portable, pocket-sized, handheld, or wearable device (such as a smart watch), a tablet computer, a personal computer (PC), a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, an in-vehicle computer, a drone, an aerial camera, or the like.
FIG. 2 is a schematic diagram of an optional hardware structure of a terminal 100.
Referring to FIG. 2, the terminal 100 may include components such as a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, cameras 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180, and a power supply 190. In the embodiments of the present invention, there are at least two cameras 150.
The camera 150 is configured to capture images or video and may be triggered and enabled by an application instruction to implement a photographing or video-recording function. A camera includes components such as an imaging lens, a filter, an image sensor, and a focusing and optical image stabilization motor. Light emitted or reflected by an object enters the imaging lens, passes through the filter, and finally converges on the image sensor. The imaging lens is mainly configured to converge the light emitted or reflected by all objects within the photographing field of view into an image; the filter is mainly configured to filter out unwanted light waves (for example, light waves other than visible light, such as infrared); and the image sensor is mainly configured to perform photoelectric conversion on the received optical signal, convert it into an electrical signal, and input the electrical signal to the processor 170 for subsequent processing.
A person skilled in the art may understand that FIG. 2 is merely an example of a portable multi-function apparatus and does not constitute a limitation on the portable multi-function apparatus; the apparatus may include more or fewer components than shown, combine some components, or use different components.
The input unit 130 may be configured to receive input digit or character information and to generate key signal inputs related to user settings and function control of the portable multi-function apparatus. Specifically, the input unit 130 may include a touchscreen 131 and other input devices 132. The touchscreen 131 may collect touch operations performed by the user on or near it (for example, operations performed by the user on or near the touchscreen with a finger, a knuckle, a stylus, or any other suitable object) and drive the corresponding connection apparatus according to a preset program. The touchscreen may detect a touch action of the user on the touchscreen, convert the touch action into a touch signal, send the touch signal to the processor 170, and receive and execute a command sent by the processor 170; the touch signal includes at least touch point coordinate information. The touchscreen 131 may provide an input interface and an output interface between the terminal 100 and the user. In addition, the touchscreen may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touchscreen 131, the input unit 130 may further include other input devices. Specifically, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key 132 and an on/off key 133), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be configured to display information entered by the user or information provided to the user, as well as various menus of the terminal 100. In the embodiments of the present invention, the display unit is further configured to display the images acquired by the device using the cameras 150, including a preview image, an initial captured image, and a target image obtained by processing the captured image with a certain algorithm.
Further, the touchscreen 131 may cover the display panel 141. After detecting a touch operation on or near it, the touchscreen 131 transmits the operation to the processor 170 to determine the type of the touch event, and the processor 170 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. In this embodiment, the touchscreen and the display unit may be integrated into one component to implement the input, output, and display functions of the terminal 100; for ease of description, in the embodiments of the present invention a touch display screen represents the combined functions of the touchscreen and the display unit. In some embodiments, the touchscreen and the display unit may also be two separate components.
The memory 120 may be configured to store instructions and data. The memory 120 may mainly include an instruction storage area and a data storage area; the data storage area may store an association between a knuckle touch gesture and an application function, and the instruction storage area may store software units such as an operating system, applications, and the instructions required by at least one function, or subsets and extended sets thereof. The memory may further include a non-volatile random access memory, and provides the processor 170 with management of the hardware, software, and data resources in the computing and processing device, supporting control software and applications. It is also used for storing multimedia files and for storing running programs and applications.
The processor 170 is the control center of the terminal 100. It connects all parts of the entire mobile phone through various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing the instructions stored in the memory 120 and invoking the data stored in the memory 120, thereby monitoring the mobile phone as a whole. Optionally, the processor 170 may include one or more processing units. Preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It may be understood that the modem processor may alternatively not be integrated into the processor 170. In some embodiments, the processor and the memory may be implemented on a single chip; in some embodiments, they may also be implemented separately on independent chips. The processor 170 may further be configured to generate corresponding operation control signals, send them to the corresponding components of the computing and processing device, and read and process data in software, in particular the data and programs in the memory 120, so that each functional module therein performs its corresponding function, thereby controlling the corresponding components to act as required by the instructions.
The radio frequency unit 110 may be configured to transmit and receive information or to receive and send signals during a call. In particular, after receiving downlink information from a base station, it delivers the information to the processor 170 for processing; in addition, it sends uplink data to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the radio frequency unit 110 may also communicate with network devices and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The audio circuit 160, the speaker 161, and the microphone 162 may provide an audio interface between the user and the terminal 100. The audio circuit 160 may transmit an electrical signal converted from received audio data to the speaker 161, and the speaker 161 converts the electrical signal into a sound signal for output. On the other hand, the microphone 162 is configured to collect sound signals and may convert the collected sound signals into electrical signals, which are received by the audio circuit 160 and converted into audio data; after the audio data is processed by the processor 170, it is sent, for example, to another terminal via the radio frequency unit 110, or output to the memory 120 for further processing. The audio circuit may also include a headphone jack 163 configured to provide a connection interface between the audio circuit and a headset.
The terminal 100 further includes a power supply 190 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 170 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
The terminal 100 further includes an external interface 180. The external interface may be a standard Micro USB interface or a multi-pin connector, and may be used to connect the terminal 100 to another apparatus for communication, or to connect a charger to charge the terminal 100.
Although not shown, the terminal 100 may further include a flash, a wireless fidelity (WiFi) module, a Bluetooth module, various sensors, and the like, which are not described herein.
Referring to FIG. 3, an embodiment of the present invention provides an image processing method. The method may be applied to a terminal having at least two cameras; for ease of description, the two cameras are referred to as a first camera and a second camera. It should be understood that terms such as "first" and "second" in this application are used only for distinction and imply no order or performance limitation. The first camera and the second camera are positioned such that their optical axes are parallel to each other, and, to achieve a super-large aperture, the aperture values of both cameras are less than 1.6 (the super-large aperture referred to in this application means an aperture value less than 1.6); the lower limit of the aperture value may approach 0 indefinitely. The terminal may be the terminal 100 shown in FIG. 2, or a simple camera device with a structure such as that shown in FIG. 4. The specific processing procedure includes the following steps:
Step 31: Acquire a first image of an object to be photographed captured by the first camera.
Step 32: Acquire a second image of the object to be photographed captured by the second camera.
Here, the object to be photographed may be understood as the object the user expects to photograph; it may also be understood as the imaged object shown on the terminal's display screen when the user has adjusted the shooting position of the terminal, for example the common image portion framed by the two cameras. It should be understood that, because the first camera and the second camera are not in exactly the same position, the image content obtained by the first camera and the second camera when photographing the object is not completely identical: the vast majority of the image area is the same, with slight differences at the edges. However, a person skilled in the art should understand that, in a dual-camera scenario with the terminal position fixed, the imaging of the two cameras can theoretically be regarded as approximately the same. For the differences between the images captured by the two cameras, a person skilled in the art may also use existing correction techniques to correct the captured images, for example correction that takes positional offset into account, so that the resulting first image and second image are approximately the same; alternatively, the common image region of the two captures may be taken as the first image and the second image so that they are approximately the same. For more accurate subsequent processing, the geometric center of the first image and the geometric center of the second image may be corrected so as to coincide; that is, if the two images were compared content-wise with their geometric centers made to coincide, the identical parts of the two images would overlap.
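One way such a correction could be performed, sketched below, is to estimate a homography between the two captured frames with feature matching and warp the second image onto the first. This is only an illustrative use of OpenCV's general feature-matching and warping functions under an assumption of small parallax, not the specific correction technique prescribed by this application.

```python
import cv2
import numpy as np

def align_second_to_first(img1_gray, img2_gray):
    """Estimate a homography from the second capture to the first and warp it (sketch)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1_gray, None)
    k2, d2 = orb.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = img1_gray.shape[:2]
    return cv2.warpPerspective(img2_gray, H, (w, h))
```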
Step 33: Acquire a first sub-image of the first image according to a first preset rule, where the first sub-image corresponds to a field-of-view range [0, θ1] of the first camera.
Step 34: Acquire a second sub-image of the second image according to a second preset rule, where the second sub-image corresponds to a field-of-view range [θ2, θ3] of the second camera, where 0 < θ2 < θ1 < θ3, and the first sub-image and the second sub-image have an overlapping image.
These field-of-view ranges may be specified in advance, or may be determined after the camera parameters are acquired.
It should be understood that an image intersection, that is, an overlapping image, refers to the region where the content of the two images is the same. In one specific implementation, suppose the first sub-image and the second sub-image are placed in alignment so that their identical content regions coincide. If the geometric centers of the first sub-image and the second sub-image coincide, the intersection of the first sub-image and the second sub-image is an annular region: the outer circle of the annulus lies entirely within the second sub-image and the inner circle of the annulus lies entirely within the first sub-image, so that the first sub-image and the second sub-image can together constitute a complete image of the photographed object. In another specific implementation, suppose the first sub-image and the second sub-image are placed in alignment so that their identical content regions coincide, but their geometric centers do not coincide. In this case the intersection of the first sub-image and the second sub-image is no longer an annulus; it may be a closed region bounded jointly by an inner closed curve and an outer closed curve, where the outer curve of the closed region lies entirely within the second sub-image and the inner curve lies entirely within the first sub-image.
Step 35: Obtain a target image from the first sub-image and the second sub-image according to a preset stitching algorithm.
In another specific implementation, suppose the first sub-image and the second sub-image are placed in alignment so that their identical content regions coincide, but their geometric centers do not coincide; in this case the intersection of the first sub-image and the second sub-image is no longer an annulus, and there may be a non-closed region bounded jointly by an inner closed curve and an outer closed curve. If the image content of the non-closed region does not affect the representation of the object to be photographed, or if the image content corresponding to the non-closed region in the first image or the second image also meets a certain image quality standard, then fusion processing may subsequently be performed on the first sub-image, the second sub-image, and the image corresponding to the non-closed region in the first image or the second image, to obtain the target image.
Steps 31 and 32 are described first.
In a specific implementation, the imaging lenses of the first camera and the second camera are both specially manufactured in advance according to particular requirements, that is, according to certain physical design parameters. A lens manufacturing system can determine the number of lens elements from empirical values according to the user's target parameter requirements, and then derive the corresponding specific hardware configuration parameters from that number, such as the focal length of each lens element and the relative positions between the lens elements. Because of the difficulty of achieving a super-large aperture, without increasing the number of lens elements a super-large aperture cannot produce a sharp image over the entire field-of-view range. Therefore, in a specific design there is a trade-off between the aperture value and the size of the field-of-view region with high image quality: the smaller the aperture value, the smaller the field-of-view range within which the obtained image definition meets the requirement.
In the embodiments of the present invention, as shown in FIG. 5, the imaging lens 201 in the first camera is composed of five lens elements, but the number of lens elements is not limited to five and may be four to six. When designing the imaging lens, the present invention sets the design aperture value to be small, for example FNO1 (FNO1 less than 1.6), and increases the imaging-quality design weight of the field-of-view range [0, θ1], so that under the large aperture the imaging quality of the first camera within the field-of-view range [0, θ1] meets expectations; that is, the quality of the imaging range corresponding to the field-of-view range [0, θ1] meets the requirement of FNO1. The image quality for field-of-view angles greater than θ1 is not of concern, and it does not matter even if that quality is very poor. Under this parameter constraint, the existing process can manufacture the corresponding imaging lens without adding much difficulty. It can be seen that the first camera achieves the smaller value FNO1 at the cost of the image quality corresponding to the field-of-view range beyond [0, θ1].
Therefore, owing to the special design requirements described above for the imaging lens of the first camera, in the object image obtained by the first camera the image quality corresponding to the field-of-view range [0, θ1] is able to meet the requirement of FNO1. Meeting FNO1 can be measured with the MTF, for example the MTF value still reaching a preset standard at a preset spatial frequency threshold. θ1 is not greater than 1/2 of the field of view of the first camera. It should be understood that the field of view is an inherent attribute of the camera once it is fixed in the terminal; it is the maximum field angle over which the camera can form an image when the terminal is at a fixed position. As is well known in the industry, the angle formed by the two edges of the largest range of the lens through which the object image of the measured target can pass is called the field of view.
In a specific implementation, one evaluation criterion may refer to FIG. 6. In this embodiment the specific parameters of the first camera are: FNO of 1.4; focal length of 3.95 mm; FOV of 75°, which may also be expressed as [0, 37.5°]. (Note: in this application, a field-of-view range of [0, θ] denotes the cone region formed by all rays that start at the lens, are centered on the optical axis, and make an angle of at most θ with the optical axis; projected onto a plane, the corresponding angular range is 2θ.) The MTF performance is shown in FIG. 6, which presents the MTF of the imaging lens in the sagittal direction (the contrast of the output image passing through an optical system is always lower than the contrast of the input image, and this change in contrast is closely related to the spatial frequency characteristics; the ratio of the contrast of the output image to that of the input image may be defined as the modulation transfer function, MTF), and the labeled angles are the corresponding half field-of-view angles. Different lines represent the MTF curves for different half-FOV values; the abscissa represents the spatial frequency, and the larger its value, the finer the resolution of the image. The ordinate represents the MTF, which characterizes the contrast of the image; the larger the contrast, the clearer the image. The dashed line in the figure represents the contrast limit of the system; the closer a curve is to it, the better the image quality. Likewise, the MTF behavior of the imaging lens in the meridional direction is similar to that in the sagittal direction.
When the first camera forms an image, the MTF values for the central FOV of approximately [0°, 32°] are relatively high: for images within this field-of-view range, at a spatial frequency of 500 line pairs/mm the MTF of the obtained image can still be kept above 0.3, so the imaging quality of this region can be considered to reach a very good level under the condition that the FNO is 1.4. For the image acquired by the first camera in the FOV range of approximately [32°, 37.5°], the corresponding MTF values are relatively poor, and high-quality imaging of this field-of-view region will be undertaken by the second camera. It should be understood that, because of manufacturing-process factors, the boundary between the high-quality image region and the low-quality image region is not necessarily a strict circle; the actual boundary may be an irregular shape.
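For intuition about how a field-angle range such as [0°, 32°] maps onto a region of the sensor, the following sketch converts a field angle into an image-plane radius assuming an ideal distortion-free (rectilinear) lens, where the radius is f·tan(θ); this ideal projection model is an assumption for illustration, and real lenses deviate from it through distortion.

```python
import math

def field_angle_to_radius_mm(theta_deg, focal_length_mm):
    """Image-plane radius reached by a ray at field angle theta, ideal rectilinear lens."""
    return focal_length_mm * math.tan(math.radians(theta_deg))

# With the parameters of this embodiment (f = 3.95 mm), rays at a 32 degree half
# field angle land roughly 2.47 mm from the image center under this ideal model.
print(round(field_angle_to_radius_mm(32.0, 3.95), 2))
```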
In the embodiments of the present invention, as shown in FIG. 7, the imaging lens 204 in the second camera is composed of five lens elements, but the number of lens elements is not limited to five and may be four to six. When designing the imaging lens, the present invention sets the design aperture value to be small, for example FNO2 (FNO2 less than 1.6), and increases the imaging-quality design weight of the field-of-view range [θ2, θ3], so that under the large aperture the imaging quality of the second camera within the field-of-view range [θ2, θ3] meets expectations; that is, the quality of the imaging range corresponding to the field-of-view range [θ2, θ3] meets the requirement of FNO2. The image quality for field-of-view angles smaller than θ2 is not of concern, and it does not matter even if that quality is very poor. Under this parameter constraint, the existing process can manufacture the corresponding imaging lens without adding much difficulty. It can be seen that the second camera achieves the smaller value FNO2 at the cost of the image quality corresponding to the field-of-view range [0, θ2].
Therefore, owing to the special design requirements described above for the imaging lens of the second camera, in the object image obtained by the second camera the image quality corresponding to the field-of-view range [θ2, θ3] is able to meet the requirement of FNO2. Meeting FNO2 can be measured with the MTF, for example the MTF value still reaching a preset standard at a preset spatial frequency threshold. θ3 is not greater than 1/2 of the field of view of the second camera.
In a specific implementation, one evaluation criterion may refer to FIG. 8. In this embodiment the specific parameters of the second camera are: FNO of 1.4; focal length of 3.95 mm; FOV of 75°, which may also be expressed as [0, 37.5°]. The MTF performance is shown in FIG. 8, which presents the MTF of the imaging lens in the sagittal direction (the contrast of the output image passing through an optical system is always lower than the contrast of the input image, and this change in contrast is closely related to the spatial frequency characteristics; the ratio of the contrast of the output image to that of the input image may be defined as the modulation transfer function, MTF), and the labeled angles are the corresponding half field-of-view angles. Different lines represent the MTF curves for different half-FOV values; the abscissa represents the spatial frequency, and the larger its value, the finer the resolution of the image. The ordinate represents the MTF, which characterizes the contrast of the image; the larger the contrast, the clearer the image. The dashed line in the figure represents the contrast limit of the system; the closer a curve is to it, the better the image quality. Likewise, the MTF behavior of the imaging lens in the meridional direction is similar to that in the sagittal direction.
When the second camera forms an image, the MTF values for the image within the FOV range [28°, 37.5°] are relatively high. (Note: in this application, for an angle range [α, β], C1 denotes the cone region formed by all rays that start at the lens, are centered on the optical axis, and make an angle α with the optical axis, and C2 denotes the cone region formed in the same way for the angle β; the field-of-view region denoted by [α, β] is then the region between C1 and C2.) For images within this field-of-view range, at spatial frequencies up to 500 line pairs/mm the MTF can still be kept above 0.25, so the imaging quality of this region can be considered to reach a very good level under the condition that the FNO is 1.4. For the image acquired by the second camera in the FOV range of approximately [0°, 28°], the corresponding MTF values are relatively poor.
The parameter information of the lenses is stored in advance locally on the photographing device or on a cloud server. Therefore, during subsequent image processing, the processor can use the local parameter information to obtain, from the image acquired by a camera, the partial region whose definition satisfies the super-large aperture, for subsequent stitching and fusion processing. (In the present invention, stitching and fusion both refer to image stitching; only the names differ, and both refer to the existing technique of processing multiple partial pictures into one complete picture.)
In a specific implementation, as shown in FIG. 9, the first camera includes an imaging lens 201, a filter 202, and an image sensor 203, and the second camera includes an imaging lens 204, a filter 205, and an image sensor 206. The optical axes of the first camera and the second camera are parallel to each other, and the distance between the optical axes is a preset distance. Owing to the special designs of the first camera and the second camera, the first camera can obtain an image with good definition within the field-of-view range [0, θ1], and the second camera can obtain an image with good definition within the field-of-view range [θ2, θ3].
If the image that the imaging lens can acquire were projected onto the plane of the image sensor, it would theoretically be a circular region whose size depends on the complete field of view of the imaging lens. However, the image sensor is usually designed to be square, so the images finally acquired by the first camera and the second camera, that is, the images finally received by the image sensors, are square. It should be understood that if the difference between the images obtained by the two sensors is within the range allowed by the subsequent processing algorithm, they may be used directly as the first image and the second image; if the difference exceeds the allowable range of the subsequent processing algorithm, the first image and the second image need to be obtained through correction techniques in the prior art or by cropping to the same content region. Therefore, how to perform the subsequent processing based on the square images acquired by these two cameras, namely steps 33, 34, and 35, is particularly important.
Step 33: Acquire first parameter information of the first camera. The first parameter information includes manufacturing parameters, performance parameters, and the like of the first camera, for example the field-of-view range within which the first camera can obtain an image whose definition meets the requirement under the large aperture. Specifically, it can be obtained that, for an image captured by the first camera within the field-of-view range [0, θ1], the modulation transfer function (MTF) value at a preset spatial frequency is greater than a first preset threshold, where θ1 is less than 1/2 of the field of view of the first camera.
Acquire the image receiving region P of the image sensor in the first camera, that is, acquire the image received by the image sensor in the first camera.
According to the above parameter information and the image receiving region P of the sensor, determine the image of the intersection region S1 between the region of the first image corresponding to the field-of-view range [0, θ1] and P, as the first sub-image.
Specifically, in the embodiment shown in FIG. 10, the region of the square 203 represents the image receiving region of the image sensor in the first camera, and different circles represent different field-of-view ranges: the region of the circle 301 corresponds to the image region of the field-of-view range [0, θ1], and the region of the circle 302 corresponds to the image region of the complete field of view of the first camera. Therefore, in this example, the intersection region of 301 and 203 is the first sub-image. In the specific design of the first camera, the geometric centers of 301 and 203 are the same, and the diameter of 301 is smaller than the diameter of the circumscribed circle of the square 203. Owing to the special design of the first camera and the above method of obtaining the first sub-image, the definition of the obtained first sub-image satisfies the super-large aperture FNO1.
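A minimal sketch of this region extraction is given below; the helper name `disk_mask`, the pixel radius, and the commented usage are illustrative assumptions rather than part of the described apparatus, with the radius in pixels obtained, for example, from the f·tan(θ) conversion sketched earlier divided by the pixel pitch.

```python
import numpy as np

def disk_mask(height, width, radius_px):
    """Boolean mask of the pixels lying within radius_px of the sensor center."""
    yy, xx = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2

# S1 = (disk corresponding to field-of-view range [0, theta1]) ∩ (sensor region P).
# radius_theta1_px would come from the stored lens parameter information.
# mask_s1 = disk_mask(*first_image.shape[:2], radius_theta1_px)
# s1_image = np.where(mask_s1[..., None], first_image, 0)
```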
Step 34: Acquire second parameter information of the second camera. The second parameter information includes manufacturing parameters, performance parameters, and the like of the second camera, for example the field-of-view range within which the second camera can obtain an image whose definition meets the requirement under the large aperture. Specifically, it can be obtained that, for an image captured by the second camera within the field-of-view range [θ2, θ3], the modulation transfer function (MTF) value at a preset spatial frequency is greater than a second preset threshold, where 0 < θ2 < θ1 and θ3 is less than or equal to 1/2 of the field of view of the second camera.
Acquire the image receiving region Q of the image sensor in the second camera, that is, acquire the image received by the image sensor in the second camera.
According to the above parameter information and the image receiving region Q of the sensor, determine the image of the intersection region S2 between the region of the second image corresponding to the field-of-view range [θ2, θ3] and Q, as the second sub-image.
Specifically, in the embodiment shown in FIG. 10, the region of the square 206 represents the image receiving region of the image sensor in the second camera, and different circles represent different field-of-view ranges: the region of the circle 303 corresponds to the image region of the field-of-view range [0, θ2], the region of the circle 304 corresponds to the image region of the field-of-view range [0, θ3], and the region of the annulus 306 between the circle 303 and the circle 304 corresponds to the image region of the field-of-view range [θ2, θ3]. Therefore, in this example, the intersection region of the annulus 306 and the square 206 is the second sub-image. In the specific design of the second camera, the geometric centers of the circle 303, the circle 304, and the square 206 are the same; the diameter of the circle 303 is smaller than the diameter of the circumscribed circle of the square 206 and also smaller than the diameter of the circle 301 above, while the diameter of the circle 304 is larger than the diameter of the circle 301 above, so as to achieve seamless stitching of the image. Generally, the diameter of 304 may also be larger than the diameter of the circumscribed circle of the square 206, to ensure that a complete image can subsequently be formed. Owing to the special design of the second camera and the above method of obtaining the second sub-image, the definition of the obtained second sub-image satisfies the super-large aperture FNO2.
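Continuing the same sketch, the annular region S2 can be cut out with a ring mask, and the stitching condition described here (inner radius of the ring smaller than the radius of circle 301, and, for a complete image, outer radius reaching the sensor corner) can be checked numerically; the helper names and the assumption that the radii in pixels come from the stored lens parameters are again illustrative.

```python
import numpy as np

def ring_mask(height, width, r_inner_px, r_outer_px):
    """Boolean mask of the annulus between r_inner_px and r_outer_px (region of 306)."""
    yy, xx = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    return (r2 >= r_inner_px ** 2) & (r2 <= r_outer_px ** 2)

def can_stitch_seamlessly(r_theta1, r_theta2, r_theta3, height, width):
    """theta2 edge inside the theta1 disk; theta3 edge reaching the sensor corner."""
    corner = np.hypot((height - 1) / 2.0, (width - 1) / 2.0)
    return r_theta2 < r_theta1 < r_theta3 and r_theta3 >= corner
```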
In this way, if the obtained first sub-image and second sub-image are placed concentrically, the overlapping region 305 in FIG. 10 is formed, and the region of 301 and the region of 306 are exactly in a condition that allows them to be stitched into one complete image; the overlapping part is the region of the annulus 305.
Accordingly, let the above first sub-image be S1 and the above second sub-image be S2. Then:
The specific implementation of step 35 may take the following forms:
determining an image of the intersection region S3 of S1 and S2;
determining an image of the complement region S32 of S3 within S2;
obtaining the target image from the image of S1 and the image of S32 according to a first preset stitching algorithm;
or,
determining an image of the intersection region S3 of S1 and S2;
determining an image of the complement region S31 of S3 within S1;
obtaining the target image from the image of S31 and the image of S2 according to a second preset stitching algorithm;
or,
determining an image of the intersection region S3 of S1 and S2;
determining an image of the complement region S31 of S3 within S1;
determining an image of the complement region S32 of S3 within S2;
performing enhancement processing on the image of S3, using the image of S1 and the image of S2, according to a preset enhancement algorithm, to obtain an image of S4;
obtaining the target image from the image of S31, the image of S32, and the image of S4 according to a third preset stitching algorithm.
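Putting the pieces together, a rough end-to-end usage of the third form could look as follows, reusing the hypothetical helpers `disk_mask`, `ring_mask`, and `fuse_mode3` defined in the earlier sketches; the synthetic images and pixel radii are stand-ins, and the preset stitching and enhancement algorithms themselves remain whatever existing techniques the implementer chooses.

```python
import numpy as np

# Synthetic demo wiring the earlier hypothetical helpers together (third form).
h, w = 480, 640
first_image = np.full((h, w), 120, np.uint8)    # stand-in for the first captured image
second_image = np.full((h, w), 128, np.uint8)   # stand-in for the second captured image

mask_s1 = disk_mask(h, w, 300)                  # S1: FOV [0, theta1] ∩ P (radius assumed)
mask_s2 = ring_mask(h, w, 260, 420)             # S2: FOV [theta2, theta3] ∩ Q (radii assumed)

target_image = fuse_mode3(first_image, mask_s1, second_image, mask_s2)
```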
In summary, the image quality of the region inside the circle 301 satisfies the guaranteed high definition under the super-large aperture, and the image quality of the region within the annulus 306 also satisfies the guaranteed high definition under the super-large aperture. Therefore, the image quality of the target image formed by stitching them likewise satisfies the guaranteed high definition under the super-large aperture.
It should be understood that, in order for the images obtained by the first camera and the second camera to complement each other, the main parameters of the second camera should be approximately the same as those of the first camera, including but not limited to the aperture value, the overall range of the camera's field of view, the number of lens elements, the imaging focal length, the overall size of the imaging lens, and the performance and size of the sensor. It should also be understood that it is difficult for any manufacturing method to obtain exactly the same result; some error in the actual parameters is allowed, and as long as the error range is not enough to change the substance of the technical implementation, it falls within the protection scope of the present invention.
In a specific implementation, the acquiring of the first image of the object to be photographed by the first camera and the acquiring of the second image of the object to be photographed by the second camera may be triggered by the same trigger signal, or may be triggered separately by two different trigger signals.
In a specific implementation, to ensure that the pictures captured by the first camera and the second camera can be fused better, the spacing between the first camera and the second camera is less than a preset distance, so that the pictures taken by the two cameras when photographing the same object are as similar as possible. It should be understood that, in a dual-camera scenario, the setting of the distance between the two cameras is related to the image regions that need to be obtained, and the determination of the image regions is in turn decided by the subsequent processing algorithm for those regions. In the present invention, the images obtained by the two cameras are to be stitched subsequently, so the larger the overlapping area of the images obtained by the two cameras, the better. Optionally, the spacing between the two cameras is set to less than 1.5 cm, and in some designs it may be less than or equal to 1 cm.
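As a rough geometric illustration of why a small spacing helps, the shared fraction of the scene seen by two parallel, identical cameras at a given object distance can be estimated with a simple pinhole-geometry approximation; the formula and the numbers below are assumptions for illustration only.

```python
import math

def overlap_fraction(baseline_cm, distance_cm, fov_deg):
    """Approximate shared fraction of two parallel pinhole cameras' views at one distance."""
    scene_width = 2.0 * distance_cm * math.tan(math.radians(fov_deg / 2.0))
    return max(0.0, 1.0 - baseline_cm / scene_width)

# With a 75 degree FOV and an object 50 cm away, a 1 cm baseline keeps roughly
# 99% of the framed scene common to both cameras (illustrative numbers).
print(round(overlap_fraction(1.0, 50.0, 75.0), 3))
```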
In a specific implementation, the distance from the first camera and the second camera to the object being photographed also has a certain influence on the acquired views of the pictures; for example, the closer the cameras are to the object to be photographed, the smaller the deviation between the views, and the farther the cameras are from the object to be photographed, the larger the deviation between the views.
在具体实现过程中,拍照设备还可以包含调节装置,调整第一摄像头和第二摄像头的间距,可以根据待拍摄物体的远近不同,灵活调整两个摄像头的间距,以保证对于不同远近的待拍摄物体,都能获得尽量相同的图像(如内容相似度大于90%,或两个图像相同内容的画面与单个图像画面的占比大于90%等),并且能够保证第一摄像头的第一子图像和第二摄像头的第二子图像能有重叠区域。In a specific implementation process, the photographing device may further comprise an adjusting device, adjusting the spacing between the first camera and the second camera, and flexibly adjusting the spacing of the two cameras according to different distances of the object to be photographed, so as to ensure that the camera is to be photographed for different distances. Objects can obtain the same image as possible (such as content similarity greater than 90%, or the ratio of the image of the two images to the single image is greater than 90%, etc.), and can ensure the first sub-image of the first camera And the second sub-image of the second camera can have an overlapping area.
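The patent does not specify how the "content similarity greater than 90%" would be measured. As one possible proxy, the sketch below scores two registered grayscale frames with zero-mean normalized cross-correlation and flags when the spacing may need adjustment; both the metric and the 0.9 threshold are assumptions for illustration only.

```python
import numpy as np

def content_similarity(gray1, gray2):
    """Rough content-similarity score in [0, 1] for two registered grayscale frames.

    Uses zero-mean normalized cross-correlation as a proxy; the patent does not
    prescribe a specific metric.
    """
    a = gray1.astype(np.float64) - gray1.mean()
    b = gray2.astype(np.float64) - gray2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float(np.clip((a * b).sum() / denom, 0.0, 1.0)) if denom else 0.0

def spacing_needs_adjustment(gray1, gray2, threshold=0.9):
    """Return True when the two views are less similar than the target threshold."""
    return content_similarity(gray1, gray2) < threshold
```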
In a specific implementation process, sometimes two cameras cannot obtain a clear image at the ultra-large aperture over the entire viewing angle. In the above embodiment, if the diameter of 304 is not larger than the diameter of the circumscribed circle of the square 206, the images of some local regions will still fail to meet the definition requirement of the ultra-large aperture. In this case, the photographing device may further include a third camera, the optical axis of the third camera being parallel to the optical axis of the first camera; the spacing between the third camera and the first camera is less than the preset distance, and the spacing between the third camera and the second camera is less than the preset distance. A third image of the object to be photographed captured by the third camera is acquired; according to a third preset rule, a third sub-image of the third image is acquired, the third sub-image corresponding to a field-of-view range [θ4, θ5] of the third camera, where θ2 < θ4 < θ3 < θ5, the second sub-image and the third sub-image have an overlapping image, and θ5 is less than 1/2 of the field of view of the third camera. The first sub-image, the second sub-image, and the third sub-image are combined according to a fourth preset stitching algorithm to obtain the target image.
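For the three-camera (or N-camera) variant, the field-angle ranges must chain together so that neighbouring sub-images overlap and their union reaches the required half field of view. The helper below is a small validation sketch under that reading; the degree values in the example are hypothetical.

```python
def ranges_cover(ranges, required_half_fov):
    """Check that per-camera field-angle ranges [(lo, hi), ...] chain with overlap.

    The ranges must start at 0, each subsequent range must start strictly inside
    the portion already covered (so neighbouring sub-images share an overlap band),
    and together they must reach the required half field of view.
    """
    ranges = sorted(ranges)
    if ranges[0][0] != 0:
        return False
    reach = ranges[0][1]
    for lo, hi in ranges[1:]:
        if not lo < reach:            # needs an overlap with what is covered so far
            return False
        reach = max(reach, hi)
    return reach >= required_half_fov

# Hypothetical values: theta1=20, theta2=15, theta3=30, theta4=25, theta5=40 degrees,
# with a required half field of view of 38 degrees.
print(ranges_cover([(0, 20), (15, 30), (25, 40)], 38))   # True
```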
The lower the aperture value to be achieved, the more cameras may be needed; the image processing method is similar, and applications with more lenses are not enumerated one by one in the present invention.
Due to the special design of the above cameras, the acquired first sub-image and second sub-image inherently satisfy the stitching conditions; the first preset stitching algorithm, the second preset stitching algorithm, the third preset stitching algorithm, and the fourth preset stitching algorithm mentioned in the above embodiments can all be implemented using existing technologies, and are not described in detail herein.
The present invention provides an image processing method applied to a photographing device including a first camera and a second camera. The optical axes of the first camera and the second camera are parallel to each other, the spacing between them is less than a preset distance, and their aperture values are both less than 1.6. The method includes: acquiring a first image of the object to be photographed captured by the first camera; acquiring a second image of the object to be photographed captured by the second camera; acquiring, according to a first preset rule, a first sub-image of the first image, the first sub-image corresponding to a field-of-view range [0, θ1] of the first camera; acquiring, according to a second preset rule, a second sub-image of the second image, the second sub-image corresponding to a field-of-view range [θ2, θ3] of the second camera, where θ2 < θ1, and the first sub-image and the second sub-image have an overlapping image; and obtaining the target image from the first sub-image and the second sub-image according to a preset stitching algorithm. With this method, a high-definition image satisfying an ultra-large aperture can be obtained under existing manufacturing processes, simplifying the physical design and manufacturing process of the camera and saving design and manufacturing costs.
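A minimal end-to-end sketch of this two-camera fusion is given below. It assumes that the two images are already registered to the same pixel grid, that the per-pixel field angle can be approximated as growing linearly with the distance from the image centre (a simplification of the real lens mapping), and that a linear feather across the overlap band [θ2, θ1] is an acceptable stand-in for the unspecified preset stitching algorithm.

```python
import numpy as np

def field_angle_map(shape, half_fov_deg):
    """Approximate per-pixel field angle for an (H, W) image, assuming the angle
    grows linearly with the distance from the image centre."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    return r / r.max() * half_fov_deg

def fuse_dual_camera(img1, img2, half_fov_deg, theta1, theta2):
    """Blend the sharp centre of img1 (angles <= theta1) with the sharp outer zone
    of img2 (angles >= theta2), feathering linearly across [theta2, theta1].

    img1, img2 : registered float arrays of shape (H, W, 3).
    """
    angles = field_angle_map(img1.shape[:2], half_fov_deg)
    w1 = np.clip((theta1 - angles) / max(theta1 - theta2, 1e-6), 0.0, 1.0)
    return w1[..., None] * img1 + (1.0 - w1[..., None]) * img2
```

With θ2 < θ1, the feather weight is 1 inside the central zone (taken entirely from the first camera), 0 beyond θ1 (taken from the second camera), and blends linearly in between.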
Based on the image processing method provided by the foregoing embodiments, an embodiment of the present invention provides an image processing apparatus 700. The apparatus 700 is applied to a photographing device including a first camera and a second camera; the optical axes of the first camera and the second camera are parallel to each other, the spacing between the first camera and the second camera is less than a preset distance, and the aperture values of the first camera and the second camera are both less than 1.6. As shown in FIG. 11, the apparatus 700 includes a first acquiring module 701, a second acquiring module 702, a third acquiring module 703, a fourth acquiring module 704, and an image stitching module 705, where:
The first acquiring module 701 is configured to acquire a first image of the object to be photographed captured by the first camera. The first acquiring module 701 may be implemented by the processor invoking the first camera to acquire an image.
The second acquiring module 702 is configured to acquire a second image of the object to be photographed captured by the second camera. The second acquiring module 702 may be implemented by the processor invoking the second camera to acquire an image.
The third acquiring module 703 is configured to acquire, according to the first preset rule, a first sub-image of the first image, the first sub-image corresponding to a field-of-view range [0, θ1] of the first camera. The third acquiring module 703 may be implemented by a processor, which performs the corresponding calculation by invoking data and algorithms in a local memory or a cloud server to obtain the first sub-image from the first image.
The fourth acquiring module 704 is configured to acquire, according to the second preset rule, a second sub-image of the second image, the second sub-image corresponding to a field-of-view range [θ2, θ3] of the second camera, where θ2 < θ1. The fourth acquiring module 704 may be implemented by a processor, which performs the corresponding calculation by invoking data and algorithms in a local memory or a cloud server to obtain the second sub-image from the second image.
The image stitching module 705 is configured to obtain the target image from the first sub-image and the second sub-image according to a preset stitching algorithm. The image stitching module 705 may be implemented by a processor, which performs the corresponding calculation by invoking data and a stitching/fusion algorithm in a local memory or a cloud server to stitch the first sub-image and the second sub-image into a complete target image, which still has high definition at an ultra-large aperture.
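Purely as an illustration of how the module split 701-705 could map onto software, the sketch below wraps the steps in one class; the `capture()` method on the camera objects and the precomputed sharp-region masks are hypothetical stand-ins, not an API defined by the patent.

```python
import numpy as np

class ImageProcessingApparatus:
    """Software sketch mirroring modules 701-705; camera1/camera2 are assumed to
    expose a capture() method returning registered float (H, W, 3) images."""

    def __init__(self, camera1, camera2, mask1, mask2):
        self.camera1, self.camera2 = camera1, camera2
        self.mask1, self.mask2 = mask1, mask2       # precomputed sharp-region masks

    def acquire_first_image(self):                  # module 701
        return self.camera1.capture()

    def acquire_second_image(self):                 # module 702
        return self.camera2.capture()

    def first_sub_image(self, img1):                # module 703
        return np.where(self.mask1[..., None], img1, 0.0)

    def second_sub_image(self, img2):               # module 704
        return np.where(self.mask2[..., None], img2, 0.0)

    def stitch(self, sub1, sub2):                   # module 705
        overlap = self.mask1 & self.mask2
        out = np.where(self.mask1[..., None], sub1, sub2)
        out[overlap] = 0.5 * (sub1[overlap] + sub2[overlap])   # average the overlap
        return out

    def run(self):
        img1, img2 = self.acquire_first_image(), self.acquire_second_image()
        return self.stitch(self.first_sub_image(img1), self.second_sub_image(img2))
```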
In a specific implementation process, the first acquiring module 701 is specifically configured to perform the method mentioned in step 31 and methods that can be equivalently substituted for it; the second acquiring module 702 is specifically configured to perform the method mentioned in step 32 and methods that can be equivalently substituted for it; the third acquiring module 703 is specifically configured to perform the method mentioned in step 33 and methods that can be equivalently substituted for it; the fourth acquiring module 704 is specifically configured to perform the method mentioned in step 34 and methods that can be equivalently substituted for it; and the image stitching module 705 is specifically configured to perform the method mentioned in step 35 and methods that can be equivalently substituted for it. The foregoing specific method embodiments, together with the explanations and descriptions given in those embodiments, also apply to the execution of the methods in the apparatus.
In addition, in a specific implementation process, the photographing device may further include a third camera, the optical axis of the third camera being parallel to the optical axis of the first camera; the spacing between the third camera and the first camera is less than the preset distance, and the spacing between the third camera and the second camera is less than the preset distance. The apparatus then further includes: a fifth acquiring module 706 (not shown in the figure), configured to acquire a third image of the object to be photographed captured by the third camera; and a sixth acquiring module 707 (not shown in the figure), configured to acquire, according to a third preset rule, a third sub-image of the third image, the third sub-image corresponding to a field-of-view range [θ4, θ5] of the third camera, where θ2 < θ4 < θ3 < θ5, the second sub-image and the third sub-image have an overlapping image, and θ5 is less than 1/2 of the field of view of the third camera. The image stitching module 705 is then specifically configured to obtain the target image from the first sub-image, the second sub-image, and the third sub-image according to a fourth preset stitching algorithm.
The present invention provides an image processing apparatus applied to a photographing device including a first camera and a second camera. The optical axes of the first camera and the second camera are parallel to each other, the spacing between them is less than a preset distance, and their aperture values are both less than 1.6. The apparatus acquires a first image of the object to be photographed captured by the first camera; acquires a second image of the object to be photographed captured by the second camera; acquires, according to a first preset rule, a first sub-image of the first image, the first sub-image corresponding to a field-of-view range [0, θ1] of the first camera; acquires, according to a second preset rule, a second sub-image of the second image, the second sub-image corresponding to a field-of-view range [θ2, θ3] of the second camera, where θ2 < θ1, and the first sub-image and the second sub-image have an overlapping image; and obtains the target image from the first sub-image and the second sub-image according to a preset stitching algorithm. With this apparatus, a high-definition image satisfying an ultra-large aperture can be obtained under existing manufacturing processes, simplifying the physical design and manufacturing process of the camera and saving design and manufacturing costs.
It should be understood that the division of the modules in the above apparatus 700 is merely a division of logical functions. In actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separated. For example, each of the above modules may be a separately established processing element, or may be integrated in a chip of the terminal; alternatively, they may be stored in a storage element of the controller in the form of program code, and a processing element of the processor invokes and executes the functions of the above modules. Moreover, the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method or the above modules may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software. The processing element may be a general-purpose processor, for example a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although some embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the listed embodiments as well as all changes and modifications falling within the scope of the present invention. Obviously, those skilled in the art can make various changes and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these changes and variations.

Claims (18)

  1. An image processing method, wherein the method is applied to a photographing device comprising a first camera and a second camera, optical axes of the first camera and the second camera are parallel to each other, and a spacing between the first camera and the second camera is less than a preset distance; aperture values of the first camera and the second camera are both less than 1.6, and the number of lens elements of each of the first camera and the second camera is not greater than 6; the method comprises:
    acquiring a first image of an object to be photographed captured by the first camera;
    acquiring a second image of the object to be photographed captured by the second camera;
    acquiring a first sub-image of the first image, wherein a definition of the first sub-image meets a preset definition standard;
    acquiring a second sub-image of the second image, wherein a definition of the second sub-image meets the preset definition standard, the first sub-image and the second sub-image have an image intersection, and a union of the images of the first sub-image and the second sub-image is capable of expressing the object to be photographed;
    performing fusion processing on the first sub-image and the second sub-image to obtain a target image.
  2. The method according to claim 1, wherein the acquiring a first sub-image of the first image comprises:
    acquiring a first physical design parameter of the first camera, wherein the first physical design parameter expresses that, in any image captured by the first camera, the definition of the image of a first region is higher than the definition of the image of a second region and meets the preset definition standard, and the second region is the complement of the first region in any image captured by the first camera;
    acquiring the first region of the first image according to the first physical design parameter;
    acquiring an image receiving region P of an image sensor in the first camera;
    determining the image of an intersection region S1 of the first region of the first image and P as the first sub-image.
  3. The method according to claim 1 or 2, wherein the acquiring a second sub-image of the second image comprises:
    acquiring a second physical design parameter of the second camera, wherein the second physical design parameter expresses that, in any image captured by the second camera, the definition of the image of a third region is higher than the definition of the image of a fourth region and meets the preset definition standard, and the fourth region is the complement of the third region in any image captured by the second camera;
    acquiring the third region of the second image according to the second physical design parameter;
    acquiring an image receiving region Q of an image sensor in the second camera;
    determining the image of an intersection region S2 of the third region of the second image and Q as the second sub-image.
  4. The method according to claim 2 or 3, wherein the first physical design parameter comprises:
    for an image captured by the first camera within a field-of-view range of [0, θ1], a modulation transfer function (MTF) value corresponding to a preset spatial frequency is greater than a first preset threshold, and for an image captured by the first camera within other field-of-view ranges, the MTF value corresponding to the preset spatial frequency is not greater than the first preset threshold; wherein θ1 is less than 1/2 of the field of view of the first camera.
  5. The method according to claim 4, wherein the second physical design parameter comprises:
    for an image captured by the second camera within a field-of-view range of [θ2, θ3], the modulation transfer function (MTF) value corresponding to the preset spatial frequency is greater than a second preset threshold, and for an image captured by the second camera within other field-of-view ranges, the MTF value corresponding to the preset spatial frequency is not greater than the second preset threshold; wherein θ3 is less than 1/2 of the field of view of the second camera, and 0 < θ2 < θ1 < θ3.
  6. The method according to any one of claims 3 to 5, wherein the performing fusion processing on the first sub-image and the second sub-image to obtain a target image comprises:
    determining the image of an intersection region S3 of S1 and S2;
    determining the image of a complement region S32 of S3 in S2;
    performing fusion processing on the image of S1 and the image of S32 to obtain the target image.
  7. The method according to any one of claims 3 to 5, wherein the performing fusion processing on the first sub-image and the second sub-image to obtain a target image comprises:
    determining the image of an intersection region S3 of S1 and S2;
    determining the image of a complement region S31 of S3 in S1;
    performing fusion processing on the image of S31 and the image of S2 to obtain the target image.
  8. The method according to any one of claims 3 to 5, wherein the performing fusion processing on the first sub-image and the second sub-image to obtain a target image comprises:
    determining the image of an intersection region S3 of S1 and S2;
    determining the image of a complement region S31 of S3 in S1;
    determining the image of a complement region S32 of S3 in S2;
    performing enhancement processing on S3 according to S1 and S2 to obtain an image of S4;
    performing fusion processing on the image of S31, the image of S32, and the image of S4 to obtain the target image.
  9. The method according to any one of claims 1 to 8, wherein the first camera and the second camera have equal aperture values, or have the same focal length, or have the same field of view.
  10. The method according to any one of claims 1 to 9, wherein the photographing device further comprises an adjusting apparatus, and the method further comprises:
    controlling the adjusting apparatus to adjust the spacing between the first camera and the second camera.
  11. An image processing apparatus, wherein the apparatus is applied to a photographing device comprising a first camera and a second camera, optical axes of the first camera and the second camera are parallel to each other, and a spacing between the first camera and the second camera is less than a preset distance; aperture values of the first camera and the second camera are both less than 1.6, and the number of lens elements of each of the first camera and the second camera is not greater than 6; the apparatus comprises:
    a first acquiring module, configured to acquire a first image of an object to be photographed captured by the first camera;
    a second acquiring module, configured to acquire a second image of the object to be photographed captured by the second camera;
    a third acquiring module, configured to acquire a first sub-image of the first image, wherein a definition of the first sub-image meets a preset definition standard;
    a fourth acquiring module, configured to acquire a second sub-image of the second image, wherein a definition of the second sub-image meets the preset definition standard, the first sub-image and the second sub-image have an image intersection, and a union of the images of the first sub-image and the second sub-image is capable of expressing the object to be photographed;
    an image stitching module, configured to perform fusion processing on the first sub-image and the second sub-image to obtain a target image.
  12. The apparatus according to claim 11, wherein the third acquiring module is specifically configured to:
    acquire a first physical design parameter of the first camera, wherein the first physical design parameter expresses that, in any image captured by the first camera, the definition of the image of a first region is higher than the definition of the image of a second region and meets the preset definition standard, and the second region is the complement of the first region in any image captured by the first camera;
    acquire the first region of the first image according to the first physical design parameter;
    acquire an image receiving region P of an image sensor in the first camera;
    determine the image of an intersection region S1 of the first region of the first image and P as the first sub-image.
  13. The apparatus according to claim 11 or 12, wherein the fourth acquiring module is specifically configured to:
    acquire a second physical design parameter of the second camera, wherein the second physical design parameter expresses that, in any image captured by the second camera, the definition of the image of a third region is higher than the definition of the image of a fourth region and meets the preset definition standard, and the fourth region is the complement of the third region in any image captured by the second camera;
    acquire the third region of the second image according to the second physical design parameter;
    acquire an image receiving region Q of an image sensor in the second camera;
    determine the image of an intersection region S2 of the third region of the second image and Q as the second sub-image.
  14. The apparatus according to claim 13, wherein
    the first physical design parameter comprises:
    for an image captured by the first camera within a field-of-view range of [0, θ1], a modulation transfer function (MTF) value corresponding to a preset spatial frequency is greater than a first preset threshold, and for an image captured by the first camera within other field-of-view ranges, the MTF value corresponding to the preset spatial frequency is not greater than the first preset threshold, wherein θ1 is less than 1/2 of the field of view of the first camera; and
    the second physical design parameter comprises:
    for an image captured by the second camera within a field-of-view range of [θ2, θ3], the modulation transfer function (MTF) value corresponding to the preset spatial frequency is greater than a second preset threshold, and for an image captured by the second camera within other field-of-view ranges, the MTF value corresponding to the preset spatial frequency is not greater than the second preset threshold, wherein θ3 is less than 1/2 of the field of view of the second camera, and 0 < θ2 < θ1 < θ3.
  15. The apparatus according to claim 13 or 14, wherein the image stitching module is specifically configured to implement any one of the following three manners:
    Manner 1: determining the image of an intersection region S3 of S1 and S2;
    determining the image of a complement region S32 of S3 in S2;
    performing fusion processing on the image of S1 and the image of S32 to obtain the target image; or
    Manner 2: determining the image of an intersection region S3 of S1 and S2;
    determining the image of a complement region S31 of S3 in S1;
    performing fusion processing on the image of S31 and the image of S2 to obtain the target image; or
    Manner 3: determining the image of an intersection region S3 of S1 and S2;
    determining the image of a complement region S31 of S3 in S1;
    determining the image of a complement region S32 of S3 in S2;
    performing enhancement processing on S3 according to S1 and S2 and a preset enhancement algorithm to obtain an image of S4;
    performing fusion processing on the image of S31, the image of S32, and the image of S4 to obtain the target image.
  16. The apparatus according to any one of claims 11 to 15, wherein the apparatus further comprises an adjusting module, configured to adjust the spacing between the first camera and the second camera.
  17. A terminal device, wherein the terminal device comprises a first camera, a second camera, a memory, a processor, and a bus; the first camera, the second camera, the memory, and the processor are connected by the bus; wherein
    the optical axes of the first camera and the second camera are parallel to each other, the spacing between the first camera and the second camera is less than a preset distance, the aperture values of the first camera and the second camera are both less than 1.6, and the number of lens elements of each of the first camera and the second camera is not greater than 6;
    the cameras are configured to acquire image signals under the control of the processor;
    the memory is configured to store computer programs and instructions;
    the processor is configured to invoke the computer programs and instructions stored in the memory to perform the method according to any one of claims 1 to 10.
  18. The terminal device according to claim 17, wherein the terminal device further comprises an antenna system; the antenna system, under the control of the processor, transmits and receives wireless communication signals to implement wireless communication with a mobile communication network; and the mobile communication network comprises one or more of the following: a GSM network, a CDMA network, a 3G network, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TDSCDMA, WIFI, and an LTE network.
PCT/CN2018/084518 2017-06-23 2018-04-25 Image processing method and apparatus, and device WO2018233373A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18820572.8A EP3629569A4 (en) 2017-06-23 2018-04-25 Image processing method and apparatus, and device
US16/723,554 US11095812B2 (en) 2017-06-23 2019-12-20 Image processing method, apparatus, and device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710488848.7 2017-06-23
CN201710488848.7A CN107295256A (en) 2017-06-23 2017-06-23 A kind of image processing method, device and equipment
CN201711243255.0A CN109120818B (en) 2017-06-23 2017-11-30 Image processing method, device and equipment
CN201711243255.0 2017-11-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/723,554 Continuation US11095812B2 (en) 2017-06-23 2019-12-20 Image processing method, apparatus, and device

Publications (1)

Publication Number Publication Date
WO2018233373A1 true WO2018233373A1 (en) 2018-12-27

Family

ID=64736177

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084518 WO2018233373A1 (en) 2017-06-23 2018-04-25 Image processing method and apparatus, and device

Country Status (1)

Country Link
WO (1) WO2018233373A1 (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183175A (en) * 2006-11-13 2008-05-21 华晶科技股份有限公司 Optical aberration correcting system and method of digital cameras
US8922625B2 (en) * 2009-11-19 2014-12-30 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN105120145A (en) * 2015-07-31 2015-12-02 努比亚技术有限公司 Electronic equipment and image processing method
CN105262951A (en) * 2015-10-22 2016-01-20 努比亚技术有限公司 Mobile terminal having binocular camera and photographing method
CN105338244A (en) * 2015-10-30 2016-02-17 努比亚技术有限公司 Information processing method and mobile terminal
CN106161980A (en) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 Photographic method and system based on dual camera
CN106570110A (en) * 2016-10-25 2017-04-19 北京小米移动软件有限公司 De-overlapping processing method and apparatus of image
CN107295256A (en) * 2017-06-23 2017-10-24 华为技术有限公司 A kind of image processing method, device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3629569A4 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132879A (en) * 2019-06-25 2020-12-25 北京沃东天骏信息技术有限公司 Image processing method, device and storage medium
CN112132879B (en) * 2019-06-25 2024-03-08 北京沃东天骏信息技术有限公司 Image processing method, device and storage medium
CN112053403A (en) * 2020-04-27 2020-12-08 北京迈格威科技有限公司 Method and device for determining included angle of optical axes of double cameras
CN112233185A (en) * 2020-09-24 2021-01-15 浙江大华技术股份有限公司 Camera calibration method, image registration method, camera device and storage device
CN112233185B (en) * 2020-09-24 2024-06-11 浙江大华技术股份有限公司 Camera calibration method, image registration method, image pickup device and storage device
CN114845052A (en) * 2022-04-22 2022-08-02 杭州海康威视数字技术股份有限公司 Shooting parameter adjusting method and device, camera and target equipment
CN114845052B (en) * 2022-04-22 2024-03-12 杭州海康威视数字技术股份有限公司 Shooting parameter adjustment method and device, camera and target equipment
CN114820314A (en) * 2022-04-27 2022-07-29 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and electronic device
CN115022510A (en) * 2022-05-30 2022-09-06 艾酷软件技术(上海)有限公司 Camera assembly, electronic equipment, shooting method of electronic equipment and shooting device

Similar Documents

Publication Publication Date Title
US11095812B2 (en) Image processing method, apparatus, and device
KR102310430B1 (en) Filming method, apparatus and device
WO2018233373A1 (en) Image processing method and apparatus, and device
CN109891874B (en) Panoramic shooting method and device
CN113472976B (en) Microspur imaging method and terminal
WO2023016127A1 (en) Phase plate, camera module, and mobile terminal
EP4030744A1 (en) Camera module and terminal device
WO2020014881A1 (en) Image correction method and terminal
WO2021218551A1 (en) Photographing method and apparatus, terminal device, and storage medium
US20230275940A1 (en) Electronic device with automatic selection of image capturing devices for video communication
CN110602381B (en) Depth of field detection method and device, storage medium and terminal
US11611693B1 (en) Updating lens focus calibration values
US11363187B2 (en) Focusing method and apparatus applied to terminal device, and terminal device
KR102689351B1 (en) Focusing method and apparatus applied to terminal devices and terminal devices
WO2019072222A1 (en) Image processing method and device and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18820572

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018820572

Country of ref document: EP

Effective date: 20191227