WO2021031096A1 - Multi-shot image capture without image stabilization - Google Patents

Multi-shot image capture without image stabilization

Info

Publication number
WO2021031096A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sensels
handheld device
movement
image capture
Prior art date
Application number
PCT/CN2019/101473
Other languages
English (en)
Inventor
Yamamoto Takashi
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2019/101473 (WO2021031096A1)
Priority to CN201980099166.4A (CN114208153B)
Publication of WO2021031096A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 - Motion detection
    • H04N23/6812 - Motion detection based on additional sensors, e.g. acceleration sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 - Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 - Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 - Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/48 - Increasing resolution by shifting the sensor relative to the scene

Definitions

  • the present disclosure relates to obtaining high resolution images using hardware that takes advantage of naturally existing camera shake and/or human user movement provided in response to indications during capture of a series of images by an image sensor.
  • this disclosure relates to image sensors using a Bayer arrayed colour filter in a handheld device, such as a smartphone.
  • CMOS image sensor using a monochrome image sensor and a colour filter array, CFA, such as a Bayer arrayed colour filter
  • one pixel does not receive all the colour information of the image. This is because of the layout of the Bayer array in which the frequency of green filtered pixels is twice that of red filtered or blue filtered pixels.
  • a common technique is to obtain colour information per pixel by performing interpolation between 4 pixels (Red, Green, Green, and Blue).
  • since red and blue have a smaller number of pixels, they can only carry information up to a region of lower spatial frequency compared with green.
  • a pixel which has the green filter of the Bayer filter array applied to it may be referred to as a green-filtered sensel, and the others can be deduced by analogy.
  • Point W acquires colour information from the neighbouring G, B, R and G sensels. By this the full colour information at point W can be approximated.
  • the full-colour information of point X can be calculated in the same way as for point W.
  • the point X uses the same Blue information as the point W.
  • the point Z shares its Red information with the point X.
  • the point Y uses Red information common to the points X and Z. Therefore, Red and Blue in principle have only half of Green's frequency information with respect to one direction.
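  • Purely as an illustration (ours, not part of the original disclosure), the interpolation just described can be sketched in Python; the RGGB layout and the simple 3x3 averaging are assumptions made for the sketch, and edges wrap for brevity.

      import numpy as np

      def bilinear_demosaic(raw):
          """Approximate full colour per pixel from one Bayer frame.

          Assumes R at even rows/cols, B at odd rows/cols, G elsewhere.
          Missing colours are averaged over the 3x3 neighbourhood, which
          is why red and blue carry less spatial detail than green.
          """
          h, w = raw.shape
          r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
          b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
          g_mask = ~(r_mask | b_mask)

          def window_sum(a):  # sum over the 3x3 neighbourhood (wrapping)
              return sum(np.roll(np.roll(a, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))

          rgb = np.zeros((h, w, 3))
          for ch, mask in enumerate((r_mask, g_mask, b_mask)):
              known = np.where(mask, raw, 0.0)
              count = window_sum(mask.astype(float))
              rgb[..., ch] = window_sum(known) / np.maximum(count, 1)
              rgb[mask, ch] = raw[mask]  # keep directly measured samples
          return rgb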
  • Pixel Shift Multi Shot is used. This is a method of moving an image sensor by 1 pixel in the vertical and horizontal directions by using the mechanism of image stabilization, shooting a plurality of pictures and combining all of these pictures to give all color information in 1 pixel area. This makes it possible to obtain a higher resolution image. An illustration of this movement is shown in Figure 2.
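  • To make the combining step concrete, here is a minimal sketch (ours, not the patent's code) of merging four frames taken at one-pixel sensor shifts so that every pixel site ends up with directly measured R, G, G and B values; the shift order and the RGGB layout are assumptions chosen to match Figure 2.

      import numpy as np

      def combine_pixel_shift(frames):
          """Merge four registered Bayer frames captured at sensor shifts
          of (0,0), (0,1), (1,1) and (1,0) pixels (assumed loop order)."""
          shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]
          h, w = frames[0].shape
          yy, xx = np.mgrid[0:h, 0:w]
          rgb = np.zeros((h, w, 3))
          g_sum, g_n = np.zeros((h, w)), np.zeros((h, w))
          for frame, (dy, dx) in zip(frames, shifts):
              # With the sensor shifted by (dy, dx), scene point (y, x)
              # is sampled through the filter of sensel (y+dy, x+dx).
              fy, fx = (yy + dy) % 2, (xx + dx) % 2
              rgb[..., 0] = np.where((fy == 0) & (fx == 0), frame, rgb[..., 0])  # red
              rgb[..., 2] = np.where((fy == 1) & (fx == 1), frame, rgb[..., 2])  # blue
              g = fy != fx                                                       # green
              g_sum += np.where(g, frame, 0.0)
              g_n += g
          rgb[..., 1] = g_sum / np.maximum(g_n, 1)  # average of the two greens
          return rgb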
  • the pixel pitch (in Figure 2, this is the centre-to-centre distance between e.g. the pixel covered by G in the top row and the pixel B immediately below the top row) of a DSLR is about 4 micrometres even for a high resolution model, and usually about 6 micrometres.
  • a smartphone sensor has a pixel pitch of about 1 micrometre, which is very small compared with a DSLR. It is difficult to accurately control movement over a distance of 1 micrometre at high speed.
  • Embodiments are provided to implement aspects, or parts thereof, which facilitate the realization of any aspect.
  • high resolution images are obtainable by combining data of separate images captured at deviations which correspond to the separation of adjacent sensels in a Bayer-arrayed filtered image sensor in a handheld device. In an embodiment, this is facilitated by determining which deviations match a separation of the sensels.
  • the deviations arise from movement provided by the natural camera shake movement of a human user.
  • the movement is provided in response to indications provided to the human user.
  • Said indications encourage the human user to move the handheld device, e.g. in a certain direction or towards a target whilst images are being captured in a capture process.
  • the indications may be obtained by calculation, such that according to the calculation, movement of the handheld device should result in the handheld device following a spatial path which moves the sensels through deviations which match the sensel separation/position.
  • the sensels of the image sensor undergo relative movement with respect to image forming rays, such that an image forming ray from the same object is made to fall on several different adjacent sensels.
  • the indication thus reflects a computed estimate of a preferred way to move the device.
  • the indication may be updated during the capture process, e.g. in real time, as there is likely to be a discrepancy between the indicated movement and the actual movement provided by the human user.
  • the image sensor will obtain data from a different sensel relative to other images. This enables an increase in image resolution compared to using one image by itself, since the separate images can be combined together with a reference image to form a combined image.
  • the data for any one resulting pixel is a combination of sensel data from each of the separate images. Preferably there is a defined relationship between any one resulting pixel and the sensel data.
  • the sensel data from the separate images preferably comprises data from four filtered sensels: data from a red-filtered sensel in one image, data from a blue-filtered sensel in another image, data from a green-filtered sensel in yet another image, and data from a green-filtered sensel in yet another.
  • these sensels are adjacent. It is possible to use fewer or more than these four sensels.
  • a greater amount of colour information is captured. Further, resolution is increased and interpolation can be avoided, since there are multiple sensels each individually and separately contributing to any one resulting pixel.
  • the data from any one sensel in a captured image does not need to be (but can, if desired) shared.
  • an image capture method for a handheld device comprising an image sensor and a colour filter array, the method comprising
  • the image capture method may be performed by a processor of the handheld device.
  • the handheld device may be a device suitable to be supported by the human body for positioning and maintaining substantially in position for taking a plurality of photos during a duration of time t.
  • the handheld device is suitable to be supported by one or two human hands.
  • the skilled person will thus be aware of suitable sizes and masses of the handheld device that meet these requirements.
  • the handheld device is a terminal such as a mobile phone, e.g. a smartphone with a screen. It will be understood that the method may be applicable to other devices which are subject to external movement e.g. vibrations and/or can be controlled to move, yet due to various constraints such as size cannot use image stabilization mechanisms.
  • the colour filter array may be a Bayer filtered array, or any colour filtered array.
  • the image sensor may be any suitable type e.g. a CMOS image sensor.
  • the combination of the colour filter array and monochrome image capture by the image sensor results in a lower resolution compared to the resolution of the raw captured data, since for any resulting full colour pixel from one captured image, the data of that resulting pixel has to be obtained from the surrounding four (red, green, green and blue) sensels by interpolation.
  • the pixel pitch of the image sensor may be approximately 1 micrometre. This may be the magnitude of the distance between adjacent sensels arrayed in a horizontal and vertical array. Other pixel pitches or layouts of sensels may be used. The pixel pitch thus corresponds to the separation of adjacent sensels. Similarly, the sensels are in the array at certain positions, and between any two sensels there is a relative displacement, usually specified in terms of horizontal and vertical distance. Sensels may be considered adjacent when the relative displacement between their positions is a minimum.
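  • To make this position bookkeeping concrete, a small sketch (ours) using (row, column) sensel coordinates in pixel-pitch units:

      def displacement(p, q):
          """Relative displacement between two sensel positions."""
          return (q[0] - p[0], q[1] - p[1])

      def adjacent(p, q):
          """Adjacent means the displacement is minimal: one step along
          row and/or column, covering the eight surrounding sensels."""
          dy, dx = displacement(p, q)
          return (dy, dx) != (0, 0) and max(abs(dy), abs(dx)) == 1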
  • the plurality of images may be obtained by the image sensor or from memory.
  • the obtaining of images may be an ongoing process performed in real-time and hence does not necessarily terminate before the determining step.
  • the determining and obtaining may be performed consecutively or concurrently.
  • the plurality of images may be captured during a time period t.
  • during time period t many images are captured by the image sensor, each at a different time point tx, e.g. time point t1, time point t2, etc.
  • a first image may be captured at time point t1, a second image at time point t2 and so on.
  • Time point t1 may be earlier or later than time point t2.
  • Time point t1 may correspond to a time point when or shortly after the capturing of the images is initiated.
  • the first image may thus be a reference image, although other images may also be used as a reference image against which the deviation is computed.
  • the time period t can be variable in duration or predetermined.
  • the capturing of the images and start of time period t may be initiated by a command, for example pressing a “shoot” button.
  • the time period t may terminate after a reasonable period of time for a human user taking photographs, or when the method has obtained sufficient matching images.
  • a practical upper limit for completing the method may be 2-4 seconds.
  • during all or a portion of the time period t, the handheld device is undergoing movement. Consequently, as no relative movement is deployed between the image sensor and the handheld device, both undergo movement together.
  • a deviation may correspond to a first (e.g. reference) image and a second image by being the deviation between those images.
  • the deviation may be the estimated in-plane deviation of an image or the image sensor, as obtained in various ways.
  • a deviation may be determined to match when a value of the deviation is approximately equal, or more preferably equal to, a value of the relative displacement between positions.
  • a deviation may be determined to match when the magnitude of the deviation is approximately equal, or more preferably equal to, the pixel pitch.
  • a deviation may be determined to match when a horizontal displacement and a vertical displacement corresponding to the deviation, such as of the handheld device, are approximately equal, or more preferably equal, to the pixel pitch.
  • the deviation may match when the horizontal displacement and the vertical displacement are approximately equal, or more preferably equal, to the horizontal and vertical displacements between positions of two sensels.
  • a suitable cut-off for being approximately equal may be when the deviation is within 30% of the pixel pitch, or of the relative displacement (total or separate) between positions. Other methods of determining a match may be used, e.g. depending on how the deviation is calculated. A sketch of one possible match test follows below.
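  • For illustration only (not the patent's code), such a match test might look like the following sketch; the 30% cut-off and the roughly 1 micrometre pitch come from the description above, everything else is our assumption.

      PIXEL_PITCH_UM = 1.0  # approximate smartphone pixel pitch (see above)
      TOLERANCE = 0.30      # "approximately equal" cut-off from the text

      def matches(deviation_xy, displacement_xy,
                  pitch=PIXEL_PITCH_UM, tol=TOLERANCE):
          """True if an (x, y) deviation matches the relative displacement
          between two sensel positions, component by component."""
          dx, dy = deviation_xy
          tx, ty = displacement_xy
          return abs(dx - tx) <= tol * pitch and abs(dy - ty) <= tol * pitch

      # Example: a 0.9 um shake against a 1-pixel horizontal displacement.
      print(matches((0.9, 0.1), (1.0, 0.0)))  # True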
  • Determining a match may further involve determining a specific time point as a matching time point or a specific image as a matching image. These are alternative ways of identifying image data to be used subsequently.
  • the data of images which are matching images may be combined as disclosed herein. Data of images which do not match may be discarded.
  • An identifier may be output for each specific time point or each specific image.
  • the method may be performed so that, or until, a match is determined between two adjacent sensels, or between three adjacent sensels, or between four adjacent sensels, or more. Consequently there will be one, two or three matching images correspondingly, and a reference image.
  • the method may terminate when the deviation passes through all three sensel positions that are adjacent to a central reference sensel.
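  • A minimal sketch of such a termination condition (ours, not the patent's), reusing the matches() helper above; capture_frame() and read_deviation() are hypothetical device functions, and the time limit reflects the 2-4 second practical bound mentioned earlier.

      import time

      # Offsets (in pixel-pitch units) of the three sensels adjacent to a
      # green reference sensel in one RGGB quad (one grouping of Figure 5).
      TARGET_OFFSETS = {"R": (1.0, 0.0), "G": (1.0, 1.0), "B": (0.0, 1.0)}

      def capture_until_matched(capture_frame, read_deviation, timeout_s=3.0):
          """Capture frames until every adjacent-sensel offset has a
          matching image, or a practical time limit expires."""
          reference = capture_frame()   # zero-deviation reference image
          matched = {}                  # colour -> matching frame
          start = time.monotonic()
          while len(matched) < len(TARGET_OFFSETS):
              if time.monotonic() - start > timeout_s:
                  break                 # give up; partial matches still help
              frame = capture_frame()
              deviation = read_deviation()  # (x, y) in pixel-pitch units
              for colour, offset in TARGET_OFFSETS.items():
                  if colour not in matched and matches(deviation, offset, pitch=1.0):
                      matched[colour] = frame
          return reference, matched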
  • the movement of the handheld device arises due to a force applied externally to the handheld device.
  • the force is applied to the handheld device and thence to the image sensor due to the lack of relative motion therebetween. This may be because of a rigid connection between the image sensor and the handheld device, or the disapplication of any image stabilization mechanism.
  • the direction and magnitude of the force may vary with time during the image capture process, and thus will result in the handheld device undergoing changing directions, distances and velocities.
  • the movement of the handheld device is a handheld shake vibration.
  • the handheld shake vibration may be the well-known camera shake vibration provided by a human user. This may be that arising from pressing a photo shoot button and/or the small vibrations that typically arise whilst the human body holds an object.
  • the image capture method further comprises obtaining an aiming indication, a first function of which is to indicate a target position or a direction of movement of the handheld device, such that movement of the handheld device in the indicated direction of movement or towards the target position should result in a deviation which matches the relative displacement between positions of the two or more sensels.
  • the indicated target position or direction of movement is calculated. Thus it is a prediction that moving the handheld device in response to the aiming indication should result in a desired deviation. However, since the response is provided by a human user, the match may not occur, as the actual movement may deviate from the indication.
  • the image capture method further comprises determining whether a movement of the handheld device in response to the aiming indication results in a deviation which matches the relative displacement between positions of the two or more sensels. Thus the method may be iterated.
  • the image capture method further comprises
  • the deviation(s) may be measured or determined by various methods.
  • the deviation may be computed or estimated as the in-plane horizontal and vertical deviation of the image sensor.
  • Gyroscope data and image frame data may include a "time stamp". Gyroscope and image data can be correctly associated by using the time stamp even if image processing takes a long time. An illustrative sketch follows below.
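  • As an illustration of the time-stamp association (our sketch; the conversion from gyroscope angle to in-plane deviation via the focal length is a common small-angle approximation, not a detail given here):

      import numpy as np

      def deviation_from_gyro(frame_ts, gyro_ts, gyro_rate_xy, focal_len_px):
          """Estimate the in-plane (x, y) deviation of a frame, in pixels.

          frame_ts     -- time stamp of the frame (seconds)
          gyro_ts      -- 1D array of gyroscope sample time stamps
          gyro_rate_xy -- (N, 2) angular rates (rad/s) about the in-plane axes
          focal_len_px -- focal length expressed in pixels

          Integrates angular rate up to the frame's time stamp, so gyro
          and image data stay correctly paired even if image processing
          lags. Assumes at least one gyro sample precedes the frame.
          """
          keep = gyro_ts <= frame_ts
          dt = np.diff(gyro_ts[keep], prepend=gyro_ts[keep][0])
          angle_xy = np.sum(gyro_rate_xy[keep] * dt[:, None], axis=0)
          return focal_len_px * angle_xy  # small-angle: shift ~ f * angle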
  • the image capture method further comprises obtaining and/or updating the aiming indication, in order to obtain images which have a corresponding deviation that matches a relative displacement between any sensels out of the two or more sensels for which no match has yet been determined.
  • the aiming indication may be obtained or updated according to analysis of already obtained images for which a match exists, or according to deviations that have already been obtained which match.
  • the objective may be to get colour data for four sensels. This will result in sufficient matching images.
  • By updating the aiming indication, all colour data for the sensels can be obtained. However, even if the method only achieves one match, there will still be an improvement. A sketch of one possible update rule follows below.
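  • One way such an update could be computed is sketched below (an assumption on our part): aim for the nearest still-unmatched sensel offset, and fall back to steering back towards the origin, which also anticipates the second function of the indication described later.

      def next_aiming_target(current_xy, matched_offsets, all_offsets):
          """Pick the offset (in pixel-pitch units) to aim for next.

          Prefers the unmatched sensel offset closest to the current
          deviation; once everything is matched, steer back to the
          origin, i.e. the reference pose."""
          remaining = [o for o in all_offsets if o not in matched_offsets]
          if not remaining:
              return (0.0, 0.0)
          cx, cy = current_xy
          return min(remaining, key=lambda o: (o[0] - cx) ** 2 + (o[1] - cy) ** 2)

      # Example: B offset already matched, currently near the origin.
      print(next_aiming_target((0.1, -0.05), {(0.0, 1.0)},
                               [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]))
      # -> (1.0, 0.0), the nearest unmatched offset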
  • the image capture method further comprises, when the determining is performed for more than two sensels, selecting the sensels as any three of the eight sensels surrounding a central reference sensel, such that the central reference sensel and the three selected sensels preferably comprise two green-filtered sensels, one blue-filtered sensel and one red-filtered sensel.
  • the three sensels selected from the eight sensels comprise any three adjacent sensels.
  • the Bayer filter array comprises a regular two dimensional array of sensels.
  • any green-filtered sensel is vertically or horizontally adjacent to two red pixels and two blue pixels.
  • To get all data from four (red, green, green, blue) adjacent sensels in such an array, there are four possible nearest-neighbour groupings. Any of these ways may be used to get the data, as enumerated in the sketch below.
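  • For illustration, the four nearest-neighbour groupings around a green reference sensel at (0, 0) can be enumerated as below; the coordinate convention is our assumption (compare Figure 5), and the three non-reference offsets of each quad form the L-shape mentioned next.

      # The four 2x2 quads containing the reference sensel at (0, 0).
      # Each quad supplies one R, two G and one B sample in a Bayer array.
      REFERENCE = (0, 0)
      QUADS = [
          [(0, 0), (1, 0), (0, 1), (1, 1)],      # steps +x, +y
          [(0, 0), (-1, 0), (0, 1), (-1, 1)],    # steps -x, +y
          [(0, 0), (1, 0), (0, -1), (1, -1)],    # steps +x, -y
          [(0, 0), (-1, 0), (0, -1), (-1, -1)],  # steps -x, -y
      ]
      for quad in QUADS:
          # The three offsets the movement must pass through for a full match.
          print([p for p in quad if p != REFERENCE])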
  • three adjacent pixels comprise sensels arrayed in an L-shape around a central reference pixel.
  • the image capture method further comprises
  • providing the aiming indication in a manner perceivable by a human user, comprising any of: displaying on a display screen, outputting to a graphical user interface, and providing a directional indicator.
  • the aiming indication may be provided in various ways.
  • the aiming indication may be provided in real time.
  • in real time, the aiming indication may be an overlay, crosshairs, a target indicator relative to a current orientation, or a superimposed (desired) image relative to a current image in the plurality of images.
  • the image capture method further comprises
  • a second function of the aiming indication is to indicate a target position or a direction of movement of the handheld device, such that movement in the indicated direction of movement or towards the target position should result in the handheld device returning towards an original orientation or original position of the handheld device.
  • the second function is to minimise the overall movement of the handheld device whilst maintaining smaller scale movements.
  • This aims to solve a problem that some human users tend to move the handheld device by relatively large amounts away from an original orientation or position (such as that at a start time or corresponding to a reference image) when capturing images.
  • the deviations may be large (for example, greater than the pixel pitch).
  • the handheld device does not return, or only occasionally or slowly returns, towards the original orientation. In this situation it is difficult to obtain any, or more than a few, matching images in a reasonable time period, as the deviation may be outside the pixel pitch or may only correspond to the relative displacement of some sensels.
  • the image capture method further comprises
  • obtaining the first and/or second function of the aiming indication from gyroscope data and/or accelerometer data, or from image recognition analysis of the plurality of images.
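  • As an illustration of the image-analysis route (our sketch, using plain FFT phase correlation, which is not a method specified here), the translation between a reference frame and the current frame can be estimated directly from the images:

      import numpy as np

      def estimate_shift(reference, current):
          """Estimate the (row, col) translation of current relative to
          reference, in whole pixels, by phase correlation. Sub-pixel
          refinement, needed for pitch-level matching, is omitted."""
          F = np.fft.fft2(reference)
          G = np.fft.fft2(current)
          cross = np.conj(F) * G
          corr = np.abs(np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)))
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          h, w = corr.shape
          if dy > h // 2: dy -= h  # map wrapped peaks to signed shifts
          if dx > w // 2: dx -= w
          return dy, dx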
  • the image capture method comprises
  • an apparatus configured to perform any of the methods and/or implementation forms as disclosed herein.
  • the various steps of the method may be performed by various functionalities of the apparatus, and may be performed by modules or units of the apparatus having corresponding configured functionality.
  • a processor and non-volatile memory for a handheld device, wherein the memory stores instructions which when implemented by the processor cause the processor to perform any of the methods and/or implementation forms as disclosed herein.
  • a computer program product comprising computer program code which, when run, would instruct a computer to perform any of the methods and/or implementation forms as disclosed herein.
  • A and/or B may indicate any one of A, B, or both A and B.
  • the disclosed apparatuses and methods may be implemented in other manners.
  • the described apparatus embodiments are merely an example.
  • the unit division is merely logical function division and may be other division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections between some interfaces, apparatuses, and units, or may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed. Some or all of the units may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in a form of hardware, or may be implemented in a form of hardware in addition to a software functional unit.
  • the integrated unit may be stored in a computer-readable storage medium.
  • the software functional unit is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • FIG. 1 is a schematic illustration of a Bayer Filter Array.
  • FIG. 2 is an illustration of the movement of an image sensor using Pixel Shift Multi Shot technique.
  • FIG. 3 is a flowchart of an image capture method as disclosed herein.
  • FIG. 4 shows illustrations of how movement of a handheld device causes deviation which matches the positions of multiple sensels or only one sensel.
  • FIG. 5 is an illustration of four different ways of obtaining matching images from sensels surrounding a central reference sensel.
  • FIG. 6 is an illustration of aiming indications.
  • FIG. 7 is an illustration of an apparatus configured to perform the image capture method as disclosed herein.
  • Figure 1 shows a conventional Bayer Filter Array.
  • Figure 2 is an illustration of how the Bayer Filter Array of Figure 1 undergoes Pixel Shift Multi Shot.
  • an example indication of movement is shown as a looped arrow for four adjacent sensels.
  • the lower half of Figure 2 is a step-by-step sequence, from left to right, of the controlled movements of the image sensor in Pixel Shift Multi Shot, as applied to a group of four adjacent sensels.
  • the choice of starting sensel is arbitrary.
  • the dotted circle surrounding a sensel indicates where an image forming ray from the same source point falls.
  • the arrow indicates the direction of movement that will lead to the subsequent step in the sequence.
  • FIG. 3 is a flowchart of an embodiment of an image capture method as disclosed herein. Not all steps are shown. The steps may be performed by a processor, or sub units or modules thereof, instructed by instructions stored in a memory.
  • the processor and memory may be on a handheld device which captures the plurality of images.
  • the handheld device has an image sensor with an array of sensels and a Bayer Filter Array. Whilst the images are being captured, the handheld device is undergoing movement, which is camera shake movement and/or directed movement provided in response to an aiming indication. Hardware-provided image stabilization functionality, such as movement of the image sensor relative to the handheld device, may not be used.
  • the plurality of images are captured during duration of time t and are made available.
  • a deviation is obtained for each corresponding time point tx at which an image is captured.
  • the deviation is compared against a known relative displacement between positions of the sensels. It is determined whether there is a match between the deviation and the relative displacement between sensels.
  • the image from that time point when the match is determined is suitable for use in further processing.
  • the data from images that match may be combined with a reference image (the reference image may also be considered a matching image as there is zero deviation and hence the deviation matches the origin which is also a position of sensels) in an optional step.
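  • A minimal sketch of that combination step (ours): it simplifies by treating every output site as having a green reference sensel, with matched frames registered to the reference and keyed by the colour they contribute.

      import numpy as np

      def combine_matched(reference, matched):
          """Build a full-colour image from the reference frame plus the
          frames whose deviations matched the R, G and B neighbour offsets.

          matched is a dict like {"R": frame, "G": frame, "B": frame}.
          """
          h, w = reference.shape
          rgb = np.zeros((h, w, 3))
          rgb[..., 1] = reference              # green from the reference
          if "G" in matched:                   # second green, if matched
              rgb[..., 1] = (rgb[..., 1] + matched["G"]) / 2
          if "R" in matched:
              rgb[..., 0] = matched["R"]
          if "B" in matched:
              rgb[..., 2] = matched["B"]
          # In a real pipeline, unmatched colours would fall back to
          # interpolation rather than staying zero.
          return rgb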
  • the image forming ray from the same scene object point passes over adjacent sensels of the image sensor.
  • a real-time aiming indication is obtained by calculation and displayed.
  • the aiming indication indicates a target position or direction of movement.
  • the upper half of Figure 4 shows a case of complete matching for a group of 4 sensels against time. Due to handheld camera shake or directed motion in response to the aiming indication, the sensels follow a spatial path shown as a 2D projection on the horizontal and vertical plane behind the group of 4 sensels.
  • a value of a green (G) filtered pixel is recorded at a first time, shown as the first frame. This acts as a reference frame for subsequent deviations. Other frames or sensels could have been chosen.
  • the deviation of the sensels is shown as X and Y deviations on the vertical axis relative to a position established by the reference frame.
  • the X and Y deviations are calculated from gyroscope data of the handheld device.
  • at the time points labelled Match R, Match G and Match B, the X and Y tracks jointly intercept the horizontal reference line of the reference frame. These points have been matched to their corresponding sensels.
  • the lower half of Figure 4 is similar to the upper half, but shows a case of incomplete matching, as there is only one time point labelled Match B where the X and Y tracks jointly intercept the reference line.
  • the other sensels (R, G) do not have deviations that intercept the reference line, as the movement of the handheld device does not pass through the positions of the adjacent 4 pixels.
  • Figure 5 illustrates 4 different sensel groupings against which the movement may be compared.
  • Figure 6 illustrates an embodiment of one possible implementation of the aiming indication. Three pictures are shown, such as may be displayed on a display of a smartphone.
  • a reference point (A) of the moment the user started the image capture method is displayed on the screen. This corresponds to a start time of the image capture.
  • the deviation of the angle of view from an angle view of the start time of image capture is computed by any method.
  • An aiming indication is displayed superimposed on the screen, with an aiming point (B) for the first reference point (A).
  • a human user manipulates the smartphone to match the reference point (A) with aiming point (B) .
  • the camera image may be displayed in real time, or as a still image from the moment the shutter was first pressed, or by superimposing both.
  • Figure 7 is an illustration of hardware that may be utilized or provided in a handheld device e.g. a smartphone with a display (not shown) for performing the functions of the method.
  • the hardware comprises a processor which may compute the deviation, determine whether the deviation matches the relative displacement of positions of sensels and combine data of matching images as disclosed herein.
  • the memory may store the captured images from the image sensor and provide them to the processor and/or the display (not shown) .
  • the processor, image sensor and memory are connected via communication channel labelled bus.
  • the gyroscope/accelerometer is shown in dotted lines as its function is optional for this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Studio Devices (AREA)

Abstract

A method and apparatus are disclosed for obtaining high resolution images with a smartphone by combining separate images captured at in-plane deviations, each of which corresponds to the separation of adjacent sensels in a Bayer-array filtered image sensor in the smartphone. An image stabilization mechanism is not used. The deviations arise from the hand shake of the human user and/or in response to indications provided to the user, which tell the user to move the smartphone along a path that will result in deviations matching the sensel separation of the required sensels.
PCT/CN2019/101473 2019-08-20 2019-08-20 Multi-shot image capture without image stabilization WO2021031096A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/101473 WO2021031096A1 (fr) 2019-08-20 2019-08-20 Multi-shot image capture without image stabilization
CN201980099166.4A CN114208153B (zh) 2019-08-20 2019-08-20 Multi-shot image capture without image stabilization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/101473 WO2021031096A1 (fr) 2019-08-20 2019-08-20 Multi-shot image capture without image stabilization

Publications (1)

Publication Number Publication Date
WO2021031096A1 2021-02-25

Family

ID=74659810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101473 WO2021031096A1 (fr) 2019-08-20 2019-08-20 Multi-shot image capture without image stabilization

Country Status (2)

Country Link
CN (1) CN114208153B (fr)
WO (1) WO2021031096A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004054884A (ja) * 2002-05-31 2004-02-19 Sanyo Electric Co Ltd Image processing device
CN101472069A (zh) * 2007-12-28 2009-07-01 Olympus Imaging Corp. Imaging display device and method
US20100103294A1 (en) * 2008-10-24 2010-04-29 Samsung Electronics Co., Ltd. Image pickup devices and image processing methods using the same
CN104079904A (zh) * 2014-07-17 2014-10-01 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Colour image generation method and device
CN105830090A (zh) * 2013-10-04 2016-08-03 伊克莱瑞迪公司 Method for measuring multiple types of data at full sensor resolution using an array sensor
CN107924572A (zh) * 2015-04-17 2018-04-17 FotoNation Cayman Ltd. Systems and methods for performing high-speed video capture and depth estimation using array cameras

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5412767B2 (ja) * 2008-09-02 2014-02-12 Nikon Corp. Displacement measuring device and displacement measuring method
JP5045801B2 (ja) * 2009-09-09 2012-10-10 Nikon Corp. Focus detection device, photographing lens unit, imaging device and camera system
EP2720455B1 (fr) * 2011-06-09 2016-06-22 FUJIFILM Corporation Image capture device imaging a three-dimensional moving image and a two-dimensional moving image, and image capture apparatus comprising the image capture device
CN105185302B (zh) * 2015-08-28 2018-01-09 Xi'an NovaStar Tech Co., Ltd. Method for correcting lamp point position deviation between monochrome images and application thereof

Also Published As

Publication number Publication date
CN114208153A (zh) 2022-03-18
CN114208153B (zh) 2023-03-10

Similar Documents

Publication Publication Date Title
US11259009B2 (en) Modular configurable camera system
US8111910B2 (en) Stereoscopic image processing device, method, recording medium and stereoscopic imaging apparatus
US8908013B2 (en) Systems and methods for collaborative image capturing
JP5683025B2 Stereoscopic image capturing device and stereoscopic image capturing method
JP7146662B2 Image processing device, image processing method, and program
JP6289811B2 Video processing device and method
CN105794197A Portable device capable of generating a panoramic file
CN102959943A Stereoscopic panoramic image synthesis device, image capturing device, stereoscopic panoramic image synthesis method, recording medium, and computer program
KR20150050172A Apparatus and method for dynamic selection of multiple cameras for tracking an object of interest
KR101997991B1 Image combining method and system using viewpoint transformation
CN112311965A Virtual shooting method, device, system and storage medium
GB2517730A (en) A method and system for producing a video production
JP2005328497A Imaging device and imaging method
JP2013025649A Image processing device, image processing method, and program
JP6921031B2 Control device and photographing method
JP2010206643A Imaging device, method and program
CN105721788A Multi-camera electronic device and photographing method thereof
KR101801100B1 Video providing apparatus and method for supporting production of immersive content
JP2010166218A Camera system and control method thereof
WO2021031096A1 Multi-shot image capture without image stabilization
CN111279352B Three-dimensional information acquisition system through ball pitching practice and camera parameter calculation method
JP6362473B2 Image processing device, imaging device, image processing method, program, and storage medium
WO2015141185A1 Imaging control device, imaging control method, and storage medium
CN113781560B Viewpoint width determination method and device, and storage medium
KR102298047B1 Method and device for generating a 3D image by recording digital content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19942606

Country of ref document: EP

Kind code of ref document: A1