GB2537886A - An image acquisition technique - Google Patents

An image acquisition technique

Info

Publication number
GB2537886A
GB2537886A (application GB1507356.2A / GB201507356A)
Authority
GB
United Kingdom
Prior art keywords
image
portions
primary
images
capturing
Prior art date
Legal status
Granted
Application number
GB1507356.2A
Other versions
GB201507356D0 (en)
GB2537886B (en)
Inventor
Laaksonen Lasse
Current Assignee
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to GB1507356.2A
Publication of GB201507356D0
Publication of GB2537886A
Application granted
Publication of GB2537886B
Legal status: Active
Anticipated expiration

Classifications

    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10144 Varying exposure (special mode during image acquisition)
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30196 Human being; Person
    • G06T2207/30236 Traffic on road, railway or crossing
    • H04N5/265 Studio circuits: Mixing
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N2013/0088 Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image
    • H04N2013/0092 Image segmentation from stereoscopic image signals
    • H04N23/45 Generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/743 Bracketing, i.e. taking a series of images with varying exposure conditions
    • H04N25/589 Control of the dynamic range involving two or more exposures acquired sequentially with different integration times, e.g. short and long exposures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

Method of in-camera removal of unwanted features from captured images by: obtaining a first image using a short exposure time; obtaining a second image using a long exposure time; processing and compositing the two images. First and second photographs may be captured simultaneously using two different imaging systems or intermittently. Image sequences or video may be captured and multiple images may be combined. First picture may be segmented into primary regions of interest and non-primary areas. Background portions of the second image spatially corresponding to secondary sections of the first may be combined with primary portions of the first. Pre-captured images may be used in place of the second image. Primary areas may be: those closest to the image plane based on disparity and depth map information; those on which the camera was focussed; selected by a user; objects of interest, such as people, within a specified distance of the camera. Remote and local direction indications may indicate destinations. Blended images may be displayed on a screen serving as a viewfinder. Images may be aligned. Image enhancement, such as brightness, colour balance, contrast adjustment, may be applied. Irrelevant details, such as crowds, traffic and vehicles, may be made to disappear.

Description

Intellectual Property Office Application No. GB1507356.2 RTM Date: 20 October 2015. The following terms are registered trade marks and should be read as such wherever they occur in this document: Blu-ray (Page 35). Intellectual Property Office is an operating name of the Patent Office (www.gov.uk/ipo).

An image acquisition technique
TECHNICAL FIELD
The example and non-limiting embodiments of the present invention relate to processing of digital images.
BACKGROUND
When capturing an image or video of one or more objects of interest in an environment that also includes a plurality of other objects, the resulting image or video is likely to include visual information that is unnecessary, undesired and sometimes even distracting. A typical example of such a situation is when a user is taking a photograph or capturing a video sequence of one or more persons in a crowded place or with a busy street in the background, where the crowd or the traffic may be blocking parts of the scene the user wishes to capture.
In various applications, however, presenting the user with an image that de-emphasizes or completely omits at least some of the other objects may be beneficial in order to depict the objects of interest and possible other pieces of perceptually important image content in a visually clear manner.
SUMMARY
According to an example embodiment, an apparatus is provided, the apparatus comprising image acquisition means for obtaining at least a first image depicting a scene and a second image depicting said scene, said image acquisition means configured to obtain said first image using a first exposure time and obtain said second image using a second exposure time that is longer than said first exposure time, and image processing means for deriving a composition image depicting said scene based at least in part on said first image and said second image.
According to another example embodiment, a method is provided, the method comprising obtaining at least a first image depicting a scene and a second image depicting said scene, comprising obtaining said first image using a first exposure time and obtaining said second image using a second exposure time that is longer than said first exposure time, and deriving a composition image depicting said scene based at least in part on said first image and said second image.
According to another example embodiment, an apparatus is provided, the apparatus comprising at least one processor and a memory storing a program of instructions, wherein the memory storing the program of instructions is configured to, with the at least one processor, configure the apparatus to at least obtain at least a first image depicting a scene and a second image depicting said scene, wherein the apparatus is caused to obtain said first image using a first exposure time and obtain said second image using a second exposure time that is longer than said first exposure time. The apparatus is further caused to derive a composition image depicting said scene based at least in part on said first image and said second image.
According to another example embodiment, a computer program is provided, the computer program comprising computer readable program code configured to cause performing the following when said program code is run on a computing apparatus: obtain at least a first image depicting a scene and a second image depicting said scene, comprising obtaining said first image using a first exposure time and obtaining said second image using a second exposure time that is longer than said first exposure time, and derive a composition image depicting said scene based at least in part on said first image and said second image.
The computer program referred to above may be embodied on a volatile or a non-volatile computer-readable record medium, for example as a computer program product comprising at least one computer readable non-transitory medium having program code stored thereon, which program, when executed by an apparatus, causes the apparatus at least to perform the operations described hereinbefore for the computer program according to an example embodiment of the invention.
The embodiments of the invention presented in this patent application are not to be interpreted to pose limitations to the applicability of the appended claims. The verb "to comprise" and its derivatives are used in this patent application as an open limitation that does not exclude the existence of also unrecited features. The features described hereinafter are mutually freely combinable unless explicitly stated otherwise.
Some features of the invention are set forth in the appended claims. Embodiments of the invention, however, both as to its construction and its method of operation, together with additional objects and advantages thereof, will be best understood from the following description of some example embodiments when read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF FIGURES
The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, where

Figure 1 schematically illustrates some components of an exemplifying electronic device according to an example embodiment;

Figure 2 schematically illustrates some components of an exemplifying electronic device according to an example embodiment;

Figures 3a to 3c depict a schematic example concerning derivation of a composite image based at least in part on a first image and a second image according to an example embodiment;

Figure 4 depicts a schematic example of a further image that may be applied in derivation of the composite image according to an example embodiment;

Figure 5 schematically illustrates some components of an exemplifying electronic device according to an example embodiment;

Figures 6a and 6b depict a schematic example concerning derivation of a composite image based at least in part on a first image according to an example embodiment;

Figure 7 depicts a schematic example of an augmented composite image according to an example embodiment; and

Figure 8 illustrates a method according to an example embodiment.
DESCRIPTION OF SOME EMBODIMENTS
Figure 1 schematically illustrates some components of an exemplifying electronic device 100 that may be employed to provide various embodiments of the present invention. Components of the electronic device 100 depicted in Figure 1 include an image capturing means 110 for capturing images, image acquisition means 120 for obtaining images using said image capturing means 110 and image processing means 130 for processing images. The electronic device 100 is further depicted with memory 115 for storing images and control information, such as images captured using the image capturing means 110, images created by using the image processing means 130, image data characterizing images stored in the memory 115 and/or control information for controlling operation of the image capturing means 110, image acquisition means 120 and/or the image processing means 130. The electronic device 100 is further depicted with user I/O (input/output) components 118 that may be arranged to provide a user interface for receiving input from a user and/or providing output to the user. The user I/O components 118 may comprise e.g. a display or a touchscreen for displaying information to a user and/or a keyboard, a set of one or more keys or control buttons, a touchscreen or a touchpad for receiving commands from a user.
The electronic device 100 may be provided e.g. as a mobile device, such as a mobile phone, a smartphone, a digital camera, a camcorder, a media player device, a tablet computer, a laptop computer, a personal digital assistant (PDA), a portable navigation device, a gaming device, etc. As another example, the electronic device 100 may be provided as a stationary device that is installed in its operating environment in a fixed or semi-fixed manner, such as a component of an information system, an infotainment system and/or a navigation system in a vehicle such as a car.
The image capturing means 110 may be provided by hardware means or by a combination of hardware means and software means. As an example, the image capturing means 110 may be provided as an image capturing portion that comprises one or more camera modules together with appropriate control function(s) configured to enable capturing images using desired imaging parameters under control of the image acquisition means 120.
Each of the image acquisition means 120 and the image processing means 130 may be provided by respective hardware means, by respective software means or by a respective combination of hardware means and software means. As a non-limiting example in which the image acquisition means 120 and the image processing means 130 are provided as respective combinations of hardware means and software means, Figure 2 schematically illustrates an electronic device 200, which is a variation of the electronic device 100. The electronic device 200 may also be referred to as a computing device.
In addition to the components described in context of the electronic device 100, the electronic device 200 further comprises a processor 116. In the electronic device 200 the memory 115 is further applied for storing computer program code, and the processor 116 may be arranged to control operation of the electronic device 200 e.g. in accordance with the computer program code stored in the memory 115 and possibly further in accordance with the user input received via the user I/O components 118. In this regard, respective portions of the computer program code stored in the memory 115 may be arranged to, with the processor 116, provide the image acquisition means 120 and the image processing means 130, depicted in Figure 2 as logical components within the processor 116. One or more further portions of the computer program code stored in the memory 115 may be arranged to provide further aspects of a control function for controlling at least some aspects of operation of the image capturing means 110. Moreover, the user I/O components 118 may be arranged, possibly together with the processor 116 and respective one or more portions of the computer program code, to provide at least some aspects of the user interface of the electronic device 200 to enable receiving input from a user and/or providing output to the user.
In the examples depicted in Figures 1 and 2 the components of the electronic device 100, 200 are connected to each other via a bus 140 that enables transfer of data and control information between two components of the electronic device 100, 200. In other examples, connecting means for connecting the components within the electronic device 100, 200 to each other may be provided e.g. by dedicated connections for transfer of data and control information between pairs of components. The electronic device 100, 200 may comprise further components that are not depicted in Figures 1 or 2. As an example in this regard, the electronic device 100, 200 may further comprise a communication portion (communication means) for communicating with other devices via a wireless and/or wired connection.
The image capturing means 110, the image acquisition means 120 and the image processing means 130, together with the memory 115 and other components of the electronic device 100, 200 may be arranged to jointly implement an image acquisition arrangement, as will be described in the following using a number of examples.
In order to provide the image acquisition arrangement, the image acquisition means 120 may be arranged to obtain, by using the image capturing means 110, a first image depicting a scene of interest and a second image depicting the same scene. In particular, the image acquisition means 120 may be configured to obtain the first image using a first exposure time and to obtain the second image using a second exposure time, which second exposure time is longer than the first exposure time. Moreover, the image processing means 130 is configured to derive a composition image based at least in part on the first image and the second image. In the following, a number of example embodiments in this framework are described.
The term exposure time as used herein refers to the duration of the time period over which light sensitive elements of an image sensor of the image capturing means 110 are exposed to incoming radiation (e.g. incoming visible light). Another term commonly applied to describe the same aspect in the field of photography is shutter speed, which expresses the rate at which the shutter applied to control exposure of the respective image sensor of the image capturing means 110 is closed (into a position where no exposure to the incoming radiation takes place) after having been opened (into a position where the exposure to the incoming radiation takes place).
For the purpose of describing various example embodiments of the present invention, an image depicting a scene may be considered to include at least one or more background portions that represent the background (or background objects) in the depicted scene. In case the image capturing means 110 remains stationary or substantially stationary while capturing the first and second images, the background objects that constitute the background are stationary or substantially stationary objects in the depicted scene (such as furniture, walls, buildings, trees, plants, scenery in general), represented in captured images by respective background portions.
Additionally, the image may further comprise one or more of the following: - one or more primary image portions that represent respective one or more primary objects in the depicted scene; and - one or more secondary image portions that represent respective one or more secondary objects in the depicted scene.
Herein, a primary object refers to any object that may serve as an object of interest to be photographed against the background formed by the stationary or substantially stationary background objects, whereas a secondary object refers to an object that does not belong to the background but that is of no interest to the photographer either. A secondary object may be either a stationary or a non-stationary object with respect to the image capturing means 110. The background objects and/or secondary objects may be jointly referred to as non-primary objects and the background portions and/or secondary image portions may be jointly referred to as non-primary image portions. As an example, a primary object may be a person to be photographed in front of a landmark forming part of the background, while other people within the captured scene represent the secondary objects.
In case the image capturing means 110 is non-stationary while capturing the first and second images (e.g. the electronic device 100, 200 is operated in a moving vehicle to capture images of a scene outside the vehicle), the characteristics of background objects are typically different from the case of stationary or substantially stationary image capturing means 110: in this regard, the background keeps changing and hence at least some background objects that are relatively near to the image capturing means 110 may be non-stationary (while at least some background objects further away from the image capturing means 110 may be stationary or substantially stationary). As in case of the stationary or substantially stationary image capturing means 110, any secondary objects in the depicted scene may be stationary or non-stationary.
Various example embodiments of the image acquisition arrangement may be arranged to enable the electronic device 100, 200 to provide a user with a composition image where information that is considered perceptually important, e.g. one or more primary image portions together with the one or more background portions, is retained while information that is considered perceptually unimportant or even distracting, e.g. one or more secondary image portions, is omitted or removed and possibly replaced with spatially corresponding image content from other images.
The first exposure time, hereinafter referred to as a short exposure time, enables capturing a sharp image of the scene regardless of the possible movement within the depicted scene. Usage of the short exposure time 'freezes the movement' in the depicted scene. Therefore, while the primary image portions provide sharp images of the respective primary objects, also the secondary image portions provide sharp images of the respective secondary objects and hence the captured image may depict both the primary object(s) and the secondary object(s) in equally prominent roles. In some cases usage of the short exposure time may result in a relatively limited depth of focus in the first image. Consequently, when focusing to one or more primary objects upon capturing the first image, objects other than the primary one(s) may be depicted in the first image in a non-sharp or even blurred manner.
The second exposure time, hereinafter referred to as a long exposure time, typically results in so-called motion blur in image portions that depict non-stationary objects within the depicted scene. A technique known in the art of photography is to apply a long exposure time in order to make non-stationary secondary objects in the scene disappear from the captured image. In this regard, the long exposure time results in intentional motion blur in portion(s) of the captured image that depict secondary objects that are non-stationary with respect to the image capturing means 110. Taken to a sufficient extent, the motion blur caused by the long exposure time substantially results in completely removing or leaving only a limited trace of non-stationary secondary objects in the captured image, thereby effectively providing a captured image without secondary image portions or with a reduced number of secondary image portions. The excluded secondary image portions (that are typically present in the corresponding first image) are completely or partially replaced by background portions that depict respective parts of the background of the depicted scene. In some cases usage of the long exposure time may enable deeper depth of focus for the second image in comparison to that of the first image. Consequently, even when focusing to one or more primary objects upon capturing the second image, also some or all of the background objects are typically depicted in the second image in a sharper manner than in the first image.
On the other hand, even in case the primary object(s) within the captured scene are substantially stationary with respect to the image capturing means 110, involuntary shaking of the hand of a user holding a handheld electronic device 100, 200 and other involuntary movements of the user are likely to cause at least minor changes in relative positions of the primary object(s) and the image capturing means 110 during the long exposure time. Therefore, some extent of motion blur may be introduced also in the portion(s) of the captured image that depict the primary object(s). Similar unintentional motion blur is likely to occur to a larger extent when the primary object(s) are e.g. persons or animals that are not likely to remain exactly stationary during the long exposure time.
Consequently, the sharpness of primary image portions of the second image may be compromised and, in particular, may be inferior to that of respective image portions of the first image.
In order to ensure as high-quality a representation of the primary object(s) as possible together with background objects of interest, the image acquisition arrangement may be configured to derive the composition image on basis of the first image and the second image such that parts of the composition image that represent the primary objects in the depicted scene are created on basis of the first image, while the remaining parts of the composition image may be created on basis of the second image. With such an approach, portions of the composition image that depict the primary object(s) remain sharp, while at least parts of the composition image that would depict non-stationary secondary object(s) of the depicted scene (that may appear in the first image) are completely or substantially avoided due to the motion blur effect described in the foregoing, and hence the part of the composition image that originates from the second image basically includes background portions that depict respective pieces of the background of the depicted scene.
Herein, the short exposure time may be an exposure time selected for example from the range from 1/8000 to 1/125 seconds, e.g. 1/500 seconds, whereas the long exposure time may be an exposure time selected for example from the range from 0.5 to 30 seconds, e.g. 2 seconds. These ranges serve as illustrative and non-limiting examples of specific values and value ranges that may be applicable for use as the short and/or long exposure times. Moreover, the most appropriate values to be applied for the short and long exposure times typically strongly depend on environmental conditions upon capturing the first and second images as well as on user preferences regarding e.g. the sharpness of the primary image portions in the first image and the desired extent of the motion blur effect to be introduced in the second image.
According to an example embodiment, the electronic device 100, 200 may enable selection of desired values for the short and long exposure times via the user interface, thereby leaving the decision in this regard for the user of the electronic device 100, 200. As an example in this regard, the user may be provided with free selection of the short and long exposure times e.g. from the exemplifying ranges described in the foregoing. As further examples, the selection of the value for the long exposure time may be limited to a sub-range of values within a full range (e.g. the range described in the foregoing) in dependence of the value selected for the short exposure time or the selection of the value for the short exposure time may be limited to a sub-range of values within a full range (e.g. the range described in the foregoing) in dependence of the value selected for the long exposure time.
According to another example embodiment, the electronic device 100, 200 (e.g. the image acquisition means 120) may be configured to automatically select the short and long exposure times in accordance with a predefined selection rule. Such a predefined selection rule may be arranged to make the selections in accordance with observed environmental parameters and/or with other operational parameters that control operation of the image capturing means 110. As non-limiting examples, such environmental parameters may comprise a parameter indicative of observed level of ambient light, whereas such other operational parameters may be based on automatic configuration and/or use settings made via the user interface of the electronic device 100, 200. As a further example, such environmental parameter may comprise one or more parameters that are descriptive of the extent of movement in the captured scene. Such parameter(s) may be derived by the image acquisition means 120 e.g. by capturing a sequence of images and tracking the 'speed' of movement of one or more objects depicted in images of the sequence, and using the tracked 'speed' of the object(s) as an input for automatic determination of the short and long exposure times.
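As an illustration of such a predefined selection rule, the following sketch picks the short and long exposure times from the exemplifying ranges given above based on an ambient-light estimate and a tracked object speed. The lux and pixel-speed thresholds are illustrative assumptions; the patent does not prescribe any concrete rule.

```python
def select_exposure_times(ambient_light_lux, tracked_speed_px_per_s):
    """Pick (short, long) exposure times in seconds.

    A minimal sketch of an automatic selection rule; the thresholds
    below are illustrative assumptions, not values from the patent.
    """
    # Short exposure: bright scenes allow a very short time that
    # 'freezes the movement'; dim scenes need a somewhat longer one.
    if ambient_light_lux > 10000:      # bright daylight
        short = 1.0 / 2000
    elif ambient_light_lux > 1000:     # overcast / indoor lighting
        short = 1.0 / 500
    else:                              # dim scene
        short = 1.0 / 125

    # Long exposure: faster-moving secondary objects blur out of the
    # frame sooner, so a shorter long exposure already suffices.
    if tracked_speed_px_per_s > 200:
        long = 1.0
    elif tracked_speed_px_per_s > 50:
        long = 4.0
    else:
        long = 15.0

    # Keep both values inside the exemplifying ranges given above.
    short = min(max(short, 1.0 / 8000), 1.0 / 125)
    long = min(max(long, 0.5), 30.0)
    return short, long


print(select_exposure_times(ambient_light_lux=5000, tracked_speed_px_per_s=120))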
The first and second images are typically, but not necessarily, rectangular images, defined by a set of pixel values for an array of pixel positions that is arranged into rows and columns. However, this serves as a non-limiting example and images of a different shape and/or with a different arrangement of pixel positions may be applied instead without departing from the scope of the disclosed invention.
While the expression 'capturing an image' is used herein in context of the first and second images to refer to obtaining a first set of image data (e.g. a first array of pixel values) that represents the first image and a second set of image data (e.g. a second array of pixel values) that represents the second image, the first and second images serve as intermediate images: they do not necessarily serve as an output of the image acquisition arrangement (i.e. as 'captured images' from the user's point of view) but are rather used as source data for derivation of the composition image. The composition image, in turn, may be provided for presentation to a user via the user interface (e.g. a display or a touchscreen) of the electronic device 100, 200 as part of operation of the image acquisition arrangement and/or provided as an output image, i.e. as the 'captured image' from the user's point of view, from the image acquisition arrangement for subsequent viewing by the user.
According to an example embodiment, the image capturing means 110 may be provided as an image capturing portion that comprises a single imaging system (e.g. a single camera module) that is employed for capturing both the first image and the second image. Alternatively, the image capturing means 110 may be provided as an image capturing portion that comprises two or more imaging systems (e.g. two or more camera modules) while a single imaging system of the two or more imaging systems is employed for capturing both the first image and the second image. In such an arrangement the image acquisition means 120 may be configured to control the image capturing means 110 to capture the first and second images intermittently, i.e. one after another, such that the second image is captured after the first image or vice versa. The time gap between capturing the first and second images is preferably as short as possible in order to ensure similar environmental conditions for capturing the two images and to minimize the risk of any involuntary change(s) in relative positions of the image capturing means 110 and the primary object(s) between the two images.
As another example, the single imaging system of the image capturing means 110 may be employed in such a way that the image acquisition means 120 is configured to capture a sequence of images such that each image of the sequence is captured using the short exposure time, while the image acquisition means 120 is further configured to obtain the first and second images on basis of this sequence of images. In this regard, one of the images of the sequence is selected as the first image, whereas the second image is derived on basis of images that constitute the sequence. The image of the sequence selected as the first image may be any image taken from the sequence, selected e.g. automatically by the image acquisition means 120 or by a user via the user interface of the electronic device 100, 200 (after having been enabled to view the sequence of images via the user interface).
Automatic selection by the image acquisition means 120 may comprise selecting e.g. the first image of the sequence, the last image of the sequence, the Nth image of a sequence of 2N or 2N+1 images, or a randomly selected image of the sequence. The second image may be derived, by the image acquisition means 120, on basis of the images of the sequence e.g. as a linear combination of the images of the sequence. Such linear combination may be provided by composing each pixel value of the second image as a linear combination of respective pixel values in the images of the sequence, e.g. as an average or a weighted average of the respective pixel values in the images of the sequence.
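A minimal sketch of this single-camera variant is given below, assuming the burst frames are already available as equally sized NumPy arrays; the choice of the middle frame and the optional weighting are illustrative, not mandated by the description.

```python
import numpy as np

def derive_first_and_second(frames, weights=None):
    """Derive (first_image, second_image) from a burst of short-exposure frames.

    frames  : list of HxWx3 uint8 arrays captured with the short exposure time
    weights : optional per-frame weights for the linear combination
    """
    n = len(frames)
    # First image: here the middle frame of the sequence (the 'Nth image of
    # a sequence of 2N+1 images' option above); any other rule could be used.
    first_image = frames[n // 2]

    # Second image: linear combination of all frames, i.e. a plain or
    # weighted average of the respective pixel values, which emulates a
    # long exposure and blurs out objects that move between frames.
    stack = np.stack(frames).astype(np.float32)
    if weights is None:
        second_image = stack.mean(axis=0)
    else:
        w = np.asarray(weights, dtype=np.float32)
        second_image = np.tensordot(w / w.sum(), stack, axes=1)
    return first_image, second_image.astype(np.uint8)
```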
According to an example embodiment, the image capturing means 110 may be provided as an image capturing portion that comprises two or more imaging systems (e.g. two or more camera modules) and hence the image capturing means 110 includes at least a first imaging system and a second imaging system. Moreover, the first and second imaging systems can be controlled at least in part independently of each other by the image acquisition means 120. In particular, at least the exposure times applied in the first and second imaging systems can be selected independently of each other.
For the sake of an example, the first image may be captured using the first imaging system and the second image may be captured using the second imaging system. The first image is preferably captured simultaneously or substantially simultaneously with the second image in order to ensure similar environmental conditions for capturing the two images and to minimize the risk of any involuntary change(s) in relative positions of the image capturing means 110 and the primary object(s) between the two images. Herein, the expression simultaneously or substantially simultaneously refers to capturing the first image (using the short exposure time) during the (long) exposure time of the second image, in other words such that a first time period that constitutes the short exposure time completely overlaps with a second time period that constitutes the long exposure time. The first time period may be a time period that begins simultaneously or substantially simultaneously with the second time period, a time period that ends simultaneously or substantially simultaneously with the second time period, or a time period that is otherwise completely included in the second time period, e.g. a time period temporally centered at the temporal mid-point of the second time period.
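The temporal containment described above can be expressed as a small scheduling helper; the sketch below is an assumption about how the two exposures might be triggered relative to each other, not an API of any particular camera module, and it implements the centred-time-period option.

```python
def schedule_exposures(long_start, long_exposure, short_exposure):
    """Return the start time of the short exposure so that its time period
    is completely contained in, and centred on, the long exposure period."""
    if short_exposure > long_exposure:
        raise ValueError("short exposure must not exceed the long exposure")
    return long_start + (long_exposure - short_exposure) / 2.0

# e.g. a 1/500 s exposure centred inside a 2 s exposure starting at t = 0
print(schedule_exposures(0.0, 2.0, 1.0 / 500))   # -> 0.999
```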
According to an example embodiment, the first and the second imaging systems of the image capturing means 110 are arranged to capture respective images of the same or substantially the same scene, in other words the fields of view provided by the first and the second imaging systems are preferably the same or substantially the same. Consequently, the fields of view to the captured scene provided in the first and second images are inherently similar. As an example, the same or substantially the same field of view may be provided by arranging the first and second imaging systems to at least partially share optical components, e.g. by employing the same lens arrangement to receive the incoming radiation (e.g. incoming visible light) that is directed by means of a suitable reflector arrangement to respective image sensors of the first and second imaging systems. In case the image sensors in the first and second imaging systems are similar, they are employed in a similar manner, and the light transmission paths from the reflector arrangement to the respective image sensors are substantially similar, such an arrangement of two separate imaging systems results in the fields of view to the captured scene in the first and second images being inherently similar, and typically no image alignment processing is necessary in order to ensure similar fields of view.
According to an example embodiment, as a variation of the above example, even while sharing at least part of the lens arrangement for reception of the incoming radiation, different image sensors in the first and second imaging systems and/or different use of similar image sensors in the first and second imaging systems (e.g. with respect to applied aspect ratio) may result in a scenario where the fields of view provided in the first and second images are not the same.
According to a further example embodiment, the first and second imaging systems (that are similar or different in characteristics) may be arranged at a predefined or otherwise known distance from each other such that the optical axes of their lens arrangements are parallel or substantially parallel to each other or such that their optical axes exhibit a predefined or otherwise known (small) angle therebetween. This kind of arrangement of imaging systems may be applied e.g. in stereo imaging. Such an arrangement of imaging systems provides inherently different fields of view in the first and second images and hence image alignment processing is typically needed. On the other hand, when using two imaging systems that provide inherently different fields of view, the image acquisition means 120 and/or image processing means 130 may be arranged to extract disparity information (e.g. a depth map) that is descriptive of the distance between the image capturing means 110 and an object depicted in both the first and second images. In this regard, any technique for extracting disparity information on basis of a pair of images known in the art may be applied.
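As one possible realisation of 'any technique for extracting disparity information known in the art', the sketch below uses OpenCV's block-matching stereo correspondence; the parameter values are illustrative assumptions, and it is further assumed that the two input images have already been brought to similar brightness (e.g. by the enhancement processing discussed later), since they are captured with different exposure times.

```python
import cv2
import numpy as np

def compute_disparity(first_image, second_image):
    """Estimate a disparity map from two images captured by imaging systems
    placed a known distance apart (larger disparity = closer to the camera)."""
    left = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)

    # Block-matching stereo correspondence; numDisparities must be a
    # multiple of 16 and blockSize an odd number (illustrative values).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    return disparity
```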
In a scenario where the first and second imaging systems do not share an optical axis or otherwise provide a different field of view to the captured scene, the image acquisition means 120 and/or the image processing means 130 may be arranged to apply image alignment processing to at least one of the first and second images or part(s) thereof in order to align the fields of view of the first and second images such that they match each other.
The image alignment processing may be applied to the first and/or second image in full or to one or more portions of the first and/or second images. The image alignment processing may involve e.g. scaling, rotating and/or translating one or both of the first and second images or part(s) thereof in view of the known distance and/or angle between the first and second imaging systems and imaging parameters applied therein to capture the first and second images. In this regard, any image alignment processing technique known in the art may be applied.
The image alignment processing may be applied by the image acquisition means 120 prior to provision of the first and second images for derivation of the composition image by the image processing means 130. Alternatively, the image alignment processing may be carried out by the image processing means 130 prior to or in the course of derivation of the composition image.
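One concrete (but by no means the only) way to implement the alignment step is feature-based homography estimation, which absorbs the scaling, rotation and translation mentioned above into a single transform. The sketch below uses OpenCV ORB features and RANSAC with illustrative parameter values.

```python
import cv2
import numpy as np

def align_to_reference(image, reference):
    """Warp `image` so that its field of view matches that of `reference`.

    A sketch of feature-based alignment; parameter values are illustrative
    assumptions rather than values taken from the patent.
    """
    gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(gray_img, None)
    kp2, des2 = orb.detectAndCompute(gray_ref, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust homography estimation; outlier matches are rejected by RANSAC.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```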
Regardless of the arrangement of imaging systems employed to capture the first and second images, the image acquisition means 120 or the image processing means 130 may be arranged to apply image enhancement processing to the second image in order to bring image characteristics such as brightness, color balance, contrast, etc. close to those of the first image, thereby facilitating provision of a good quality composite image. Alternatively, the image enhancement processing may be applied to the first image instead of the second image, or the image enhancement processing may be applied to both the first and second images. In this regard, any image enhancement processing technique known in the art may be applied.
The image enhancement processing may be applied by the image acquisition means 120 prior to provision of the second image for derivation of the composition image by the image processing means 130. Alternatively, the image enhancement processing may be carried out by the image processing means 130 prior to or in the course of derivation of the composition image.
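A minimal sketch of such enhancement processing is a per-channel mean and standard-deviation match that pulls the brightness and colour balance of one image towards those of the other; this is a deliberately simple stand-in for 'any image enhancement processing technique known in the art', not the technique the patent prescribes.

```python
import numpy as np

def match_statistics(image, reference):
    """Adjust `image` so that each colour channel has the same mean and
    standard deviation as the corresponding channel of `reference`."""
    img = image.astype(np.float32)
    ref = reference.astype(np.float32)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        mu_i, sigma_i = img[..., c].mean(), img[..., c].std() + 1e-6
        mu_r, sigma_r = ref[..., c].mean(), ref[..., c].std()
        # Normalise the channel and re-scale it to the reference statistics.
        out[..., c] = (img[..., c] - mu_i) * (sigma_r / sigma_i) + mu_r
    return np.clip(out, 0, 255).astype(np.uint8)
```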
In case the image alignment processing and/or the image enhancement processing described in the foregoing is applied to the first image and/or the second image, subsequent image processing steps such as image segmentation and extraction of primary image portions and background portions (that will be described by examples in the following) are carried out on basis of the respective aligned and/or enhanced first and second images (unless explicitly stated otherwise).
As described in the foregoing, at least the primary image portions are identified from the first image that is captured using the short exposure time to facilitate provision of a sharp composite image in this regard. The primary image portions may be subsequently extracted from the first image and applied to derive the composition image.
In order to identify the primary image portion(s), the first image may be segmented into two or more non-overlapping image portions using any suitable image segmentation technique known in the art. As an example in this regard, if disparity information is available, a segmentation technique that at least in part relies on the disparity information may be applied. Herein, the disparity information may be available from the image acquisition means 120 (as described in the foregoing) or the image processing means 130 may be arranged to extract the disparity information on basis of the first and second images.
Each image portion resulting from the segmentation procedure may be defined e.g. as a respective set of the pixel positions of the first image belonging to that image portion. Each pixel position in an image may be indicated as a respective position in a coordinate system having its origin e.g. in the middle of the image or in one of the corners of the image.
One or more of the portions that result from the segmentation procedure are designated as primary image portions, whereas the remaining ones are designated as non-primary image portions. In particular, all image portions that are not designated as primary image portions may be considered (or designated) as non-primary portions without an explicit designation procedure in this regard. The non-primary image portions may represent secondary objects or background objects in the depicted scene and they may serve as the reference for identifying parts or portions of other images, e.g. the second image, that spatially coincide with the non-primary portions of the first image.
According to an example embodiment, the designation of primary image portions is carried out automatically by the image processing means 130. Such an automatic designation may employ information that indicates settings and/or imaging parameters employed in the imaging system applied in capturing the first image. As an example in this regard, the image capturing means 110 and/or the image acquisition means 120 may be arranged to provide indication of one or more parts or areas of the first image that depict object(s) to which the lens arrangement of the imaging system was focused upon capturing the first image. Consequently, the image processing means 130 may identify image portions that spatially coincide with these one or more image parts or areas and designate the identified image portions as primary image portions. As another example, alternatively or additionally, automatic designation of image portions may be based on an assumption that among objects depicted in the first image the one that was closest to the image capturing means 110 upon capturing the first image is the primary object. Consequently, the automatic designation of image portions may comprise obtaining disparity information (e.g. the depth map), using the obtained disparity information to detect image portions that depict object(s) that were closest to the image capturing means 110 upon capturing the first image, and designating the detected image portions as primary image portions. Herein, obtaining the disparity information may comprise receiving the disparity information from the image acquisition means 120 or deriving the disparity information on basis of the first and second images by the image processing means 130.
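Both automatic designation strategies described above (focus-area coincidence and nearest-object detection) can be expressed as simple operations over the segmented first image. The sketch below assumes a per-pixel segment-label image, a per-pixel depth map and optionally a boolean focus mask are already available; the 'nearness' threshold is an illustrative assumption.

```python
import numpy as np

def designate_primary_portions(labels, depth_map, focus_mask=None,
                               near_fraction=0.15):
    """Return the set of segment labels designated as primary image portions.

    labels     : HxW integer array, one label per segmented image portion
    depth_map  : HxW array of distances from the image capturing means
    focus_mask : optional HxW boolean array marking the focused area
    """
    primary = set()
    # Portions whose typical depth is within `near_fraction` of the
    # nearest depth in the scene are treated as 'closest to the camera'.
    near_threshold = depth_map.min() * (1.0 + near_fraction)

    for label in np.unique(labels):
        portion = labels == label
        # Strategy 1: the portion spatially coincides with the focused area.
        if focus_mask is not None and (portion & focus_mask).any():
            primary.add(int(label))
            continue
        # Strategy 2: the portion depicts the object(s) closest to the camera.
        if np.median(depth_map[portion]) <= near_threshold:
            primary.add(int(label))
    return primary
```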
According to an example embodiment, the designation of primary image portion(s) is carried out at least in part on basis of user input received via the user interface of the electronic device 100, 200. As an example in this regard, the first image may be displayed to a user via the user interface (e.g. in a display or in a touchscreen) and the user is provided with a user interface mechanism that enables indicating one or more primary objects or one or more primary image portions in the displayed image (e.g. by further employing a suitable pointing device such as a touchscreen or a touchpad).
As described in the foregoing, designation of primary image portions results in considering or designating the remaining image portions of the first image as non-primary image portions. The image processing means 130 may be configured to identify portions or parts of the second image that spatially coincide with the non-primary image portions of the first image. These identified portions of the second image may be designated as or referred to as background portions. In other words, the sets of pixel positions that define the non-primary image portions in the first image may be employed to define the spatial locations of the background portions in the second image. The background portions may be subsequently extracted from the second image and applied to derive the composition image.
As described in the foregoing, the image processing means 130 may be arranged to derive the composition image based at least in part on the first and second images. According to an example embodiment, the image processing means 130 is configured to extract the primary image portions from the first image, to extract the background portions from the second image and to combine the primary image portions and the background portions into the composite image.
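Put together, derivation of the composition image reduces to copying the primary pixel positions from the first image and the remaining (non-primary) positions from the second image. A minimal sketch, assuming the two images are already aligned and enhanced and that a boolean primary mask has been obtained as above:

```python
import numpy as np

def derive_composition(first_image, second_image, primary_mask):
    """Combine primary portions of the first image with the spatially
    corresponding background portions of the second image.

    primary_mask : HxW boolean array, True at primary pixel positions
    """
    composite = second_image.copy()                       # background from the long exposure
    composite[primary_mask] = first_image[primary_mask]   # sharp primary objects
    return composite
```

In a practical implementation the mask boundary would typically be feathered or blended to hide seams, but a hard copy of the two pixel sets already illustrates the principle.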
Figures 3a to 3c depict a schematic example concerning derivation of the composite image on basis of the first and second images. In an exemplifying photographed scene a person that is the primary object is standing on a hill with a city skyline forming a (further) background for the photographed scene. Moreover, there are two other persons moving around the hill in the background. Figure 3a illustrates an exemplifying first image, where the primary object is depicted as the tall human figure (that is close to the image capturing means 110) and where the secondary objects are visible as smaller human figures (further away from the image capturing means 110), while the hill and the city skyline form respective parts of the background of the depicted scene. In the exemplifying second image illustrated in Figure 3b the primary object appears slightly blurred (due to small movement during the long exposure time), while the secondary objects are not depicted at all due to the motion blur effect described in the foregoing, whereas the hill and the city skyline are depicted in the background. Figure 3c illustrates an exemplifying composite image derived (according to an approach described in the foregoing) by extracting the primary image portions from the first image of Figure 3a and background portions from the second image of Figure 3b, thereby providing a sharp image of the primary object while removing all or substantially all traces of the secondary objects that are visible in the first image of Figure 3a.
According to an example embodiment, the image processing means 130 may be further arranged to obtain one or more pre-captured further images that each depict at least part of the same scene as depicted in the first and second images and to derive the composite image further based at least in part on the obtained one or more further images. In particular, each of the one or more further images depicts at least part of the same scene as depicted in the first and second images, but without any primary or secondary objects. In other words, each of the further images depicts at least part of the background (objects) of the scene that is depicted in the first and second images. This enables using the further image(s) or one or more parts thereof to replace or complement at least some of the background portions extracted from the second image for derivation of the composite image. While it is possible (or even likely) that a further image does not provide a field of view fully matching that of the first and/or second images, in a case where the second image nevertheless depicts one or more secondary objects (such as secondary objects that remain stationary or substantially stationary during the time period of capturing the second image) or traces thereof, using a further image as the basis of some parts or portions of the composite image may be beneficial or even necessary.
In context of the example provided in Figures 3a to 3c, an example of such a case could be a scenario where one or both of the secondary objects visible in the first image of Figure 3a would be stationary or substantially stationary during the long exposure time applied for capturing the second image, thereby leaving the respective person partly or completely visible also in the second image. Consequently, an exemplifying further image illustrated in Figure 4 or a further image that can be processed into one depicting at least part of the image content shown in Figure 4 would enable derivation of the composite image of improved quality (in terms of avoiding secondary objects in the resulting composite image).
The one or more further images that may be used for derivation of the composite image comprise pre-captured images that may be stored in the electronic device 100, 200 (e.g. in the memory 115 or a mass storage device that is provided as part of the electronic device 100, 200) or in another device that is accessible by the electronic device 100, 200. As an example in this regard, the one or more further images may comprise one or more images that are previously captured by using the image capturing means 110 of the electronic device 100, 200. As another example, the one or more further images may comprise one or more images that are previously captured using another device and transferred to the electronic device 100, 200 for subsequent use by the image processing means 130.
As a further example, the electronic device 100, 200 may access an image database in a server device to obtain the one or more further images in preparation of or upon derivation of the composite image. As an example in this regard, Figure 5 schematically illustrates an electronic device 300 (as a variation of the electronic device 200), further comprising the communication means 150 and sensor means 160. Figure 5 further illustrates a server device 170 arranged to store and access an image database 172 that includes images that may serve as the further images (as described in the foregoing) and to obtain images that may be relevant for deriving the composite image on basis of the first and second images. The images of the database 172 are stored with metadata that includes e.g. location information and orientation information that jointly specify the field of view provided in the respective image.
The electronic device 300 may use the communication means 150 to access the server device 170 to obtain images that may be relevant for deriving the composite image on basis of the first and second images. As a particular example, the electronic device 300 (e.g. the image processing means 130 via the communication means 150) may send to the server device 170 a request including an indication of its location and possibly also an indication of its orientation, whereas the server device 170, upon receiving the request, consults the image database 172 in order to identify suitable further images that match the indicated location (and possibly also the indicated orientation) and, in response to identifying one or more suitable images, sends one or more responses including the identified one or more further images back to the electronic device 300 for use in derivation of the composite image. The electronic device 300 may employ the sensor means 160 to extract its location and its orientation upon capturing the first and second images. In this regard, the sensor means 160 may comprise a positioning means, e.g. a satellite positioning means such as a GPS receiver or a Glonass receiver and/or a mobile positioning means, or the sensor means 160 may be arranged to use another positioning technique known in the art.
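By way of a non-limiting illustration, the following Python sketch shows one way the request/response exchange between the electronic device 300 and the server device 170 could be realized over HTTP. The endpoint path, the JSON field names and the use of the third-party requests library are illustrative assumptions; the actual interface between the devices is not specified by the description above.

    import requests  # third-party HTTP client, used here purely for illustration

    def fetch_further_images(server_url, latitude, longitude, bearing_deg, max_results=3):
        """Ask an image-database server for pre-captured background images whose
        stored location/orientation metadata matches the indicated capture position.

        The endpoint path and JSON field names are illustrative assumptions; the
        actual server interface is not prescribed by the arrangement described above.
        """
        request_body = {
            "location": {"lat": latitude, "lon": longitude},
            "orientation": {"bearing_deg": bearing_deg},
            "max_results": max_results,
        }
        response = requests.post(f"{server_url}/match-images", json=request_body, timeout=5.0)
        response.raise_for_status()
        # Assume the server answers with a list of image URLs it judged to match the
        # indicated field of view; download each one for local processing.
        return [requests.get(url, timeout=5.0).content for url in response.json()["image_urls"]]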
Image alignment processing similar to that described in the foregoing for alignment of the fields of view of the first and second images may be applied to align the field of view of at least some of the one or more further images applied in derivation of the composite image to those of the first and/or second images. Since a further image may only provide part of the field of view of the first/second image, the resulting aligned further image may only represent the (respective) part of the field of view of the first and/or second images and may hence only be useable for derivation of the respective parts of the composite image. Along similar lines, image enhancement processing similar to that described in the foregoing for enhancement of the first and/or second images may be applied to at least some of the one or more further images applied in derivation of the composite image.
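As a non-limiting illustration of such alignment processing, the following Python sketch (assuming the OpenCV and NumPy libraries) aligns a further image to the pixel grid of the reference (first or second) image using feature matching and a homography, and also returns a coverage mask for the case where the further image provides only part of the field of view. The particular algorithm, ORB features with a RANSAC homography, is an assumption; the arrangement described above does not prescribe a specific registration technique.

    import cv2
    import numpy as np

    def align_further_image(further_bgr, reference_bgr, min_matches=10):
        """Warp a further image into the pixel grid of the reference image and return
        the aligned image plus a validity mask marking pixels actually covered by it."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(cv2.cvtColor(further_bgr, cv2.COLOR_BGR2GRAY), None)
        k2, d2 = orb.detectAndCompute(cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY), None)
        if d1 is None or d2 is None:
            return None, None
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        if len(matches) < min_matches:
            return None, None  # not enough overlap to align reliably
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = reference_bgr.shape[:2]
        aligned = cv2.warpPerspective(further_bgr, H, (w, h))
        coverage = cv2.warpPerspective(np.full(further_bgr.shape[:2], 255, np.uint8), H, (w, h))
        return aligned, coverage > 0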
The image alignment processing and/or the image enhancement processing may be applied e.g. by the image processing means 130 prior to or in the course of derivation of the composite image. In case the image alignment processing and/or the image enhancement processing is applied to a further image, the derivation of the composite image relies on the aligned and/or enhanced version of the respective further image.
The one or more further images may be employed in derivation of the composite image in a number of different ways. As an example, the image processing means 130 may be configured to replace any background portion derivable from the second image with a spatially corresponding image portion of one of the further images if the respective portion is available in one of the further images. Consequently, at least some of the secondary objects that may be present in the second image are automatically replaced in the composite image with corresponding image content from one of the further images.
As another example, the image processing means 130 may be arranged to replace one or more user-selected background portions derivable from the second image with a spatially corresponding image portion of one of the further images if the respective image portion is available in one of the further images. As an example in this regard, the second image may be displayed to a user via the user interface (e.g. in a display or in a touchscreen) and the user is provided with a user interface mechanism that enables indicating one or more non-primary image portions in the displayed image (e.g. by further employing a suitable pointing device such as a touchscreen or a touchpad). The user-indicated non-primary image portions may either indicate the background portions to be taken from the second image (e.g. non-primary image portions that depict the background of the scene) or the background portions to be taken from spatially corresponding image portions in one of the further images (e.g. non-primary image portions that depict a secondary object in the depicted scene). Consequently, the image processing means 130 is arranged to extract the respective background portions either from the second image or from one of the further images in accordance with the indication received via the user interface.
As a further example, the image processing means 130 may be arranged to make use of the second image as much as possible in derivation of the composite image to ensure background portions having image characteristics as close as possible to those of the first image. As an example in this regard, the image processing means 130 may be arranged to compare each background portion derivable from the second image with spatially corresponding image portions available in the further images and replace a certain background portion derivable from the second image with a spatially corresponding image portion from a further image only in response to finding the image content provided in this part of a further image different from the respective part of the second image. Consequently, only those portions of the composite image that in the second image are found to represent an object that does not match the background of the scene as illustrated in the further image(s) are taken from one of the further images, thereby favoring the image content available in the second image whenever found feasible.
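A minimal sketch of the comparison-based strategy described above could look as follows (Python with NumPy), assuming an integer label map from the segmentation of the first image and a further image already aligned to the second image (cf. the alignment sketch above) together with its coverage mask; the portion-wise difference measure and its threshold are illustrative assumptions rather than prescribed values.

    import numpy as np

    def compose_background(second_img, further_img, further_valid, portion_labels,
                           non_primary_ids, diff_threshold=12.0):
        """Build the background of the composite image, favouring the second image.

        A non-primary portion is taken from the aligned further image only where the
        second image clearly differs from it (e.g. a stationary secondary object that
        remained visible despite the long exposure)."""
        background = second_img.astype(np.float32).copy()
        for label in non_primary_ids:
            portion = (portion_labels == label) & further_valid
            if not portion.any():
                continue  # no spatially corresponding content in the further image
            diff = np.abs(second_img[portion].astype(np.float32)
                          - further_img[portion].astype(np.float32)).mean()
            if diff > diff_threshold:
                background[portion] = further_img[portion]
        return background.astype(np.uint8)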
According to an example embodiment, the image processing means 130 may be arranged to provide the composite image as an output image of the image acquisition arrangement. Consequently, the image processing means 130 may be arranged to store the composite image in the memory 115 or in a mass storage device for subsequent viewing by a user and/or for subsequent image processing by the electronic device 100, 200, 300 or by another electronic device. This enables capturing images where any secondary objects are excluded from the output image, thereby enabling e.g. an application where the primary object is photographed in a crowded place while still omitting any possible 'disturbing' secondary objects.
According to an example embodiment, the image processing means 130 may be arranged to provide the composite image for display via the user interface of the electronic device 100, 200, 300. As an example, this may involve displaying the composite image e.g. in a display of a mobile phone or in a viewfinder of a digital camera, which enables a user of the electronic device 100, 200, 300 to view the to-be-photographed scene without the secondary objects. In certain scenarios, e.g. in a crowded place, this is likely to provide the user with improved consideration of the underlying geometry of the photographed scene. In this scenario, the output image of the image acquisition arrangement may be the composite image (e.g. as displayed in the viewfinder upon capturing the image) or the output image may be, nevertheless, the first image that also includes the secondary objects.
In the foregoing, the examples concerning the structure and operation of the image acquisition arrangement are implicitly described in the context of (digital) still images. However, the operation of the image acquisition arrangement generalizes into capturing, acquisition and processing of image sequences that constitute respective video sequences. In this regard, the image acquisition means 120 may be arranged to obtain a sequence of image pairs including the first image and the respective second image, whereas the image processing means 130 may be arranged to process a plurality of image pairs, one after another, to derive a corresponding sequence of composite images. In this regard, the image acquisition means 120 may be arranged to cause the image capturing means 110 to capture a sequence of first images (i.e. a first sequence) and a sequence of second images (i.e. a second sequence) and to obtain the image pairs such that each image pair includes a first image taken from the first sequence and a second image taken from the second sequence such that their characteristics (at least) with respect to exposure times and times of capture in relation to each other follow those described for the first and second images in the foregoing.
According to an example embodiment, the image acquisition means 120 may be arranged to periodically cause the image capturing means 110 to capture a first image using the short exposure time and a second image using the long exposure time, thereby obtaining a first image sequence comprising images captured using the short exposure time and a corresponding second image sequence comprising images captured using the long exposure time. The image processing means 130 may be arranged to process the first and second image sequences such that each pair of a first image (of the first image sequence) and a temporally corresponding second image (of the second image sequence) is used to derive a respective composite image, thereby providing a sequence of composite images.
According to another example embodiment, the image acquisition means 120 may be arranged to periodically cause the image capturing means 110 to capture images using the short exposure time, thereby obtaining a sequence of images (that may constitute a video sequence), whereas the image processing means 130 may be arranged to derive a sequence of composite images based on first and second images obtained from this image sequence. In this regard, each image of this sequence (or e.g. every Nth image of the sequence) may serve as the first image, whereas the respective second image may be derived on basis of a plurality of images of the sequence, e.g. as a linear combination of a plurality of the images of the sequence, as described in the foregoing. Each pair of first and second images so obtained may be used, by the image processing means 130, to derive a respective composite image, thereby providing a sequence of composite images. In an exemplifying use case the electronic device 100, 200, 300 arranged to provide the image acquisition arrangement for obtaining video sequences is installed or otherwise operated in a vehicle to capture images depicting the view outside the vehicle, e.g. in front of the vehicle. In such a use case the electronic device 100, 200, 300 may be provided as part of an information system or a navigation system in a vehicle or as a device that is operatively coupled to the information/navigation system in the vehicle. This enables the image acquisition arrangement to receive further information from other components of the information/navigation system in the vehicle and/or a corresponding system in one or more other vehicles and to provide the derived sequence of composite images for further use by the information/navigation system in the vehicle. As an example, the derived sequence of composite images may be provided for display in a display device within the vehicle, e.g. a display screen in the dashboard of the vehicle or a head-up display (HUD) projected e.g. on the windscreen of the vehicle in front of the driver of the vehicle.
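As a non-limiting illustration of deriving the second image from a single short-exposure sequence as discussed above, the following Python sketch pairs every Nth frame (serving as the first image) with a simple linear combination, here the mean, of the surrounding frames (serving as the second image). The window and step values are illustrative assumptions only.

    import numpy as np

    def composite_pairs_from_sequence(frames, window=8, step=4):
        """Yield (first_image, second_image) pairs from a single short-exposure
        sequence: every `step`-th frame serves as the first image, and the second
        image is approximated as the mean of the `window` frames surrounding it,
        emulating a long exposure."""
        for i in range(0, len(frames), step):
            first = frames[i]
            lo, hi = max(0, i - window // 2), min(len(frames), i + window // 2 + 1)
            stack = np.stack([f.astype(np.float32) for f in frames[lo:hi]])
            second = stack.mean(axis=0).astype(frames[i].dtype)
            yield first, second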
In such use, the sequence of composite images may be derived such that it delivers a processed view of the traffic in front of the vehicle by considering other vehicles that are close (or closest) to the vehicle hosting the image acquisition arrangement as the primary objects while the other objects in the field of view are regarded as non-primary objects (e.g. as secondary objects and/or background objects). This enables live tracking of the position of the vehicle on the road with respect to the most relevant other vehicles while omitting at least part of the background and/or at least some of the other vehicles that are further away and that could appear as distracting elements in the sequence of first images. Additionally or alternatively, this may enable displaying information that would be otherwise blocked from the view of the driver by one of the so-called A pillars of the vehicle and/or by other structures of the vehicle.
As an example in this regard, Figure 6a illustrates an exemplifying first image 500a that includes image portions representing vehicles 510 that may be considered as primary objects of the depicted scene and image portions representing vehicles 520 that may be considered secondary objects of the depicted scene. While in this example the primary objects include the vehicles 510 that are closest to the image capturing means applied to capture the first image 500a, this is not necessarily the case, as described in the following. Figure 6b illustrates an exemplifying composite image 500b derived on basis of the first image 500a (and on basis of a corresponding second image), where only the image portions representing the vehicles 510 (i.e. the primary objects) are included. It should be noted that Figures 6a and 6b provide schematic illustrations of the respective images 500a and 500b, omitting any possible further secondary objects and/or background objects for improved graphical clarity. In the following, some examples for identification of primary image portions in the first image for derivation of the composite image are described by using a single first image (together with the respective second image) as an example. However, the description generalizes into processing, one by one, a plurality of images that constitute a video sequence or another sequence of images.
In this regard, determination of the primary image portions (and determination of the primary objects) in a given first image may be automated such that any object that is within a predefined distance from the image capturing means 110 (or from another reference point in the vehicle) upon capturing the first image is considered as a primary object and image portions of the given first image depicting those objects are designated as primary image portions for determination of the composite image. As an example, the predefined distance may be a value selected from a range from a few meters to a few tens of meters.
As an example of designation of primary image portions on basis of the distance, the image portions that depict other vehicles (or other objects) that are within the predefined distance from the vehicle hosting the image acquisition arrangement may be identified by finding image portions of the respective first image that, according to the disparity information (e.g. the depth map) pertaining to the respective segments of the first image, appear in the first image closer than the predefined distance.
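A minimal sketch of such distance-based designation, assuming a per-pixel depth map (in metres) derived from the disparity information and an integer label map of the segmented first image, could be the following (Python with NumPy); the 20 m default is only an example value from the range discussed above.

    import numpy as np

    def designate_primary_portions(portion_labels, depth_map_m, max_distance_m=20.0):
        """Designate as primary every image portion whose median depth falls within
        the predefined distance from the camera; all other portions are non-primary."""
        primary_ids, non_primary_ids = [], []
        for label in np.unique(portion_labels):
            median_depth = np.median(depth_map_m[portion_labels == label])
            (primary_ids if median_depth <= max_distance_m else non_primary_ids).append(int(label))
        return primary_ids, non_primary_ids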
As another example regarding designation of the primary image portions, the image processing means 130 may be arranged to carry out the following in order to identify primary image portions in the first image: - receive, for one or more objects depicted in the first image (e.g. the vehicles 510 and 520), a respective distance indication that indicates the distance between the vehicle hosting the image acquisition arrangement and the respective other vehicle.
- identify, at least for those objects that are within the predefined distance from the apparatus, respective one or more image portions of the first image that depict the respective object, and
- designate the image portions so identified as primary image portions for determination of the composite image (whereas some or all other image portions may be considered or designated as non-primary image portions).
As an example, the distance indication may be received in a message that is received, via a wireless link, from an information/navigation system of the respective other vehicle. As another example, the distance indication may be derived, e.g. by the image processing means 130, on basis of position information received from the information/navigation system of the respective other vehicle. In the latter example, the message may carry a remote position indication that indicates the current (real-world) position of the respective other vehicle. Additionally, the image processing means 130 may receive (e.g. from a local positioning means) a local position indication that indicates the current (real-world) position of the vehicle hosting the image acquisition arrangement. The image processing means 130 may further compute or estimate the distance between the two vehicles by using the remote and local position indications together with a priori knowledge about the position and orientation of the image capturing means 110 with respect to the vehicle hosting the image acquisition arrangement. The distance so computed or estimated may be applied as the distance indication useable for identification of the primary image portions in the first image. Herein, the remote and local position indications may include e.g. the GPS coordinates that define the respective (real-world) position.
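As a non-limiting illustration, the distance between the two vehicles could be estimated from the remote and local position indications with a great-circle (haversine) computation as sketched below in Python; the further correction for the position and orientation of the image capturing means 110 with respect to the host vehicle is omitted here for brevity.

    import math

    def distance_between_positions(local_lat, local_lon, remote_lat, remote_lon):
        """Estimate the distance in metres between the host vehicle and another vehicle
        from their reported GPS coordinates (haversine great-circle approximation)."""
        r_earth = 6371000.0  # mean Earth radius in metres
        phi1, phi2 = math.radians(local_lat), math.radians(remote_lat)
        d_phi = math.radians(remote_lat - local_lat)
        d_lambda = math.radians(remote_lon - local_lon)
        a = math.sin(d_phi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2
        return 2 * r_earth * math.asin(math.sqrt(a))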
In addition to or instead of the other vehicles that are estimated to be within the predefined distance, other vehicles that are likely to move closer to the vehicle or towards the same (intermediate) target location as the vehicle hosting the image acquisition arrangement may be considered as primary objects, and the image portions of the first image that depict these vehicles may hence be designated as primary image portions of the first image for determination of the composite image. As an example in this regard, the image processing means 130 may be arranged to carry out the following in order to identify primary image portions in the first image:
- receive, for one or more other vehicles, a remote direction indication that indicates a destination or direction associated with the respective other vehicle,
- receive a local direction indication that indicates a destination or direction associated with the vehicle hosting the image acquisition arrangement,
- identify, at least for those other vehicles that share the destination or direction with the vehicle hosting the image acquisition arrangement, respective one or more image portions that depict the respective other vehicle, and
- designate the image portions of the first image so identified as primary image portions (whereas some or all other image portions may be considered or designated as non-primary image portions).
Such direction indications may be provided as or derived from navigation information obtained from the information/navigation system: the remote direction indication may be received, via a wireless link, from the information/navigation system of the respective other vehicle, whereas the local direction indication may be received from the information/navigation system in the vehicle hosting the image acquisition arrangement. The navigation information may include information concerning an expected upcoming turn or other change of direction of the respective vehicle. As an example, the navigation information may identify a number of a road, a name of a street or an exit of a highway which serves as an (intermediate) destination of the respective vehicle.
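A minimal sketch of matching the remote and local direction indications by their (intermediate) destination, assuming a simple dictionary representation of the navigation information, could be the following; the dictionary layout and field names are assumptions for illustration only.

    def shares_destination(local_direction, remote_directions):
        """Return the identifiers of other vehicles whose navigation information names
        the same (intermediate) destination as the host vehicle, e.g. the same road
        number, street name or highway exit."""
        local_dest = local_direction.get("destination", "").strip().lower()
        return [vehicle_id for vehicle_id, remote in remote_directions.items()
                if local_dest and remote.get("destination", "").strip().lower() == local_dest]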
Moreover, the image acquisition arrangement (e.g. the image processing means 130) may obtain the local direction information from the information/navigation system and augment the composite image with additional information that reflects the received direction information. As an example in this regard, the image processing means 130 may be arranged to obtain the local direction information and the current location of the vehicle with respect to the road, and use these pieces of information to derive a lane changing routine e.g. to enable taking a turn or an exit that will lead the vehicle towards the final destination defined for the navigation system. Moreover, the image processing means 130 may be further arranged to augment the composite image with a visual cue that indicates the derived lane changing routine with respect to the other vehicles depicted in the composite image. The visual cue may comprise an arrow or a series of arrows that indicate the required lane changes with respect to the depicted other vehicles. As an example, Figure 7 schematically illustrates an augmented composite image 500c, where such a visual cue is introduced to the composite image 500b of Figure 6b. Consequently, the driver may be provided with an augmented video sequence that depicts only the relevant information together with visual cue(s) that provide direction information while omitting less important information that may be distracting to the driver.
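As a non-limiting illustration of such augmentation, the following Python sketch (assuming the OpenCV library) overlays a series of arrows along waypoints supplied by the lane-change derivation (cf. the augmented image 500c of Figure 7); the waypoint representation is a hypothetical input introduced only for this example.

    import cv2

    def augment_with_lane_change_cue(composite_bgr, lane_change_points):
        """Overlay arrows indicating the derived lane-changing routine on a copy of the
        composite image. `lane_change_points` is an assumed list of integer (x, y) pixel
        waypoints; consecutive waypoints are joined by arrows."""
        augmented = composite_bgr.copy()
        for start, end in zip(lane_change_points, lane_change_points[1:]):
            cv2.arrowedLine(augmented, start, end, color=(0, 255, 0),
                            thickness=4, tipLength=0.3)
        return augmented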
While in the foregoing the operation of the image acquisition arrangement is described in context of the electronic device 100, 200, 300, the operation of the image acquisition arrangement may be also described as steps of a method. As an example in this regard, Figure 8 illustrates a method 400 according to an example embodiment. The method 400 may be carried out by the electronic device 100, 200, 300 or by another electronic device.
The method 400 comprises obtaining at least a first image depicting a scene and a second image depicting said scene. In particular, obtaining the first and second images comprises obtaining said first image using a first exposure time, as indicated in block 410, and obtaining said second image using a second exposure time that is longer than said first exposure time, as indicated in block 420. Obtaining of the first image using the first (short) exposure time and the second image using the second (long) exposure time is described in the foregoing in context of the image capturing means 110 and the image acquisition means 120 using a number of examples. These examples and variations thereof are equally applicable in context of the method 400.
The method 400 further comprises deriving a composition image depicting said scene based at least in part on said first image and said second image, as indicated in block 430. Derivation of the composition image is described in the foregoing in context of (the image acquisition means 120 and) the image processing means 130 using a number of examples. These examples and variations thereof are equally applicable in context of the method 400.
Although described in the foregoing with references to the image acquisition means 120 controlling the image capturing means 110 provided in the electronic device 100, 200 to capture the first and second images, the image acquisition means 120 may be arranged to read pre-captured first and second images from the memory 115 instead. Also in this scenario the first and second images may have been captured using the image capturing means 110 or, alternatively, the pre-captured images may originate from another device, received via the communication means and stored in the memory 115 for subsequent determination of the composition image by the image processing means 130. Therefore, in some example embodiments the image acquisition arrangement may operate on basis of pre-captured first and second images, and in such embodiments the electronic device 100, 200, 300 may be provided without the image capturing means 110. Similarly, in context of the method 400 (or its variations) the processing steps indicated in blocks 410, 420 may constitute reading pre-captured images from the memory 115 instead of operating the image capturing means 110 to capture the images.
Referring back to Figures 2 and 5, the processor 216 is configured to read from and write to the memory 215. Although the processor 216 is depicted as a single component, the processor 216 may be implemented as one or more separate components. Similarly, although the memory 215 is illustrated as a single component, the memory 215 may be implemented as one or more separate components, some or all of which may be integrated/removable and/or may provide permanent / semi-permanent/ dynamic/cached storage.
The memory 215 may store the computer program comprising computer-executable instructions that control the operation of the electronic device 200, 300 when loaded into the processor 216. As an example, the computer program may include one or more sequences of one or more instructions. The computer program may be provided as a computer program code. The processor 216 is able to load and execute the computer program by reading the one or more sequences of one or more instructions included therein from the memory 215. The one or more sequences of one or more instructions may be configured to, when executed by the processor 216, cause the electronic device 200, 300 to carry out operations, procedures and/or functions described in the foregoing in context of the image acquisition arrangement. Hence, the electronic device 200, 300 may comprise at least one processor 216 and at least one memory 215 including computer program code for one or more programs, the at least one memory 215 and the computer program code configured to, with the at least one processor 216, cause the electronic device 200, 300 to perform operations, procedures and/or functions described in the foregoing in context of the image acquisition arrangement.
The computer program may be comprised in a computer program product. According to an example embodiment, the computer program product may comprise a non-transitory computer-readable medium. Thus, the computer program product may be provided e.g. as a computer program product that comprises at least one computer-readable non-transitory medium having program code stored thereon, the program code, when executed in the electronic device 200, 300, causing the apparatus at least to perform operations, procedures and/or functions described in the foregoing in context of the image acquisition arrangement. The computer-readable non-transitory medium may comprise a memory device or a record medium such as a CD-ROM, a DVD, a Blu-ray disc or another article of manufacture that tangibly embodies the computer program. As another example, the computer program may be provided as a signal configured to reliably transfer the computer program.
Reference(s) to a processor should not be understood to encompass only programmable processors, but also dedicated circuits such as field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), signal processors, etc.

Features described in the preceding description may be used in combinations other than the combinations explicitly described. Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not. Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.

Claims (9)

  1. An apparatus, comprising: image acquisition means for obtaining at least a first image depicting a scene and a second image depicting said scene, said image acquisition means configured to obtain said first image using a first exposure time, and obtain said second image using a second exposure time that is longer than said first exposure time; and image processing means for deriving a composition image depicting said scene based at least in part on said first image and said second image.
  2. An apparatus according to claim 1, further comprising an image capturing means for capturing image data, wherein the image acquisition means is configured to obtain said first and second images using said image capturing means.
  3. An apparatus according to claim 2, wherein said image capturing means comprises a first imaging system for capturing said first image, and a second imaging system for capturing said second image; and wherein said image acquisition means is configured to control said image capturing means to capture said first image and said second image substantially simultaneously.
  4. An apparatus according to claim 2, wherein said image acquisition means is configured to control said image capturing means to intermittently capture said first image and said second image.
  5. An apparatus according to claim 2, wherein said image acquisition means is configured to capture a sequence of images using said first exposure time and to use one image of said sequence as the first image, and derive said second image as a combination of a plurality of images of said sequence.
  6. An apparatus according to any of claims 1 to 5, wherein said image processing means is configured to segment the first image into a plurality of non-overlapping image portions; designate one or more of said image portions as primary image portions and designate the remaining image portions as non-primary image portions; and derive said composition image by combining said one or more primary image portions with one or more background portions, wherein each background portion comprises an image portion of the second image that spatially coincides with a respective non-primary image portion of the first image.
  7. An apparatus according to any of claims 1 to 5, wherein said image acquisition means is configured to obtain one or more pre-captured further images depicting at least part of said scene, and wherein said image processing means is configured to segment the first image into a plurality of non-overlapping image portions; designate one or more of said image portions as primary image portions and designate the remaining image portions as non-primary image portions; and derive said composition image by combining said one or more primary image portions with one or more background portions, wherein each background portion comprises one of the following: an image portion of the second image that spatially coincides with a respective non-primary image portion of the first image, and an image portion of one of the further images that spatially coincides with a respective non-primary image portion of the first image.
  8. An apparatus according to claim 6 or 7, wherein said designating comprises one of the following: designating one or more image portions that, according to disparity information pertaining to the first image, are the ones closest to the image plane as primary image portions, designating one or more image portions that represent image content to which an image capturing means employed to capture the first image was focused upon capturing the first image as primary image portions; and receiving a user selection that indicates one or more primary image portions in the first image.
  9. An apparatus according to claim 6, wherein said designating comprises designating one or more image portions depicting one or more objects in said scene that are within a predefined distance from the image capturing means as primary image portions.
  10. An apparatus according to claim 9, wherein said designating further comprises receiving, for one or more objects in said scene, a respective distance indication that indicates the distance between the apparatus and the respective object; identifying, for those objects that are within the predefined distance from the apparatus, respective one or more image portions that depict the respective object; and designating the identified image portions as primary image portions.
  11. An apparatus according to claim 6, 9 or 10, wherein said designating comprises receiving, for one or more objects in said scene, a remote direction indication that indicates a destination associated with the respective object; receiving a local direction indication that indicates a destination associated with the apparatus; identifying, for those objects for which the respective remote direction indication indicates the same destination as the local direction indication, respective one or more image portions that depict the respective object; and designating the identified image portions as primary image portions.
  12. An apparatus according to any of claims 1 to 8, wherein said image processing means is further configured to provide said composition image for display in a display means serving as a viewfinder of the apparatus.
  13. An apparatus according to claim 12, wherein said image processing means is further configured to carry out at least one of the following: provide said first image as an output image; and provide said composition image as an output image.
  14. An apparatus, comprising: at least one processor and a memory storing a program of instructions, wherein the memory storing the program of instructions is configured to, with the at least one processor, configure the apparatus to at least: obtain at least a first image depicting a scene and a second image depicting said scene, wherein the image capturing portion is caused to obtain said first image using a first exposure time, and obtain said second image using a second exposure time that is longer than said first exposure time; and derive a composition image depicting said scene based at least in part on said first image and said second image.
  15. An apparatus according to claim 14, wherein the apparatus further comprises an image capturing portion comprising one or more imaging systems for capturing image data, and the image acquisition means is configured to obtain said first and second images using said image capturing portion.
  16. An apparatus according to claim 15, wherein said image capturing portion comprises a first imaging system for capturing said first image, and a second imaging system for capturing said second image; and wherein said apparatus is caused to control said image capturing portion to capture said first image and said second image substantially simultaneously.
  17. An apparatus according to claim 15, wherein said apparatus is caused to control said image capturing portion to intermittently capture said first image and said second image.
  18. An apparatus according to claim 15, wherein said apparatus is caused to capture a sequence of images using said first exposure time and to use one image of said sequence as the first image, and derive said second image as a combination of a plurality of images of said sequence.
  19. An apparatus according to any of claims 14 to 18, wherein said apparatus is caused to segment the first image into a plurality of non-overlapping image portions; designate one or more of said image portions as primary image portions and designate the remaining image portions as non-primary image portions; and derive said composition image by combining said one or more primary image portions with one or more background portions, wherein each background portion comprises an image portion of the second image that spatially coincides with a respective non-primary image portion of the first image.
  20. An apparatus according to any of claims 14 to 18, wherein said apparatus is caused to obtain one or more pre-captured further images depicting at least part of said scene; segment the first image into a plurality of non-overlapping image portions; designate one or more of said image portions as primary image portions and designate the remaining image portions as non-primary image portions; and derive said composition image by combining said one or more primary image portions with one or more background portions, wherein each background portion comprises one of the following: an image portion of the second image that spatially coincides with a respective non-primary image portion of the first image, and an image portion of one of the further images that spatially coincides with a respective non-primary image portion of the first image.
  21. An apparatus according to claim 19 or 20, wherein said designating comprises one of the following: designating the image portions that represent image content to which the image capturing portion employed to capture the first image was focused upon capturing the first image as primary image portions; and receiving a user selection that indicates one or more primary image portions in the first image.
  22. An apparatus according to claim 19, wherein said designating comprises designating one or more image portions depicting one or more objects in said scene that are within a predefined distance from the image capturing means as primary image portions.
  23. An apparatus according to claim 22, wherein said designating further comprises receiving, for one or more objects in said scene, a respective distance indication that indicates the distance between the apparatus and the respective object; identifying, for those objects that are within the predefined distance from the apparatus, respective one or more image portions that depict the respective object; and designating the identified image portions as primary image portions.
  24. An apparatus according to claim 19, 22 or 23, wherein said designating comprises receiving, for one or more objects in said scene, a remote direction indication that indicates a destination associated with the respective object; receiving a local direction indication that indicates a destination associated with the apparatus; identifying, for those objects for which the respective remote direction indication indicates the same destination as the local direction indication, respective one or more image portions that depict the respective object; and designating the identified image portions as primary image portions.
  25. An apparatus according to any of claims 14 to 21, wherein said apparatus is caused to provide said composition image for display in a display apparatus serving as a viewfinder of the apparatus.
  26. An apparatus according to claim 25, wherein said apparatus is caused to carry out at least one of the following: provide said first image as an output image; and provide said composition image as an output image.
  27. A method comprising obtaining at least a first image depicting a scene and a second image depicting said scene, comprising obtaining said first image using a first exposure time, and obtaining said second image using a second exposure time that is longer than said first exposure time; and deriving a composition image depicting said scene based at least in part on said first image and said second image.
  28. A method according to claim 27, wherein obtaining the first and second images comprises capturing said first image using a first imaging system, and capturing said second image using a second imaging system; and wherein said first and second images are captured substantially simultaneously.
  29. A method according to claim 27, wherein obtaining the first and second images comprises intermittently capturing said first and second images.
  30. A method according to claim 27, wherein obtaining the first and second images comprises capturing a sequence of images using said first exposure time and to use one image of said sequence as the first image, and derive said second image as a combination of a plurality of images of said sequence.
  31. A method according to any of claims 27 to 30, wherein deriving the composite image comprises segmenting the first image into a plurality of non-overlapping image portions; designating one or more of said image portions as primary image portions and designating the remaining image portions as non-primary image portions; and deriving said composition image by combining said one or more primary image portions with one or more background portions, wherein each background portion comprises an image portion of the second image that spatially coincides with a respective non-primary image portion of the first image.
  32. A method according to any of claims 27 to 30, further comprising obtaining one or more pre-captured further images depicting at least part of said scene, wherein deriving the composite image comprises segmenting the first image into a plurality of non-overlapping image portions; designating one or more of said image portions as primary image portions and designating the remaining image portions as non-primary image portions; and deriving said composition image by combining said one or more primary image portions with one or more background portions, wherein each background portion comprises one of the following: an image portion of the second image that spatially coincides with a respective non-primary image portion of the first image, and an image portion of one of the further images that spatially coincides with a respective non-primary image portion of the first image.
  33. A method according to claim 31 or 32, wherein said designating comprises one of the following: designating the image portions that represent image content to which an image capturing apparatus employed to capture the first image was focused upon capturing the first image as primary image portions; and receiving a user selection that indicates one or more primary image portions in the first image.
  34. A method according to claim 31, wherein said designating comprises designating one or more image portions depicting one or more objects in said scene that are within a predefined distance from the image capturing means as primary image portions.
  35. A method according to claim 34, wherein said designating further comprises receiving, for one or more objects in said scene, a respective distance indication that indicates the distance between the apparatus and the respective object; identifying, for those objects that are within the predefined distance from the apparatus, respective one or more image portions that depict the respective object; and designating the identified image portions as primary image portions.
  36. A method according to claim 31, 34 or 35, wherein said designating comprises receiving, for one or more objects in said scene, a remote direction indication that indicates a destination associated with the respective object; receiving a local direction indication that indicates a destination associated with the apparatus; identifying, for those objects for which the respective remote direction indication indicates the same destination as the local direction indication, respective one or more image portions that depict the respective object; and designating the identified image portions as primary image portions.
  37. A method according to any of claims 27 to 33, further comprising providing said composition image for display in a display apparatus serving as a viewfinder of the apparatus.
  38. A method according to claim 37, further comprising at least one of the following: providing said first image as an output image; and providing said composition image as an output image.
  39. A computer program comprising computer readable program code configured to cause performing, when said program code is run on a computing apparatus, at least a method according to any of claims 27 to 38.
  40. A computer program product comprising at least one computer readable non-transitory medium having program code stored thereon, the program code, when executed by an apparatus, causing the apparatus at least to perform a method according to any of claims 27 to 38.
GB1507356.2A 2015-04-30 2015-04-30 An image acquisition technique Active GB2537886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1507356.2A GB2537886B (en) 2015-04-30 2015-04-30 An image acquisition technique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1507356.2A GB2537886B (en) 2015-04-30 2015-04-30 An image acquisition technique

Publications (3)

Publication Number Publication Date
GB201507356D0 GB201507356D0 (en) 2015-06-17
GB2537886A true GB2537886A (en) 2016-11-02
GB2537886B GB2537886B (en) 2022-01-05

Family

ID=53488896

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1507356.2A Active GB2537886B (en) 2015-04-30 2015-04-30 An image acquisition technique

Country Status (1)

Country Link
GB (1) GB2537886B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156366A (en) * 2017-11-30 2018-06-12 维沃移动通信有限公司 A kind of image capturing method and mobile device based on dual camera
CN109769087A (en) * 2017-11-09 2019-05-17 中兴通讯股份有限公司 Image pickup method, device and the mobile terminal remotely taken a group photo
WO2019143385A1 (en) * 2018-01-18 2019-07-25 Google Llc Systems and methods for removing non-stationary objects from imagery
WO2020054949A1 (en) 2018-09-11 2020-03-19 Samsung Electronics Co., Ltd. Electronic device and method for capturing view
WO2023014344A1 (en) * 2021-08-02 2023-02-09 Google Llc Exposure control for image-capture


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040207734A1 (en) * 1998-12-03 2004-10-21 Kazuhito Horiuchi Image processing apparatus for generating a wide dynamic range image
EP1933271A1 (en) * 2005-09-16 2008-06-18 Fujitsu Ltd. Image processing method and image processing device
US20100149350A1 (en) * 2008-12-12 2010-06-17 Sanyo Electric Co., Ltd. Image Sensing Apparatus And Image Sensing Method
EP2199975A1 (en) * 2008-12-16 2010-06-23 Samsung Electronics Co., Ltd. Apparatus and method for blending multiple images
US20110254976A1 (en) * 2009-04-23 2011-10-20 Haim Garten Multiple exposure high dynamic range image capture
EP2309457A1 (en) * 2009-09-18 2011-04-13 Sony Corporation Image processing apparatus, image capturing apparatus, image processing method, and program
US20110292243A1 (en) * 2010-05-31 2011-12-01 Sony Corporation Imaging processing apparatus, camera system, image processing method, and program
US20120044381A1 (en) * 2010-08-23 2012-02-23 Red.Com, Inc. High dynamic range video
GB2490231A (en) * 2011-04-20 2012-10-24 Csr Technology Inc Multiple exposure High Dynamic Range image capture
US20150130967A1 (en) * 2013-11-13 2015-05-14 Nvidia Corporation Adaptive dynamic range imaging

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109769087A (en) * 2017-11-09 2019-05-17 中兴通讯股份有限公司 Image pickup method, device and the mobile terminal remotely taken a group photo
CN108156366A (en) * 2017-11-30 2018-06-12 维沃移动通信有限公司 A kind of image capturing method and mobile device based on dual camera
WO2019143385A1 (en) * 2018-01-18 2019-07-25 Google Llc Systems and methods for removing non-stationary objects from imagery
US10482359B2 (en) 2018-01-18 2019-11-19 Google Llc Systems and methods for removing non-stationary objects from imagery
WO2020054949A1 (en) 2018-09-11 2020-03-19 Samsung Electronics Co., Ltd. Electronic device and method for capturing view
EP3808063A4 (en) * 2018-09-11 2021-11-17 Samsung Electronics Co., Ltd. Electronic device and method for capturing view
WO2023014344A1 (en) * 2021-08-02 2023-02-09 Google Llc Exposure control for image-capture

Also Published As

Publication number Publication date
GB201507356D0 (en) 2015-06-17
GB2537886B (en) 2022-01-05

Similar Documents

Publication Publication Date Title
US10389948B2 (en) Depth-based zoom function using multiple cameras
US11756223B2 (en) Depth-aware photo editing
US9918065B2 (en) Depth-assisted focus in multi-camera systems
US11210799B2 (en) Estimating depth using a single camera
CN107409166B (en) Automatic generation of panning shots
US11315274B2 (en) Depth determination for images captured with a moving camera and representing moving features
US10015469B2 (en) Image blur based on 3D depth information
US9544574B2 (en) Selecting camera pairs for stereoscopic imaging
US11102413B2 (en) Camera area locking
EP2768214A2 (en) Method of tracking object using camera and camera system for object tracking
CN107690673B (en) Image processing method and device and server
CN108335323B (en) Blurring method of image background and mobile terminal
KR102225617B1 (en) Method of setting algorithm for image registration
GB2537886A (en) An image acquisition technique
EP3704508B1 (en) Aperture supervision for single-view depth prediction
US20170171456A1 (en) Stereo Autofocus
US20230033956A1 (en) Estimating depth based on iris size
WO2015141185A1 (en) Imaging control device, imaging control method, and storage medium
US10425630B2 (en) Stereo imaging
WO2023191888A1 (en) Correlation-based object anti-spoofing for dual-pixel cameras

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20200820 AND 20200826