WO2015116065A1 - Image processing for an image capture device - Google Patents

Image processing for an image capture device

Info

Publication number
WO2015116065A1
WO2015116065A1 (PCT application PCT/US2014/013608)
Authority
WO
WIPO (PCT)
Prior art keywords
interest
area
image
sub
camera
Prior art date
Application number
PCT/US2014/013608
Other languages
French (fr)
Inventor
Tyler Jacob YOUNG
Jerry A. Young
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Priority to PCT/US2014/013608
Publication of WO2015116065A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635: Region indicators; Field of view indicators
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/958: Computational photography systems for extended depth of field imaging
    • H04N 23/959: Computational photography systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Definitions

  • FIG. 5 is a drawing of a mobile computing device 102 having a touch screen 112, wherein a second area of interest has been selected by the user and the depth of field imaging setting has been selected by the user.
  • the particular image displayed by the mobile computing device 102 in this figure is generally referred to by the reference number 500.
  • the user has selected a second area of interest 502, indicated on the touch screen 112 by another selection boundary.
  • this selection boundary may appear differently from the selection boundary around the other area of interest 202.
  • the area of interest may instead be indicated by being highlighted.
  • a menu 504 may appear once the user has indicated an area of interest.
  • the user may specify a camera setting for each selected area of interest by choosing a setting from the menu 504.
  • some imaging settings may use multiple areas of interest. For example, a user may select a depth of field imaging setting on a second area of interest 502. As shown in image 500, a menu 504 has appeared on the touch screen 112 asking the user to select from a variety of imaging settings.
  • the menu 504 includes a depth of field imaging setting 506.
  • the depth of field imaging setting may appear greyed out if only one area of interest has been selected. In some examples, the depth of field setting may not appear in menu 504 at all if only one area of interest is selected. In some examples, once an imaging setting has been selected, it is instantly applied and updated in the image. For example, a user may tap on focus and the autofocus mechanism will automatically be applied and tracked for that area of interest. In image 500, the depth of field imaging setting has been applied to areas of interest 202 and 502, making them appear in focus while the tree and mountains appear blurred.
  • the selection boundary around an area of interest may appear differently depending upon which imaging setting the user selects, so that the user can distinguish which imaging settings are being applied to specific areas of interest. For example, applying an imaging setting may change the color of the selection boundary. Additional areas of interest and imaging settings may then be added or removed.
  • the depth of field imaging setting has caused the tracking module to process both area of interest 202 and area of interest 502 and determine an appropriate depth of field and focus point so that both areas of interest are in focus.
  • the user may add another area of interest, such as a tree, and include the tree in the depth of field.
  • the user may decide to stop tracking area of interest 502 and add an area of interest over the mountains to track white balance.
  • Fig. 6 is a block diagram illustrating a method for tracking an area of interest 120 in an imaging device (a minimal control-flow sketch in Python follows this list).
  • the method is generally referred to by the reference number 600 and may begin at block 602.
  • the method may include receiving a first image from a camera and displaying the first image on a touch screen 112.
  • the image may be provided from camera 122.
  • the method may include receiving a user selection of a first area of interest within the image and identifying a first sub-image within the first area of interest.
  • the user may select an area of interest 120 using a stylus or finger on touch screen 112.
  • the method may include processing the image based on the first area of interest. In some examples, this may include processing the image based upon the sub-image within the area of interest 120. In some examples, this may include processing the image based on the entire area of interest 120.
  • the method may include moving the area of interest 120 relative to a field of view of the camera so that the sub-image 116 stays within the first area of interest 120.
  • this tracking may be performed by a boundary module 118 together with the processor 104 or GPU 108, which automatically move a boundary indicating the area of interest 120 to follow the sub-image.
  • movement of the area of interest would be perceived by the user as movement of the selection boundary to a different location on the display screen of the computing device.
  • the method may include receiving a second user selection of a second area of interest within the first processed image and identifying a second sub-image within the second area of interest. In some examples, this may include displaying another selection boundary around the area of interest 502 on touch screen 112. In some examples, the selection boundary may be controlled by a boundary module 118. In some examples, this selection boundary may appear differently from the first selection boundary to allow the user to distinguish the two. In some examples, multiple selection boundaries may be created, one for each area of interest.
  • the method may include processing the image based on the second area of interest.
  • processing the image based on the second area of interest may take into account that a first area of interest exists.
  • the first area of interest 202 may be used for processing one imaging setting such as focus and the second area of interest 502 may be used for processing another imaging setting such as white balance.
  • the first area of interest 202 and the second area of interest 502 may be used together to determine a single imaging setting, such as depth of field, in which a range of distances from the camera is placed into focus.
  • the method may include moving the second area of interest relative to the field of view of the camera so that the second sub-image stays within the second area of interest.
  • the method may include adding or subtracting an area of interest based on user manipulation of a selection boundary. In some examples, this may take the form of another selection boundary that moves around the display of the touch screen 112.
  • Fig. 7 is a block diagram showing a tangible, non-transitory, computer-readable medium that stores code adapted to define a sub-image within an area of interest 120 to be tracked.
  • the computer-readable medium is generally referred to by the reference number 700.
  • the computer-readable medium 700 can comprise Random Access Memory (RAM), a hard disk drive, an array of hard disk drives, an optical drive, an array of optical drives, a non-volatile memory, a Universal Serial Bus (USB) flash drive, a DVD, a CD, and the like.
  • the computer-readable medium 700 can be accessed by a processor 104 over a computer bus 702.
  • a first block can include a boundary module 118 to render a selection boundary, associated with a user selection made on touch screen 112, around the area of interest 120 on touch screen 112.
  • the boundary module 118 may also render the selection boundary so that it automatically follows a sub-image until the sub-image leaves the field of view.
  • a second block can include a tracking module 114 to prompt a user to select an area of interest 120 at touch screen 112.
  • the tracking module 114 may determine a sub-image within a selected area of interest 120.
  • the software components can be stored in any order or configuration. For example, if the computer-readable medium 700 is a hard drive, the software components can be stored in non-contiguous, or even overlapping, sectors.
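
Read together, the blocks of method 600 and the modules of medium 700 amount to a capture, select, and track loop. The skeleton below (referenced from the Fig. 6 item above) is a hypothetical Python rendering of that flow, not code from the patent; the data types, the injected `locate` matcher, and all names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

import numpy as np

Box = Tuple[int, int, int, int]          # x, y, width, height on screen

@dataclass
class TrackedArea:
    box: Box                             # current selection boundary
    sub_image: np.ndarray                # stored pixels used for matching

def identify_sub_image(frame: np.ndarray, box: Box) -> TrackedArea:
    """Crop the user-selected area of interest and keep its pixels so
    later frames can be compared against them."""
    x, y, w, h = box
    return TrackedArea(box, frame[y:y + h, x:x + w].copy())

def track(frame: np.ndarray, area: TrackedArea,
          locate: Callable[[np.ndarray, np.ndarray], Optional[Box]]
          ) -> Optional[TrackedArea]:
    """Move the area of interest so the sub-image stays inside it.

    `locate` is any matcher that finds the stored sub-image in a frame
    and returns its new box, or None once it has left the field of view.
    """
    hit = locate(frame, area.sub_image)
    return TrackedArea(hit, area.sub_image) if hit is not None else None
```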

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides techniques for processing images captured by an imaging device. An example of a device includes a camera to capture images, a touch screen to display the images captured by the camera and receive user input, and a processor. The processor is to receive a user selection of an area of interest within an image and identify a sub-image within the area of interest. The processor also processes the image based on the area of interest. The processor also moves the area of interest relative to a field of view of the camera so that the sub-image stays within the area of interest.

Description

IMAGE PROCESSING FOR AN IMAGE CAPTURE DEVICE
BACKGROUND
[0001] Many mobile devices such as mobile phones and tablets now include built-in cameras that enable the user to capture video and still images. Many such devices also include a touch screen display that enables the user to interact with the camera features of the mobile device. The touch screen can also be used as a viewfinder, which can provide a live preview of images before they are captured.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Various features and advantages of the invention will become apparent from the following description of examples of the invention, given by way of example only, which is made with reference to the accompanying drawings, of which:
[0003] Fig. 1 is a block diagram of an example mobile computing device to track a sub-image of a user-selected area of interest;
[0004] Fig. 2 is a drawing of an example mobile computing device having a touch screen, wherein an area of interest has been selected by the user;
[0005] Fig. 3 is a drawing of an example mobile computing device having a touch screen, wherein a person within an area of interest selected by the user has been tracked to the right side of the touch screen after moving from his or her original location in the image;
[0006] Fig. 4 is a drawing of an example mobile computing device having a touch screen, wherein an area of interest has been selected by the user and the mobile computing device has been repositioned to change the composition of the image;
[0007] Fig. 5 is a drawing of an example mobile computing device having a touch screen, wherein a second area of interest has been selected by the user and the depth of field imaging setting has been selected by the user;
[0008] Fig. 6 is a block diagram illustrating an example method for tracking an area of interest in an imaging device; and
[0009] Fig. 7 is a block diagram showing an example tangible machine-readable medium that stores code adapted to define a sub-image within an area of interest to be tracked.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
[0010] Described herein are techniques relating to the definition and tracking of an area of interest within an image displayed on a touch screen of a mobile device. The image may be an image received by a camera and displayed on a touch screen. In some examples, a user can define an area of interest within an image by outlining a portion of the image displayed on the touch screen. The area of interest is characterized according to the image data within the area of interest. The area of interest can then be used to adjust the image processing applied to the image. For example, the part of the image that is within the area of interest can be used as the basis for setting the exposure level or focus parameters. Additionally, the area of interest can be automatically tracked so that the portion of the image selected by the user remains within the area of interest even if the camera view changes or an object within the camera's view moves. In some examples, two or more areas of interest can be used as the basis for setting an imaging setting such as depth of field.
[0011] Fig. 1 is a block diagram of a mobile computing device 102 to track a sub-image of a user-selected area of interest. In some examples, the mobile computing device 102 may include a processor 104, a memory device 106, a graphics processing unit (GPU) 108, a storage device 110, and a touch screen 112. The storage device 110 may include a tracking module 114, in which a sub-image 116 may be stored. The storage device 110 may also include a boundary module 118, in which an area of interest 120 may be stored. The mobile computing device 102 may also contain a camera 122.
[0012] In some examples, the processor 104 may be a main processor that is adapted to execute the stored instructions. The processor 104 may be a single-core processor, a multi-core processor, a computing cluster, or any number of other configurations. The processor 104 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processor, an x86 instruction set compatible processor, a multi-core processor, or any other microprocessor or central processing unit (CPU).
[0013] In some examples, the memory device 106 can include random access memory (e.g., SRAM, DRAM, zero capacitor RAM, SONOS, eDRAM, EDO RAM, DDR RAM, RRAM, PRAM, etc.), read only memory (e.g., Mask ROM, PROM, EPROM, EEPROM, etc.), flash memory, or any other suitable memory systems.
[0014] In some examples, the mobile computing device 102 may include a graphics processing unit, or GPU 108. The GPU 108 may share or independently execute the stored instructions. In some examples, the GPU 108 may execute instructions related to image processing. In some examples, the GPU 108 may execute instructions requiring high-throughput computations that exhibit data-parallelism.
[0015] Storage device 110 may be any non-transitory computer readable medium. In some examples, storage device 110 may be a hard disk drive or solid state memory such as flash memory.
[0016] The mobile computing device 102 may include a camera 122 to capture images. In some examples, the images may be displayed on the touch screen 112. In some examples, the images may be sent to processor 104 and storage device 110. In some examples, the camera 122 may include functionality to adjust focus and determine depth in an image (for example, from stereo or using an active depth sensor) to distinguish distances that may exist between regions that are relatively near to the camera 122 when compared to regions that are relatively far from the camera 122. The camera may be positioned on the front of the mobile computing device 102 or on the back of the mobile computing device 102, or both. In some examples, the mobile computing device 102 may include a tracking module 114 that tracks an area of interest of an image displayed on the touch screen display. For example, the tracking module 114 may be a set of instructions stored on the storage device 110, as shown in Fig. 1. The instructions, when executed by the processor 104, may direct the mobile computing device 102 to perform operations. In some examples, the instructions are executed by the GPU 108. In some examples, the tracking module 114 may be implemented as logic circuits or computer-readable instructions stored on an integrated circuit such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other type of processor. The tracking module 114 may receive an image from the camera 122. The image may be associated with a preview image to be rendered at touch screen 112. The tracking module 114 may prompt a user to select an area of interest that may include a sub-image that a user wishes to track at touch screen 112. In some examples, the tracking module 114 may store the area of interest 120 onto storage device 110. In some examples, the area of interest 120 may be stored in memory 106.
[0017] The tracking module 114 may also track changes within the area of interest 120. Tracking changes within the area of interest 120 may allow the tracking module 114 to process the image based on the area of interest 120. In some examples, tracking changes within the area of interest may include tracking changes to a sub-image within an area of interest.
[0018] In some examples, tracking changes to the area of interest 120 may allow the tracking module to determine that a new scene is within the camera's field of view. The field of view of a camera 122 is herein defined as the extent of the observable world that can be recorded by the camera at any time, generally depending on the lens of the camera and its angle of view. In some examples, tracking changes to the area of interest 120 may include computing an overall percentage of change within the area of interest 120. In some examples, if the overall percentage of change in the area of interest 120 reaches a threshold percentage, then the tracking module 114 may clear the area of interest 120 from memory 106 and/or storage device 110 and the tracking module may be disabled. In some examples, the tracking module prompts the user to select a new area of interest 120 at touch screen 112 instead of being disabled. In some examples, the tracking module 114 may start a counter and wait a specific amount of time before being disabled or prompting the user to select a new area of interest 120. In some examples, the user may force a new scene via the touch screen 112 and select a new area of interest 120.
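The percentage-of-change test described in paragraph [0018] can be illustrated with a short sketch. The snippet below is not taken from the patent; it is a minimal Python illustration that assumes the area of interest is available as two grayscale numpy arrays (the stored reference and the same region of the current frame), and the per-pixel tolerance and scene threshold are made-up values.

```python
import numpy as np

def percent_change(reference: np.ndarray, current: np.ndarray) -> float:
    """Fraction of pixels in the area of interest that differ noticeably
    between the stored reference and the current frame."""
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    return float((diff > 25).mean())    # 25 = illustrative per-pixel tolerance

def is_new_scene(reference: np.ndarray, current: np.ndarray,
                 threshold: float = 0.30) -> bool:
    """True when the overall change passes the threshold, at which point the
    tracking module could clear the area of interest or re-prompt the user."""
    return percent_change(reference, current) >= threshold
```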
[0019] In some examples, the tracking module may also keep track of imaging settings specified by the user for each area of interest 120 on the screen. The tracking module may dynamically apply these imaging settings based on the area of interest 120 as each area of interest 120 changes in location and appearance. For example, the tracking module may analyze an area of interest 120, determine a range of distances of the objects within the area of interest 120, and adjust the camera's focus and aperture settings so that the depth of field includes all the objects within the area of interest 120, which may therefore all appear to be in focus. The imaging settings may additionally include, but are not limited to, white balance and exposure level. In some examples, the tracking module may apply multiple imaging settings based on one area of interest 120. In some examples, the tracking module may apply each imaging setting based on a different area of interest selected by the user.
[0020] In some examples, as shown in Fig. 1, the tracking module 114 may also determine a sub-image 116 within the selected area of interest 120. In some examples, the sub-image 116 may allow an area of interest 120 to be tracked within a camera's field of view while allowing for an overall percentage of change in the area of interest 120. In some examples, the tracking of the area of interest may be displayed on the touch screen 112 as a selection boundary that automatically follows the sub-image 116 on the touch screen. In some examples, the tracking of the sub-image may allow imaging settings to be applied to the part of the area of interest 120 indicated by the sub-image 116, without such changes being applied to the rest of the area of interest 120. In some examples, this may allow imaging settings to be applied to the entire image based on the specific characteristics of the sub-image 116.
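The focus-and-aperture adjustment in paragraph [0019] can be made concrete with standard thin-lens depth-of-field approximations. The sketch below is an illustration only, not the patent's method: given the nearest and farthest object distances in an area of interest, it picks the focus distance as the harmonic mean of the two limits and computes the widest f-number whose depth of field still spans them. The focal length and circle-of-confusion defaults are assumptions.

```python
def focus_and_aperture(near_m: float, far_m: float,
                       focal_length_mm: float = 50.0,
                       coc_mm: float = 0.03) -> tuple[float, float]:
    """Return (focus distance in metres, minimum f-number) so the depth of
    field covers [near_m, far_m], using the thin-lens approximations
    Dn = H*s/(H + s) and Df = H*s/(H - s) with hyperfocal H = f^2/(N*c)."""
    near, far = near_m * 1000.0, far_m * 1000.0        # work in millimetres
    focus = 2.0 * near * far / (near + far)            # harmonic mean of limits
    hyperfocal = 2.0 * near * far / (far - near)       # H that hits both limits
    f_number = focal_length_mm ** 2 / (hyperfocal * coc_mm)
    return focus / 1000.0, f_number

# Example: keep everything from 3 m to 5 m acceptably sharp.
focus_m, n = focus_and_aperture(3.0, 5.0)
print(f"focus at {focus_m:.2f} m, stop down to at least f/{n:.1f}")
```

With these assumed optics the example prints a focus distance of 3.75 m and roughly f/5.6; any smaller aperture (larger f-number) deepens the field further.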
[0021] Sub-image 116 may be determined by analyzing the area of interest. In some examples, the sub-image may be determined by analyzing colors or edges. In some examples, the overall shape of the selected area of interest 120 may be analyzed to determine a sub-image with a similar shape. Using a shape to determine the sub-image 116 may help a user to select a sub-image out of a relatively crowded picture. In some examples, this sub-image 116 may be determined by contrasting the relative distances of objects within the area of interest 120 and grouping together objects that are equally distant from the mobile computing device 102. In these examples, any enclosed shape may be sufficient to select the sub-image a user is interested in selecting. In some examples, the shape may automatically be enclosed by connecting the beginning and end points of a selection that did not form a bounded shape. In some examples, the tracking module 114 may allow the user, via the touch screen 112, to invert the sub-image 116 within the area of selection so that the tracking module 114 tracks the portion of the area of selection that was initially left out of the sub-image but within the area of interest 120. In some examples, the tracking module may allow the user to increase or decrease the portion of the area of interest 120 included in the sub-image via the touch screen 112. The sub-image 116 may be stored onto a storage device 110 that allows the sub-image to be tracked for changes. By storing the sub-image to the storage device 110, the tracking module may then compare the active area of interest 120 to this stored sub-image. In some examples, the tracking module may continue to continuously scan the field of view for the sub-image 116 for a specified amount of time if the sub-image leaves the field of view.
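One concrete way to realise the "connect the beginning and end points" behaviour from paragraph [0021] is to rasterise the user's stroke as a filled polygon; OpenCV's fillPoly implicitly closes the contour between the last and first points. A minimal sketch, with the stroke assumed to arrive as a list of (x, y) touch samples and the helper name purely illustrative:

```python
import cv2
import numpy as np

def selection_mask(stroke_xy: list[tuple[int, int]],
                   frame_shape: tuple[int, ...]) -> np.ndarray:
    """Rasterise a touch stroke into a binary area-of-interest mask.

    fillPoly joins the last point back to the first, automatically
    enclosing a selection that did not form a bounded shape."""
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    pts = np.array(stroke_xy, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 255)
    return mask
```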
[0022] The mobile computing device 102 may also include a boundary module 118 for indicating an area of interest on the touch screen 112. In some examples, an area of interest may be indicated by displaying a selection boundary around an area of interest. In some examples, the area of interest may be indicated by a highlighted portion of the touch screen 112. In some examples, the boundary module 118 may be stored on a storage device 110 and include code to direct the operations of processor 104 or GPU 108, as shown in Fig. 1. In some examples, the boundary module 118 may be implemented as logic circuits or computer-readable instructions stored on an integrated circuit such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other type of processor. The boundary module 118 may be configured to render a selection boundary, associated with a user selection from touch screen 112, around an area of interest 120 on touch screen 112. In some examples, the area of interest 120 is then sent to tracking module 114 for further processing. The boundary module 118 may receive a sub-image from the tracking module 114. The boundary module 118 may include instructions to render the selection boundary until the sub-image 116 leaves the field of view. The field of view may or may not coincide with the image displayed on the touch screen 112, depending on whether the preview image is digitally zoomed in or cropped. In some examples, when a preview image is digitally zoomed in, the field of view of the camera 122 is greater than the digitally zoomed preview image, but the boundary module may still be tracking the sub-image even if it does not appear in the preview image on the touch screen 112. In some examples, the image on the display may be a cropped portion of the field of view. The boundary module may still track the sub-image if it is within the field of view. The boundary module may also include a lag time function to hold an area of interest 120 in memory 106 or storage device 110 for a predetermined amount of time once the area of interest 120 has left the field of view of the camera 122. In some examples, once the sub-image has left the field of view, the imaging settings may be based on the last known state of the sub-image. In some examples, the sub-image may reenter the field of view and the boundary module and tracking module may resume their functions.
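The lag time function mentioned in paragraph [0022] could be as simple as remembering when the sub-image was last seen. A hypothetical sketch follows; the class name and API are assumptions, with the 10-second default borrowed from the example in paragraph [0030].

```python
import time

class LagTimeHold:
    """Keep a lost area of interest alive for hold_seconds so tracking
    can resume if the sub-image re-enters the field of view."""

    def __init__(self, hold_seconds: float = 10.0):
        self.hold_seconds = hold_seconds
        self._lost_at = None          # None while the sub-image is visible

    def update(self, sub_image_visible: bool) -> bool:
        """Return True while the stored area of interest should be kept."""
        if sub_image_visible:
            self._lost_at = None      # reset the lag window
            return True
        if self._lost_at is None:
            self._lost_at = time.monotonic()
        return time.monotonic() - self._lost_at < self.hold_seconds
```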
[0023] The block diagram of Fig. 1 is not intended to indicate that the mobile computing device 102 is to include all of the components shown in Fig. 1. Further, the mobile computing device 102 may include any number of additional components not shown in Fig. 1, depending on the details of the specific implementation.
[0024] Fig. 2 is a drawing of a mobile computing device 102 having a touch screen 112, wherein an area of interest 202 has been selected by the user. The particular image displayed by the mobile computing device 102 in this figure is generally referred to by the reference number 200.
[0025] In this example, the preview image 200 on the mobile computing device 102 shows a person standing near a tree, in front of some mountains with birds flying overhead. In image 200, a user wishes to take a picture of the person. The user points the camera 122 in the direction of the person and the touch screen 112 displays a preview image on the screen. The user may then select an area of interest 202 to track. The area of interest 202 may contain a sub-image 116 that the user is interested in using to apply an imaging setting to the image as a whole. In some examples, the user may indicate the area of interest 202 by tapping on a sub-image to be tracked. This tapping may then create a shape around an object, indicating an area of interest 202. The user may then expand or shrink the area of interest by stretching or pinching the selection boundary. In some examples, the user may select an area of interest 202 by touching and dragging diagonally along the touch screen to create a box indicating an area of interest 202. In some examples, the user may draw a circle with either a finger or stylus around the person to indicate an area of interest 202 around the person. Tracking module 114 then begins tracking that area of interest 202 and the boundary module 118 displays a selection boundary to indicate the area of interest 202 on the touch screen 112. The tracking module then processes the area of interest 202 and determines a sub-image 116 within the area of interest 202 to track.
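The drag and pinch gestures in paragraph [0025] map onto simple rectangle arithmetic. A minimal sketch under assumed screen-pixel coordinates; both helper names are illustrative, not from the patent.

```python
def drag_to_box(x0: int, y0: int, x1: int, y1: int) -> tuple[int, int, int, int]:
    """Convert a diagonal touch drag into an (x, y, w, h) area of interest,
    whichever corner the drag started from."""
    return min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0)

def pinch_scale(box: tuple[int, int, int, int],
                factor: float) -> tuple[int, int, int, int]:
    """Grow (factor > 1) or shrink (factor < 1) the selection boundary
    about its centre, as with a two-finger stretch or pinch."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * factor, h * factor
    return int(cx - nw / 2), int(cy - nh / 2), int(round(nw)), int(round(nh))
```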
[0026] In some examples, the tracking module may track the sub-image within the field of view by continuously scanning the field of view for an image similar to the sub-image. In the example of Fig. 2, within area of interest 202 is a sub-image 116, which in this example may correspond to a person. For example, because the person is in the area of interest 202 as selected by the user, the tracking module may determine the person is the relevant sub-image, and the selection boundary 204 may follow the person when he or she moves or the user moves the camera, or both. In some examples, the user may have selected the area of interest 202 around the person in order to apply an imaging setting to the person. The imaging setting of the camera 122 may be adjusted by the tracking module based on sub-image 116 within the area of interest 202.
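The patent does not name a matching algorithm for "continuously scanning the field of view for an image similar to the sub-image"; normalised cross-correlation template matching is one common choice and is sketched below with OpenCV. The score threshold is an assumption, and a production tracker would also need to handle scale and rotation changes that plain template matching does not.

```python
import cv2
import numpy as np

def locate_sub_image(frame: np.ndarray, sub_image: np.ndarray,
                     min_score: float = 0.7):
    """Scan the frame for the stored sub-image and return its (x, y, w, h),
    or None when the best match is too weak (for example, the person has
    left the field of view or is occluded)."""
    result = cv2.matchTemplate(frame, sub_image, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score < min_score:
        return None
    h, w = sub_image.shape[:2]
    return top_left[0], top_left[1], w, h
```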
[0027] In some examples, the user may want to include the flower in the area of interest. For example, the user may include the flower by enlarging the area of interest 120 based on user manipulation of the selection boundary. In some examples, the flower may be included by the user through pinching or expanding with two fingers. In some examples, the flower may be included in the area of interest 120 through visual sliders that may be used to adjust the size of the area of interest 120. Once the user expands the area of interest 120 to include the flower, the tracking module 114 may process the flower as being included in the sub-image. For example, the user may want to have both the flower and the person in focus, and will select the focus imaging setting to be applied to the image based on the area of interest 120.
[0028] Fig. 3 is a drawing of a mobile computing device 102 having a touch screen 112, wherein a person within an area of interest 302 selected by the user has been tracked to the right side of the touch screen 112 after moving from his or her original location in the image. The particular image displayed by the mobile computing device 102 in this figure is generally referred to by the reference number 300.
[0029] In image 300, although the area of interest 302 may be slightly different because the background has changed with the movement of the person, the selection boundary 204 may still track the person. In some examples, the person is determined by tracking module 114 to be a relevant sub-image to track. The selection boundary 204 may stop following the person when the person leaves the field of view of the camera to the right. In some examples, a timer may begin when the person leaves the field of view, and the camera may resume tracking if the person returns to the field of view within a predetermined amount of time. As discussed above, the tracking module may continue to continuously search the field of view for the sub-image for a specified time once the sub-image has left the field of view. If the specified time has expired, the tracking module may prompt the user to select a new area of interest 120 and determine a new sub-image 116 within the area of interest.
[0030] In some examples, the user may be attempting to take a picture of a small child that is running around. The child is in front of the camera 122 at first and the user indicates an area of interest 120 around the child. However, the child then abruptly stands and runs around the user, leaving the field of view for short amounts of time. In one example, the tracking module 114 and boundary module 118 may keep tracking the child once the child reenters the field of view because the user has set a 10 second lag time setting for the tracking module to stop tracking an area of interest 120 once it leaves the field of view. In some examples, the tracking module 114 stops tracking the child as soon as the child leaves the camera's field of view because no such lag time has been preset.
[0031] In some examples, the user may apply an imaging setting in association with the selection boundary 204 based on the area of interest. In some examples, this may be an autofocus feature that may keep the person in focus. In some examples, this may be a white balance feature that is adjusted when the person walks beneath a tree into the shade. In some examples, the user may specify both the white balance and the autofocus to be tracked based on the contents of the same selection boundary. In some examples, the user may select the white balance to be adjusted based upon a selection boundary that the user may draw around a mountain and the autofocus setting to be adjusted in association with the selection boundary around the person. In some examples, the imaging settings may be applied based on the sub-image within each area of interest 120. In some examples, a depth of field setting may be applied to the area of interest 120, wherein the aperture setting of the camera is adjusted based upon the distance range of the entire area of interest 120. In some examples, the person and the background may then both appear to be in focus by adjusting the aperture and focus of the camera appropriately.
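As one concrete, purely illustrative example of deriving an imaging setting from a selected region, white balance gains could be computed from only the pixels inside the area of interest under a gray-world assumption; the disclosure leaves the actual white-balance algorithm unspecified.

```python
import numpy as np

def white_balance_gains_from_aoi(frame_rgb, aoi):
    """Per-channel white-balance gains derived from one area of interest,
    using a simple gray-world assumption: the average color inside the
    region is scaled toward neutral gray."""
    x, y, w, h = aoi
    roi = frame_rgb[y:y + h, x:x + w].astype(np.float64)
    means = roi.reshape(-1, 3).mean(axis=0)   # mean R, G, B inside the AOI
    gray = means.mean()
    return gray / np.maximum(means, 1e-6)     # gains that neutralize the cast
```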
[0032] In some examples, the tracking of the sub-image may be disabled based on changes to the sub-image within the area of interest 120. For example, if the sub-image within the area of interest 120 disappears or changes by more than 30%, then the selection boundary will disappear from the touch screen 112. In the example of Fig. 4, the selection boundary 204 may stop tracking the person when he or she partially hides behind the tree on the left. In this example, the sub-image within the area of interest 120 has changed beyond a threshold amount, for example 30%. In some examples, the tracking module 114 may then cause the processor 104 of the mobile computing device 102 to prompt the user, via the touch screen 112, to select a new area of interest 120. In some examples, this might be accompanied by "NS" on some portion of the touch screen 112 to indicate a new scene is being previewed. In some examples, the new scene feature may be manually initiated by the user through a touch of the touch screen.
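The change threshold could be implemented with any of several image-difference measures; the sketch below uses the fraction of notably changed pixels between the stored template and the current region, with the 30% figure from the example above as the default. Both the metric and the per-pixel difference cutoff are assumptions.

```python
import cv2
import numpy as np

def sub_image_changed(template_gray, current_roi_gray, threshold=0.30):
    """Return True when the tracked sub-image has changed beyond the
    threshold, e.g. because the subject is partially occluded."""
    roi = cv2.resize(current_roi_gray,
                     (template_gray.shape[1], template_gray.shape[0]))
    diff = np.abs(roi.astype(np.int16) - template_gray.astype(np.int16))
    change_fraction = float((diff > 25).mean())  # share of changed pixels
    return change_fraction > threshold
```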
[0033] Fig. 4 is a drawing of a mobile computing device 102 having a touch screen 112, wherein an area of interest 402 has been selected by the user and the mobile computing device 102 has been repositioned to change the composition of the image. The particular image displayed by the mobile computing device 102 in this figure is generally referred to by the reference number 400.
[0034] As shown in Fig. 4, the mobile computing device 102 has been displaced to the right and rotated to the left to include another tree and a cloud in the background of the preview image. The person in Fig. 4 has not physically moved from his or her original location next to the flower as shown in Fig. 2; the person remains in place next to the flower, while the background now also includes the sun. The area of interest 402 may again be somewhat different than the area of interest 202 described above because of the change in the background directly behind the person. In some examples, the selection boundary 204 is still surrounding and tracking the person despite the relocation of the mobile computing device 102 and the change in the camera's field of view. In some examples, a depth of field focus mechanism was applied in association with the selection boundary and is still focusing on the person within the selection boundary. The user then determines that the area of interest 120 is too large, such that the autofocus mechanism is causing too much of the preview image to be in focus. In this example, the user may reduce the area of interest 120 by, for example, pinching the selection boundary on the touch screen 112. The tracking module 114 may then process this reduced area of interest and apply a larger aperture, and the person is now in focus with a pleasantly blurred background.
[0035] Fig. 5 is a drawing of a mobile computing device 102 having a touch screen 112, wherein a second area of interest has been selected by the user and the depth of field imaging setting has been selected by the user. The particular image displayed by the mobile computing device 102 in this figure is generally referred to by the reference number 500.
[0036] As shown in Fig. 5, the user has selected an area of interest 502 indicated on the touch screen 112 by another selection boundary 502. In some examples, this selection boundary may appear differently from the selection boundary around the other area of interest 202. In some examples, the area of interest may be indicated by being highlighted. In some examples, a menu 504 may appear once the user has indicated an area of interest. The user may specify a camera setting for each selected area of interest by choosing a camera setting from the menu 504. Some imaging settings may use multiple areas of interest. For example, a user may select a depth of field imaging setting on a second area of interest 502. As shown in image 500, a menu 504 has appeared on the touch screen 112 asking the user to select from a variety of imaging settings. In the example of image 500, the user has selected the focus setting for area of interest 202. The menu 504 includes depth of field imaging setting 506. In some examples, the depth of field imaging setting may appear greyed out if only one area of interest has been selected. In some examples, the depth of field setting may not appear in menu 504 if only one area of interest is selected. In some examples, when an imaging setting has been selected, the setting is instantly applied to the image and the preview is updated. For example, a user may click on focus and the autofocus mechanism will automatically be applied and tracked to that area of interest. In the example of image 500, the depth of field imaging setting has been applied to areas of interest 202 and 502, making areas of interest 202 and 502 appear in focus and the tree and mountains appear blurred.
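Gating the menu on the number of selected areas of interest is straightforward; the setting names below are illustrative, not a defined interface.

```python
def available_imaging_settings(num_areas_of_interest):
    """Build (name, enabled) pairs for the menu renderer. The depth of
    field entry is greyed out (or could be omitted) unless at least two
    areas of interest exist, as described above."""
    settings = [("focus", True), ("white balance", True)]
    settings.append(("depth of field", num_areas_of_interest >= 2))
    return settings

# With one area of interest selected:
#   [('focus', True), ('white balance', True), ('depth of field', False)]
```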
[0037] In some examples, the selection boundary around an area of interest may appear differently depending upon which imaging setting a user selects. The user may then be able to distinguish which imaging settings are being applied to specific areas of interest. For example, applying an imaging setting may make the selection boundary a different color. Additional areas of interest and imaging settings may then be added or subtracted. In the example of image 500, the depth of field imaging setting has caused the tracking module to process both area of interest 202 and area of interest 502 and determine an appropriate depth of field and focus point so that both area of interest 202 and area of interest 502 are in focus. In some examples, the user may add another area of interest, such as a tree, and include the tree in the depth of field. In some examples, the user may decide to stop tracking area of interest 502 and add an area of interest over the mountains to track white balance.
[0038] Fig. 6 is a block diagram illustrating a method for tracking an area of interest 120 in an imaging device. The method is generally referred to by the reference number 600 and may begin at block 602. At block 602, the method may include receiving a first image from a camera and displaying the first image on a touch screen 112. The image may be provided from camera 122.
[0039] At block 604, the method may include receiving a user selection of a first area of interest within the image and identifying a first sub-image within the first area of interest. In some examples, the user may select an area of interest 120 using a stylus or finger on touch screen 112.
[0040] At block 606, the method may include processing the image based on the first area of interest. In some examples, this may include processing the image based upon the sub-image within the area of interest 120. In some examples, this may include processing the image based on the entire area of interest 120.
[0041] At block 608, the method may include moving the area of interest 120 relative to a field of view of the camera so that the sub-image 116 stays within the first area of interest 120. In some examples, this tracking may be performed by a boundary module 118 of the controller and processor 104 or GPU 108 that automatically moves a boundary indicating the area of interest 120 to follow the sub-image. In some examples, if a selection boundary is being displayed for the area of interest 120, movement of the area of interest would be perceived by the user as movement of the selection boundary to a different location on the display screen of the computing device.
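One simple way a boundary module could keep the sub-image inside the area of interest is to re-center the (possibly larger) area on the rectangle returned by the matcher each frame; this sketch assumes the (x, y, w, h) convention used above.

```python
def follow_sub_image(aoi, match_rect):
    """Move the area of interest, without resizing it, so that it stays
    centered on the sub-image located in the current frame."""
    ax, ay, aw, ah = aoi
    mx, my, mw, mh = match_rect
    return (mx + mw // 2 - aw // 2, my + mh // 2 - ah // 2, aw, ah)
```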
[0042] At block 610, the method may include receiving a second user selection of a second area of interest within the first processed image and identifying a second sub-image within the second area of interest. In some examples, this may include displaying another selection boundary around the area of interest 502 on touch screen 112. In some examples, the selection boundary may be controlled by a boundary module 118. In some examples, this selection boundary may appear differently from the first selection boundary to allow the user to distinguish the two. In some examples, multiple selection boundaries may be created, one for each area of interest.
[0043] At block 612, the method may include processing the image based on the second area of interest. In some examples, processing the image based on the second area of interest may take into account that a first area of interest exists. For example, the first area of interest 202 may be used for processing one imaging setting, such as focus, and the second area of interest 502 may be used for processing another imaging setting, such as white balance. In some examples, the first area of interest 202 and the second area of interest 502 may be used to determine a single imaging setting, such as depth of field, in which a range of distances from the camera is placed into focus.
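A depth of field computed from two areas of interest can be sketched with the standard thin-lens approximations: focus at the harmonic mean of the near and far subject distances, and choose the f-number whose hyperfocal distance just spans both. The focal length and circle of confusion below are assumed values for a small camera module, and the subject distances are taken as given (e.g., from autofocus ranging); none of this is prescribed by the disclosure.

```python
def depth_of_field_plan(near_m, far_m, focal_length_mm=4.0,
                        circle_of_confusion_mm=0.005):
    """Pick a focus distance (m) and f-number that keep both subject
    distances in focus, using thin-lens depth-of-field approximations."""
    near = min(near_m, far_m) * 1000.0        # to millimetres
    far = max(near_m, far_m) * 1000.0
    # Focusing at the harmonic mean places near and far symmetrically
    # within the depth of field: s = 2*near*far / (near + far).
    focus_mm = 2.0 * near * far / (near + far)
    if far - near < 1e-6:
        return focus_mm / 1000.0, None        # same plane: any aperture works
    # The depth of field spans [near, far] when the hyperfocal distance is
    # H = 2*near*far / (far - near); then H = f^2/(N*c) + f gives N.
    hyperfocal = 2.0 * near * far / (far - near)
    f_number = focal_length_mm ** 2 / (circle_of_confusion_mm *
                                       (hyperfocal - focal_length_mm))
    return focus_mm / 1000.0, f_number
```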
[0044] At block 614, the method may include moving the second area of interest relative to the field of view of the camera so that the second sub-image stays within the second area of interest.
[0045] At block 616, the method may include adding or subtracting an area of interest based on user manipulation of a selection boundary. In some examples, this may take the form of another selection boundary that moves around the display of the touch screen 112.
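For orientation only, the blocks of method 600 might compose into a per-frame preview loop like the sketch below. The camera, screen, and tracker objects stand in for camera 122, touch screen 112, and the tracking/boundary modules 114/118; every method shown is hypothetical.

```python
def preview_loop(camera, screen, tracker):
    """Hedged sketch of blocks 602-616 as one per-frame loop."""
    areas = []                                        # tracked areas of interest
    while True:
        frame = camera.read()                         # block 602: receive image
        for sel in screen.pending_selections():       # blocks 604/610: selections
            areas.append(tracker.identify_sub_image(frame, sel))
        for aoi in areas:                             # blocks 608/614: move AOIs
            aoi.rect = tracker.follow(frame, aoi)
        frame = tracker.apply_settings(frame, areas)  # blocks 606/612: process
        areas = screen.apply_boundary_edits(areas)    # block 616: add/subtract
        screen.show(frame, areas)
```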
[0046] Fig. 7 is a block diagram showing a tangible, non-transitory, computer-readable medium that stores code adapted to define a sub-image within an area of interest 120 to be tracked. The computer-readable medium is generally referred to by the reference number 700. The computer-readable medium 700 can comprise Random Access Memory (RAM), a hard disk drive, an array of hard disk drives, an optical drive, an array of optical drives, a non-volatile memory, a Universal Serial Bus (USB) flash drive, a DVD, a CD, and the like. In one embodiment of the present invention, the computer-readable medium 700 can be accessed by a processor 104 over a computer bus 702.
[0047] The various software components discussed herein can be stored on the tangible, non-transitory computer-readable medium 700 as indicated in Fig. 7. For example, a first block 118 can include a boundary module 118 to render a selection boundary, associated with a user selection made on touch screen 112, around the area of interest 120 on touch screen 112. The boundary module 118 may also render the selection boundary so that it automatically follows a sub-image until the sub-image leaves the field of view. A second block 114 can include a tracking module to prompt a user to select an area of interest 120 at touch screen 112. The tracking module 114 may then determine a sub-image within a selected area of interest 120.
[0048] Although shown as contiguous blocks, the software components can be stored in any order or configuration. For example, if the computer-readable medium 700 is a hard drive, the software components can be stored in non-contiguous, or even overlapping, sectors.
[0049] The present techniques are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present techniques. Accordingly, it is the following claims including any amendments thereto that define the scope of the present techniques.

Claims

What is claimed is:
1. A device comprising:
a camera to capture images;
a touch screen to display the images captured by the camera and receive user input; and,
a processor to:
receive an image from the camera and send the image to the touch screen;
receive a user selection of a first area of interest within the image and identify a sub-image within the first area of interest;
process the image based on the first area of interest; and,
move the first area of interest relative to a field of view of the camera so that the sub-image stays within the first area of interest.
2. The device of claim 1, wherein the processor is to further:
receive a second user selection of a second area of interest within the image and identify a second sub-image within the second area of interest;
process the image based upon the first area of interest and the second area of interest; and,
move the second area of interest relative to a field of view of the camera so that the second sub-image stays within the second area of interest.
3. The device of claim 1, wherein processing the image comprises applying imaging settings based on the sub-image portion of the image.
4. The device of claim 2, wherein processing the image comprises applying imaging settings to the image based on both the first area of interest and the second area of interest.
5. The device of claim 1, wherein the processor further:
tracks the sub-image for movement and change;
ceases tracking the sub-image once the processor determines that the sub-image has changed beyond a predetermined change threshold; and,
prompts the user to reselect the area of interest.
6. The device of claim 1, wherein the processor is to further cause a selection boundary to be displayed around the area of interest on the touch screen.
7. The device of claim 4, wherein the imaging settings comprise aperture and autofocus adjustment.
8. A method, comprising:
receiving a first image from a camera and displaying the first image on a touch screen;
receiving a user selection of a first area of interest within the image and
identifying a first sub-image within the first area of interest;
processing the image based on the first area of interest; and,
moving the area of interest relative to a field of view of the camera so that the first sub-image stays within the first area of interest.
9. The method of claim 8, further comprising:
receiving a second user selection of a second area of interest within the first processed image and identifying a second sub-image within the second area of interest;
processing the image based on the second area of interest; and,
moving the second area of interest relative to the field of view of the camera so that the second sub-image stays within the second area of interest.
10. The method of claim 8, wherein processing the image comprises applying imaging settings to the image based on the first area of interest.
11. The method of claim 9, further comprising adding or subtracting an area of interest based on user manipulation of a selection boundary.
12. The method of claim 8, wherein the user selection of an area of interest within the image is received through use of a stylus or finger on the touch screen.
13. A non-transitory, computer-readable medium comprising instructions to direct a processor to:
receive an image from a camera and send the image to a touch screen;
receive a user selection of an area of interest within the image and identify a sub-image within the area of interest;
process the image based on the area of interest; and,
move the area of interest relative to a field of view of the camera so that the sub-image stays within the area of interest.
14. The non-transitory, computer-readable medium of claim 13, wherein the sub-image is defined by a predetermined set of factors.
15. The non-transitory, computer-readable medium of claim 13, wherein the area of interest is indicated by a selection boundary on the touch screen that is created and modified by the user.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2014/013608 WO2015116065A1 (en) 2014-01-29 2014-01-29 Image processing for an image capture device

Publications (1)

Publication Number Publication Date
WO2015116065A1 (en) 2015-08-06

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090303338A1 (en) * 2008-06-06 2009-12-10 Texas Instruments Incorporated Detailed display of portion of interest of areas represented by image frames of a video signal
JP2010136187A (en) * 2008-12-05 2010-06-17 Toshiba Corp Mobile terminal
US20100232704A1 (en) * 2009-03-11 2010-09-16 Sony Ericsson Mobile Communications Ab Device, method and computer program product
KR20110020522A (en) * 2009-08-24 2011-03-03 삼성전자주식회사 Method and apparatus for controlling zoom using touch screen
US20110069180A1 (en) * 2009-09-23 2011-03-24 Microsoft Corporation Camera-based scanning

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14881255; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 14881255; Country of ref document: EP; Kind code of ref document: A1)