WO2008094951A1 - Image editing system and method - Google Patents

Image editing system and method

Info

Publication number
WO2008094951A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
image
points
display
crop area
Prior art date
Application number
PCT/US2008/052367
Other languages
French (fr)
Inventor
Andrew Gavin
Scott Shumaker
Ben Stragnell
Original Assignee
Flektor, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flektor, Inc. filed Critical Flektor, Inc.
Publication of WO2008094951A1 publication Critical patent/WO2008094951A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3872Repositioning or masking
    • H04N1/3873Repositioning or masking defined only by a limited number of coordinate points or parameters, e.g. corners, centre; for trimming

Definitions

  • the present invention relates to an image editing system, and more particularly, to a system for detecting and editing points of interest on images and manipulating those images using the points of interest for specific applications such as cropping, animation, and navigation.
  • FIG. 1 is a block diagram of a web-based video editing system according to a first embodiment of the present invention.
  • FIG. 1a is a block diagram of a web-based video editing system according to a second embodiment of the present invention.
  • FIG. 2 is a block diagram of one embodiment of an image editing system of the web- based video editing system of FIG. 1.
  • FIG. 3 is a flowchart showing a method of operation for the detection subsystem.
  • the subsystem receives data pertaining to a particular image in the system.
  • FIG. 3a shows a sample image containing points of interest. In this example, the image contains two figures with recognizable human faces.
  • FIG. 4 shows a flowchart for a method of operation to automatically crop an image using data for predetermined points of interest in the image.
  • FIG. 5a is a visual representation of the method by which image data in a landscape orientation is processed for photo-cropping.
  • FIG. 5b is a visual representation of a method by which image data in a portrait orientation is processed for photo-cropping.
  • FIG. 5c is a visual representation of a method by which image data is processed for photo-cropping.
  • FIG. 1 is a block diagram of a web-based video editing system according to a first embodiment of the present invention.
  • the editing system includes one or more communication devices 110 each having a graphical user interface 115, a server 120 having a connection manager 130 and an image editing system 140 operating on the server, and a network 150 over which the one or more communication devices and the server communicate.
  • the communication devices include, but are not limited to, a personal computer, a mobile telephone, a PDA, or any other communication device configured to operate as a client computer to the server.
  • the network to which the server and devices are coupled may be a wireless or a wireline network and may range in size from a local area network to a wide area network to the Internet. A dedicated open socket connection exists between the connection manager and the client computers.
  • one or more client computers are configured to transmit information to and receive information from the server. In some embodiments, each of the client computers is configured to send a query for information and the server is configured to respond to the query by sending the requested information to the client computer. In some embodiments, one or more of the client computers is configured to transmit commands to the server and the server is configured to perform functions in response to the command.
  • each of the client computers is configured with an application for displaying multimedia on the graphical user interface of the client computer.
  • the application may be Adobe Flash® or any other application capable of displaying multimedia.
  • connection manager is configured to determine the condition of the server and perform asynchronous messaging to one or more of the client computers over the dedicated open socket connection.
  • the content of the messages is indicative of the state of the server.
  • the server is configured to receive requests from one or more of the client computers and perform functions in response to the received requests.
  • the server performs any number of functions typically performed in the server of a web-based video editing system.
  • the server also provides an image editing system for the web-based video editing system.
  • FIG. 1a is a block diagram of some embodiments of the web-based video editing system of FIG. 1. In these embodiments, the system does not include a connection manager 130a for communication between the client computer 110a and the server 120a. Otherwise, client computer 110a is an instance of client computer 110, graphical user interface 115a is an instance of graphical user interface 115, server 120a is an instance of server 120, image editing system 140a is an instance of image editing system 140, and internet 150a is an instance of internet 150. All the components in FIG. 1a are thus identical to those in FIG. 1, and they are all configured to operate as described in FIG. 1.
  • the combination of data representing specified portions of an image is sometimes referred to as a point of interest.
  • These points of interest may take the shape of a square, rectangle, or other quadrilateral, a circle, an ellipse, or any other closed two-dimensional shape.
  • a user operates the client computer 110 to manage and manipulate points of interest on a computer image. The user can add, reposition, and resize points of interest through the graphical user interface 115.
  • the server 120 automatically detects points of interest by using algorithms such as face detection and feature recognition.
  • the server-generated points of interest are also presented to the user in a graphical fashion, allowing the user to adjust their size and position and remove false positives or undesired interest points in an identical fashion to the points added by the user. Once these points of interest have been saved, they are stored in a database on the server along with the image and used appropriately as the image is displayed in different contexts.
  • users can choose to present the image in an animated fashion, and the client computer will use the points of interest to intelligently pan around the image and focus on parts of the image.
  • the points of interest become 'hot spots' suitable for web-based navigation.
  • FIG. 2 is a block diagram of one embodiment of an image-editing system 140 of the web-based video editing system of FIG. 1.
  • the image editing system includes a detection subsystem 210.
  • the detection subsystem includes a processor 212, memory 214, and a computer code product including instructions stored in the memory and adapted to cause the processor, and thereby the detection subsystem, to receive and process user point of interest detection and selection requests.
  • the memory also stores information indicative of the user selection requests.
  • the memory may be any type of read-write memory, including, but not limited to, random access memory.
  • the user input received by the detection subsystem includes the identity of the user, the image to be cropped, the size to which the image is to be cropped, and the points of interest edited by the user and/or generated by the server and stored on the server in an earlier stage.
  • Each point of interest consists of a rectangular region of the image that encompasses the point of interest, although in some embodiments this point could be a circle (position and radius) or any other closed two-dimensional shape.
  • the data may be stored in the editing system server (not shown), the detection subsystem memory 214, or at a remote location connected to the network of FIG. 1.
  • the data is provided by the web-based video editing system or is data generated by the user.
  • the data may include uncropped images, cropped images, metadata for cropping the images, and points of interest.
  • the system includes an animation subsystem configured to examine the points of interest and present an intelligent fly through of the image.
  • the system includes a navigation subsystem that allows a user to annotate the points of interest with text boxes and hyperlinks.
  • FIG. 3 is a flowchart showing a method of operation for the detection subsystem.
  • the subsystem receives 310 data pertaining to a particular image in the system. This data is a pointer to a file on the server, a pointer to a file stored in a remote location connected to the server by the network in FIG. 1, the actual data for the image, or any other data that would allow the subsystem access to the image data.
  • the subsystem then processes 320 the image using a facial recognition detection algorithm to determine all of the faces in the image and creates points of interest for each of the detected faces.
  • the subsystem either transmits (not shown) the data to the image editing system, or the subsystem transmits 330 the data to the client computer.
  • the client computer displays the image along with markers indicating each of the automatically selected points of interest.
  • the system receives 340 input from the user indicating any changes to the selected points of interest that he desires. For example, the user may remove some of the preselected points of interest, or the user may add additional points of interest.
  • the additional points of interest may include any particular features of the image in addition to the automatically detected faces that the user wishes to have included in the cropped image.
  • the subsystem transmits 350 the data for the points of interest to the image editing system.
  • FIG. 3a shows a sample image containing points of interest. In this example, the image 300 contains two figures 310 with recognizable human faces.
  • when the detection subsystem processes this image, it automatically applies points of interest 320 to these faces.
  • the image also contains points of interest 330 added by the user so that a pet 340 and a painting 350 are also marked as being important in the image and thus represent visuals within the image that the user desires to include in the final cropped image.
  • FIG. 4 shows a flowchart for a method of operation to automatically crop an image using data for predetermined points of interest in the image.
  • the image editing system receives 410 data pertaining to points of interest either automatically generated or generated by the user and data pertaining to the image to be cropped.
  • the system further receives 420 data pertaining to the size and orientation to which the image is to be cropped. Based on this data, the system determines 430 whether to crop the image along its height or along its width.
  • the system analyzes every valid configuration of the crop rectangle by beginning 440 at one side of the image.
  • the system examines 450 the locations of the points of interest in relation to the position of the crop rectangle, and assigns 460 a score to the current crop rectangle configuration.
  • in assigning the score, the system tries to maximize the area of points of interest displayed inside the crop rectangle and to avoid partially cropping a point of interest and cutting it off.
  • the system also adjusts the score to display the points of interest in accordance with basic photographic rules, such as the rule of thirds, centering, framing, balance, rotation such that all faces in the image are vertical, use of the golden-ratio, etc.
  • the photographic rules contribute just enough to the score to allow the system to choose between crop configurations that would otherwise be tied.
  • the system determines 470 whether it has reached the other end of the image. If the system has not reached the other end, it moves 480 the crop area by one pixel, analyzes 450 the current position, and assigns 460 it a score. If instead, the system determines 470 that it has reached the end of the image, it selects 490 and stores the crop configuration with the highest score. The system may then crop the image accordingly.
  • the system also allows the user to preview the cropped image overlaid on the original image and adjust the cropped region.
  • FIG. 5a is a visual representation of the method by which image data in a landscape orientation is processed for photo-cropping.
  • the image 510a contains three points of interest 520a.
  • when the system processes this image for cropping, it begins with the cropping area 530a at the left side of the image and walks it over to the right side pixel by pixel.
  • when the cropping area is in position 530a, two points of interest are captured; however, when it is in position 540a, only one point of interest is captured.
  • when the system crops this image, it will crop it at position 530a.
  • FIG. 5b is a visual representation of a method by which image data in a portrait orientation is processed for photo-cropping.
  • the image 510b contains five points of interest 520b, 530b, 540b, 550b, and 560b.
  • the cropping area 590b moves from top to bottom in search of the area that contains the most points of interest.
  • position 570b contains three complete points of interest and therefore has the highest concentration.
  • the next highest concentration is found in position 580b, which only has two complete points of interest.
  • FIG. 5c is a visual representation of a method by which image data is processed for photo-cropping.
  • the image 510c in FIG. 5c is processed much in the same manner as discussed in reference to FIGs. 5a and 5b, above.
  • the point of interest 530c is only partially contained within the cropping area.
  • in position 550c, both points of interest are completely contained in the cropping area.
  • Position 550c would thus have a higher score than position 560c, and the system would therefore crop this image at position 550c.
  • the points of interest are used as hot-spots for navigation.
  • Users can associate hyperlinks or text with a hot-spot through a graphical user interface.
  • a viewer watches the image on a client machine
  • positioning the mouse cursor over a hot-spot can display a text popup, and clicking on the hot-spot can perform a navigation action, such as opening a new web page.
  • the points of interest are used as reference points for automatic animation of the image.
  • the user may have the system assign hot-spots and/or perform animation on the cropped image, or the user may use the system solely to detect points of interest and then use an uncropped image for hot-spots and/or animation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image editing system for use with a web-based video editing system is provided. The image editing system comprises: a detection system configured to receive image data; detect regions of the image representing faces; store the data representing the coordinates of the faces as points of interest; and receive user input adding additional points of interest, modifying the points of interest, or deleting the pre-selected points of interest; and a cropping subsystem configured to determine the portion of the image containing the maximum number of points of interest that will fit within the crop area; and crop the image.

Description

IMAGE EDITING SYSTEM AND METHOD
FIELD OF THE INVENTION
[0001] The present invention relates to an image editing system, and more particularly, to a system for detecting and editing points of interest on images and manipulating those images using the points of interest for specific applications such as cropping, animation, and navigation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] FIG. 1 is a block diagram of a web-based video editing system according to a first embodiment of the present invention.
[0003] FIG. 1a is a block diagram of a web-based video editing system according to a second embodiment of the present invention.
[0004] FIG. 2 is a block diagram of one embodiment of an image editing system of the web- based video editing system of FIG. 1.
[0005] FIG. 3 is a flowchart showing a method of operation for the detection subsystem. In some embodiments, the subsystem receives data pertaining to a particular image in the system.
[0006] FIG. 3a shows a sample image containing points of interest. In this example, the image contains two figures with recognizable human faces.
[0007] FIG. 4 shows a flowchart for a method of operation to automatically crop an image using data for predetermined points of interest in the image.
[0008] FIG. 5a is a visual representation of the method by which image data in a landscape orientation is processed for photo-cropping.
[0009] FIG. 5b is a visual representation of a method by which image data in a portrait orientation is processed for photo-cropping.
[0010] FIG. 5c is a visual representation of a method by which image data is processed for photo-cropping.
DETAILED DESCRIPTION OF THE INVENTION
[0011] FIG. 1 is a block diagram of a web-based video editing system according to a first embodiment of the present invention. The editing system includes one or more communication devices 110 each having a graphical user interface 115, a server 120 having a connection manager
130 and an image editing system 140 operating on the server, and a network 150 over which the one or more communication devices and the server communicate. The communication devices include, but are not limited to, a personal computer, a mobile telephone, a PDA, or any other communication device configured to operate as a client computer to the server. The network to which the server and devices are coupled may be a wireless or a wireline network and may range in size from a local area network to a wide area network to the Internet. A dedicated open socket connection exists between the connection manager and the client computers.
[0012] In some embodiments of the system, one or more client computers are configured to transmit information to and receive information from the server. In some embodiments, each of the client computers is configured to send a query for information and the server is configured to respond to the query by sending the requested information to the client computer. In some embodiments, one or more of the client computers is configured to transmit commands to the server and the server is configured to perform functions in response to the command.
[0013] In some embodiments, each of the client computers is configured with an application for displaying multimedia on the graphical user interface of the client computer. The application may be Adobe Flash® or any other application capable of displaying multimedia.
[0014] The connection manager is configured to determine the condition of the server and perform asynchronous messaging to one or more of the client computers over the dedicated open socket connection. In some embodiments, the content of the messages is indicative of the state of the server.
[0015] The server is configured to receive requests from one or more of the client computers and perform functions in response to the received requests. The server performs any number of functions typically performed in the server of a web-based video editing system. The server also provides an image editing system for the web-based video editing system.
[0016] FIG. 1a is a block diagram of some embodiments of the web-based video editing system of FIG. 1. In these embodiments, the system does not include a connection manager 130a for communication between the client computer 110a and the server 120a. Otherwise, client computer 110a is an instance of client computer 110, graphical user interface 115a is an instance of graphical user interface 115, server 120a is an instance of server 120, image editing system 140a is an instance of image editing system 140, and internet 150a is an instance of internet 150. All the components in FIG. 1a are thus identical to those in FIG. 1, and they are all configured to operate as described in FIG. 1.
[0017] In some embodiments, still referring to FIGs. 1 and Ia, the combination of data representing specified portions of an image is sometimes referred to as a point of interest. These points of interest may take the shape of a square, rectangle, or other quadrilateral, a circle, an ellipse, or any other closed two-dimensional shape. A user operates the client computer 110 to manage and manipulate points of interest on a computer image. The user can add, reposition, and resize points of interest through the graphical user interface 115. In some embodiments, the server 120 automatically detects points of interest by using algorithms such as face detection and feature recognition. The server-generated points of interest are also presented to the user in a graphical fashion, allowing the user to adjust their size and position and remove false positives or undesired interest points in an identical fashion to the points added by the user. Once these points of interest have been saved, they are stored in a database on the server along with the image and used appropriately as the image is displayed in different contexts.
[0018] In some embodiments, still referring to FIGs. 1 and Ia, when the image needs to be displayed with a different aspect-ratio, the points of interest are used to crop the image to best fit the new aspect ratio.
[0019] In some embodiments, users can choose to present the image in an animated fashion, and the client computer will use the points of interest to intelligently pan around the image and focus on parts of the image.
[0020] In some embodiments, the points of interest become 'hot spots' suitable for web-based navigation.
[0021] FIG. 2 is a block diagram of one embodiment of an image-editing system 140 of the web-based video editing system of FIG. 1. The image editing system includes a detection subsystem 210. In some embodiments, the detection subsystem includes a processor 212, memory
214, and a computer code product including instructions stored in the memory and adapted to cause the processor, and thereby the detection subsystem, to receive and process user point of interest detection and selection requests. The memory also stores information indicative of the user selection requests. The memory may be any type of read-write memory, including, but not limited to, random access memory.
[0022] In some embodiments, the user input received by the detection subsystem includes the identity of the user, the image to be cropped, the size to which the image is to be cropped, and the points of interest edited by the user and/or generated by the server and stored on the server in an earlier stage. Each point of interest consists of a rectangular region of the image that encompasses the point of interest, although in some embodiments this point could be a circle (position and radius) or any other closed two-dimensional shape.
[0023] The data may be stored in the editing system server (not shown), the detection subsystem memory 214, or at a remote location connected to the network of FIG. 1. The data is provided by the web-based video editing system or is data generated by the user. The data may include uncropped images, cropped images, metadata for cropping the images, and points of interest.
[0024] In some embodiments, the system includes an animation subsystem configured to examine the points of interest and present an intelligent fly-through of the image. In some embodiments, the system includes a navigation subsystem that allows a user to annotate the points of interest with text boxes and hyperlinks.
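The point-of-interest records described in paragraphs [0022] and [0023] can be sketched as a small data structure. This is an illustrative Python sketch, not code from the patent; the field names and the `auto_detected` flag are assumptions:

```python
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    # Rectangular region encompassing the point of interest,
    # in image pixel coordinates (per paragraph [0022]).
    x: int
    y: int
    width: int
    height: int
    # True when the region came from face detection rather than
    # from a user edit (assumed field, not named in the patent).
    auto_detected: bool = False

    @property
    def area(self) -> int:
        """Area in pixels, used later when scoring crop positions."""
        return self.width * self.height
```

A circular point of interest, as the patent allows, would instead carry a center position and a radius.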
[0025] FIG. 3 is a flowchart showing a method of operation for the detection subsystem. In some embodiments, the subsystem receives 310 data pertaining to a particular image in the system. This data is a pointer to a file on the server, a pointer to a file stored in a remote location connected to the server by the network in FIG. 1, the actual data for the image, or any other data that would allow the subsystem access to the image data. The subsystem then processes 320 the image using a facial recognition detection algorithm to determine all of the faces in the image and creates points of interest for each of the detected faces. Depending on the embodiment, the subsystem either transmits (not shown) the data to the image editing system, or the subsystem transmits 330 the data to the client computer. If the subsystem transmits 330 the data to the client computer, the client computer displays the image along with markers indicating each of the automatically selected points of interest. The system then receives 340 input from the user indicating any changes to the selected points of interest that the user desires. For example, the user may remove some of the preselected points of interest, or the user may add additional points of interest. The additional points of interest may include any particular features of the image in addition to the automatically detected faces that the user wishes to have included in the cropped image. Once the subsystem receives 340 this input, it transmits 350 the data for the points of interest to the image editing system.
[0026] FIG. 3a shows a sample image containing points of interest. In this example, the image 300 contains two figures 310 with recognizable human faces. When the detection subsystem processes this image, it automatically applies points of interest 320 to these faces.
In some embodiments where the user may add or delete points of interest, the image also contains points of interest 330 added by the user so that a pet 340 and a painting 350 are also marked as being important in the image and thus represent visuals within the image that the user desires to include in the final cropped image.
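The edit step of paragraph [0025], where the user removes preselected points and adds new ones before the data is sent to the image editing system, can be sketched as a simple merge. The function name and the (x, y, w, h) tuple representation are illustrative assumptions:

```python
def apply_user_edits(auto_points, removals, additions):
    """Merge automatically detected points of interest with user edits.

    auto_points: list of (x, y, w, h) rectangles from face detection
    removals:    indices of auto-detected points the user rejected
                 (e.g. false positives)
    additions:   rectangles the user drew manually (pets, paintings, ...)
    """
    rejected = set(removals)
    kept = [p for i, p in enumerate(auto_points) if i not in rejected]
    return kept + list(additions)
```

The merged list is what would then be transmitted 350 to the image editing system and stored alongside the image.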
[0027] FIG. 4 shows a flowchart for a method of operation to automatically crop an image using data for predetermined points of interest in the image. In some embodiments, the image editing system receives 410 data pertaining to points of interest either automatically generated or generated by the user and data pertaining to the image to be cropped. The system further receives 420 data pertaining to the size and orientation to which the image is to be cropped. Based on this data, the system determines 430 whether to crop the image along its height or along its width. The system then analyzes every valid configuration of the crop rectangle by beginning 440 at one side of the image. The system examines 450 the locations of the points of interest in relation to the position of the crop rectangle, and assigns 460 a score to the current crop rectangle configuration. In assigning the score, the system tries to maximize the area of points of interest displayed inside the crop rectangle, to avoid partially cropping a point of interest and cutting it off. In some embodiments, the system also adjusts the score to display the points of interest in accordance with basic photographic rules, such as the rule of thirds, centering, framing, balance, rotation such that all faces in the image are vertical, use of the golden-ratio, etc. When a point of interest is entirely within the crop rectangle, its contribution to the score is proportional to its area. Points of interest entirely outside the crop rectangle do not contribute to the score, and points of interest only partially inside the crop rectangle subtract from the overall score. In some embodiments applying photographic rules, such as the rule of thirds, the photographic rules contribute small amounts to the score. In some embodiments, the photographic rules contribute just enough to the score to allow the system to choose between crop configurations that would otherwise be tied. 
Once the system has assigned 460 a score to the current configuration, it determines 470 whether it has reached the other end of the image. If the system has not reached the other end, it moves 480 the crop area by one pixel, analyzes 450 the current position, and assigns 460 it a score. If instead, the system determines 470 that it has reached the end of the image, it selects 490 and stores the crop configuration with the highest score. The system may then crop the image accordingly. [0028] In some embodiments, the system also allows the user to preview the cropped image overlaid on the original image and adjust the cropped region.
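The scoring sweep of paragraphs [0027] and [0028] can be sketched as follows. The patent specifies that a fully contained point contributes proportionally to its area, a fully excluded point contributes nothing, and a partially cropped point subtracts from the score; the exact size of that penalty is not given, so this sketch subtracts the point's full area as an assumption. It sweeps horizontally only and omits the photographic-rule tie-breakers:

```python
def intersect_area(poi, crop):
    """Overlap area of two (x, y, w, h) rectangles."""
    x0 = max(poi[0], crop[0])
    y0 = max(poi[1], crop[1])
    x1 = min(poi[0] + poi[2], crop[0] + crop[2])
    y1 = min(poi[1] + poi[3], crop[1] + crop[3])
    return max(0, x1 - x0) * max(0, y1 - y0)

def score_crop(crop, points):
    """Score one crop position against the points of interest."""
    score = 0
    for p in points:
        inside = intersect_area(p, crop)
        full = p[2] * p[3]
        if inside == full:
            score += full    # entirely inside: proportional to area
        elif inside > 0:
            score -= full    # partially cropped: penalize (assumed amount)
        # entirely outside: contributes nothing
    return score

def best_crop(img_w, img_h, crop_w, crop_h, points):
    """Walk the crop window one pixel at a time across the width
    and return the x offset of the highest-scoring position."""
    best_x, best = 0, None
    for x in range(img_w - crop_w + 1):
        s = score_crop((x, 0, crop_w, crop_h), points)
        if best is None or s > best:
            best_x, best = x, s
    return best_x
```

A portrait crop would walk the window vertically instead, exactly as FIG. 5b describes.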
[0029] FIG. 5a is a visual representation of the method by which image data in a landscape orientation is processed for photo-cropping. The image 510a contains three points of interest 520a. When the system processes this image for cropping, it begins with the cropping area 530a at the left side of the image and walks it over to the right side pixel by pixel. As can be seen in the figure, when the cropping area is in position 530a, two points of interest are captured; however, when it is in position 540a, only one point of interest is captured. Thus, when the system crops this image, it will crop it at position 530a.
[0030] FIG. 5b is a visual representation of a method by which image data in a portrait orientation is processed for photo-cropping. The image 510b contains five points of interest 520b, 530b, 540b, 550b, and 560b. When the image is processed for cropping, the cropping area 590b moves from top to bottom in search of the area that contains the most points of interest. In this case, position 570b contains three complete points of interest and therefore has the highest concentration. The next highest concentration is found in position 580b, which only has two complete points of interest. Thus, when the system crops this image, it will crop it at the coordinates corresponding to the cropping area in position 570b.
[0031] FIG. 5c is a visual representation of a method by which image data is processed for photo-cropping. The image 510c in FIG. 5c is processed much in the same manner as discussed in reference to FIGs. 5a and 5b, above. In this image, while there are two positions 550c and 560c for the cropping area where two points of interest would be captured, in position 560c, the point of interest 530c is only partially contained within the cropping area. In position 550c, both the points of interest are completely contained in the cropping area. Position 550c would thus have a higher score than position 560c, and the system would therefore crop this image at position 550c.
[0032] In some embodiments, the points of interest are used as hot-spots for navigation. Users can associate hyperlinks or text with a hot-spot through a graphical user interface. When a viewer watches the image on a client machine, positioning the mouse cursor over a hot-spot can display a text popup, and clicking on the hot-spot can perform a navigation action, such as opening a new web page.
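The hot-spot behavior of paragraph [0032] reduces to a hit test of the cursor against the stored point-of-interest rectangles. A minimal sketch, with an assumed (x, y, w, h) rectangle representation and an annotations mapping keyed by point index:

```python
def hotspot_at(points, annotations, cursor):
    """Return the annotation (text or hyperlink) for the point of
    interest under the cursor, or None if the cursor is over none.

    points:      list of (x, y, w, h) point-of-interest rectangles
    annotations: dict mapping point index -> text or URL
    cursor:      (cx, cy) mouse position in image coordinates
    """
    cx, cy = cursor
    for i, (x, y, w, h) in enumerate(points):
        if x <= cx < x + w and y <= cy < y + h:
            return annotations.get(i)
    return None
```

A client would show a popup for a text annotation on hover, and navigate on click when the annotation is a hyperlink.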
[0033] In some embodiments, the points of interest are used as reference points for automatic animation of the image. Different animation styles exist, and the user can choose the desired style. For example, one animation style consists of focusing on a point of interest and gradually zooming into it. Another style consists of a slow pan between all of the points of interest in a picture. There are dozens of such styles, all of which use the points of interest to generate more intelligent animation.

[0034] In some embodiments, the user may have the system assign hot-spots and/or perform animation on the cropped image, or the user may use the system solely to detect points of interest and then use an uncropped image for hot-spots and/or animation.
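The "slow pan" style described in paragraph [0033] can be sketched as a keyframe generator: the camera center is interpolated between the centers of successive points of interest. The function name, the linear easing, and the frames-per-leg parameter are illustrative assumptions, not the patent's own API.

```python
def pan_keyframes(points, frames_per_leg=30):
    """Generate camera-center keyframes that pan between the centers of
    successive points of interest (a sketch of the slow-pan style).

    points: list of (x, y, w, h) point-of-interest rectangles.
    Returns a list of (cx, cy) camera centers, one per frame.
    """
    centers = [(px + pw / 2.0, py + ph / 2.0) for (px, py, pw, ph) in points]
    keyframes = []
    for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
        for f in range(frames_per_leg):
            t = f / float(frames_per_leg)  # linear easing between centers
            keyframes.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    if centers:
        keyframes.append(centers[-1])  # finish resting on the last point
    return keyframes

frames = pan_keyframes([(0, 0, 20, 20), (100, 0, 20, 20)], frames_per_leg=4)
print(frames)  # camera glides from (10.0, 10.0) toward (110.0, 10.0)
```

The zoom style would work the same way, interpolating a zoom factor toward a single point's center instead of panning between several.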

Claims

WHAT IS CLAIMED IS:
1. A system for editing images comprising:
a processor that receives data representative of an image, the processor coupled to a display; and
a user input connected to the processor;
wherein the processor is configured to:
detect regions of the image representing faces;
store data representing coordinates of any regions detected as representing faces as points of interest;
cause the display of the image on the display with an indication of the regions with coordinates stored as points of interest;
when data indicative of adding a user selected region of the image as an additional point of interest is received through the user input, store data representing the coordinates of the user selected region as a point of interest;
when data indicative of deleting a region corresponding to an indication of a point of interest on the display is received through the user input, remove data representing the coordinates of the region corresponding to the indication of a point of interest on the display;
determine a crop area for the image, the crop area having predetermined dimensional characteristics, such that the crop area will best preserve points of interest according to a predetermined set of rules; and
cause the display of an indication of the determined crop area on the display.
2. The system of claim 1 wherein the predetermined set of rules includes maximizing the number of points of interest in the crop area.
3. The system of claim 2 wherein the predetermined set of rules includes minimizing the intersection of borders of the crop area with stored points of interest.
4. The system of claim 1 wherein the predetermined set of rules includes minimizing the intersection of borders of the crop area with stored points of interest.
5. The system of claim 1 wherein the predetermined dimensional characteristics of the crop area are determined based on data received by the processor from the user input.
6. The system of claim 1 wherein the predetermined set of rules are determined, at least in part, based on data received by the processor from the user input.
7. A method for editing images using a processor that receives data representative of an image, the processor coupled to a display and a user input connected to the processor, the method comprising the processor:
detecting regions of the image representing faces;
storing data representing the coordinates of any regions detected as representing faces as points of interest;
causing the display of the image on the display with an indication of the regions with coordinates stored as points of interest;
when data indicative of adding a user selected region of the image as an additional point of interest is received through the user input, storing data representing the coordinates of the user selected region as a point of interest;
when data indicative of deleting a region corresponding to an indication of a point of interest on the display is received through the user input, removing data representing the coordinates of the region corresponding to the indication of a point of interest on the display;
determining a crop area for the image, the crop area having predetermined dimensional characteristics, such that the crop area will best preserve points of interest according to a predetermined set of rules; and
causing the display of an indication of the determined crop area on the display.
8. The method of claim 7 wherein the predetermined set of rules includes maximizing the number of points of interest in the crop area.
9. The method of claim 8 wherein the predetermined set of rules includes minimizing the intersection of borders of the crop area with stored points of interest.
10. The method of claim 7 wherein the predetermined set of rules includes minimizing the intersection of borders of the crop area with stored points of interest.
11. The method of claim 7 wherein the predetermined dimensional characteristics of the crop area are determined based on data received by the processor from the user input.
12. The method of claim 7 wherein the predetermined set of rules are determined, at least in part, based on data received by the processor from the user input.
PCT/US2008/052367 2007-01-29 2008-01-29 Image editing system and method WO2008094951A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US89820107P 2007-01-29 2007-01-29
US60/898,201 2007-01-29

Publications (1)

Publication Number Publication Date
WO2008094951A1 true WO2008094951A1 (en) 2008-08-07

Family

ID=39674483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/052367 WO2008094951A1 (en) 2007-01-29 2008-01-29 Image editing system and method

Country Status (1)

Country Link
WO (1) WO2008094951A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040228528A1 (en) * 2003-02-12 2004-11-18 Shihong Lao Image editing apparatus, image editing method and program
US20050025387A1 (en) * 2003-07-31 2005-02-03 Eastman Kodak Company Method and computer program product for producing an image of a desired aspect ratio
US20050278636A1 (en) * 2004-06-09 2005-12-15 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program for implementing the method, and storage medium storing the program
US20060238827A1 (en) * 2005-04-20 2006-10-26 Fuji Photo Film Co., Ltd. Image processing apparatus, image processing system, and image processing program storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013049374A3 (en) * 2011-09-27 2013-05-23 Picsured, Inc. Photograph digitization through the use of video photography and computer vision technology
US20140348394A1 (en) * 2011-09-27 2014-11-27 Picsured, Inc. Photograph digitization through the use of video photography and computer vision technology

Similar Documents

Publication Publication Date Title
US8218830B2 (en) Image editing system and method
US11706521B2 (en) User interfaces for capturing and managing visual media
US20230319394A1 (en) User interfaces for capturing and managing visual media
EP3736676B1 (en) User interfaces for capturing and managing visual media
US8749587B2 (en) System and method for content based automatic zooming for document viewing on small displays
US7471827B2 (en) Automatic browsing path generation to present image areas with high attention value as a function of space and time
US11567624B2 (en) Techniques to modify content and view content on mobile devices
US7006091B2 (en) Method and system for optimizing the display of a subject of interest in a digital image
US20210019946A1 (en) System and method for augmented reality scenes
US8601393B2 (en) System and method for supporting document navigation on mobile devices using segmentation and keyphrase summarization
EP1630704A2 (en) Image file management apparatus and method, program, and storage medium
CN109215017B (en) Picture processing method and device, user terminal, server and storage medium
US20150007024A1 (en) Method and apparatus for generating image file
US20050116966A1 (en) Web imaging serving technology
Xie et al. Learning user interest for image browsing on small-form-factor devices
US20090100333A1 (en) Visualizing circular graphic objects
CN111612873A (en) GIF picture generation method and device and electronic equipment
US8532435B1 (en) System and method for automatically adapting images
EP2747404B1 (en) Image processing terminal, image processing system, and computer-readable storage medium storing control program of image processing terminal
JP2007133878A (en) Method, device, system and program for browsing multiple images
EP1755051A1 (en) Method and apparatus for accessing data using a symbolic representation space
US10304232B2 (en) Image animation in a presentation document
JP2003303333A (en) Image display control device
US20120306736A1 (en) System and method to control surveillance cameras via a footprint
JP2019101559A (en) Information processing apparatus, information processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08714118

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112 (1) EPC, EPO FORM 1205A DATED 03-12-2009

122 Ep: pct application non-entry in european phase

Ref document number: 08714118

Country of ref document: EP

Kind code of ref document: A1