WO2013141025A1 - Image processing device, image processing method, program, and recording medium - Google Patents

Image processing device, image processing method, program, and recording medium

Info

Publication number
WO2013141025A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
additional information
image processing
processing apparatus
image
Application number
PCT/JP2013/056200
Other languages
French (fr)
Japanese (ja)
Inventor
圭史 赤田
Original Assignee
Sony Corporation (ソニー株式会社)
Application filed by Sony Corporation (ソニー株式会社)
Publication of WO2013141025A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/14 Display of multiple viewports
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4882 Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/12 Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G 2340/125 Overlay of images wherein one of the images is motion video
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/39 Control of the bit-mapped memory
    • G09G 5/395 Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
    • G09G 5/397 Arrangements specially adapted for transferring the contents of two or more bit-mapped memories to the screen simultaneously, e.g. for mixing or overlay

Definitions

  • the present disclosure relates to an image processing device, an image processing method, a program, and a recording medium.
  • JP 2010-44606 A; Japanese Unexamined Patent Publication No. 2009-200784
  • Patent Documents 1 and 2 synthesize additional information such as advertisements with still images.
  • In a still image, the composition of the subject and background in the image is constant and does not change.
  • In a moving image, on the other hand, the composition of the subject and the background in the image changes with time. For this reason, the area into which additional information is to be inserted must be determined appropriately, and the still-image techniques of Patent Documents 1 and 2 cannot be applied to moving images.
  • Therefore, one object of the present disclosure is to provide, for example, an image processing apparatus, an image processing method, a program, and a recording medium that appropriately determine a region into which additional information is inserted in a moving image.
  • To this end, the present disclosure provides, for example, an image processing apparatus having a detection unit that analyzes a plurality of frame images and detects an object, and an area control unit that determines an area into which additional information is inserted so as to avoid the detected object.
  • The present disclosure also provides, for example, an image processing method in an image processing apparatus, including detecting an object by analyzing a plurality of frame images and determining an area into which additional information is inserted so as to avoid the detected object.
  • The present disclosure further provides, for example, a program for causing a computer to execute such an image processing method. A recording medium on which this program is recorded may also be used.
  • FIG. 1 is a diagram illustrating an example of the configuration of an image processing apparatus.
  • FIGS. 2A, 2B, and 2C are diagrams for explaining an example of an insertion area into which additional information is inserted.
  • FIGS. 3A, 3B, and 3C are diagrams illustrating an example of moving image data in which additional information is inserted.
  • FIGS. 4A, 4B, and 4C are diagrams for explaining an example of a state in which an insertion area into which additional information is inserted changes.
  • FIG. 5 is a flowchart illustrating an example of a processing flow in the first embodiment.
  • FIGS. 6A, 6B, and 6C are diagrams illustrating an example of a screen for selecting an insertion area.
  • FIG. 7 is a diagram illustrating an example of the configuration of the imaging apparatus.
  • FIG. 8 is a flowchart illustrating an example of a processing flow in the second embodiment.
  • FIGS. 9A, 9B, and 9C are diagrams illustrating an example of moving image data in which additional information is inserted into another insertion area.
  • FIGS. 10A, 10B, and 10C are diagrams illustrating an example of additional information that is inserted so as to overlap an object.
  • FIGS. 11A, 11B, and 11C are diagrams for explaining that the display mode of the additional information is changed and displayed.
  • the image processing device is realized as, for example, a personal computer, a tablet computer device, a portable terminal, or a television device. Furthermore, it may be realized as an editing device used in a broadcasting station or a content server that distributes content.
  • FIG. 1 shows an example of the configuration of the image processing apparatus 100.
  • the image processing apparatus 100 includes a CPU (Central Processing Unit) 101, a flash memory 102, a display control unit 103, a display unit 104, an operation input unit 105, a work memory 106, and an image processing unit 107. These units are connected via a bus 108.
  • the configuration of the image processing apparatus 100 is an example, and can be changed as appropriate.
  • an audio processing unit that processes audio data and a speaker that reproduces audio data may be added to the image processing apparatus 100.
  • CPU 101 controls each unit of the image processing apparatus 100. For example, a predetermined process is executed according to an operation signal supplied from the operation input unit 105.
  • the flash memory 102 is composed of, for example, a nonvolatile memory.
  • moving image data 102a including a plurality of frame images is stored.
  • the moving image data 102a may be supplied from another device via, for example, wired or wireless.
  • Additional information is stored in the flash memory 102.
  • the additional information is, for example, advertisement data 102b that is image data and signature data 102c that is text data.
  • the content of the additional information can be changed as appropriate.
  • Either one of the advertisement data 102b and the signature data 102c may be used as the additional information.
  • the additional information is inserted into a predetermined area of the moving image data. Note that an area in which additional information is inserted is appropriately referred to as an insertion area.
  • the display control unit 103 is a driver for driving the display unit 104.
  • the display control unit 103 generates video data based on moving image data supplied in accordance with the control of the image processing unit 107.
  • the display control unit 103 supplies video data to the display unit 104. Display based on the video data generated by the display control unit 103 is performed on the display unit 104.
  • the display unit 104 is a display panel such as an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence).
  • the display unit 104 is configured as a touch panel, for example.
  • An instruction to the image processing apparatus 100 can be performed by touching a predetermined area of the touch panel.
  • an insertion area can be selected by touching a predetermined insertion area from among a plurality of insertion areas displayed on the display unit 104.
  • an object can be selected by touching a predetermined object from among a plurality of objects displayed on the display unit 104.
  • the operation input unit 105 is a general term for a keyboard, a mouse, buttons, switches, and the like. In response to an operation on the operation input unit 105, an operation signal is generated. The generated operation signal is supplied to the CPU 101 via the bus 108. The CPU 101 executes processing according to the supplied operation signal.
  • the work memory 106 includes, for example, a RAM (Random Access Memory), and is used as a work area when the CPU 101 and the image processing unit 107 execute processing.
  • the image processing unit 107 performs processing on a plurality of frame images.
  • the function of the image processing unit 107 may be incorporated in the CPU 101.
  • For example, the image processing unit 107 analyzes a plurality of frame images constituting the moving image data 102a and detects one or a plurality of objects, and then determines the area into which additional information is inserted so as to avoid the detected objects. That is, the image processing unit 107 functions as an example of a detection unit and an area control unit.
  • Here, "avoiding the object" covers both the case where the additional information does not overlap the object at all and the case where the additional information partially overlaps the object.
  • the image processing unit 107 holds a pattern for detecting an object.
  • the pattern is, for example, a pattern such as a person, a vehicle, a building, a mountain, a river, or a tree. If the image processing unit 107 does not hold a pattern, the image processing unit 107 may detect a specific object such as a person by detecting skin color, for example.
  • The image processing unit 107 of the image processing apparatus 100 reads the moving image data 102a from the flash memory 102.
  • When a plurality of moving image data 102a are stored in the flash memory 102, for example, the moving image data 102a selected by the user is read.
  • The image processing unit 107 expands the read moving image data 102a in the work memory 106.
  • The image processing unit 107 performs processing for detecting objects on the moving image data 102a expanded in the work memory 106. For example, the image processing unit 107 sets a window (search window) of a predetermined size and moves the window within the frame image. Pixel value data obtained by, for example, integral conversion of the pixel values in the window is compared with the pixel value data of each pattern. By performing the comparison processing while moving the window through the frame image as appropriate, it is determined whether or not an object that substantially matches a pattern exists in the frame image.
  • the image processing unit 107 determines an area in which the object exists based on the barycentric coordinates of the window and the window size.
  • the area where the object exists may be expressed as (x, y, t) using the barycentric coordinates (x, y) and the time information t of the frame image. Since the window size is usually a fixed size, the area where the object exists can be obtained based on the barycentric coordinates (x, y).
  • the image processing unit 107 performs processing for detecting an object for all frame images constituting the moving image data 102a.
  • Note that the object detection processing described above is an example, and the present disclosure is not limited to this.
  • Any known object detection process can be applied.
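  • As an illustrative aside, the window-scanning detection described above can be pictured with the following Python sketch; the mean-based matching statistic, the threshold, and all names here are assumptions for illustration, not the patent's actual implementation.

```python
import numpy as np

def detect_objects(frame, patterns, win=32, stride=16, thresh=10.0):
    """Slide a fixed-size search window over a grayscale frame and return
    (label, x, y) centroids of windows that roughly match a stored pattern.
    The window mean stands in for the integral-converted pixel value data
    mentioned in the text."""
    hits = []
    h, w = frame.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            stat = frame[y:y + win, x:x + win].mean()
            for label, pattern_stat in patterns.items():
                if abs(stat - pattern_stat) < thresh:
                    # The window size is fixed, so the centroid alone
                    # identifies the occupied region.
                    hits.append((label, x + win // 2, y + win // 2))
    return hits

def detect_in_video(frames, patterns):
    """Tag each hit with the frame's time index t, giving (label, x, y, t)."""
    return [(label, x, y, t)
            for t, frame in enumerate(frames)
            for label, x, y in detect_objects(frame, patterns)]

# Hypothetical usage with random data standing in for real frames and patterns.
frames = [np.random.rand(240, 320) * 255 for _ in range(3)]
print(detect_in_video(frames, {"person": 128.0})[:5])
```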
  • the image processing unit 107 determines an area into which additional information is inserted so as to avoid the detected object. For example, the image processing unit 107 determines an area where no object exists in each frame image. Based on the area where no object exists and the size of the additional information, the area into which the additional information is inserted is determined.
  • FIG. 2 schematically shows how the configuration in the screen of the moving image data 102a changes over time. As time passes, the configuration in the screen changes as shown in FIGS. 2A, 2B, and 2C. In FIG. 2, three frame images are illustrated for convenience of explanation, but the moving image data 102a is actually composed of many frame images.
  • In the frame image shown in FIG. 2A, a mountain 10, a person 11, a person 12, a train 13, a track 14, and a tree 15 are detected as examples of objects.
  • In the frame image shown in FIG. 2B, after a predetermined time has elapsed, a mountain 10, a train 13, a track 14, a tree 15, a person 16, and a person 17 are detected as examples of objects.
  • In the frame image shown in FIG. 2C, after a further predetermined time has elapsed, a mountain 10, a person 11, a person 12, a train 13, a track 14, a tree 15, a person 16, and a person 17 are detected as examples of objects.
  • an insertion area for inserting additional information is determined so as to avoid these detected objects.
  • For example, the image processing unit 107 determines an area 50 in which no object exists in any frame image of the moving image data 102a as the insertion area. By inserting the additional information into an area where no object exists, the area in which the additional information is displayed does not change even as the frame images change, so the display of the objects is not obstructed and the additional information remains easy for the user to see.
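  • A minimal sketch of this selection, assuming per-frame object regions are given as (x, y, w, h) boxes: the area left occupied in any frame is accumulated on the frame grid, and the grid is scanned from the upper left for the first spot large enough for the additional information. The scan stride and all names are illustrative assumptions.

```python
import numpy as np

def free_region_all_frames(frame_shape, boxes_per_frame, info_size):
    """Return the top-left corner of a region that is object-free in
    every frame and large enough for the additional information,
    or None if no such region exists.
    boxes_per_frame: list (one entry per frame) of (x, y, w, h) boxes."""
    h, w = frame_shape
    occupied = np.zeros((h, w), dtype=bool)
    for boxes in boxes_per_frame:
        for (x, y, bw, bh) in boxes:
            occupied[y:y + bh, x:x + bw] = True  # union over all frames
    iw, ih = info_size
    # Scan top-left to bottom-right, preferring the upper-left corner.
    for y in range(0, h - ih + 1, 8):
        for x in range(0, w - iw + 1, 8):
            if not occupied[y:y + ih, x:x + iw].any():
                return (x, y)
    return None

# Hypothetical usage: two frames, one moving object box.
boxes = [[(100, 60, 50, 40)], [(120, 70, 50, 40)]]
print(free_region_all_frames((240, 320), boxes, info_size=(80, 20)))
```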
  • FIG. 3 shows an example in which additional information is inserted into the determined insertion area 50.
  • the additional information is, for example, a comment (text data) edited by the user.
  • image data such as advertisement data and illustration data may be inserted.
  • the insertion area 50 is set near the upper left corner of the screen, for example.
  • the text data “Memories of last summer” is inserted into the insertion area 50 of each frame image.
  • the moving image data 102a in which the additional information is inserted is stored in the work memory 106.
  • the moving image data 102a in which the additional information is inserted may be transmitted from the work memory 106 to the display control unit 103, and the moving image data 102a in which the additional information is inserted may be displayed on the display unit 104.
  • In the example shown in FIG. 4, the moving image data 102a is composed of 600 frame images.
  • An insertion area 50a near the upper left corner of the screen is determined as the insertion area from the first frame image to the 300th frame image.
  • Then, an object (airplane 18) enters the area of the insertion area 50a.
  • an insertion area 50b is newly determined near the upper right corner of the screen, and additional information is inserted into the insertion area 50b.
  • the additional information is inserted into the insertion area 50b up to the 420th frame image.
  • the airplane 18 moves to the insertion area 50b, and the insertion area 50b and the airplane 18 overlap each other. For this reason, for example, a new insertion area 50c is set near the center left side of the screen. Additional information is inserted into the insertion area 50c. As described above, even when the area where the object does not exist changes with time, the insertion area can be appropriately changed.
  • Note that when the additional information has been displayed in a given insertion area for a predetermined time (for example, several seconds), the display of the additional information may simply be terminated instead of changing the insertion area.
  • In this way, the additional information is presented to the user for the predetermined time, and the area in which it is displayed is prevented from changing frequently.
  • FIG. 5 is a flowchart illustrating an example of the flow of processing performed by the image processing apparatus 100. The processing illustrated in FIG. 5 is performed by the image processing unit 107 unless otherwise specified.
  • In step S10, all frame images constituting the moving image data 102a are read from the flash memory 102. All the read frame images are stored in the work memory 106. Then, the process proceeds to step S11.
  • In step S11, the image processing unit 107 performs detection processing on each frame image and detects objects in each frame image. Then, the process proceeds to step S12.
  • In step S12, the result of the processing in step S11 is stored in the work memory 106. Then, the process proceeds to step S13.
  • In step S13, the area where an object exists in each frame image is determined based on the result of the detection processing. Then, the process proceeds to step S14.
  • In step S14, an insertion area for inserting additional information is determined. For example, an area where no object exists in any of the frame images is determined as the insertion area into which additional information is inserted. Then, the process proceeds to step S15.
  • In step S15, a composition process for inserting the additional information into the determined insertion area is performed.
  • the opacity of the portion corresponding to the insertion area in each frame image is set to the minimum. Additional information is inserted in the area where the opacity is set to the minimum.
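  • The composition step can be pictured as simple alpha blending into the chosen rectangle, as in the hedged Python sketch below; the alpha value and the function name are assumptions for illustration, not the patent's method.

```python
import numpy as np

def insert_additional_info(frame, overlay, top_left, alpha=1.0):
    """Blend `overlay` (H x W x 3) into `frame` at `top_left`.
    Scaling the insertion area's own contribution by (1 - alpha)
    mirrors the idea of minimizing its opacity before insertion."""
    x, y = top_left
    h, w = overlay.shape[:2]
    region = frame[y:y + h, x:x + w].astype(float)
    frame[y:y + h, x:x + w] = (
        (1.0 - alpha) * region + alpha * overlay.astype(float)
    ).astype(frame.dtype)
    return frame

# Hypothetical usage: a white text box composited near the upper left.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
overlay = np.full((20, 80, 3), 255, dtype=np.uint8)
insert_additional_info(frame, overlay, (10, 10), alpha=0.8)
```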
  • the moving image data 102a subjected to the synthesis process is stored in the work memory 106.
  • the moving image data 102a subjected to the synthesis process may be stored in the flash memory 102. Then, the process proceeds to step S16.
  • In step S16, the moving image data 102a subjected to the additional information combining process is supplied to the display control unit 103.
  • the display control unit 103 converts the moving image data 102a subjected to the additional information combining process into video data of a predetermined format.
  • Video data is supplied from the display control unit 103 to the display unit 104, and display based on the video data is performed. That is, the moving image data 102a in which the additional information is inserted is reproduced. Then, the process proceeds to step S17.
  • In step S17, it is determined whether or not an instruction to correct the insertion area has been given. If a plurality of insertion area candidates were determined in step S14, the additional information can be inserted into an insertion area selected from among those candidates.
  • For example, insertion area candidates (insertion area 60a, insertion area 60b, and insertion area 60c) are displayed on the display unit 104.
  • The plurality of insertion area candidates may be displayed as moving image data, as shown in FIGS. 6A, 6B, and 6C. Since the candidate areas themselves do not change, the candidates may instead be displayed using a single predetermined frame image, such as the one shown in FIG. 6A.
  • If no instruction to correct the insertion area is given in step S17, the process ends. If an instruction to correct the insertion area is given, the process proceeds to step S18.
  • The instruction to correct the insertion area is made, for example, by touching the insertion area desired by the user among the insertion area 60a, the insertion area 60b, and the insertion area 60c.
  • Alternatively, a desired insertion area may be selected by removing unnecessary insertion area candidates with a flick operation (an operation of flipping them toward the outside of the screen). In step S18, the additional information is inserted into the insertion area (corrected insertion area) selected by the user, and the process proceeds to step S16.
  • In step S16, the moving image data 102a in which the additional information is inserted into the corrected insertion area is reproduced, and the determination processing of step S17 is performed again.
  • priority information related to selection of the insertion area may be set. For example, when a plurality of insertion area candidates are determined, a setting may be made such that the insertion area near the upper left corner of the screen is preferentially selected. Furthermore, only an insertion area existing in a specific area may be extracted. For example, the screen may be equally divided into four, and the insertion area in the upper left area may be determined. If there is no insertion area in the upper left area, the insertion area in the upper right area may be determined next.
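  • One way to realize such a priority rule, sketched under the assumption that candidates are (x, y) corner points and the screen is divided into four equal quadrants scanned in a fixed preference order; the order and names are illustrative, not from the patent:

```python
def pick_by_quadrant(candidates, frame_w, frame_h,
                     order=("upper_left", "upper_right",
                            "lower_left", "lower_right")):
    """Return the first candidate found in the highest-priority quadrant.
    The preference order is configurable; upper-left first by default,
    matching the example in the text."""
    def quadrant(x, y):
        horiz = "left" if x < frame_w // 2 else "right"
        vert = "upper" if y < frame_h // 2 else "lower"
        return f"{vert}_{horiz}"
    for q in order:
        for (x, y) in candidates:
            if quadrant(x, y) == q:
                return (x, y)
    return None

# Hypothetical usage: no upper-left candidate, so upper-right wins.
print(pick_by_quadrant([(300, 20), (20, 200)], 320, 240))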
  • <2. Second Embodiment> "Configuration of the imaging apparatus" Next, a second embodiment will be described. In the second embodiment, the present disclosure is applied to an imaging apparatus having an imaging unit.
  • FIG. 7 shows an example of the configuration of the imaging apparatus 200.
  • the imaging apparatus 200 includes a CPU 201, a flash memory 202, a display control unit 203, a display unit 204, an operation input unit 205, a work memory 206, and an image processing unit 207. These units are connected via a bus 208.
  • the configuration of the imaging apparatus 200 is an example, and can be changed as appropriate.
  • an audio processing unit that processes audio data and a speaker that reproduces audio data may be added to the imaging apparatus 200.
  • the CPU 201 controls each unit of the imaging device 200. For example, the CPU 201 executes a predetermined process in response to an operation signal supplied from the operation input unit 205.
  • the CPU 201 has an image signal processing function.
  • For example, the CPU 201 performs analog signal processing, such as a CDS (Correlated Double Sampling) process for improving the S/N (Signal-to-Noise) ratio and an AGC (Automatic Gain Control) process for controlling the gain, on the frame image captured by the imaging unit 210.
  • the frame image subjected to the analog signal processing is converted into digital data by an A / D conversion (Analog to Digital) function of the CPU 201.
  • the CPU 201 performs image signal processing such as demosaic processing, AF (Auto Focus), AE (Auto Exposure), and AWB (Auto White Balance) on the frame image that has been converted into digital data.
  • the frame image subjected to the image signal processing is appropriately compressed and stored in the flash memory 202 in real time.
  • the CPU 201 reads a predetermined number of frame images from the flash memory 202 every time a predetermined number of frame images are accumulated in the flash memory 202.
  • a predetermined number of frame images read from the flash memory 202 are supplied to the work memory 206.
  • the predetermined number is, for example, 10 frames. Of course, the predetermined number can be changed as appropriate.
  • the frame image that has been subjected to the image signal processing may be directly supplied to the work memory 206.
  • processing by an image processing unit 207 which will be described later, may be performed.
  • the flash memory 202 is composed of, for example, a nonvolatile memory.
  • a frame image captured in real time by the imaging unit 210 is stored in the flash memory 202.
  • the moving image data 202a is composed of a plurality of frame images captured in real time by the imaging unit 210.
  • Additional information is stored in the flash memory 202.
  • the additional information is, for example, advertisement data 202b that is image data and signature data 202c that is text data.
  • the content of the additional information can be changed as appropriate.
  • Either one of the advertisement data 202b and the signature data 202c may be used as the additional information.
  • the additional information is inserted into a predetermined area of the moving image data.
  • the display control unit 203 is a driver for driving the display unit 204.
  • the display control unit 203 generates video data based on moving image data supplied by the control of the image processing unit 207.
  • the display control unit 203 supplies the video data to the display unit 204. Display based on the video data generated by the display control unit 203 is performed on the display unit 204.
  • the display unit 204 is a display panel such as an LCD or an organic EL.
  • the display unit 204 is configured as a touch panel, for example. By touching a predetermined area of the touch panel, an instruction to the imaging apparatus 200 can be given.
  • the operation input unit 205 is a general term for a keyboard, a mouse, buttons, switches, and the like. An operation signal is generated in response to an operation on the operation input unit 205. The generated operation signal is supplied to the CPU 201 via the bus 208. The CPU 201 executes processing according to the supplied operation signal.
  • the work memory 206 includes, for example, a RAM, and is used as a work area when the CPU 201 and the image processing unit 207 execute processing.
  • the work memory stores a predetermined number of frame images to be processed by the image processing unit 207.
  • the image processing unit 207 performs processing on a plurality of frame images.
  • the function of the image processing unit 207 may be incorporated in the CPU 201.
  • the image processing unit 207 analyzes a predetermined number of frame images captured via the imaging unit 210 and detects one or more objects. Then, an area for inserting additional information is determined so as to avoid the detected object. That is, the image processing unit 207 functions as an example of a detection unit and a region control unit.
  • the image processing unit 207 holds a pattern for detecting an object.
  • the pattern is, for example, a pattern such as a person, a vehicle, a building, a mountain, a river, or a tree. If the image processing unit 207 does not hold the pattern, the image processing unit 207 may detect a specific object such as a person by skin color detection or the like.
  • the imaging device 200 includes a lens 209 and an imaging unit 210.
  • the lens 209 and the imaging unit 210 constitute an imaging unit in the claims.
  • the lens 209 collects a light image from the subject.
  • the imaging unit 210 includes, for example, a diaphragm for adjusting the amount of light and an imaging element.
  • the light amount of the optical image is adjusted by the diaphragm.
  • the condensed light image is supplied to the image sensor.
  • the optical image is photoelectrically converted by the imaging device, and analog image data that is an electrical signal is generated.
  • Examples of the image pickup device include a CCD (Charge Coupled Device) sensor and a CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • the imaging device 200 captures a frame image at a predetermined frame rate.
  • the frame rate varies depending on the imaging device.
  • the frame rate of the imaging device 200 is, for example, 60 f / s (frames per second).
  • Imaging operation is performed using the imaging device 200.
  • a frame image is acquired via the imaging unit 210 by an imaging operation.
  • the CPU 201 performs predetermined image signal processing on the frame image.
  • a frame image subjected to predetermined image signal processing is stored in the work memory 206.
  • For example, the image processing unit 207 performs a process of detecting objects using the 10 frame images. Then, the image processing unit 207 determines an insertion area into which the additional information is inserted so as to avoid the detected objects. For example, an area where no object exists in any of the 10 frame images is determined as the insertion area.
  • the image processing unit 207 reads additional information such as advertisement data from the flash memory 202.
  • the read additional information is inserted into the insertion area of each frame image.
  • After inserting the additional information into the insertion area of each frame image, the image processing unit 207 supplies the 10 frame images to the display control unit 203. Then, the same processing is performed on the next 10 frame images.
  • The display control unit 203 generates video data based on the 10 frame images and supplies the generated video data to the display unit 204. Display based on the video data is performed on the display unit 204.
  • the frame image captured in real time may be displayed on the display unit 204.
  • In this case, the insertion area is determined using the 10 temporally preceding frames.
  • The additional information may then be inserted into the corresponding area of the frame images displayed in real time. The insertion area is unlikely to change significantly over a span as short as 10 frames, so there is no problem in applying an insertion area determined from temporally past frame images to a frame image displayed in real time.
  • Weighting may also be performed so that frame images closer in time to the current timing are weighted more heavily.
  • The insertion area candidate that is most appropriate for the heavily weighted frame images may then be determined as the insertion area.
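  • A hedged sketch of this weighting: each candidate is scored per frame, recent frames count more through a geometrically decaying weight, and the highest total wins. The scoring scheme and decay rate are assumptions for illustration.

```python
def weighted_best_candidate(per_frame_scores, decay=0.8):
    """per_frame_scores: list over frames (oldest first) of
    {candidate: suitability score}. The newest frame gets weight 1.0
    and older frames decay geometrically, so the chosen insertion
    region favors the most recent layout."""
    n = len(per_frame_scores)
    totals = {}
    for i, scores in enumerate(per_frame_scores):
        weight = decay ** (n - 1 - i)  # newest frame -> weight 1.0
        for cand, s in scores.items():
            totals[cand] = totals.get(cand, 0.0) + weight * s
    return max(totals, key=totals.get) if totals else None

# Hypothetical usage: candidate "B" is better in the newest frames.
scores = [{"A": 1.0, "B": 0.2}, {"A": 0.4, "B": 0.9}, {"A": 0.3, "B": 1.0}]
print(weighted_best_candidate(scores))  # -> "B"
```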
  • FIG. 8 is a flowchart illustrating an example of a processing flow in the imaging apparatus 200. The processing in FIG. 8 is performed by the image processing unit 207 unless otherwise specified.
  • In step S20, an imaging operation using the imaging apparatus 200 is performed.
  • moving image data for a predetermined time is stored in the work memory 206. That is, a predetermined number of frame images corresponding to a predetermined time are stored in the work memory 206. For example, a frame image of 10 frames is stored in the work memory 206. Then, the process proceeds to step S21.
  • In step S21, detection processing that analyzes each of the 10 frame images stored in the work memory 206 and detects objects is executed. Then, the process proceeds to step S22. In step S22, the result of the detection processing is stored in the work memory 206. Then, the process proceeds to step S23.
  • In step S23, the area where an object exists in each frame image is determined. Then, the process proceeds to step S24.
  • In step S24, an insertion area into which the additional information is inserted is determined so as to avoid the detected objects. For example, an area where no object exists in any of the 10 frame images is determined as the insertion area. Then, the process proceeds to step S25.
  • In step S25, a composition process for inserting the additional information into the insertion area is performed. For example, the opacity of the area corresponding to the insertion area in each frame image is set to the minimum. Then, the additional information is inserted into the insertion area. The process proceeds to step S26.
  • In step S26, the 10 frame images subjected to the composition process are supplied to the display control unit 203.
  • The display control unit 203 generates video data based on the 10 frame images.
  • Video data is supplied to the display unit 204, and display based on the video data is performed. That is, moving image data in which additional information is inserted is reproduced. Then, the process proceeds to step S27.
  • In step S27, it is determined whether or not the imaging operation has been completed. When the imaging operation has ended, the process ends. If the imaging operation has not ended, the process returns to step S20, and similar processing is performed on the next 10 temporally subsequent frame images.
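  • The S20 to S27 loop amounts to batch processing of every 10 captured frames. The Python skeleton below mirrors that control flow; every callable is a caller-supplied stub (an assumption for illustration), not a real camera or display API.

```python
import itertools

BATCH = 10  # frames per processing unit, as in the example

def run_capture_loop(capture_frame, detect, choose_region,
                     composite, display, is_done):
    """Skeleton of the FIG. 8 flow: buffer BATCH frames (S20), detect
    objects (S21-S23), pick an insertion area avoiding them (S24),
    composite the additional information (S25), display (S26), and
    repeat until the imaging operation ends (S27)."""
    while not is_done():
        frames = [capture_frame() for _ in range(BATCH)]   # S20
        objects = [detect(f) for f in frames]              # S21-S23
        region = choose_region(objects)                    # S24
        frames = [composite(f, region) for f in frames]    # S25
        display(frames)                                    # S26

# Hypothetical dry run with trivial stand-ins for every stage.
counter = itertools.count()
run_capture_loop(
    capture_frame=lambda: next(counter),
    detect=lambda f: [],
    choose_region=lambda objs: (0, 0),
    composite=lambda f, r: f,
    display=lambda fs: print("displayed", len(fs), "frames"),
    is_done=lambda: next(counter) > 30,
)
```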
  • a screen for selecting objects may be displayed in which all objects are displayed in a list.
  • For example, assume that a mountain 10, persons (person 11 and person 12), a train 13, and a track 14 are selected as objects to be avoided.
  • the tree 15 is not selected.
  • additional information 70 may be inserted into an area where the tree 15 exists.
  • the object to be avoided is selected by an operation by the user, for example.
  • An object existing near the center in the image may be automatically selected as an object to be avoided.
  • the additional information may be inserted into a region close to the selected object when the object is selected. For example, when an object to be noticed in a moving image is selected, additional information can be effectively presented to the user using the attention to the object.
  • Additional information may be inserted so that the additional information partially overlaps the object.
  • the additional information 80 may be inserted so as to overlap the vicinity of the body of the object (person 11, person 12, person 16, and person 17). In this case, it is preferable to prevent the additional information 80 from overlapping an important part of the object (for example, a human face). Furthermore, the degree of overlap where the additional information overlaps the object may be set.
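  • The permissible degree of overlap can be expressed as the fraction of the object's box covered by the additional information, with an exclusion box for important parts such as a face. The 0.3 limit and all names in this sketch are illustrative assumptions.

```python
def overlap_fraction(a, b):
    """Fraction of box `a` covered by box `b`; boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / float(aw * ah)

def placement_allowed(info_box, object_box, face_box=None, max_overlap=0.3):
    """Allow the placement if it covers at most `max_overlap` of the
    object and does not touch the face region at all. The 0.3 limit
    is an illustrative assumption, not a value from the patent."""
    if face_box is not None and overlap_fraction(face_box, info_box) > 0:
        return False
    return overlap_fraction(object_box, info_box) <= max_overlap

# Hypothetical usage: info overlaps the body but avoids the face.
print(placement_allowed((60, 80, 40, 20), (50, 40, 60, 80),
                        face_box=(65, 40, 30, 25)))
```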
  • A depth map may be generated for each frame image. Using the depth map, it is determined whether each detected object exists on the back side or the near side in the depth direction. That is, the position of the object in the depth direction z is determined.
  • the object existing on the back side may be regarded as an unimportant object, and the additional information may be inserted so that the additional information overlaps all or part of the object existing on the back side. Thereby, even when the moving image data is three-dimensional data, the additional information can be appropriately inserted.
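  • Assuming a depth map normalized so that larger values mean farther from the camera, back-side objects can be classified as overlap-tolerant as in the following sketch; the median test and threshold are assumptions for illustration.

```python
import numpy as np

def overlayable_objects(depth_map, object_boxes, far_thresh=0.7):
    """Return the boxes whose median depth exceeds `far_thresh`
    (i.e. objects judged to lie on the back side), which the text
    treats as candidates the additional information may cover.
    Depth values are assumed normalized to [0, 1], far = large."""
    result = []
    for (x, y, w, h) in object_boxes:
        region = depth_map[y:y + h, x:x + w]
        if np.median(region) > far_thresh:
            result.append((x, y, w, h))
    return result

# Hypothetical usage: a synthetic depth map, far at the top of the frame.
depth = np.linspace(1.0, 0.0, 240)[:, None] * np.ones((240, 320))
print(overlayable_objects(depth, [(10, 10, 50, 40), (10, 180, 50, 40)]))
```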
  • the display mode of the additional information may be changed according to the insertion area.
  • For example, the size and shape of the additional information may be changed. As shown in FIGS. 11A, 11B, and 11C, suppose that an appropriate insertion area is determined near the lower right of the screen. In such a case, for example, additional information 90 laid out over a plurality of lines is generated, and the additional information 90 is inserted into the insertion area near the lower right of the screen.
  • A motion vector may be obtained for each object, and a motion vector for the entire screen may be obtained from all of the motion vectors. The insertion area into which the additional information is inserted may then be changed based on the motion vector of the entire screen. Since the area in which the additional information is displayed follows the movement in the screen, the display appears natural to the user.
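  • A simple way to aggregate per-object motion into a whole-screen motion vector is an area-weighted average, as sketched below; this aggregation rule and the shifting step are assumptions, not the patent's method.

```python
def global_motion(object_vectors):
    """object_vectors: list of ((dx, dy), area) per detected object.
    Returns the area-weighted mean motion of the screen, which can
    then shift the insertion area so it follows the scene's movement."""
    total_area = sum(area for _, area in object_vectors)
    if total_area == 0:
        return (0.0, 0.0)
    gx = sum(dx * area for (dx, _), area in object_vectors) / total_area
    gy = sum(dy * area for (_, dy), area in object_vectors) / total_area
    return (gx, gy)

def shift_region(region, motion):
    """Move the insertion region by the global motion vector."""
    (x, y), (dx, dy) = region, motion
    return (x + int(round(dx)), y + int(round(dy)))

# Hypothetical usage: a large object drifting right dominates.
print(shift_region((20, 20), global_motion([((4, 0), 900), ((-2, 0), 100)])))
```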
  • The additional information to be inserted is not limited to a single item; a plurality of items of additional information may be inserted.
  • Additional information may be related to the object.
  • an advertisement related to the object may be the additional information.
  • The location of the image processing apparatus or the imaging apparatus may be acquired from GPS or compass information, and additional information related to that location may be inserted.
  • the image processing apparatus can be applied to a video production system. For example, when displaying a pop-up menu, the menu can be displayed without hindering the display of main objects of the moving image being played. Furthermore, an area for inserting a comment at the time of image editing may be recommended.
  • The present disclosure can also be applied to an AR (Augmented Reality) system.
  • additional information such as signatures and advertisements can be displayed on the moving image data displayed in real time without obstructing the display of main objects of the moving image data.
  • the present disclosure can be realized not only as a device but also as a method, a program, and a recording medium.
  • This disclosure can also be applied to a so-called cloud system in which the exemplified processing is distributed and processed by a plurality of devices.
  • Furthermore, the present disclosure can be realized as a system that executes the exemplified processing and as an apparatus that executes at least a part of the exemplified processing.
  • The present disclosure can also adopt the following configurations.
  • (1) An image processing apparatus including: a detection unit that analyzes a plurality of frame images and detects an object; and an area control unit that determines an area into which additional information is inserted so as to avoid the detected object.
  • (2) The image processing apparatus according to (1), wherein the area control unit detects a plurality of candidate areas for inserting the additional information, and determines an area selected from the plurality of area candidates as the area into which the additional information is inserted.
  • (3) The image processing apparatus according to (2), wherein the plurality of area candidates are displayed on a display unit, and the area control unit determines an area selected by a user operation from among the plurality of area candidates displayed on the display unit as the area into which the additional information is inserted.
  • (4) The image processing apparatus according to any one of (1) to (3), wherein the detection unit detects a plurality of the objects, and the area control unit determines the area into which the additional information is inserted so as to avoid a selected object among the plurality of objects.
  • (5) The image processing apparatus according to (4), wherein the plurality of objects are displayed on a display unit, and the area control unit determines the area into which the additional information is inserted so as to avoid an object selected by a user operation from among the plurality of objects displayed on the display unit.
  • (6) The image processing apparatus according to any one of (1) to (5), wherein the area control unit determines an area into which the additional information is inserted so that the additional information partially overlaps the detected object.
  • (7) The image processing apparatus according to (6), wherein the area control unit determines the area into which the additional information is inserted so that the additional information partially overlaps an object existing on the back side in the depth direction among the detected objects.
  • (8) The image processing apparatus according to any one of (1) to (7), wherein a display mode of the additional information is changed according to the determined area.
  • (9) The image processing apparatus according to any one of (1) to (8), further including a storage unit that stores the plurality of frame images.
  • (10) The image processing apparatus according to any one of (1) to (8), further including an imaging unit, wherein the plurality of frame images are a predetermined number of frame images captured by the imaging unit.
  • (11) The image processing apparatus according to any one of (1) to (10), further including a display unit that displays a moving image in which the additional information is inserted in the area.
  • (12) An image processing method in an image processing apparatus, including: detecting an object by analyzing a plurality of frame images; and determining an area into which additional information is inserted so as to avoid the detected object.
  • (13) A program for causing a computer to execute an image processing method in an image processing apparatus, the method including: detecting an object by analyzing a plurality of frame images; and determining an area into which additional information is inserted so as to avoid the detected object.
  • DESCRIPTION OF SYMBOLS: 100 Image processing apparatus; 102 Flash memory; 104 Display unit; 106 Work memory; 107 Image processing unit; 200 Imaging apparatus; 202 Flash memory; 204 Display unit; 206 Work memory; 207 Image processing unit; 209 Lens; 210 Imaging unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Image Processing (AREA)

Abstract

This image processing device has a detector that analyzes a plurality of frame images and detects an object, and a region controller that determines a region into which additional information is to be inserted so that the detected object is avoided.

Description

Image processing apparatus, image processing method, program, and recording medium

The present disclosure relates to an image processing apparatus, an image processing method, a program, and a recording medium.

As described in Patent Document 1 and Patent Document 2, additional information such as an advertisement is synthesized with a still image.

JP 2010-44606 A; Japanese Unexamined Patent Publication No. 2009-200784

Patent Documents 1 and 2 synthesize additional information such as advertisements with still images. In a still image, the composition of the subject and background in the image is constant and does not change. In a moving image, on the other hand, the composition of the subject and the background in the image changes with time. For this reason, the area into which additional information is to be inserted must be determined appropriately, and the still-image techniques of Patent Documents 1 and 2 cannot be applied to moving images.

Therefore, one object of the present disclosure is to provide, for example, an image processing apparatus, an image processing method, a program, and a recording medium that appropriately determine a region into which additional information is inserted in a moving image.
In order to solve the above-described problem, the present disclosure provides, for example, an image processing apparatus having: a detection unit that analyzes a plurality of frame images and detects an object; and an area control unit that determines an area into which additional information is inserted so as to avoid the detected object.
The present disclosure also provides, for example, an image processing method in an image processing apparatus, the method including: detecting an object by analyzing a plurality of frame images; and determining an area into which additional information is inserted so as to avoid the detected object.
The present disclosure further provides, for example, a program for causing a computer to execute an image processing method in an image processing apparatus, the method including: detecting an object by analyzing a plurality of frame images; and determining an area into which additional information is inserted so as to avoid the detected object. A recording medium on which this program is recorded may also be used.
According to at least one embodiment, a region into which additional information is inserted in a moving image can be determined appropriately.
FIG. 1 is a diagram illustrating an example of the configuration of an image processing apparatus.
FIGS. 2A, 2B, and 2C are diagrams for explaining an example of an insertion area into which additional information is inserted.
FIGS. 3A, 3B, and 3C are diagrams illustrating an example of moving image data in which additional information is inserted.
FIGS. 4A, 4B, and 4C are diagrams for explaining an example of a state in which an insertion area into which additional information is inserted changes.
FIG. 5 is a flowchart illustrating an example of a processing flow in the first embodiment.
FIGS. 6A, 6B, and 6C are diagrams illustrating an example of a screen for selecting an insertion area.
FIG. 7 is a diagram illustrating an example of the configuration of the imaging apparatus.
FIG. 8 is a flowchart illustrating an example of a processing flow in the second embodiment.
FIGS. 9A, 9B, and 9C are diagrams illustrating an example of moving image data in which additional information is inserted into another insertion area.
FIGS. 10A, 10B, and 10C are diagrams illustrating an example of additional information that is inserted so as to overlap an object.
FIGS. 11A, 11B, and 11C are diagrams for explaining that the display mode of the additional information is changed and displayed.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. The description is given in the following order.
<1. First Embodiment>
<2. Second Embodiment>
<3. Modification>
The embodiments described below are preferred specific examples of the present disclosure, and the contents of the present disclosure are not limited to these embodiments.
<1. First Embodiment>
"Configuration of the image processing apparatus"
First, the first embodiment will be described. In the first embodiment, the present disclosure is applied to an image processing apparatus. The image processing apparatus is realized as, for example, a personal computer, a tablet computer device, a portable terminal, or a television device. Furthermore, it may be realized as an editing device used in a broadcasting station or a content server that distributes content.
FIG. 1 shows an example of the configuration of the image processing apparatus 100. The image processing apparatus 100 includes a CPU (Central Processing Unit) 101, a flash memory 102, a display control unit 103, a display unit 104, an operation input unit 105, a work memory 106, and an image processing unit 107. These units are connected via a bus 108. Note that the configuration of the image processing apparatus 100 is an example, and can be changed as appropriate. For example, an audio processing unit that processes audio data and a speaker that reproduces audio data may be added to the image processing apparatus 100.

The CPU 101 controls each unit of the image processing apparatus 100. For example, it executes a predetermined process according to an operation signal supplied from the operation input unit 105.

The flash memory 102 is composed of, for example, a nonvolatile memory. In the flash memory 102, for example, moving image data 102a including a plurality of frame images is stored. Note that the moving image data 102a may be supplied from another device via, for example, a wired or wireless connection.

Additional information is stored in the flash memory 102. The additional information is, for example, advertisement data 102b, which is image data, and signature data 102c, which is text data. The content of the additional information can be changed as appropriate. Either one of the advertisement data 102b and the signature data 102c may be used as the additional information. The additional information is inserted into a predetermined area of the moving image data. Note that an area into which additional information is inserted is referred to as an insertion area as appropriate.
The display control unit 103 is a driver for driving the display unit 104. For example, the display control unit 103 generates video data based on moving image data supplied in accordance with the control of the image processing unit 107. The display control unit 103 supplies the video data to the display unit 104. Display based on the video data generated by the display control unit 103 is performed on the display unit 104.

The display unit 104 is a display panel such as an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence) panel. The display unit 104 is configured as a touch panel, for example. An instruction to the image processing apparatus 100 can be given by touching a predetermined area of the touch panel. For example, an insertion area can be selected by touching a predetermined insertion area from among a plurality of insertion areas displayed on the display unit 104. Furthermore, an object can be selected by touching a predetermined object from among a plurality of objects displayed on the display unit 104.

The operation input unit 105 is a general term for a keyboard, a mouse, buttons, switches, and the like. An operation signal is generated in response to an operation on the operation input unit 105. The generated operation signal is supplied to the CPU 101 via the bus 108. The CPU 101 executes processing according to the supplied operation signal.

The work memory 106 includes, for example, a RAM (Random Access Memory), and is used as a work area when the CPU 101 and the image processing unit 107 execute processing.
The image processing unit 107 performs processing on a plurality of frame images. The function of the image processing unit 107 may be incorporated in the CPU 101. For example, the image processing unit 107 analyzes a plurality of frame images constituting the moving image data 102a and detects one or a plurality of objects. Then, an area into which additional information is inserted is determined so as to avoid the detected objects. That is, the image processing unit 107 functions as an example of a detection unit and an area control unit. Here, "avoiding the object" covers both the case where the additional information does not overlap the object at all and the case where the additional information partially overlaps the object.

The image processing unit 107 holds patterns for detecting objects. The patterns are, for example, patterns of a person, a vehicle, a building, a mountain, a river, a tree, and the like. If the image processing unit 107 does not hold a pattern, it may detect a specific object such as a person by, for example, skin color detection.
"Operation of the image processing apparatus"
An example of an outline of the operation of the image processing apparatus 100 will be described. The image processing unit 107 of the image processing apparatus 100 reads the moving image data 102a from the flash memory 102. When a plurality of moving image data 102a are stored in the flash memory 102, for example, the moving image data 102a selected by the user is read. The image processing unit 107 expands the read moving image data 102a in the work memory 106.
The image processing unit 107 performs processing for detecting objects on the moving image data 102a expanded in the work memory 106. For example, the image processing unit 107 sets a window (search window) of a predetermined size and moves the window within the frame image. Pixel value data obtained by, for example, integral conversion of the pixel values in the window is compared with the pixel value data of each pattern. By performing the comparison processing while moving the window through the frame image as appropriate, it is determined whether or not an object that substantially matches a pattern exists in the frame image.

If an object that substantially matches a pattern exists in the frame image, the image processing unit 107 determines the area in which the object exists based on the barycentric coordinates of the window and the window size. The area where the object exists may be expressed as (x, y, t) using the barycentric coordinates (x, y) and the time information t of the frame image. Since the window size is usually fixed, the area where the object exists can be obtained from the barycentric coordinates (x, y). The image processing unit 107 performs this object detection processing on all frame images constituting the moving image data 102a.
 The object detection process described above is only an example and is not limiting; any known object detection process can be applied.
 Next, the image processing unit 107 determines the area into which the additional information is to be inserted so as to avoid the detected objects. For example, the image processing unit 107 identifies, in each frame image, the areas where no object exists, and determines the insertion area from those object-free areas and the size of the additional information.
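 A minimal sketch of this determination, assuming detected objects are reported as (x, y, w, h) boxes per frame and that any window clear in every frame qualifies; the 8-pixel scan stride is an arbitrary choice for brevity.

    import numpy as np

    def insertion_region(object_boxes_per_frame, frame_shape, info_size):
        # object_boxes_per_frame: one list of (x, y, w, h) boxes per frame.
        # Returns the top-left corner of the first window of size
        # info_size = (iw, ih) that is object-free in every frame,
        # scanning from the upper-left corner of the screen.
        h, w = frame_shape
        iw, ih = info_size
        occupied = np.zeros((h, w), dtype=bool)
        for boxes in object_boxes_per_frame:
            for (x, y, bw, bh) in boxes:
                occupied[y:y + bh, x:x + bw] = True
        for y in range(0, h - ih + 1, 8):
            for x in range(0, w - iw + 1, 8):
                if not occupied[y:y + ih, x:x + iw].any():
                    return (x, y)
        return None  # no area is clear in all frames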
 FIG. 2 schematically shows how the composition within the screen of the moving image data 102a changes over time, as illustrated by FIGS. 2A, 2B, and 2C. Although FIG. 2 shows only three frame images for convenience of explanation, the moving image data 102a is actually composed of many frame images.
 In the frame image shown in FIG. 2A, a mountain 10, a person 11, a person 12, a train 13, a track 14, and a tree 15 are detected as examples of objects. In the frame image shown in FIG. 2B, captured a predetermined time after FIG. 2A, a mountain 10, a train 13, a track 14, a tree 15, a person 16, and a person 17 are detected. In the frame image shown in FIG. 2C, captured a predetermined time after FIG. 2B, a mountain 10, a person 11, a person 12, a train 13, a track 14, a tree 15, a person 16, and a person 17 are detected.
 An insertion area for the additional information is determined so as to avoid these detected objects. For example, the image processing unit 107 determines, as the insertion area, an area 50 in which no object exists in any frame image of the moving image data 102a. Because the additional information is inserted where no object ever appears, the area in which it is displayed does not change even as the frame images constituting the moving image data 102a change. The display of the objects is therefore not obstructed, and the additional information remains easy for the user to see.
 FIG. 3 shows an example in which additional information is inserted into the determined insertion area 50. The additional information is, for example, a comment (text data) edited by the user; of course, image data such as advertisement data or illustration data may be inserted instead. The insertion area 50 is set, for example, near the upper-left corner of the screen.
 For example, the text "Memories of last summer" is inserted into the insertion area 50 of each frame image. The moving image data 102a with the additional information inserted is stored in the work memory 106; it may also be transmitted from the work memory 106 to the display control unit 103 and displayed on the display unit 104.
 As shown in FIG. 4, the area into which the additional information is inserted may also be changed over time. Suppose, for example, that the moving image data 102a is composed of 600 frame images. As a result of analyzing the 600 frame images, an insertion area 50a near the upper-left corner of the screen is determined as the insertion area for the 1st through 300th frame images.
 In the 301st and subsequent frame images, an object (an airplane 18) frames into the insertion area 50a. For this reason, an insertion area 50b is newly determined, for example near the upper-right corner of the screen, and the additional information is inserted there, for example up to the 420th frame image.
 In the 421st and subsequent frame images, the airplane 18 moves into the insertion area 50b so that the two overlap. A new insertion area 50c is therefore set, for example near the center-left of the screen, and the additional information is inserted into it. In this way, even when the object-free areas change over time, the insertion area can be changed appropriately.
 Note that when the additional information can be displayed in a given insertion area for a predetermined time (for example, on the order of several seconds), the display of the additional information may simply be ended rather than moved to a new area. The additional information can thus be presented to the user for the predetermined time while preventing the display area from changing frequently afterward.
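 One hedged way to express this time-varying behaviour, assuming a callable that reports whether a candidate region is object-free in a given frame; min_hold stands in for the "several seconds" above (roughly 120 frames at 60 f/s).

    def plan_regions(is_free, n_frames, candidates, min_hold=120):
        # is_free(i, region) -> True when `region` is object-free in frame i.
        # The current region is kept while it stays free; when an object
        # frames in, a new free candidate is chosen, unless the information
        # has already been shown for `min_hold` frames, in which case the
        # display simply ends instead of hopping to yet another region.
        plan, current, shown = [], None, 0
        for i in range(n_frames):
            if current is None or not is_free(i, current):
                current = (None if shown >= min_hold
                           else next((c for c in candidates if is_free(i, c)), None))
            plan.append(current)
            shown += current is not None
        return plan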
"Process Flow"
 FIG. 5 is a flowchart illustrating an example of the flow of processing performed by the image processing apparatus 100. Unless otherwise noted, the processing illustrated in FIG. 5 is performed by the image processing unit 107.
 In step S10, all frame images constituting the moving image data 102a are read from the flash memory 102 and stored in the work memory 106. The process then proceeds to step S11.
 In step S11, the image processing unit 107 performs detection processing on each frame image and detects the objects in it. In step S12, the result of the processing in step S11 is saved in the work memory 106, and the process proceeds to step S13.
 In step S13, the area in which an object exists in each frame image is determined on the basis of the detection result, and the process proceeds to step S14.
 In step S14, the insertion area for the additional information is determined. For example, an area in which no object exists in any frame image is determined as the insertion area. The process then proceeds to step S15.
 In step S15, composition processing for inserting the additional information into the determined insertion area is performed. For example, the opacity of the portion of each frame image corresponding to the insertion area is set to the minimum, and the additional information is inserted there. The composited moving image data 102a is stored in the work memory 106, and may also be stored in the flash memory 102. The process then proceeds to step S16.
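 The composition step could look roughly like the following, assuming uint8 RGB frames held as NumPy arrays; alpha = 1.0 corresponds to minimizing the opacity of the underlying area so that the additional information fully replaces it.

    import numpy as np

    def composite(frame, overlay, region, alpha=1.0):
        # Paste `overlay` (h x w x 3 uint8) into `frame` at region = (x, y).
        # With alpha = 1.0 the frame pixels in the insertion area are
        # fully replaced by the additional information.
        x, y = region
        h, w = overlay.shape[:2]
        roi = frame[y:y + h, x:x + w].astype(np.float32)
        frame[y:y + h, x:x + w] = (
            alpha * overlay + (1.0 - alpha) * roi).astype(np.uint8)
        return frame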
 In step S16, the moving image data 102a with the additional information composited is supplied to the display control unit 103, which converts it into video data of a predetermined format. The video data is supplied from the display control unit 103 to the display unit 104 and displayed; that is, the moving image data 102a with the additional information inserted is reproduced. The process then proceeds to step S17.
 In step S17, it is determined whether an instruction to correct the insertion area has been given. When a plurality of insertion area candidates were identified in step S14, the additional information can be inserted into an insertion area selected from among those candidates.
 For example, as shown in FIG. 6, the insertion area candidates (insertion areas 60a, 60b, and 60c) are displayed on the display unit 104. The candidates may be displayed as moving image data, as in FIGS. 6A, 6B, and 6C; since the candidate areas themselves do not change, they may instead be displayed on a single predetermined frame image such as the one shown in FIG. 6A.
 When no instruction to correct the insertion area is given in step S17, the process ends. When such an instruction is given, the process proceeds to step S18. The instruction is made, for example, by the user touching the desired one of the insertion areas 60a, 60b, and 60c; alternatively, the desired area may be selected by removing unnecessary candidates with a flick operation (an operation of flipping them toward the outside of the screen). The additional information is then inserted into the insertion area selected by the user (the corrected insertion area), and the process proceeds to step S16.
 In step S16, the moving image data 102a with the additional information inserted into the corrected insertion area is reproduced, and the determination of step S17 is then performed again.
 Priority information for selecting the insertion area may also be settable. For example, when a plurality of insertion area candidates are identified, a setting may give priority to a candidate near the upper-left corner of the screen. Furthermore, only insertion areas within a specific region may be extracted: the screen may be divided into four equal parts and an insertion area sought in the upper-left part first; if none exists there, an insertion area in the upper-right part may be sought next.
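 A sketch of such quadrant-based priority selection, under the assumption that candidates are identified by their top-left corners; the default order preferring the upper-left part matches the example above, and both it and the quadrant split are illustrative choices.

    def pick_by_priority(candidates, frame_shape, order=("tl", "tr", "bl", "br")):
        # candidates: (x, y) top-left corners of object-free regions.
        # Scans the four equal quadrants in the configured priority order
        # and returns the first candidate in the first non-empty quadrant.
        h, w = frame_shape

        def quadrant(p):
            x, y = p
            return ("t" if y < h // 2 else "b") + ("l" if x < w // 2 else "r")

        for q in order:
            found = [c for c in candidates if quadrant(c) == q]
            if found:
                return found[0]
        return None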
<2. Second Embodiment>
"Configuration of the Imaging Apparatus"
 Next, a second embodiment will be described, in which the present disclosure is applied to an imaging apparatus having an imaging unit.
 FIG. 7 shows an example of the configuration of the imaging apparatus 200. The imaging apparatus 200 includes a CPU 201, a flash memory 202, a display control unit 203, a display unit 204, an operation input unit 205, a work memory 206, and an image processing unit 207, connected to one another via a bus 208. This configuration is only an example and can be changed as appropriate; for instance, an audio processing unit that processes audio data and a speaker that reproduces it may be added.
 The CPU 201 controls each unit of the imaging apparatus 200 and, for example, executes predetermined processing in response to operation signals supplied from the operation input unit 205.
 The CPU 201 further has image signal processing functions. On each frame image captured by the imaging unit 210, the CPU 201 performs analog signal processing such as CDS (Correlated Double Sampling) processing, which improves the S/N (Signal to Noise) ratio, and AGC (Automatic Gain Control) processing, which controls the gain.
 The frame image subjected to the analog signal processing is converted into digital data by the A/D (Analog to Digital) conversion function of the CPU 201. The CPU 201 then applies image signal processing such as demosaic processing, AF (Auto Focus), AE (Auto Exposure), and AWB (Auto White Balance) to the digitized frame image. The processed frame image is compressed as appropriate and stored in the flash memory 202 in real time.
 Each time a predetermined number of frame images have accumulated in the flash memory 202, the CPU 201 reads that number of frame images from the flash memory 202 and supplies them to the work memory 206. The predetermined number is, for example, 10 frames, and can of course be changed as appropriate.
 Alternatively, the frame images subjected to the image signal processing may be supplied directly to the work memory 206, and the processing by the image processing unit 207 described below may be performed each time the predetermined number of frame images have accumulated there.
 The flash memory 202 is composed of, for example, a nonvolatile memory and stores the frame images captured in real time by the imaging unit 210; the plurality of frame images so captured constitute the moving image data 202a.
 The flash memory 202 also stores additional information, for example advertisement data 202b, which is image data, and signature data 202c, which is text data. The content of the additional information can be changed as appropriate, and either the advertisement data 202b or the signature data 202c alone may serve as the additional information. The additional information is inserted into a predetermined area of the moving image data.
 The display control unit 203 is a driver for driving the display unit 204. It generates video data based on, for example, the moving image data supplied under the control of the image processing unit 207, and supplies the video data to the display unit 204, on which the display based on that video data is made.
 The display unit 204 is a display panel such as an LCD or organic EL panel and is configured, for example, as a touch panel; instructions can be given to the imaging apparatus 200 by touching predetermined areas of the touch panel.
 The operation input unit 205 collectively refers to a keyboard, a mouse, buttons, switches, and the like. An operation signal is generated in response to an operation on the operation input unit 205 and supplied to the CPU 201 via the bus 208, and the CPU 201 executes processing according to the supplied operation signal.
 The work memory 206 is composed of, for example, a RAM and is used as a work area when the CPU 201 and the image processing unit 207 execute processing. It stores the predetermined number of frame images to be processed by the image processing unit 207.
 The image processing unit 207 performs processing on a plurality of frame images; its functions may instead be incorporated in the CPU 201. The image processing unit 207 analyzes the predetermined number of frame images captured via the imaging unit 210, detects one or more objects, and determines the area into which additional information is inserted so as to avoid the detected objects. That is, the image processing unit 207 functions as an example of a detection unit and an area control unit.
 The image processing unit 207 holds patterns for detecting objects, for example patterns of persons, vehicles, buildings, mountains, rivers, and trees. When no pattern is held, it may instead detect a specific object such as a person by skin-color detection or the like.
 The imaging apparatus 200 includes a lens 209 and an imaging unit 210; together, for example, they constitute the imaging unit recited in the claims. The lens 209 condenses an optical image of the subject.
 The imaging unit 210 includes, for example, a diaphragm that adjusts the amount of light and an imaging element. The amount of light of the optical image is adjusted by the diaphragm, and the condensed optical image is supplied to the imaging element, which photoelectrically converts it to generate analog image data as an electrical signal. The imaging element is composed of a CCD (Charge Coupled Device) sensor, a CMOS (Complementary Metal Oxide Semiconductor) sensor, or the like.
 The imaging apparatus 200 captures frame images at a predetermined frame rate, which differs from one imaging apparatus to another; the frame rate of the imaging apparatus 200 is, for example, 60 f/s (frames per second).
"Operation of the Imaging Apparatus"
 An example of the overall operation of the imaging apparatus 200 will now be described. When an imaging operation is performed using the imaging apparatus 200, frame images are acquired via the imaging unit 210, subjected to predetermined image signal processing by the CPU 201, and stored in the work memory 206.
 When, for example, 10 frame images have accumulated in the work memory 206, the image processing unit 207 performs object detection using those 10 frame images and determines the insertion area for the additional information so as to avoid the detected objects; for example, an area in which no object exists in any of the 10 frame images is determined as the insertion area.
 The image processing unit 207 reads additional information such as advertisement data from the flash memory 202 and inserts it into the insertion area of each frame image. After inserting the additional information, the image processing unit 207 supplies the 10 frame images to the display control unit 203 and then performs the same processing on the next 10 frame images.
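 The 10-frame cycle described here might be arranged as a generator along the following lines; detect, choose_region, and insert are hypothetical callables standing in for the processing described above, and the frames are assumed to be NumPy arrays.

    def streaming_pipeline(frame_source, detect, choose_region, insert, chunk=10):
        # frame_source yields frames; every `chunk` frames the buffered
        # images are analysed, one object-free region is chosen for the
        # whole chunk, the additional information is inserted into each
        # frame, and the chunk is handed on to the display path.
        buffer = []
        for frame in frame_source:
            buffer.append(frame)
            if len(buffer) == chunk:
                boxes = [detect(f) for f in buffer]
                region = choose_region(boxes, buffer[0].shape[:2])
                yield [insert(f, region) for f in buffer]
                buffer = []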
 The display control unit 203 generates video data based on the 10 frame images and supplies it to the display unit 204, on which the display based on the video data is made.
 Because the processing is performed in units of 10 frames, a lag (delay) occurs between the scene the user is actually capturing and the moving image displayed on the display unit 204. However, the delay corresponding to 10 frames is extremely short, so no practical inconvenience arises.
 The frame images captured in real time may also be displayed on the display unit 204. In that case the insertion area is determined using the 10 temporally preceding frames, and the additional information may be inserted into the corresponding area of the real-time frame images displayed on the display unit 204. Since the insertion area is unlikely to change substantially within the short span of 10 frames, applying an insertion area determined from past frame images to the frame images displayed in real time causes no problem.
 When the insertion area is determined using past frame images, the frames may be weighted so that frame images temporally closer to the current time carry greater weight. When a plurality of insertion area candidates exist, the candidate that is most appropriate in the heavily weighted frame images may be determined as the insertion area.
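 One possible weighting scheme, assuming each past frame reports which candidates are object-free; the exponential decay factor is an assumption, chosen so that the newest frame carries the largest weight.

    def weighted_choice(candidates, free_per_frame, decay=0.9):
        # free_per_frame[i][c] -> True if candidate c is free in past
        # frame i (i = 0 is the oldest). Frames nearer the present get
        # exponentially larger weights, so a candidate that is clear in
        # recent frames wins even if it was briefly occupied earlier.
        n = len(free_per_frame)
        scores = {c: 0.0 for c in candidates}
        for i, frame_free in enumerate(free_per_frame):
            w = decay ** (n - 1 - i)  # newest frame -> weight 1.0
            for c in candidates:
                scores[c] += w * frame_free[c]
        return max(scores, key=scores.get)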
"Process Flow"
 FIG. 8 is a flowchart illustrating an example of the flow of processing in the imaging apparatus 200. Unless otherwise noted, the processing in FIG. 8 is performed by the image processing unit 207.
 In step S20, an imaging operation using the imaging apparatus 200 is performed. In accordance with the imaging operation, moving image data for a predetermined time, that is, a predetermined number of frame images (for example, 10 frames), is stored in the work memory 206. The process then proceeds to step S21.
 In step S21, each of the 10 frame images stored in the work memory 206 is analyzed and detection processing for detecting objects is executed. In step S22, the result of the detection processing is saved in the work memory 206, and the process proceeds to step S23.
 In step S23, the area in which an object exists in each frame image is determined, and the process proceeds to step S24. In step S24, the insertion area for the additional information is determined so as to avoid the detected objects; for example, an area in which no object exists in any of the 10 frame images is determined as the insertion area. The process then proceeds to step S25.
 In step S25, composition processing for inserting the additional information into the insertion area is performed. For example, the opacity of the area of each frame image corresponding to the insertion area is set to the minimum and the additional information is inserted there. The process proceeds to step S26.
 In step S26, the 10 composited frame images are supplied to the display control unit 203, which generates video data based on them. The video data is supplied to the display unit 204 and displayed; that is, the moving image data with the additional information inserted is reproduced. The process then proceeds to step S27.
 In step S27, it is determined whether the imaging operation has ended. If it has, the process ends; if not, the process returns to step S20 and the same processing is performed on the temporally next 10 frame images.
 In this way, additional information can be inserted into an appropriate area even for moving image data input in real time.
<3. Modifications>
 Although one embodiment of the present disclosure has been described above, the present disclosure is not limited to the embodiment described above, and various modifications are possible.
 When a plurality of objects are detected, the objects to be avoided may be made selectable; for example, a screen for selecting objects, in which all the detected objects are listed, may be displayed.
 Suppose, for example, that the mountain 10, the persons (persons 11 and 12), the train 13, and the track 14 are selected as objects to be avoided while the tree 15 is not. In such a case, as shown in FIGS. 9A, 9B, and 9C, additional information 70 may be inserted into the area where the tree 15 exists.
 The objects to be avoided are selected, for example, by a user operation; alternatively, an object near the center of the image may be selected automatically as an object to be avoided. Furthermore, an object may be selected and the additional information inserted into an area close to the selected object. For example, when an object attracting attention in the moving image is selected, the attention paid to that object can be used to present the additional information to the user effectively.
 The additional information may also be inserted so as to partially overlap an object. For example, as shown in FIGS. 10A, 10B, and 10C, additional information 80 may be inserted so as to overlap the bodies of the objects (persons 11, 12, 16, and 17). In this case it is preferable that the additional information 80 not overlap an important part of the object (for example, a person's face). The permissible degree of overlap between the additional information and the object may also be made settable.
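 A rough sketch of such an overlap test; the body box, the face box, and the maximum overlap ratio are hypothetical inputs, and how the face is localized is outside this sketch.

    def partial_overlap_ok(info_box, body_box, face_box, max_ratio=0.3):
        # Boxes are (x, y, w, h). Allow the additional information to
        # overlap the object's body but never its face, and only up to
        # `max_ratio` of the body area.
        def area(b):
            return max(0, b[2]) * max(0, b[3])

        def inter(a, b):
            x = max(a[0], b[0]); y = max(a[1], b[1])
            w = min(a[0] + a[2], b[0] + b[2]) - x
            h = min(a[1] + a[3], b[1] + b[3]) - y
            return (x, y, max(0, w), max(0, h))

        if area(inter(info_box, face_box)) > 0:
            return False
        return area(inter(info_box, body_box)) <= max_ratio * area(body_box)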
 A depth map may be generated for each frame image and used to determine whether a detected object lies on the far side or the near side in the depth direction; that is, the depth direction z is determined. An object on the far side may be regarded as less important, and the additional information may be inserted so as to overlap such an object wholly or partially. In this way, additional information can be inserted appropriately even when the moving image data is three-dimensional data.
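 Assuming a per-pixel depth map where larger values mean farther away, the far-side judgment could be as simple as the following; the use of the median and the threshold value are illustrative choices.

    import numpy as np

    def overlap_allowed(depth_map, box, depth_threshold):
        # depth_map: per-pixel depth (larger = farther). Returns True
        # when the detected object inside `box` lies beyond the
        # threshold, i.e. it is treated as background that the
        # additional information may fully or partially cover.
        x, y, w, h = box
        return float(np.median(depth_map[y:y + h, x:x + w])) > depth_threshold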
 The display mode of the additional information may be changed according to the insertion area; for example, the size or shape of the additional information may be changed. Suppose, as shown in FIGS. 11A, 11B, and 11C, that the appropriate insertion area is judged to be near the lower right of the screen. In such a case, additional information 90 laid out over a plurality of rows may be generated and inserted into the insertion area near the lower right of the screen.
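 For instance, a comment could be broken into multiple rows to fit a narrow region with a sketch like the following; the fixed glyph width in pixels is an assumption standing in for real text metrics.

    def reflow_text(text, region_w, char_w=12):
        # Break the comment into rows so it fits a region of width
        # region_w pixels, e.g. the lower-right area in FIG. 11.
        per_row = max(1, region_w // char_w)
        words, rows, line = text.split(), [], ""
        for word in words:
            trial = (line + " " + word).strip()
            if len(trial) > per_row and line:
                rows.append(line)
                line = word
            else:
                line = trial
        if line:
            rows.append(line)
        return rows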
 A motion vector may be obtained for each object, and a motion vector for the entire screen obtained from all of them. The insertion area for the additional information may then be moved on the basis of the motion vector of the entire screen. Because the area in which the additional information is displayed changes in accordance with the motion within the screen, the display can be made free of any sense of incongruity for the user.
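 A minimal sketch, assuming each object's motion is summarized as a (dx, dy) vector; taking the plain average as the whole-screen motion is one of several plausible aggregations.

    import numpy as np

    def shift_region(region, object_vectors):
        # Average the per-object motion vectors into one global screen
        # motion and move the insertion region with it, so the inserted
        # information appears to follow the motion within the screen.
        x, y = region
        if not object_vectors:
            return region
        dx, dy = np.mean(np.asarray(object_vectors, dtype=float), axis=0)
        return (int(round(x + dx)), int(round(y + dy)))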
 The additional information to be inserted is not limited to a single piece; a plurality of pieces of additional information may be inserted.
 The additional information may be related to an object; for example, an advertisement related to the object may be used as the additional information. The location of the image processing apparatus or the imaging apparatus may also be obtained from GPS or compass information, and additional information related to that location inserted.
 The image processing apparatus can be applied to a video production system. For example, when a pop-up menu is displayed, the menu can be shown without obstructing the display of the main objects of the moving image being reproduced. An area for inserting comments during image editing may also be recommended.
 In AR (Augmented Reality), additional information such as signatures and advertisements can be displayed on moving image data shown in real time without obstructing the display of the main objects of the moving image data.
 Furthermore, the present disclosure can be realized not only as an apparatus but also as a method, a program, and a recording medium.
 The configurations and processes of the embodiments and modifications can be combined as appropriate to the extent that no technical contradiction arises, and the order of the processes in the illustrated process flows can likewise be changed as appropriate.
 The present disclosure can also be applied to a so-called cloud system in which the illustrated processes are distributed among and processed by a plurality of apparatuses. It can be realized as a system in which the illustrated processes are executed, and as an apparatus that executes at least part of those processes.
 The present disclosure may also adopt the following configurations.
(1)
An image processing apparatus including:
a detection unit that analyzes a plurality of frame images and detects an object; and
an area control unit that determines an area into which additional information is inserted so as to avoid the detected object.
(2)
The image processing apparatus according to (1), wherein the area control unit detects a plurality of candidates for the area into which the additional information is inserted, and determines an area selected from among the plurality of candidates as the area into which the additional information is inserted.
(3)
The image processing apparatus according to (2), wherein the plurality of candidates are displayed on a display unit, and the area control unit determines an area selected by a user operation from among the plurality of candidates displayed on the display unit as the area into which the additional information is inserted.
(4)
The image processing apparatus according to any one of (1) to (3), wherein the detection unit detects a plurality of the objects, and the area control unit determines the area into which the additional information is inserted so as to avoid an object selected from among the plurality of objects.
(5)
The image processing apparatus according to (4), wherein the plurality of objects are displayed on a display unit, and the area control unit determines the area into which the additional information is inserted so as to avoid an object selected by a user operation from among the plurality of objects displayed on the display unit.
(6)
The image processing apparatus according to any one of (1) to (5), wherein the area control unit determines the area into which the additional information is inserted so that the additional information partially overlaps the detected object.
(7)
The image processing apparatus according to (6), wherein the area control unit determines the area into which the additional information is inserted so that the additional information partially overlaps an object that lies on the far side in the depth direction among the detected objects.
(8)
The image processing apparatus according to any one of (1) to (7), wherein a display mode of the additional information is changed according to the determined area.
(9)
The image processing apparatus according to any one of (1) to (8), further including a storage unit that stores the plurality of frame images.
(10)
The image processing apparatus according to any one of (1) to (8), further including an imaging unit, wherein the plurality of frame images are a predetermined number of frame images captured by the imaging unit.
(11)
The image processing apparatus according to any one of (1) to (10), further including a display unit that displays a moving image in which the additional information is inserted in the area.
(12)
An image processing method in an image processing apparatus, including detecting an object by analyzing a plurality of frame images, and determining an area into which additional information is inserted so as to avoid the detected object.
(13)
A program for causing a computer to execute an image processing method in an image processing apparatus, the method including detecting an object by analyzing a plurality of frame images, and determining an area into which additional information is inserted so as to avoid the detected object.
(14)
A recording medium on which the program according to (13) is recorded.
Description of Reference Numerals
100 … Image processing apparatus
102 … Flash memory
104 … Display unit
106 … Work memory
107 … Image processing unit
200 … Imaging apparatus
202 … Flash memory
204 … Display unit
206 … Work memory
207 … Image processing unit
209 … Lens
210 … Imaging unit

Claims (14)

  1. An image processing apparatus comprising:
     a detection unit that analyzes a plurality of frame images and detects an object; and
     an area control unit that determines an area into which additional information is inserted so as to avoid the detected object.
  2. The image processing apparatus according to claim 1, wherein the area control unit detects a plurality of candidates for the area into which the additional information is inserted, and determines an area selected from among the plurality of candidates as the area into which the additional information is inserted.
  3. The image processing apparatus according to claim 2, wherein the plurality of candidates are displayed on a display unit, and the area control unit determines an area selected by a user operation from among the plurality of candidates displayed on the display unit as the area into which the additional information is inserted.
  4. The image processing apparatus according to claim 1, wherein the detection unit detects a plurality of the objects, and the area control unit determines the area into which the additional information is inserted so as to avoid an object selected from among the plurality of objects.
  5. The image processing apparatus according to claim 4, wherein the plurality of objects are displayed on a display unit, and the area control unit determines the area into which the additional information is inserted so as to avoid an object selected by a user operation from among the plurality of objects displayed on the display unit.
  6. The image processing apparatus according to claim 1, wherein the area control unit determines the area into which the additional information is inserted so that the additional information partially overlaps the detected object.
  7. The image processing apparatus according to claim 6, wherein the area control unit determines the area into which the additional information is inserted so that the additional information partially overlaps an object that lies on the far side in the depth direction among the detected objects.
  8. The image processing apparatus according to claim 1, wherein a display mode of the additional information is changed according to the determined area.
  9. The image processing apparatus according to claim 1, further comprising a storage unit that stores the plurality of frame images.
  10. The image processing apparatus according to claim 1, further comprising an imaging unit, wherein the plurality of frame images are a predetermined number of frame images captured by the imaging unit.
  11. The image processing apparatus according to claim 1, further comprising a display unit that displays a moving image in which the additional information is inserted in the area.
  12. An image processing method in an image processing apparatus, comprising: detecting an object by analyzing a plurality of frame images; and determining an area into which additional information is inserted so as to avoid the detected object.
  13. A program for causing a computer to execute an image processing method in an image processing apparatus, the method comprising: detecting an object by analyzing a plurality of frame images; and determining an area into which additional information is inserted so as to avoid the detected object.
  14. A recording medium on which the program according to claim 13 is recorded.
PCT/JP2013/056200 2012-03-21 2013-02-28 Image processing device, image processing method, program, and recording medium WO2013141025A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012063513 2012-03-21
JP2012-063513 2012-03-21

Publications (1)

Publication Number Publication Date
WO2013141025A1 true WO2013141025A1 (en) 2013-09-26

Family

ID=49222487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/056200 WO2013141025A1 (en) 2012-03-21 2013-02-28 Image processing device, image processing method, program, and recording medium

Country Status (1)

Country Link
WO (1) WO2013141025A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06178168A (en) * 1992-12-04 1994-06-24 Hitachi Ltd Data display device
JPH06268897A (en) * 1993-03-12 1994-09-22 Hitachi Ltd Date display device
JPH08275195A (en) * 1995-03-31 1996-10-18 Toshiba Corp Image display device
JP2005242204A (en) * 2004-02-27 2005-09-08 Matsushita Electric Ind Co Ltd Method and device for information display
JP2008203756A (en) * 2007-02-22 2008-09-04 Toshiba Corp Video signal processor, video display device, and video signal processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13764332; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13764332; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)