WO2013024847A1 - Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program - Google Patents


Info

Publication number
WO2013024847A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
unit
information
dimensional
Prior art date
Application number
PCT/JP2012/070670
Other languages
French (fr)
Japanese (ja)
Inventor
敦稔 〆野
正宏 塩井
健明 末永
健史 筑波
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2013024847A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/293: Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/261: Image signal generators with monoscopic-to-stereoscopic image conversion

Definitions

  • the present invention relates to a stereoscopic image generation device, a stereoscopic image display device, a stereoscopic image generation method, and a program.
  • This application claims priority based on Japanese Patent Application No. 2011-178833, filed in Japan on August 18, 2011, the contents of which are incorporated herein.
  • To display a stereoscopic image, a plurality of viewpoint images (from two or more viewpoints) are required.
  • Unlike 3D computer graphics (3DCG), where viewpoint images can be rendered directly, live-action capture requires two or more cameras. In that case, it is very difficult to synchronize the focus of each camera and to set the camera directions and positions; moreover, the apparatus becomes large and shooting becomes difficult.
  • For this reason, 2D→3D conversion technology, which creates multi-viewpoint (three-dimensional) video by image processing from single-viewpoint (two-dimensional) video instead of actually shooting multi-viewpoint video, has attracted attention.
  • This technique synthesizes viewpoint images different from the actually captured viewpoint, based on depth information for each region in the image.
  • Some 3D movies are produced using this 2D→3D conversion technology, with the depth information created manually by the producer. Since manually creating all of the depth information is laborious, some depth information is created automatically. For example, there are methods that estimate the depth of a region from feature amounts in the image (luminance, spatial frequency, contrast, motion).
  • However, this automatic 2D→3D conversion technology does not always create an image with the stereoscopic effect the producer intends, because depth estimation from feature amounts (luminance, spatial frequency, contrast, motion) is unreliable.
  • For example, it is assumed that the focused area is the main subject and lies in front, that the upper part of the image is background, and that the lower part is the ground.
  • When such assumptions fail, an image with an unnatural stereoscopic effect is synthesized: what should be in the back appears to pop out, what should be in front appears to recede, or flat parts show unnatural unevenness.
  • Patent Document 1 proposes an intention-adaptive 2D→3D conversion device that can enhance the three-dimensional effect of a specific person or object in a captured video in accordance with a person's intention.
  • With this device, a stereoscopic image according to the intention of the producer or editor can be obtained by emphasizing the stereoscopic feeling of a specific person or object in the image.
  • However, the conversion device of Patent Document 1 extracts feature amounts even from regions other than the region whose stereoscopic effect is emphasized, so the amount of calculation for generating a stereoscopic image increases.
  • The present invention has been made in view of such circumstances, and its object is to provide a stereoscopic image generation device, a stereoscopic image display device, a stereoscopic image generation method, and a program that reduce the amount of calculation when generating a stereoscopic image.
  • (1) The present invention has been made to solve the above-described problems. One aspect of the present invention is a stereoscopic image generation apparatus including: an image acquisition unit that acquires image data of a two-dimensional image; a depth information generation unit that generates depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; and an image composition unit that, based on the image data of the two-dimensional image and the depth information, generates image data of a three-dimensional image that is perceived three-dimensionally only in the partial region.
  • (2) Another aspect of the present invention is the stereoscopic image generation apparatus according to (1), wherein the depth information generation unit generates the depth information based on region information designating a region and a three-dimensional shape, and on data stored in a shape storage unit.
  • (3) Another aspect is the stereoscopic image generation apparatus further including a region information acquisition unit that generates the region information indicating the region and the 3D shape specified by user operation, wherein the depth information generation unit generates the depth information based on the region information acquired by the region information acquisition unit and the data stored in the shape storage unit.
  • (4) Another aspect is the stereoscopic image generation apparatus further including a region information storage unit that stores a plurality of pieces of region information in advance, and a region information selection unit that selects region information from among the region information stored in the region information storage unit, wherein the depth information generation unit generates the depth information based on the region information selected by the region information selection unit and the data stored in the shape storage unit.
  • (5) Another aspect is the stereoscopic image generation apparatus further including a receiving unit that receives the data of the two-dimensional image and the region information, wherein the depth information generation unit generates the depth information based on the received region information and the data stored in the shape storage unit.
  • (6) Another aspect is the stereoscopic image generation apparatus according to any one of (1) to (5), wherein the image composition unit includes: an image region dividing unit that generates data of a two-dimensional region image obtained by removing the partial region from the two-dimensional image, and data of a three-dimensional region image obtained by extracting the partial region from the two-dimensional image; a two-dimensional region processing unit that generates background image data by interpolating the partial region from the data of the two-dimensional region image; a three-dimensional region processing unit that generates three-dimensional image data for the partial region based on the data of the three-dimensional region image and the depth information; and a partial stereoscopic image generation unit that generates, from the background image data and the three-dimensional image data for the partial region, three-dimensional image data that is perceived three-dimensionally only in the partial region.
  • (7) Another aspect of the present invention is a stereoscopic image display device including: an image acquisition unit that acquires data of a two-dimensional image; a depth information generation unit that generates depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; an image composition unit that generates image data of a three-dimensional image perceived three-dimensionally only in the partial region, based on the image data of the two-dimensional image and the depth information; and a display unit that displays a stereoscopic image using the image data of the three-dimensional image.
  • (8) Another aspect of the present invention is a stereoscopic image generation method in which depth information indicating a depth value for each pixel is generated for a partial region of the two-dimensional image.
  • (9) Another aspect of the present invention is a program that causes a computer to function as: an image acquisition unit that acquires image data of a two-dimensional image; a depth information generation unit that generates depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; and an image composition unit that generates image data of a three-dimensional image perceived three-dimensionally only in the partial region, based on the image data of the two-dimensional image and the depth information.
  • FIG. 1 is a schematic block diagram showing the configuration of the partial stereoscopic image generation apparatus according to the first embodiment.
  • FIG. 1 is a block diagram of a partial stereoscopic image generation apparatus 100 which is a stereoscopic image generation apparatus in the present embodiment.
  • The partial stereoscopic image generating apparatus 100 adds a stereoscopic effect only to a specific region designated by the user in the input two-dimensional (2D) image, thereby generating a partial stereoscopic image in which a two-dimensional portion and a three-dimensional (3D) portion are mixed.
  • the partial stereoscopic image generation apparatus 100 includes an image input unit 101, an area information input unit 102, a background depth input unit 103, a depth information generation unit 104, a shape storage unit 109, and an image composition unit 110.
  • the image composition unit 110 includes an image region dividing unit 105, a 2D region processing unit 106, a 3D region processing unit 107, and a partial stereoscopic image generation unit 108.
  • the image input unit 101 (image acquisition unit) reads a 2D image I which is image data of a two-dimensional image as an input, and sends it to the image region dividing unit 105.
  • the image input unit 101 may include a card slot into which a memory card is inserted, read the 2D image I from the memory card inserted into the card slot, and send the 2D image I to the image region dividing unit 105.
  • Alternatively, the image input unit 101 may include a receiving unit that receives a wireless signal such as infrared or Bluetooth®, or a communication interface for wired communication such as USB (Universal Serial Bus), and may send the 2D image I received by the receiving unit or the communication interface to the image region dividing unit 105.
  • an image capturing apparatus may be provided, and image data of an image captured by the image capturing apparatus may be sent to the image area dividing unit 105 as a 2D image I.
  • The region information input unit 102 receives, by user operation, the designation of a partial region in the image and a three-dimensional shape for that region, generates region information R representing the partial region and the three-dimensional shape, and sends it to the depth information generation unit 104.
  • the area information input unit 102 receives an area designation by a pointing device (not shown) such as a mouse provided in the partial stereoscopic image generation apparatus 100, for example. Details of the area information will be described later.
  • the background depth input unit 103 receives the designation of the position of the background in the depth direction by the user, generates a background depth value D1 indicating the position, and sends it to the depth information generation unit 104.
  • The depth information generation unit 104 generates depth information Dp, indicating a depth value for each pixel of the region indicated by the region information R in the image indicated by the 2D image I, based on the region information R sent from the region information input unit 102 and the shape information m stored in the shape storage unit 109. In this embodiment, for pixels outside the region indicated by the region information R, the depth information generation unit 104 sets the background depth value D1 sent from the background depth input unit 103 as the pixel value in the depth information Dp. The depth information generation unit 104 sends the generated depth information Dp to the image region dividing unit 105 and the 3D region processing unit 107.
  • the shape storage unit 109 stores shape information representing a three-dimensional shape for each three-dimensional shape such as a hemisphere or a half cylinder.
  • the shape information is information representing an arithmetic expression for calculating the depth value of each pixel using parameters representing the size, orientation, and the like as variables.
  • The image region dividing unit 105 divides the image data (2D image I) sent from the image input unit 101 into 2D region data and 3D region data, based on the depth information Dp sent from the depth information generation unit 104.
  • The image region dividing unit 105 generates 2D region data (2D region image data) by extracting from the 2D image I the pixel values of the region other than the region whose per-pixel depth values are indicated by the depth information Dp, that is, other than the region indicated by the region information R.
  • Likewise, the image region dividing unit 105 generates 3D region data (3D region image data) by extracting from the 2D image I the pixel values of the region whose per-pixel depth values are indicated by the depth information Dp, that is, the region indicated by the region information R.
  • the 2D area data divided by the image area dividing unit 105 is sent to the 2D area processing unit 106.
  • the 3D area data is sent to the 3D area processing unit 107.
  • the 2D area processing unit 106 generates background image data from the 2D area data sent from the image area dividing unit 105.
  • Specifically, the 2D region processing unit 106 calculates the background image data by using the 2D region data to interpolate the pixel values of the region not included in the 2D region data, that is, the region whose per-pixel depth values are indicated by the depth information Dp.
  • The 3D region processing unit 107 generates left-eye 3D region data and right-eye 3D region data.
  • the left-eye 3D area data and the right-eye 3D area data generated by the 3D area processing unit 107 are sent to the partial stereoscopic image generation unit 108.
  • the 3D area data for the left eye and the 3D area data for the right eye constitute 3D image data related to the area where the depth value for each pixel is indicated by the depth information Dp.
  • The partial stereoscopic image generation unit 108 generates left-eye image data and right-eye image data based on the background image data sent from the 2D region processing unit 106 and the left-eye and right-eye 3D region data sent from the 3D region processing unit 107, and outputs them as a partial stereoscopic image O.
  • the three-dimensional image data including the left-eye image data and the right-eye image data has a parallax based on the depth information Dp in the region specified by the region information R. That is, this three-dimensional image data is perceived three-dimensionally only for the region specified by the region information R.
  • the partial stereoscopic image generating apparatus 100 can generate a partial stereoscopic image in which a 2D portion and a 3D portion are mixed.
  • FIG. 2 is a diagram for explaining a method for expressing a 2D image I input by the image input unit 101.
  • The 2D image I is data having, for each pixel, three color values, R (red), G (green), and B (blue), each ranging over 256 steps from 0 to 255.
  • the width of the image represented by the 2D image I is W and the height is H.
  • the horizontal direction is the X coordinate
  • the vertical direction is the Y coordinate
  • the upper left corner is the origin. That is, the origin is the pixel 201 that is 0th from the left and 0th from the top.
  • the pixel 200 in FIG. 2 is the xth pixel from the left and the yth pixel from the top, so the coordinates are (x, y).
  • the coordinates of the lower right pixel 202 are (W ⁇ 1, H ⁇ 1).
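As a small illustration (not part of the patent), the coordinate convention of FIG. 2 maps directly onto a row-major pixel buffer; the function name is an assumption:

```python
def pixel_index(x, y, width):
    """Index of pixel (x, y) in a row-major W x H image buffer,
    with the origin at the upper-left corner as in FIG. 2."""
    return y * width + x

W, H = 640, 480
print(pixel_index(0, 0, W))          # origin (pixel 201) -> 0
print(pixel_index(W - 1, H - 1, W))  # lower-right pixel 202 -> 307199
```

With an RGB image, the same index selects the per-pixel (R, G, B) triple.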
  • FIG. 3 is a diagram illustrating an example of area data constituting the area information R.
  • The hemisphere region data consists of ID: 1, indicating the hemisphere, and four parameter values: its center coordinates (x, y), radius r, and base depth value d.
  • The semi-cylinder region data consists of ID: 2, indicating the semi-cylinder, and its parameters: the upper-left vertex coordinates (x, y), the lower-right vertex coordinates (x, y), rotation angle θ, and base depth value d.
  • The circle region data consists of ID: 3, indicating the circle, and four parameter values: its center coordinates (x, y), radius r, and base depth value d.
  • The rectangle region data consists of ID: 4, indicating the rectangle, and six parameter values: the upper-left vertex coordinates (x, y), the lower-right vertex coordinates (x, y), rotation angle θ, and base depth value d.
  • The base depth value d in each region data is used by the depth information generation unit 104, based on the region information R, to set how strongly the figure pops out when generating the depth information Dp.
  • The rotation angle θ is the angle by which the figure defined by the upper-left vertex (x, y) and the lower-right vertex (x, y) is rotated about an axis orthogonal to the xy plane.
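A minimal sketch of how these four region-data layouts might be held in code (the record names and field names are assumptions; the patent specifies only the IDs and parameter lists):

```python
from dataclasses import dataclass

@dataclass
class Hemisphere:      # ID: 1
    cx: int            # center x
    cy: int            # center y
    r: int             # radius
    d: int             # base depth value

@dataclass
class SemiCylinder:    # ID: 2
    x0: int            # upper-left vertex x
    y0: int            # upper-left vertex y
    x1: int            # lower-right vertex x
    y1: int            # lower-right vertex y
    theta: float       # rotation angle
    d: int             # base depth value

@dataclass
class Circle:          # ID: 3
    cx: int
    cy: int
    r: int
    d: int

@dataclass
class Rectangle:       # ID: 4
    x0: int
    y0: int
    x1: int
    y1: int
    theta: float
    d: int

# One image may carry several region-data records at once.
regions = [Hemisphere(cx=120, cy=80, r=40, d=200),
           Rectangle(x0=10, y0=10, x1=50, y1=30, theta=0.0, d=128)]
```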
  • To input region information, a pointing device such as a mouse or a touch panel provided in the PC is used. The region information input unit 102 inputs the region information as follows.
  • The region information input unit 102 first displays the input image 400 sent from the image input unit 101 on the screen pd. Next, when the user operates the mouse or the like and selects the center of the region to be designated with the mouse pointer pc, the region information input unit 102 acquires it (S4-1 in FIG. 4B). Next, when a point on the circumference of the circle is selected, the region information input unit 102 acquires it, calculates its distance from the center, and sets that distance as the radius r (S4-2 in FIG. 4C). Next, the region information input unit 102 displays the base depth designation scale bs on the screen; when the user designates a position on the scale with the mouse or the like to set the sense of depth of the circle, the region information input unit 102 acquires it and sets it as the base depth value d (S4-3 in FIG. 4D).
  • the area information input unit 102 sends the data of the ID, the center coordinates (x, y) of the circle, the radius r, and the base depth value d to the depth information generation unit 104.
  • The region information is not limited to one piece per image; a plurality of pieces of region information can be input.
  • The background depth input unit 103 reads, as input, the sense of depth of the region not specified by the user (the background region), and sends it to the depth information generation unit 104.
  • As the method for inputting this sense of depth, a method similar to that for inputting the base depth value may be used: a background depth adjustment scale is displayed on the screen, and the user designates a position on the scale with the mouse. The designated background depth value D1 is sent to the depth information generation unit 104. Alternatively, the background depth value may be set automatically from the region information specified via the region information input unit 102.
  • the depth information generation unit 104 receives the region information R sent from the region information input unit 102, the shape information m stored in the shape storage unit 109, and the background depth value D1 sent from the background depth input unit 103. Based on this, depth information Dp is generated. The depth information generation unit 104 sends the depth information Dp to the image region division unit 105 and the 3D region processing unit 107.
  • FIG. 5 and FIG. 6 are diagrams showing an outline of depth information.
  • a rectangle indicated by 500 in FIG. 5 is a depth image representing depth information as an image.
  • Representing depth information as an image means creating a luminance image in which the depth value is expressed as a luminance value.
  • The depth information is quantized into 256 levels, from 0 to 255, and the depth value of each pixel of the input image is set within this range.
  • The depth image 500 is the depth image generated by the depth information generation unit 104 when the region information R for the hemisphere 501 and the semi-cylinder 502 is sent from the region information input unit 102 and the background depth value D1 is sent from the background depth input unit 103.
  • The arrow indicates the direction in which depth increases from the display surface.
  • Here the background depth value D1 is 0, so the background lies on the display surface.
  • the number of data elements sent from the area information input unit 102 may be determined by ID.
  • FIG. 6 is a depth image when a circle 511 and a rectangle 512 are sent as region information.
  • When figures overlap, each figure may be given a priority, and the figure with the highest priority among the overlapping figures may be selected.
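As an illustration of how the depth information generation unit might rasterize one hemisphere record into an 8-bit depth image, here is a sketch; the spherical-bulge formula and its scaling are assumptions, since the patent only states that the shape storage unit holds an arithmetic expression per shape:

```python
import math

def hemisphere_depth_image(width, height, cx, cy, r, base_d, background_d=0):
    """Render one hemisphere region into a depth image with 256 levels.

    Pixels outside the circle keep the background depth value D1;
    pixels inside get the base depth value plus a spherical bulge
    (largest at the centre, zero at the rim).
    """
    depth = [[background_d] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            rho2 = (x - cx) ** 2 + (y - cy) ** 2
            if rho2 <= r * r:
                bulge = math.sqrt(r * r - rho2) / r   # 1.0 at centre -> 0.0 at rim
                depth[y][x] = min(255, int(base_d + bulge * (255 - base_d)))
    return depth

dp = hemisphere_depth_image(32, 32, 16, 16, 8, base_d=100)
print(dp[16][16], dp[16][24], dp[0][0])   # centre, rim, background -> 255 100 0
```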
  • The image region dividing unit 105 divides the image into a 2D region and a 3D region, based on the 2D input image sent from the image input unit 101 and the depth information sent from the depth information generation unit 104. The 2D region is then sent to the 2D region processing unit 106, and the 3D region to the 3D region processing unit 107.
  • a mask image 720 is generated from the depth information 710.
  • the mask image 720 is a binary image in which the pixel value of the 3D portion of the depth information 710 is “0” and the pixel value of the 2D portion is “1”.
  • the 3D part is an area where a depth value is set for each pixel
  • the 2D part is an area where the pixel value is a background depth value.
  • the pixel value of the 2D portion of the depth image may be a value indicating the 2D portion without using the background depth value, and the depth information may be configured by the depth image and the background depth value.
  • the image area dividing unit 105 generates a 2D area 740 and a 3D area 730 based on the input image 700 and the mask image 720.
  • The image region dividing unit 105 keeps only the color information (pixel values) of the pixels of the input image 700 whose value in the mask image 720 is “1”, and sets the pixel values of the pixels whose mask value is “0” to a special value (for example, a negative number), thereby generating the 2D region 740. In the 2D region 740, the pixel values of the hatched area 741 are this special value.
  • Similarly, the image region dividing unit 105 keeps only the color information (pixel values) of the pixels whose mask value is “0”, and sets the pixel values of the pixels whose mask value is “1” to a special value (for example, a negative number), thereby generating the 3D region 730.
  • In the 3D region 730, the pixel values outside the area 731, that is, of the hatched area, are the special value described above.
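The mask-and-split step can be sketched as follows; plain nested lists stand in for image buffers, and the special value −1 and the function name are assumptions:

```python
SPECIAL = -1  # stand-in for the "special value (for example, a negative number)"

def split_regions(image, depth, background_d):
    """Build the binary mask (1 = 2D part, 0 = 3D part) from the depth
    information, then divide the input image into 2D and 3D region data."""
    h, w = len(image), len(image[0])
    mask = [[1 if depth[y][x] == background_d else 0 for x in range(w)]
            for y in range(h)]
    region_2d = [[image[y][x] if mask[y][x] == 1 else SPECIAL
                  for x in range(w)] for y in range(h)]
    region_3d = [[image[y][x] if mask[y][x] == 0 else SPECIAL
                  for x in range(w)] for y in range(h)]
    return mask, region_2d, region_3d
```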
  • The 2D region processing unit 106 performs a hole-filling process on the pixels 741 (pixels having the special negative value) that correspond to the 3D region in the 2D region data 740 sent from the image region dividing unit 105, thereby generating a background image 770. The 2D region processing unit 106 then sends the background image 770 to the partial stereoscopic image generation unit 108.
  • a method of applying a smoothing filter such as a median filter to a pixel corresponding to a 3D region is used.
  • the median filter is a method in which pixel values of peripheral pixels of the target pixel are arranged in order of increasing (or decreasing) value, and the median value (median) is used as the pixel value of the target pixel.
  • The 2D region processing unit 106 performs the hole-filling process as follows: 1) first, a median filter is applied to the 3D-region pixels along the boundary between the 2D region and the 3D region, using only 2D-part pixels (pixels with positive values) to calculate the median; 2) next, the median filter is applied in the same way to the boundary of the filtered image; 3) this filtering is repeated until no 3D-part pixels (pixels with negative values) remain.
  • The 2D region processing unit 106 may also apply a smoothing filter such as a Gaussian filter to the 2D region so that the 3D region blends in better.
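A sketch of the iterative boundary-median hole filling described above, using single-channel lists instead of RGB images; `statistics.median_low` stands in for the median, and the loop assumes the 2D region contains at least one valid pixel:

```python
import statistics

def fill_holes(region_2d, special=-1):
    """Repeatedly apply a median filter at hole pixels that touch at
    least one valid neighbour, using only valid (non-special)
    neighbours, until no hole pixels remain."""
    h, w = len(region_2d), len(region_2d[0])
    img = [row[:] for row in region_2d]
    while any(special in row for row in img):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if img[y][x] != special:
                    continue
                neigh = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy or dx)
                         and 0 <= y + dy < h and 0 <= x + dx < w
                         and img[y + dy][x + dx] != special]
                if neigh:                      # boundary hole pixel
                    nxt[y][x] = statistics.median_low(neigh)
        img = nxt
    return img
```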
  • Based on the 3D region data 730 sent from the image region dividing unit 105 and the depth information 710 sent from the depth information generation unit 104, the 3D region processing unit 107 generates a left-eye 3D region 750 and a right-eye 3D region 760, and sends these two data to the partial stereoscopic image generation unit 108.
  • the 3D region processing unit 107 uses horizontal pixel movement as a method for generating the left-eye and right-eye 3D regions.
  • the pixel movement amount s is calculated from the depth value z for each pixel using a relational expression as shown in FIG.
  • the relational expression shown in FIG. 8 is basically set so that the amount of movement becomes larger as the depth value is smaller (that is, closer to the front).
  • The direction of movement differs depending on whether the region is to be displayed in front of the actual display surface of the display device or behind it. A region to be displayed in front of the display surface is moved to the right for the left eye and to the left for the right eye; conversely, a region to be displayed behind the display surface is moved to the left for the left eye and to the right for the right eye.
  • the coefficient of the relational expression is obtained from the distance between the display device and the viewer, the resolution of the display device, and the like.
  • the 3D region processing unit 107 obtains an image (parallax map) obtained by converting the depth information of each pixel into the movement amount of each pixel using this relational expression. Thereafter, the 3D area processing unit 107 moves the 3D area 730 according to the parallax map, and generates a left-eye 3D area 750 and a right-eye 3D area 760.
  • When pixels are moved, a plurality of pixels may land on the same destination pixel. In this case, the pixel closer to the front (the pixel moved over the greater distance) is adopted.
  • Because the 3D region is designated as a hemisphere or a semi-cylinder, the amount of pixel movement increases toward the center, so holes may occur at boundaries where the movement amount changes. These holes are interpolated by applying a smoothing filter such as a median filter or a Gaussian filter to the periphery of the 3D region.
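The pixel-shift step can be sketched as follows. The linear `depth_to_shift` is a hypothetical stand-in for the FIG. 8 relational expression, whose coefficients depend on viewing distance and display resolution; the conflict rule keeps the pixel with the larger shift, standing in for "the pixel closer to the front":

```python
def depth_to_shift(z, gain=0.05, z_screen=0):
    """Hypothetical linear stand-in for the FIG. 8 relational expression:
    the further a pixel is from the screen plane, the larger its shift."""
    return round(gain * (z - z_screen))

def shift_row(row, shifts, special=-1):
    """Shift one scanline of the 3D region horizontally.  When several
    source pixels land on the same destination, the one with the larger
    shift (assumed nearer to the viewer) wins."""
    w = len(row)
    out = [special] * w
    best = [None] * w
    for x in range(w):
        if row[x] == special:
            continue
        tx = x + shifts[x]
        if 0 <= tx < w and (best[tx] is None or shifts[x] > best[tx]):
            out[tx] = row[x]
            best[tx] = shifts[x]
    return out
```

The sign of each shift selects the direction of movement, so the left-eye and right-eye images use shifts of opposite sign.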
  • The partial stereoscopic image generation unit 108 generates a left-eye partial stereoscopic image and a right-eye partial stereoscopic image from the background image sent from the 2D region processing unit 106 and the left-eye and right-eye 3D regions sent from the 3D region processing unit 107, and outputs them as a partial stereoscopic image O.
  • The left-eye partial stereoscopic image is generated by superimposing the left-eye 3D region on the background image, and the right-eye partial stereoscopic image by superimposing the right-eye 3D region on the background image.
  • Superimposing the 3D region on the background image means overwriting the background image with only the positive (valid) part of the 3D region data sent from the 3D region processing unit 107.
  • An existing stereoscopic display device may be used as a method for displaying the output partial stereoscopic image.
  • a stereoscopic display method using an active shutter system as shown in FIGS. 9A and 9B is used.
  • In the active shutter method, the liquid crystal shutters of the glasses SG are opened and closed in synchronization with the image displayed by the display unit Dsp, so that only the left-eye partial stereoscopic image 900 is shown to the left eye and only the right-eye partial stereoscopic image 901 to the right eye.
  • In FIG. 9A, where the display unit Dsp displays the left-eye partial image 900, the left shutter is open and the right shutter is closed; that is, the left-eye partial image 900 is shown to the left eye.
  • In FIG. 9B, where the display unit Dsp displays the right-eye partial image 901, the left shutter is closed and the right shutter is open; that is, the right-eye partial image 901 is shown to the right eye. This method reproduces the difference in appearance (parallax) between the right and left eyes, so the viewer perceives a stereoscopic effect.
  • FIG. 10A and FIG. 10B are diagrams showing how an active shutter type stereoscopic display device looks when a partial stereoscopic image is viewed.
  • When the viewer gazes at the 2D portion (FIG. 10A), the convergence point of the left and right eyes coincides with the display surface, and the background image 1000 appears to be on the display surface.
  • When the viewer gazes at the person image 1001, which is a 3D portion (FIG. 10B), the convergence point of the eyes lies in front of the display surface, creating the illusion that the person image 1001 is closer than its actual position. By this principle, only the 3D region appears to protrude from the 2D region.
  • a partial stereoscopic image may be displayed using a parallax barrier type stereoscopic display device as shown in FIG.
  • a parallax barrier 1101 is arranged in front of the display 1100 so that only the image L corresponding to the left eye can be seen by the left eye Le, and only the image R corresponding to the right eye is seen by the right eye Re. By making it visible, stereoscopic vision is possible.
  • When the display device that displays the partial stereoscopic images is a multi-viewpoint display device with three or more viewpoints, images for the number of viewpoints required by the display device may be generated. Specifically, the relational expression between depth value and movement amount in FIG. 8 is changed for each viewpoint, and is set so that the amount of movement increases as the viewpoint shifts further to the left or right.
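For a multi-viewpoint display, the same idea extends by scaling the shift per viewpoint. A hypothetical sketch, where the linear scaling is an assumption; the patent only says the relational expression is changed per viewpoint so that movement grows toward the outer viewpoints:

```python
def per_view_shifts(base_shift, n_views):
    """Scale a pixel's base shift for each of n_views viewpoints,
    growing as the viewpoint moves left or right of the centre view."""
    centre = (n_views - 1) / 2
    return [round(base_shift * (i - centre)) for i in range(n_views)]

print(per_view_shifts(4, 5))   # [-8, -4, 0, 4, 8]
```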
  • the partial stereoscopic image generation apparatus 100 may include a stereoscopic image display apparatus, and the stereoscopic image display apparatus may display the partial stereoscopic image generated by the partial stereoscopic image generation unit 108.
  • the image input unit 101 reads an input image (S1).
  • the area information input unit 102 reads area information (S2).
  • the background depth input unit 103 reads the background depth (S3).
  • the depth information generation unit 104 generates depth information based on the read area information and background depth (S4).
  • the image area dividing unit 105 divides the input image into a 2D area and a 3D area based on the generated depth information (S5).
  • the 2D area processing unit 106 performs a 2D area filling process to generate a background image (S6).
  • the 3D area processing unit 107 performs pixel shift of the 3D area to generate a 3D area for the left eye (S7), and similarly performs pixel shift of the 3D area to generate a 3D area for the right eye (S8).
  • the partial stereoscopic image generation unit 108 overwrites the background image with the left-eye 3D region, and generates a left-eye partial stereoscopic image (S9).
  • the right-eye 3D region is overwritten on the background image to generate a right-eye partial stereoscopic image (S10).
  • the left-eye partial stereoscopic image and the right-eye partial stereoscopic image are output as partial stereoscopic images (S11).
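The flow of steps S4–S11 above can be sketched as follows. This is a minimal illustration, not the patent's implementation: images are nested lists of grayscale values, the filling process of S6 is a crude nearest-left-pixel interpolation, and the linear depth-to-shift relation is an assumption.

```python
def generate_partial_stereo(image, region_mask, region_depth, gain=1):
    """Illustrative sketch of flowchart steps S4-S11.

    image: rows of grayscale pixel values (list of lists).
    region_mask: same shape, True inside the 3D region.
    region_depth: same shape, integer depth values (0 = display surface).
    """
    h, w = len(image), len(image[0])
    # S5-S6: the 2D region becomes the background; the hole left by the 3D
    # region is filled by copying the nearest pixel to the left.
    background = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if region_mask[y][x]:
                background[y][x] = background[y][x - 1] if x > 0 else 0
    # S7-S10: shift 3D-region pixels horizontally, in opposite directions for
    # each eye, and overwrite them onto a copy of the background.
    left = [row[:] for row in background]
    right = [row[:] for row in background]
    for y in range(h):
        for x in range(w):
            if region_mask[y][x]:
                d = int(gain * region_depth[y][x])
                if 0 <= x + d < w:
                    left[y][x + d] = image[y][x]
                if 0 <= x - d < w:
                    right[y][x - d] = image[y][x]
    # S11: the left/right pair is output as the partial stereoscopic image.
    return left, right
```

Only the pixels inside the 3D region are shifted; the rest of the output is the shared background, which is what keeps the processing and memory cost low.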
  • In the second embodiment, the technique of the first embodiment is applied to a device whose display unit includes a touch panel, such as an electronic dictionary, a mobile phone, or a smartphone. That is, this is a partial stereoscopic image generation and display device that switches images between 2D and 3D and performs 3D display easily, with a small amount of processing and memory.
  • a partial stereoscopic image generation / display apparatus according to the second embodiment will be described with reference to a block diagram shown in FIG.
  • the partial stereoscopic image generation display device 1300 is, for example, a smartphone.
  • The partial stereoscopic image generation display device 1300 includes a display unit 1301, an operation detection unit 1302, a 2D image generation unit 1303, an area information selection unit 1304, an area information storage unit 1305, a background depth value storage unit 1306, a depth information generation unit 104, a shape storage unit 109, and an image composition unit 110. The depth information generation unit 104, the shape storage unit 109, and the image composition unit 110 have the same configuration as in the partial stereoscopic image generation apparatus 100 of the first embodiment shown in FIG. 1, so their description is omitted.
  • the display unit 1301 displays the partial stereoscopic image generated by the image composition unit 110.
  • As the display method, the method described in the first embodiment may be used.
  • the operation detection unit 1302 is, for example, a touch panel attached to the surface of the display unit 1301 and detects an operation by the user.
  • the 2D image generation unit 1303 (image acquisition unit) generates an image to be displayed on the display unit 1301 in accordance with a user operation detected by the operation detection unit 1302. Note that the 2D image generation unit 1303 generates a two-dimensional image.
  • the area information selection unit 1304 reads one of the plurality of area information stored in the area information storage unit 1305 according to the user operation detected by the operation detection unit 1302 and outputs the read information to the depth information generation unit 104.
  • the area information storage unit 1305 stores in advance a plurality of pieces of area information corresponding to the image generated by the 2D image generation unit 1303.
  • The area information stored in the area information storage unit 1305 is the same as the area information described in the first embodiment, and may include region data for a plurality of figures, each given by an ID indicating a figure such as a hemisphere or a rectangle together with its parameters (center coordinates, radius, and the like).
  • the background depth value storage unit 1306 stores the background depth value in advance.
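One possible encoding of such stored region information, pairing a figure ID with its parameters, might look like the following; the field names, shape IDs, and values are illustrative assumptions, not taken from this description.

```python
# Hypothetical encoding of one piece of region information: each entry pairs a
# shape ID with the parameters needed to reproduce its region and depth profile.
HEMISPHERE, RECTANGLE = 0, 1

region_info = [
    {"shape_id": HEMISPHERE, "center": (120, 80), "radius": 40},
    {"shape_id": RECTANGLE, "top_left": (10, 10), "size": (64, 32)},
]

def contains(entry, x, y):
    """True if pixel (x, y) falls inside the figure described by one entry."""
    if entry["shape_id"] == HEMISPHERE:
        cx, cy = entry["center"]
        return (x - cx) ** 2 + (y - cy) ** 2 <= entry["radius"] ** 2
    if entry["shape_id"] == RECTANGLE:
        left, top = entry["top_left"]
        w, h = entry["size"]
        return left <= x < left + w and top <= y < top + h
    return False
```

Because each figure is a few numbers rather than a per-pixel map, many alternative region layouts for the same screen can be stored in very little memory.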
  • FIG. 14A is a diagram illustrating a display example of the partial stereoscopic image generation display device 1300 according to the present embodiment.
  • When the region information selection unit 1304 selects region information such that the generated depth information becomes the depth information 1410, an effect is obtained in which only the button portions (here, 1, 2, 4, 9, *) corresponding to the 3D region 1412 in the depth information 1410 appear to pop out.
  • When the depth information generated based on the region information selected by the region information selection unit 1304 is the depth information 1430, the portion specified by the rectangular region 1432 in the depth information 1430 (that is, the panel portion 1405) can be made to pop out most prominently.
  • the depth information such as the depth information 1410 and 1430 is generated by the depth information generation unit 104 from the region information and the background depth value as described in the description of the first embodiment.
  • In the first embodiment, the user inputs the 2D image, the region information, and the background depth value. In the present embodiment, by contrast, region information and background depth values created in advance by the image creator are stored in the smartphone's memory (the region information storage unit 1305 and the background depth value storage unit 1306) and read out as necessary to change the pop-out effect.
  • the sense of depth and pop-out of the displayed image may be changed according to the user input through the touch panel of the smartphone.
  • Rules for how the region information and the background depth value change in response to user input may be set in advance; when the user provides input to the device, the region information and the background depth value may then be determined according to these rules.
  • In addition, the stereoscopic effect of the same image can easily be changed by replacing the region information and the background depth value with different ones; for example, they may be acquired by downloading, or input directly by the user as in the first embodiment.
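Such preset rules could be realised as a simple lookup from detected user input to a stored pair of region-information ID and background depth value. The gesture names and values below are illustrative assumptions, not from this description.

```python
# Hypothetical preset rules: each user gesture detected on the touch panel is
# mapped in advance to a (region-information ID, background depth value) pair.
RULES = {
    "tap_button": {"region_id": 0, "background_depth": 0},
    "tap_panel": {"region_id": 1, "background_depth": 2},
    "long_press": {"region_id": 2, "background_depth": 5},
}

def select_for_input(gesture, default_region=0, default_depth=0):
    """Return the region-information ID and background depth value to use
    for a detected gesture, falling back to defaults for unknown input."""
    rule = RULES.get(gesture)
    if rule is None:
        return default_region, default_depth
    return rule["region_id"], rule["background_depth"]
```

The selected ID would then be used to read the corresponding entry from the region information storage unit 1305.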
  • According to the present embodiment, 3D display of buttons and the like can be performed with a small amount of data.
  • Further, when the stereoscopic effect of the same image is to be changed, this can be realized by changing only a small amount of data, such as the region information and the background depth value.
  • FIG. 15 is a schematic block diagram showing the configuration of the television broadcast system 1500 in the present embodiment.
  • the television broadcast system 1500 includes a partial stereoscopic image transmission device 1501 and a partial stereoscopic image reception device 1503 connected to the partial stereoscopic image transmission device 1501 via a broadcast network 1502.
  • Broadcast network 1502 may be a broadcast network using radio waves, or a broadcast network using the Internet or the like.
  • the partial stereoscopic image transmission apparatus 1501 is installed in a broadcasting station, and transmits a 2D image constituting a broadcast program, region information, and a background depth value.
  • the area information and the background depth value are the same as the area information and the background depth value in the first and second embodiments.
  • The partial stereoscopic image reception device 1503 is a partial stereoscopic image generation device or partial stereoscopic image display device according to the present embodiment, and generates and displays a partial stereoscopic image based on the 2D image, the region information, and the background depth value transmitted by the partial stereoscopic image transmission device 1501.
  • FIG. 16 is a schematic block diagram illustrating a configuration of the partial stereoscopic image receiving device 1503.
  • The partial stereoscopic image receiving apparatus 1503 includes a receiving unit 1531, an information separating unit 1532, a decoder 1533, an audio information processing unit 1534, a speaker 1535, a depth information generating unit 104, a shape storage unit 109, an image composition unit 110, and a display unit 1301.
  • the depth information generation unit 104, the shape storage unit 109, the image composition unit 110, and the display unit 1301 are the same as the respective units in FIG.
  • the receiving unit 1531 receives an input signal sent from the partial stereoscopic image transmission device 1501. Thereafter, the information separator 1532 separates the input signal received by the receiver 1531 into video / audio data and other metadata.
  • the video / audio data is sent to the decoder 1533, and the other metadata is sent to the depth information generation unit 104.
  • the decoder 1533 decodes the video / audio data, outputs the audio data to the audio information processing unit 1534, and outputs the video data (2D image) to the image synthesis unit 110.
  • the audio information processing unit 1534 processes the audio data to generate an audio signal, and outputs the audio signal to the speaker 1535.
  • the speaker 1535 outputs audio in accordance with the audio signal.
  • Note that the partial stereoscopic image receiving device 1503 need not have exactly the configuration illustrated in this block diagram, as long as it can generate and display a partial stereoscopic image.
  • Metadata other than the region information and the background depth value may exist in the metadata separated by the information separation unit 1532; in that case, a processing unit that processes such metadata may be provided.
  • According to the present embodiment, it is possible to broadcast a stereoscopic video with a small amount of data.
  • Normally, when a stereoscopic video is broadcast on television, both a left-eye video and a right-eye video are required. In the present embodiment, however, the image need only be a single viewpoint, and a television broadcast with a stereoscopic effect can be realized by additionally sending only a small amount of data: the region information and the background depth value. Since the present invention can generate and display a stereoscopic image with a small amount of data, it is also useful for storing video information on a storage medium such as a DVD or a Blu-ray disc, and for video streaming and download distribution.
  • Each unit may be realized by recording a program for realizing its functions on a computer-readable recording medium, and causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
  • Further, the “computer-readable recording medium” may include a medium that dynamically holds a program for a short time, such as a communication line used when a program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds a program for a certain period of time, such as a volatile memory in a computer system serving as a server or a client in that case.
  • The program may be one that realizes a part of the functions described above, or one that realizes the functions described above in combination with a program already recorded in the computer system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The stereoscopic image generating device is equipped with: an image acquisition section for acquiring image data for a two-dimensional image; a depth information generating section for generating depth information that indicates the depth value for each pixel in a partial area of the two-dimensional image; and an image synthesis section that, on the basis of the image data for the two-dimensional image and the depth information, generates image data for a three-dimensional image in which only the partial area in question is perceived stereoscopically.

Description

Stereoscopic image generating apparatus, stereoscopic image display apparatus, stereoscopic image generating method, and program

The present invention relates to a stereoscopic image generation device, a stereoscopic image display device, a stereoscopic image generation method, and a program.

This application claims priority based on Japanese Patent Application No. 2011-178833, filed in Japan on August 18, 2011, the contents of which are incorporated herein by reference.
In order to display a stereoscopic image, images from a plurality of viewpoints (two or more) are required. For example, when producing a stereoscopic image from 3DCG (three-dimensional computer graphics) content, images from multiple viewpoints can be created relatively easily by changing the position of the viewpoint. On the other hand, when producing a live-action stereoscopic image, two or more cameras must be used for shooting. In this case, it is very difficult to synchronize the focus of each camera and to set the camera directions and positions. In addition, the equipment becomes large-scale, making shooting difficult.
Therefore, 2D-to-3D conversion technology, which creates multi-viewpoint video (three-dimensional video) from single-viewpoint video (two-dimensional video) by image processing instead of actually shooting multi-viewpoint video, has attracted attention. This is a technique for synthesizing a viewpoint image different from the actually captured viewpoint, based on depth information for each region in the image. Currently, some 3D movies are produced using this 2D-to-3D conversion technology, with the depth information created manually by the producer. Since manually creating all the depth information is laborious, some systems create the depth information automatically. For example, there is a method of estimating the depth of a region from feature amounts (luminance, spatial frequency, contrast, motion) in an image.
However, this automatic 2D-to-3D conversion technology cannot always create video with the stereoscopic effect intended by the producer. This is because depth estimation based on these feature amounts (luminance, spatial frequency, contrast, motion) is problematic. The estimation method assumes a typical image composition: the in-focus region is the main subject and is located toward the front, the upper part of the image is the background, and the lower part is the ground. For this reason, when the subject is out of focus due to its movement, or when shooting is performed from an unusual viewpoint position, an image with an unnatural stereoscopic effect is synthesized. For example, the result may be an unnatural-looking stereoscopic image in which something that should be in the back appears to pop out, something that should be in front appears to be in the back, or a flat area has an unnatural unevenness.
In view of the above circumstances, Patent Document 1 proposes an intention-adaptive 2D-to-3D conversion device that can emphasize the stereoscopic effect of a specific impression of a person or object in a captured video according to a person's intention. Specifically, according to an input sensitivity word, the stereoscopic effect of a specific impression of a person or object in the image is emphasized, making it possible to obtain a stereoscopic image that matches the intention of the producer or editor.
Japanese Patent Laid-Open No. H10-191397
However, the conversion device of Patent Document 1 extracts feature amounts even outside the region in which the stereoscopic effect is to be emphasized, and therefore has the problem that the amount of computation required to generate a stereoscopic image becomes large.
The present invention has been made in view of such circumstances, and an object of the present invention is to provide a stereoscopic image generation device, a stereoscopic image display device, a stereoscopic image generation method, and a program that reduce the amount of computation required when generating a stereoscopic image.
(1) The present invention has been made to solve the above-described problems. One aspect of the present invention is a stereoscopic image generation device comprising: an image acquisition unit that acquires image data of a two-dimensional image; a depth information generation unit that generates depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; and an image composition unit that generates, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image in which only the partial region is perceived stereoscopically.
(2) Another aspect of the present invention is the stereoscopic image generation device according to (1), further comprising a shape storage unit that stores data representing three-dimensional shapes, wherein the depth information acquisition unit generates the depth information based on region information designating the region and a three-dimensional shape, and on the data stored in the shape storage unit.
(3) Another aspect of the present invention is the stereoscopic image generation device according to (2), further comprising a region information acquisition unit that generates the region information indicating a region and a three-dimensional shape designated by a user operation, wherein the depth information acquisition unit generates the depth information based on the region information acquired by the region information acquisition unit and the data stored in the shape storage unit.
(4) Another aspect of the present invention is the stereoscopic image generation device according to (2), further comprising: a region information storage unit that stores a plurality of pieces of the region information in advance; and a region information selection unit that selects region information from among the region information stored in the region information storage unit, wherein the depth information acquisition unit generates the depth information based on the region information selected by the region information selection unit and the data stored in the shape storage unit.
(5) Another aspect of the present invention is the stereoscopic image generation device according to (2), further comprising a reception unit that receives the data of the two-dimensional image and the region information, wherein the depth information acquisition unit generates the depth information based on the received region information and the data stored in the shape storage unit.
(6) Another aspect of the present invention is the stereoscopic image generation device according to any one of (1) to (5), wherein the image composition unit comprises: an image region division unit that generates data of a two-dimensional region image obtained by extracting, from the two-dimensional image, the area other than the partial region, and data of a three-dimensional region image obtained by extracting the partial region from the two-dimensional image; a two-dimensional region processing unit that generates, from the data of the two-dimensional region image, data of a background image in which the partial region is interpolated; a three-dimensional region processing unit that generates three-dimensional image data relating to the partial region based on the data of the three-dimensional region image and the depth information; and a partial stereoscopic image generation unit that generates, based on the data of the background image and the three-dimensional image data relating to the partial region, data of a three-dimensional image in which only the partial region is perceived stereoscopically.
(7) Another aspect of the present invention is a stereoscopic image display device comprising: an image acquisition unit that acquires data of a two-dimensional image; a depth information generation unit that generates depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; an image composition unit that generates, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image in which only the partial region is perceived stereoscopically; and a display unit that displays a stereoscopic image using the image data of the three-dimensional image.
(8) Another aspect of the present invention is a stereoscopic image generation method comprising: a first step of acquiring image data of a two-dimensional image; a second step of generating depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; and a third step of generating, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image in which only the partial region is perceived stereoscopically.
(9) Another aspect of the present invention is a program for causing a computer to function as: an image acquisition unit that acquires image data of a two-dimensional image; a depth information generation unit that generates depth information indicating a depth value for each pixel of a partial region of the two-dimensional image; and an image composition unit that generates, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image in which only the partial region is perceived stereoscopically.
According to the present invention, it is possible to reduce the amount of computation when generating a stereoscopic image.
A schematic block diagram showing the configuration of the partial stereoscopic image generation apparatus 100 in the first embodiment of the present invention.
A diagram explaining how the 2D image I input by the image input unit 101 in the same embodiment is represented.
A diagram showing an example of the region data constituting the region information R in the same embodiment.
A diagram explaining a method of designating region information (a circle) in the same embodiment (part 1).
A diagram explaining a method of designating region information (a circle) in the same embodiment (part 2).
A diagram explaining a method of designating region information (a circle) in the same embodiment (part 3).
A diagram explaining a method of designating region information (a circle) in the same embodiment (part 4).
A projection view outlining an example of the depth information in the same embodiment.
A projection view outlining another example of the depth information in the same embodiment.
A diagram explaining the 2D region and the 3D region in the same embodiment.
A graph explaining the relational expression between the depth value and the pixel movement amount in the same embodiment.
A conceptual diagram (part 1) explaining the stereoscopic display method using the active shutter system.
A conceptual diagram (part 2) explaining the stereoscopic display method using the active shutter system.
A conceptual diagram (part 1) explaining how the partial stereoscopic image in the same embodiment is perceived.
A conceptual diagram (part 2) explaining how the partial stereoscopic image in the same embodiment is perceived.
A conceptual diagram explaining the stereoscopic display method using the parallax barrier system.
A flowchart explaining the operation of the partial stereoscopic image generation apparatus 100 in the same embodiment.
A schematic block diagram showing the configuration of the partial stereoscopic image generation display device 1300 in the second embodiment of the present invention.
A diagram showing a display example of the partial stereoscopic image generation display device 1300 in the same embodiment.
A diagram showing another display example of the partial stereoscopic image generation display device 1300 in the same embodiment.
A schematic block diagram showing the configuration of the television broadcast system 1500 in the third embodiment of the present invention.
A schematic block diagram showing the configuration of the partial stereoscopic image reception device 1503 in the same embodiment.
(First embodiment)
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram of a partial stereoscopic image generation apparatus 100, which is the stereoscopic image generation device in the present embodiment. The partial stereoscopic image generation apparatus 100 adds a stereoscopic effect only to a specific region designated by the user in an input two-dimensional (2D) image, thereby generating a partial stereoscopic image in which a two-dimensional portion and a three-dimensional (3D) portion coexist.
The partial stereoscopic image generation apparatus 100 includes an image input unit 101, a region information input unit 102, a background depth input unit 103, a depth information generation unit 104, a shape storage unit 109, and an image composition unit 110. The image composition unit 110 includes an image region division unit 105, a 2D region processing unit 106, a 3D region processing unit 107, and a partial stereoscopic image generation unit 108.
The image input unit 101 (image acquisition unit) reads, as input, a 2D image I, which is image data of a two-dimensional image, and sends it to the image region division unit 105. The image input unit 101 may, for example, include a card slot into which a memory card is inserted, read the 2D image I from the memory card inserted in the card slot, and send it to the image region division unit 105. Alternatively, it may include a reception unit that receives wireless signals such as infrared or Bluetooth (registered trademark) signals, or a communication interface for wired communication such as USB (Universal Serial Bus), and send the 2D image I received by the reception unit or communication interface to the image region division unit 105. It may also include an imaging device and send the image data of an image captured by the imaging device to the image region division unit 105 as the 2D image I.
The region information input unit 102 (region information acquisition unit) receives a user's designation of a partial region in the image and a three-dimensional shape for that region, generates region information R representing the partial region and the three-dimensional shape, and sends it to the depth information generation unit 104. The region information input unit 102 receives the designation of the region by, for example, a pointing device (not shown) such as a mouse provided in the partial stereoscopic image generation apparatus 100. Details of the region information will be described later. The background depth input unit 103 receives the user's designation of the position of the background in the depth direction, generates a background depth value D1 indicating that position, and sends it to the depth information generation unit 104.
Based on the region information R sent from the region information input unit 102 and the shape information m stored in the shape storage unit 109, the depth information generation unit 104 generates depth information Dp indicating a depth value for each pixel in the region indicated by the region information R within the image represented by the 2D image I. In this embodiment, for pixels outside the region indicated by the region information R, the depth information generation unit 104 sets the background depth value D1 sent from the background depth input unit 103 as the pixel value in the depth information Dp. The depth information generation unit 104 sends the generated depth information Dp to the image region dividing unit 105 and the 3D region processing unit 107. The shape storage unit 109 stores, for each three-dimensional shape such as a hemisphere or a half cylinder, shape information representing that shape. For example, the shape information is information representing an arithmetic expression for calculating the depth value of each pixel, with parameters representing size, orientation, and the like as variables. The image region dividing unit 105 divides the image data (2D image I) sent from the image input unit 101 into 2D region data and 3D region data based on the depth information Dp sent from the depth information generation unit 104.
Here, the image region dividing unit 105 generates 2D region data (data of a two-dimensional region image) by extracting from the 2D image I the pixel values of the region other than the region for which the depth information Dp indicates per-pixel depth values, that is, the region other than the region indicated by the region information R. The image region dividing unit 105 also generates 3D region data (data of a three-dimensional region image) by extracting from the 2D image I the pixel values of the region for which the depth information Dp indicates per-pixel depth values, that is, the region indicated by the region information R. The 2D region data produced by the image region dividing unit 105 is sent to the 2D region processing unit 106; likewise, the 3D region data is sent to the 3D region processing unit 107.
The 2D region processing unit 106 generates background image data from the 2D region data sent from the image region dividing unit 105. Specifically, the 2D region processing unit 106 calculates, by interpolation using the 2D region data, the pixel values of the region for which no pixel values are given in the 2D region data, that is, the region for which the depth information Dp indicated per-pixel depth values, and thereby generates the background image data. Based on the 3D region data sent from the image region dividing unit 105 and the depth information Dp sent from the depth information generation unit 104, the 3D region processing unit 107 generates left-eye 3D region data and right-eye 3D region data. The left-eye 3D region data and right-eye 3D region data generated by the 3D region processing unit 107 are sent to the partial stereoscopic image generation unit 108. Note that the left-eye 3D region data and the right-eye 3D region data together constitute three-dimensional image data for the region in which the depth information Dp indicates per-pixel depth values.
The partial stereoscopic image generation unit 108 generates left-eye image data and right-eye image data based on the background image data sent from the 2D region processing unit 106 and the left-eye and right-eye 3D region data sent from the 3D region processing unit 107, and outputs them as a partial stereoscopic image O. The three-dimensional image data consisting of the left-eye image data and the right-eye image data has parallax based on the depth information Dp in the region designated by the region information R. That is, this three-dimensional image data is perceived stereoscopically only in the region designated by the region information R.

Through the processing described above, the partial stereoscopic image generation apparatus 100 can generate a partial stereoscopic image in which a 2D portion and a 3D portion coexist.
FIG. 2 is a diagram explaining how the 2D image I input by the image input unit 101 is represented. As shown in FIG. 2, the 2D image I is data in which each pixel has values for the three colors R (red), G (green), and B (blue), each value ranging over 256 levels from 0 to 255. The image represented by the 2D image I has width W and height H. Pixel coordinates use the horizontal direction as the X coordinate and the vertical direction as the Y coordinate, with the origin at the upper-left corner; that is, the origin is the pixel 201 that is 0th from the left and 0th from the top. For example, the pixel 200 in FIG. 2 is the x-th pixel from the left and the y-th pixel from the top, so its coordinates are (x, y). The coordinates of the lower-right pixel 202 are (W−1, H−1).
Next, the region information input unit 102 will be described. The region information input unit 102 accepts input of region information from the user. FIG. 3 is a diagram showing an example of the region data constituting the region information R. For example, when the region input by the user is a hemisphere, the region data consists of ID:1, indicating a hemisphere, and four parameter values: the center coordinates (x, y), the radius r, and the base depth value d. When the region input by the user is a half cylinder, the region data consists of ID:2, indicating a half cylinder, and six parameter values: the upper-left vertex coordinates (x, y), the lower-right vertex coordinates (x, y), the rotation angle θ, and the base depth value d.
Similarly, when the region is a circle, the region data consists of ID:3, indicating a circle, and four parameter values: the center coordinates (x, y), the radius r, and the base depth value d. When the region is a rectangle, the region data consists of ID:4, indicating a rectangle, and six parameter values: the upper-left vertex coordinates (x, y), the lower-right vertex coordinates (x, y), the rotation angle θ, and the base depth value d.
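As an illustrative sketch, the region-data vectors of FIG. 3 can be encoded as tuples keyed by shape ID. The patent fixes only the IDs and parameter lists, not any code, so the helper names below are hypothetical:

```python
# Hypothetical encoding of the region data of FIG. 3.
# ID 1: hemisphere, ID 2: half cylinder, ID 3: circle, ID 4: rectangle.

def hemisphere_region(cx, cy, r, d):
    # 5-element vector: (ID, center x, center y, radius r, base depth d)
    return (1, cx, cy, r, d)

def half_cylinder_region(x0, y0, x1, y1, theta, d):
    # 7-element vector: (ID, upper-left x, upper-left y,
    #                    lower-right x, lower-right y, rotation theta, base depth d)
    return (2, x0, y0, x1, y1, theta, d)

def circle_region(cx, cy, r, d):
    return (3, cx, cy, r, d)

def rectangle_region(x0, y0, x1, y1, theta, d):
    return (4, x0, y0, x1, y1, theta, d)

# Hemisphere 501 of FIG. 5: center (50, 40), radius 20, base depth -5.
r501 = hemisphere_region(50, 40, 20, -5)
assert r501 == (1, 50, 40, 20, -5)
# The element count can be recovered from the ID, as the text notes later.
assert len(half_cylinder_region(120, 80, 150, 120, 0, 0)) == 7
```

Because the ID is always the first element, a receiver can determine how many elements follow, which is how the depth information generation unit 104 interprets the incoming vectors.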
Here, the base depth value d in each piece of region data is used by the depth information generation unit 104 when generating the depth information Dp from the region information R, in order to set how far the figure appears to pop out. The rotation angle θ is the angle by which the figure defined by the upper-left vertex (x, y) and the lower-right vertex coordinates (x, y) is rotated about an axis orthogonal to the xy plane. For inputting region information, when the partial stereoscopic image generation apparatus 100 is configured from a PC (Personal Computer) and a program, for example, a pointing device such as a mouse or a touch panel provided on the PC is used. In this embodiment, region information is input through the region information input unit 102.
As an example, the method of designating region information for a circle will be described with reference to FIGS. 4A to 4D. First, as shown in FIG. 4A, the region information input unit 102 displays the input image 400 sent from the image input unit 101 on the screen pd. Next, when the user operates a mouse or the like and selects the center of the region to be designated with the mouse pointer pc, the region information input unit 102 acquires that point (S4-1 in FIG. 4B). Next, when the user selects a point on the circumference of the circle, the region information input unit 102 acquires it, calculates its distance from the center, and takes this as the radius r (S4-2 in FIG. 4C).

Next, the region information input unit 102 displays the base depth designation scale bs on the screen. When the user designates a position on the scale with the mouse or the like to set the sense of depth of the circle, the region information input unit 102 acquires it and takes this as the base depth value d (S4-3 in FIG. 4D).
The region is designated using the above method. The region information input unit 102 sends the ID, the center coordinates (x, y) of the circle, the radius r, and the base depth value d to the depth information generation unit 104. Note that the region information is not limited to one piece per image; multiple pieces of region information can be input.
The background depth input unit 103 reads, as input, the sense of depth of the region not designated by the user (the background region) and sends it to the depth information generation unit 104. The depth can be input in the same way as the base depth value: a background depth adjustment scale is displayed on the screen, and the user designates a position on the scale with the mouse.

When no background depth value is designated by the user, a method of setting the background depth value automatically from the region information designated through the region information input unit 102 may be used. The designated background depth value D1 is sent to the depth information generation unit 104.
The depth information generation unit 104 generates the depth information Dp based on the region information R sent from the region information input unit 102, the shape information m stored in the shape storage unit 109, and the background depth value D1 sent from the background depth input unit 103. The depth information generation unit 104 sends this depth information Dp to the image region dividing unit 105 and the 3D region processing unit 107.
FIGS. 5 and 6 are projection views outlining the depth information. The rectangle indicated by 500 in FIG. 5 is a depth image representing the depth information as an image, that is, a luminance image in which depth values are expressed as luminance values. In this embodiment, depth is divided into 256 levels from 0 to 255, and a depth value within this range is set for each pixel of the input image. The depth image 500 is the depth image generated by the depth information generation unit 104 when region information R for the hemisphere 501 and the half cylinder 502 is sent from the region information input unit 102 and the background depth value D1 is sent from the background depth input unit 103.
The region information of the hemisphere 501 is sent from the region information input unit 102 as vector data with five elements: (ID, center x coordinate, center y coordinate, radius r, base depth value d) = (1, 50, 40, 20, −5). The hemisphere 501 is therefore placed with its center at coordinates (50, 40), as shown in the figure. In the projection view at the bottom of FIG. 5, depth increases in the direction of the arrow, meaning farther behind the display surface.
In this example, since the background depth value D1 = 0, the background lies on the display surface. The base depth value d expresses how far the region is from the background depth value D1 sent from the background depth input unit 103, and takes a negative value. In this example, since d = −5 for the hemisphere 501, it is placed 5 in front of the background.
The region information of the half cylinder 502 is sent from the region information input unit 102 as vector data with seven elements: (ID, upper-left vertex x coordinate, upper-left vertex y coordinate, lower-right vertex x coordinate, lower-right vertex y coordinate, rotation angle θ, base depth value d) = (2, 120, 80, 150, 120, 0°, 0). The number of elements in the data sent from the region information input unit 102 can be determined from the ID. The half cylinder 502 is placed as shown in FIG. 5. Since the base depth value d = 0, it is placed so that the flat face of the half cylinder coincides with the background depth.
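The patent leaves the exact arithmetic expression stored in the shape storage unit 109 unspecified. The sketch below assumes one plausible formula for the hemisphere: its flat base sits at depth D1 + d, and the surface bulges toward the viewer (more negative depth) by sqrt(r² − dist²) as the pixel approaches the center:

```python
import math

def hemisphere_depth(width, height, cx, cy, r, d, background=0):
    """Depth image for a hemisphere region (ID 1), assumed formula:
    base at background + d, bulging nearer by sqrt(r^2 - dist^2).
    Pixels outside the circle keep the background depth value D1."""
    depth = [[background] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            dist2 = (x - cx) ** 2 + (y - cy) ** 2
            if dist2 <= r * r:
                depth[y][x] = background + d - math.sqrt(r * r - dist2)
    return depth

# Hemisphere 501 of FIG. 5: center (50, 40), r = 20, d = -5, D1 = 0.
dp = hemisphere_depth(160, 120, 50, 40, 20, -5)
assert dp[40][50] == -25   # center: d - r, the nearest point
assert dp[40][70] == -5    # rim: the base depth only
assert dp[0][0] == 0       # outside the region: background depth
```

Under this assumption the depth varies smoothly inside the region, which is what later forces per-pixel shift amounts to differ within a hemisphere or half-cylinder region.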
FIG. 6 is the depth image produced when a circle 511 and a rectangle 512 are sent as region information. When multiple pieces of region information are placed, two or more figures may overlap at the same coordinates; in that case, it suffices to select, among the overlapping figures, the one whose depth is nearest to the viewer. Alternatively, a priority may be assigned to each figure, and the figure with the highest priority among the overlapping figures may be selected.
The region information of the circle 511 is sent from the region information input unit 102 as vector data with five elements: (ID, center x coordinate, center y coordinate, radius r, base depth value d) = (3, 50, 40, 20, −5). The circle 511 is therefore placed with its center at coordinates (50, 40), as shown in the figure. In the projection view at the bottom of FIG. 6, depth increases in the direction of the arrow, meaning farther behind the display surface.
In this example, since the background depth value D1 = 0, the background lies on the display surface. The base depth value d expresses how far the region is from the background depth value D1 sent from the background depth input unit 103, and takes a negative value. In this example, since d = −5 for the circle 511, it is placed 5 in front of the background.
The region information of the rectangle 512 is sent from the region information input unit 102 as vector data with seven elements: (ID, upper-left vertex x coordinate, upper-left vertex y coordinate, lower-right vertex x coordinate, lower-right vertex y coordinate, rotation angle θ, base depth value d) = (4, 120, 80, 150, 120, 0°, −10). The number of elements in the data sent from the region information input unit 102 can be determined from the ID. The rectangle 512 is placed as shown in FIG. 6. Since the base depth value d = −10, it is placed 10 in front of the background.
The image region dividing unit 105 divides the image into a 2D region and a 3D region based on the 2D input image sent from the image input unit 101 and the depth information sent from the depth information generation unit 104. It then sends the 2D region to the 2D region processing unit 106 and the 3D region to the 3D region processing unit 107.
The details of the 2D region and the 3D region will be described with reference to FIG. 7. First, a mask image 720 is generated from the depth information 710. The mask image 720 is a binary image in which pixels in the 3D portion of the depth information 710 have the value "0" and pixels in the 2D portion have the value "1". The 3D portion is the region in which a depth value is set for each pixel; the 2D portion is the region in which the pixel value is the background depth value. Note that instead of assigning the background depth value to the 2D portion of the depth image, a value indicating that a pixel belongs to the 2D portion may be used, with the depth information then consisting of the depth image and the background depth value.
The image region dividing unit 105 generates a 2D region 740 and a 3D region 730 from the input image 700 and the mask image 720. To generate the 2D region 740, the image region dividing unit 105 keeps, among the pixels of the input image 700, only the color information (pixel values) of pixels whose value in the mask image 720 is "1", and sets the pixels whose mask value is "0" to a special value (for example, a negative number). In the 2D region 740, the hatched region 741 has this special value. Similarly, to generate the 3D region 730, the image region dividing unit 105 keeps only the color information (pixel values) of pixels whose mask value is "0", and sets the pixels whose mask value is "1" to the special value. In the 3D region 730, the region other than the region 731, that is, the hatched region, has this special value.
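A minimal sketch of this split, assuming grayscale pixel values and using −1 as the special value (the patent requires only that it be distinguishable from valid pixel values, e.g. any negative number):

```python
HOLE = -1  # the "special value" marking pixels that belong to the other region

def split_regions(image, depth, background_depth):
    """Split an image into 2D and 3D regions as in FIG. 7.

    The mask is 1 where the depth equals the background depth value
    (2D portion) and 0 where a per-pixel depth was set (3D portion).
    """
    h, w = len(image), len(image[0])
    mask = [[1 if depth[y][x] == background_depth else 0
             for x in range(w)] for y in range(h)]
    region_2d = [[image[y][x] if mask[y][x] == 1 else HOLE
                  for x in range(w)] for y in range(h)]
    region_3d = [[image[y][x] if mask[y][x] == 0 else HOLE
                  for x in range(w)] for y in range(h)]
    return region_2d, region_3d

img = [[10, 20], [30, 40]]
dep = [[0, -5], [0, 0]]          # pixel (1, 0) carries a 3D depth value
r2d, r3d = split_regions(img, dep, background_depth=0)
assert r2d == [[10, HOLE], [30, 40]]
assert r3d == [[HOLE, 20], [HOLE, HOLE]]
```

The two outputs cover the whole image between them, so the original input can always be recovered by overlaying the non-hole pixels of one onto the other.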
The 2D region processing unit 106 performs hole-filling on the pixels 741 in the 2D region data 740 that correspond to the 3D region (pixels with negative values), generating a background image 770, which it then sends to the partial stereoscopic image generation unit 108. For the hole-filling, for example, a smoothing filter such as a median filter is applied to the pixels corresponding to the 3D region. A median filter sorts the pixel values of the pixels surrounding the pixel of interest in descending (or ascending) order and takes their median as the value of the pixel of interest.
In this embodiment, the 2D region processing unit 106 performs the hole-filling as follows. 1) First, a median filter is applied to the 3D-region pixels lying on the boundary between the 2D region and the 3D region; only the 2D-portion pixels (pixels whose values are not the negative special value) are used in computing the median. 2) The median filter is then applied to the boundary portion of the filtered image in the same way. 3) This filtering is repeated until no 3D-portion pixels (pixels with negative values) remain. In addition to this hole-filling, the 2D region processing unit 106 may apply a smoothing filter such as a Gaussian filter to the 2D region in order to draw more attention to the 3D region.
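Steps 1) to 3) above can be sketched as an iterative boundary fill. This is a simplified grayscale version (3×3 neighborhood, −1 as the special value); the patent does not fix the filter window size:

```python
import statistics

HOLE = -1  # special value marking 3D-region pixels to be filled

def fill_holes(region_2d):
    """Iterative boundary median filter (steps 1-3 of the embodiment).

    Each pass fills every hole pixel that has at least one valid 3x3
    neighbour with the median of those valid neighbours, then repeats
    until no hole pixels remain.
    """
    h, w = len(region_2d), len(region_2d[0])
    img = [row[:] for row in region_2d]
    while any(HOLE in row for row in img):
        filled = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if img[y][x] != HOLE:
                    continue
                neighbours = [img[y + dy][x + dx]
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                              if 0 <= y + dy < h and 0 <= x + dx < w
                              and img[y + dy][x + dx] != HOLE]
                if neighbours:  # boundary pixel: fill from valid values only
                    filled[y][x] = int(statistics.median(neighbours))
        img = filled
    return img

bg = fill_holes([[10, HOLE, 30],
                 [10, HOLE, 30],
                 [10, HOLE, 30]])
assert all(HOLE not in row for row in bg)
```

Because each pass only fills pixels adjacent to valid ones, the fill front advances inward from the 2D/3D boundary, exactly the behavior described in steps 1) to 3).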
Based on the 3D region data 730 sent from the image region dividing unit 105 and the depth information 710 sent from the depth information generation unit 104, the 3D region processing unit 107 generates a left-eye 3D region 750 and a right-eye 3D region 760 and sends these two pieces of data to the partial stereoscopic image generation unit 108. The 3D region processing unit 107 generates the left-eye and right-eye 3D regions by shifting pixels horizontally.
Specifically, the pixel shift amount s is calculated from the per-pixel depth value z using a relational expression such as the one shown in FIG. 8. The relational expression is basically set so that the smaller the depth value (that is, the nearer the pixel), the larger the shift amount. The direction of the shift also differs depending on whether the region is displayed in front of the actual display surface of the display device or behind it: a region displayed in front of the display surface is shifted to the right for the left eye and to the left for the right eye, while, conversely, a region displayed behind the display surface is shifted to the left for the left eye and to the right for the right eye. The coefficients of the relational expression are determined from factors such as the distance between the display device and the viewer and the resolution of the display device.
Using this relational expression, the 3D region processing unit 107 obtains an image (parallax map) in which the depth information of each pixel has been converted into a shift amount for that pixel. The 3D region processing unit 107 then shifts the pixels of the 3D region 730 according to the parallax map to generate the left-eye 3D region 750 and the right-eye 3D region 760. When pixels are shifted, multiple pixels may land on the same destination pixel; in that case, the pixel nearer to the viewer (the pixel that moved from farther away) is adopted. When the 3D region is specified as a hemisphere or half cylinder, the pixel shift amount varies within the region (for a hemisphere, for example, the shift amount is larger nearer the center), so holes may appear at boundaries where the shift amount changes. Such holes are interpolated by applying a smoothing filter such as a median filter or a Gaussian filter around the 3D region.
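FIG. 8 gives the depth-to-shift relation only qualitatively. The sketch below assumes a simple linear relation s = round(k · (−z)) as a stand-in and shifts one scanline of a 3D region, keeping the nearer pixel on collisions as the text prescribes:

```python
HOLE = -1  # marks pixels outside the 3D region / unfilled destinations

def shift_row(row, depths, k=0.2, eye="left"):
    """Shift one scanline of a 3D region horizontally.

    Assumed linear relation s = round(k * -z): nearer pixels (more
    negative z) move farther. For the left eye, a region in front of
    the screen (z < 0) moves right; for the right eye, left. On a
    collision the nearer pixel (smaller z) wins.
    """
    out = [HOLE] * len(row)
    out_z = [float("inf")] * len(row)
    sign = 1 if eye == "left" else -1
    for x, (v, z) in enumerate(zip(row, depths)):
        if v == HOLE:
            continue
        nx = x + sign * round(k * -z)
        if 0 <= nx < len(row) and z < out_z[nx]:
            out[nx] = v
            out_z[nx] = z
    return out

# Collision: both source pixels land on index 2; the nearer (z = -10) wins.
assert shift_row([50, 60, HOLE], [-10, -5, 0]) == [HOLE, HOLE, 50]
# Right eye shifts the same in-front content the other way.
assert shift_row([HOLE, HOLE, 50], [0, 0, -5], eye="right") == [HOLE, 50, HOLE]
```

Destination pixels left at the hole value after the shift are the "holes" mentioned above, which the embodiment fills by smoothing around the 3D region.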
The partial stereoscopic image generation unit 108 generates two images, a left-eye partial stereoscopic image and a right-eye partial stereoscopic image, from the background image sent from the 2D region processing unit 106 and the left-eye and right-eye 3D regions sent from the 3D region processing unit 107, and outputs them as the partial stereoscopic image O. The left-eye partial stereoscopic image is generated by overlaying the left-eye 3D region on the background image, and the right-eye partial stereoscopic image by overlaying the right-eye 3D region on the background image. Overlaying a 3D region on the background image means overwriting the background image with only those portions of the 3D region data sent from the 3D region processing unit 107 whose values are valid (not the negative special value).
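The overlay step is a per-pixel overwrite; a minimal sketch, again using −1 as the special value for non-region pixels:

```python
HOLE = -1  # special value for pixels that are not part of the 3D region

def compose(background, region_3d):
    """Overlay a shifted 3D region onto the hole-filled background:
    every valid (non-hole) 3D pixel overwrites the background pixel."""
    return [[r if r != HOLE else b
             for b, r in zip(brow, rrow)]
            for brow, rrow in zip(background, region_3d)]

# The 3D pixel (99) overwrites the background; holes leave it untouched.
left_eye = compose([[10, 20, 30]], [[HOLE, 99, HOLE]])
assert left_eye == [[10, 99, 30]]
```

Running this once with the left-eye 3D region and once with the right-eye 3D region yields the two views that together form the partial stereoscopic image O.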
The output partial stereoscopic image can be displayed using an existing stereoscopic display device, for example one using the active shutter method as shown in FIGS. 9A and 9B. The active shutter method uses liquid crystal shutter glasses SG and opens and closes the liquid crystal shutters of the glasses in synchronization with the image displayed by the display unit Dsp, so that only the left-eye partial stereoscopic image 900 is shown to the left eye and only the right-eye partial stereoscopic image 901 to the right eye. For example, in FIG. 9A, in which the display unit Dsp is displaying the left-eye partial image 900, the left shutter is open and the right shutter is closed, so the left-eye partial image 900 is shown to the left eye. In FIG. 9B, in which the display unit Dsp is displaying the right-eye partial image 901, the left shutter is closed and the right shutter is open, so the right-eye partial image 901 is shown to the right eye. This method makes it possible to express the difference between what the right eye and the left eye see (parallax), so the viewer perceives a stereoscopic effect.
FIGS. 10A and 10B show how a partial stereoscopic image appears when viewed on an active shutter stereoscopic display device. When the viewer gazes at the 2D portion (FIG. 10A), the lines of sight of the left and right eyes gazing at the background image 1000 intersect on the display surface, so the background image 1000 appears to lie on the display surface. On the other hand, when the viewer gazes at the person image 1001, which is a 3D portion (FIG. 10B), the lines of sight intersect in front of the display surface, creating the illusion that the person image 1001 is nearer than its actual position. By this principle, only the 3D region appears to protrude relative to the 2D region.
Alternatively, the partial stereoscopic image may be displayed using a parallax barrier stereoscopic display device as shown in FIG. 11. In the parallax barrier method, a parallax barrier 1101 is placed in front of the display 1100 so that the left eye Le sees only the image L corresponding to the left eye and the right eye Re sees only the image R corresponding to the right eye, enabling stereoscopic viewing.
In this embodiment, only two viewpoint images, one for the left eye and one for the right eye, are generated as the partial stereoscopic image. However, when the display device that displays the partial stereoscopic image is a multi-viewpoint image display device with three or more viewpoints, images for the number of viewpoints required by the display device may be generated. Specifically, the relational expression between depth value and shift amount in FIG. 8 is changed for each viewpoint, and is set so that the farther a viewpoint deviates to the left or right, the larger the shift amount becomes.

The partial stereoscopic image generation apparatus 100 may also include a stereoscopic image display device, which displays the partial stereoscopic image generated by the partial stereoscopic image generation unit 108.
 Next, the processing procedure of the partial stereoscopic image generating apparatus 100 according to the first embodiment will be described with reference to the flowchart of FIG. 12. First, the image input unit 101 reads the input image (S1), the region information input unit 102 reads the region information (S2), and the background depth input unit 103 reads the background depth (S3). The depth information generation unit 104 generates depth information from the read region information and background depth (S4). The image region division unit 105 divides the input image into a 2D region and a 3D region based on the generated depth information (S5). The 2D region processing unit 106 performs hole filling on the 2D region to generate a background image (S6). The 3D region processing unit 107 pixel-shifts the 3D region to generate a left-eye 3D region (S7) and, likewise, a right-eye 3D region (S8). The partial stereoscopic image generation unit 108 overwrites the background image with the left-eye 3D region to generate a left-eye partial stereoscopic image (S9), and similarly overwrites the background image with the right-eye 3D region to generate a right-eye partial stereoscopic image (S10). Finally, the left-eye and right-eye partial stereoscopic images are output as the partial stereoscopic image (S11).
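The flow from S5 to S11 can be sketched in a few lines of Python. The hole-filling and depth-to-shift rules below are deliberately crude placeholders (row-mean fill, linear shift) standing in for the actual processing of the 2D and 3D region processing units:

```python
import numpy as np

def generate_partial_stereo(image, region_mask, region_depth):
    """Sketch of S5-S11 for a grayscale image (2-D NumPy array)."""
    h, w = image.shape
    # S5/S6: pixels outside the mask form the 2D region; fill the hole
    # left by the 3D region with each row's mean value (placeholder).
    background = image.copy()
    row_fill = np.repeat(image.mean(axis=1, keepdims=True), w, axis=1)
    background[region_mask] = row_fill.astype(image.dtype)[region_mask]
    # S7-S10: shift the 3D-region pixels horizontally in opposite
    # directions for the two eyes (assumed linear depth-to-shift rule),
    # overwriting them onto the background image.
    shift = max(1, int(region_depth * 0.1))
    left, right = background.copy(), background.copy()
    ys, xs = np.nonzero(region_mask)
    left[ys, np.clip(xs + shift, 0, w - 1)] = image[ys, xs]
    right[ys, np.clip(xs - shift, 0, w - 1)] = image[ys, xs]
    return left, right  # S11: the partial stereoscopic pair
```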
 As described above, according to this embodiment, a partial stereoscopic image is generated in which only a specific region designated by the user has a stereoscopic effect while the remainder is a uniform plane, which reduces the processing time and memory required to produce a 3D image.
 (Second Embodiment)
 The second embodiment applies the technique of the first embodiment to devices such as electronic dictionaries, mobile phones, and smartphones whose display unit includes a touch panel. That is, it is a partial stereoscopic image generation and display device that easily switches images between 2D and 3D and performs 3D display with a small amount of processing and memory. The partial stereoscopic image generation and display device of the second embodiment is described with reference to the block diagram of FIG. 13. The partial stereoscopic image generation and display device 1300 is, for example, a smartphone.
 The partial stereoscopic image generation and display device 1300 comprises a display unit 1301, an operation detection unit 1302, a 2D image generation unit 1303, a region information selection unit 1304, a region information storage unit 1305, a background depth value storage unit 1306, a depth information generation unit 104, a shape storage unit 109, and an image composition unit 110. The depth information generation unit 104, shape storage unit 109, and image composition unit 110 are identical to those of the partial stereoscopic image generating apparatus 100 of the first embodiment shown in FIG. 1, so their description is omitted here.
 The display unit 1301 displays the partial stereoscopic image generated by the image composition unit 110, using a display method such as those described in the first embodiment. For a mobile terminal such as a smartphone, the parallax-barrier stereoscopic display method shown in FIG. 11 is preferable.
 The operation detection unit 1302 is, for example, a touch panel affixed to the surface of the display unit 1301, and detects operations by the user. The 2D image generation unit 1303 (image acquisition unit) generates a two-dimensional image to be displayed on the display unit 1301 in response to the user operation detected by the operation detection unit 1302. The region information selection unit 1304 reads one of the plural pieces of region information stored in the region information storage unit 1305 in response to the detected user operation, and outputs it to the depth information generation unit 104. The region information storage unit 1305 stores in advance plural pieces of region information corresponding to the images generated by the 2D image generation unit 1303. This region information is the same as that described in the first embodiment: it consists of an ID indicating a figure such as a hemisphere or rectangle together with its parameters (center coordinates, radius, and so on), and may contain region data for multiple figures. The background depth value storage unit 1306 stores the background depth value in advance.
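A region-information record of the kind described (a figure ID plus its parameters) might be encoded as below. The ID values, field names, and the flattened hemisphere profile are assumptions for illustration, not the patent's actual encoding:

```python
# Assumed ID values for the figure types mentioned in the description.
HEMISPHERE, RECTANGLE = 0, 1

# One piece of region information may contain data for multiple figures.
region_info = [
    {"id": HEMISPHERE, "center": (120, 80), "radius": 40, "depth": 200},
    {"id": RECTANGLE, "top_left": (10, 10), "size": (60, 30), "depth": 120},
]

def depth_at(x, y, records, background_depth=0):
    """Sketch of how the depth information generation unit could rasterize
    such records into a per-pixel depth value (the hemisphere's depth
    profile is simplified to a flat disc here)."""
    for r in records:
        if r["id"] == HEMISPHERE:
            cx, cy = r["center"]
            if (x - cx) ** 2 + (y - cy) ** 2 <= r["radius"] ** 2:
                return r["depth"]
        elif r["id"] == RECTANGLE:
            x0, y0 = r["top_left"]
            w, h = r["size"]
            if x0 <= x < x0 + w and y0 <= y < y0 + h:
                return r["depth"]
    return background_depth
```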
 FIG. 14A shows a display example of the partial stereoscopic image generation and display device 1300 of this embodiment. When buttons 1402 and 1404 are displayed on the display unit 1301 and the region information selection unit 1304 has selected region information from which depth information 1410 is generated, an effect can be obtained in which only the button portions corresponding to the 3D region 1412 in the depth information 1410 (here, 1, 2, 4, 9, and *) appear to pop out.
 Similarly, in FIG. 14B, when the depth information generated from the region information selected by the region information selection unit 1304 is depth information 1430, an effect can be obtained in which the portion designated by the rectangular region 1432 in the depth information 1430 (that is, the "No. 1: katsudon" panel portion 1405) pops out the most.
 Depth information such as depth information 1410 and 1430 is generated by the depth information generation unit 104 from region information and a background depth value, as described in the first embodiment. Whereas in the first embodiment the user supplied the 2D image, region information, and background depth value, in this embodiment region information and background depth values prepared in advance by, for example, the image's creator are stored in the smartphone's memory (the region information storage unit 1305 and background depth value storage unit 1306) and read out as needed to vary the pop-out effect.
 The sense of depth and pop-out of the displayed image may also be changed in response to user input through the smartphone's touch panel. In this case, rules defining how the region information and background depth value change in response to user input are set in advance, and when user input occurs, the region information and background depth value supplied to the device are determined according to these rules.
 Furthermore, the stereoscopic effect of the same image can easily be changed by replacing the region information and background depth value with different ones, for example by downloading them or by direct user input as in the first embodiment.
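A pre-set rule of the kind mentioned, mapping a touch-panel event to the region information and background depth value supplied to the device, could be as simple as a lookup table. The event names and values here are hypothetical:

```python
# Hypothetical rule table set in advance: each user-input event selects
# the region information and background depth value fed to the device.
RULES = {
    "press_button": {"region": "pressed_key_rect", "background_depth": 0},
    "pinch_out":    {"region": "whole_screen",     "background_depth": 50},
}

def on_user_input(event):
    """Return the rule selected for this event, or None to leave the
    display unchanged when no rule is defined."""
    return RULES.get(event)
```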
 As described above, this embodiment enables 3D display of buttons and the like with a small amount of data. As a further effect, when the stereoscopic appearance of the same image is to be changed, this can be achieved merely by modifying small amounts of data such as the region information and the background depth value.
 (Third Embodiment)
 The third embodiment applies the invention to television broadcasting and the like. FIG. 15 is a schematic block diagram showing the configuration of a television broadcast system 1500 according to this embodiment. The television broadcast system 1500 comprises a partial stereoscopic image transmission device 1501 and a partial stereoscopic image reception device 1503 connected to it via a broadcast network 1502.
 The broadcast network 1502 may be a broadcast network using radio waves or one using the Internet or the like. The partial stereoscopic image transmission device 1501 is installed at a broadcasting station and transmits the 2D images constituting a broadcast program together with region information and a background depth value, which are the same as in the first and second embodiments. The partial stereoscopic image reception device 1503 is the partial stereoscopic image generation device or partial stereoscopic image display device of this embodiment; it generates and displays a partial stereoscopic image from the 2D image, region information, and background depth value transmitted by the partial stereoscopic image transmission device 1501.
 FIG. 16 is a schematic block diagram showing the configuration of the partial stereoscopic image reception device 1503, which comprises a reception unit 1531, an information separation unit 1532, a decoder 1533, an audio information processing unit 1534, a speaker 1535, a depth information generation unit 104, a shape storage unit 109, an image composition unit 110, and a display unit 1301. The depth information generation unit 104, shape storage unit 109, image composition unit 110, and display unit 1301 are the same as the corresponding units in FIG. 13, so their description is omitted.
 The reception unit 1531 receives the input signal sent from the partial stereoscopic image transmission device 1501. The information separation unit 1532 then separates the received input signal into audio-visual data and other metadata; the audio-visual data is sent to the decoder 1533, and the metadata is sent to the depth information generation unit 104. The decoder 1533 decodes the audio-visual data, outputting the audio data to the audio information processing unit 1534 and the video data (the 2D image) to the image composition unit 110.
 The audio information processing unit 1534 processes the audio data to generate an audio signal and outputs it to the speaker 1535, which outputs sound in accordance with that signal.
 The partial stereoscopic image reception device 1503 need not have exactly the configuration shown in this block diagram, as long as it can generate and display a partial stereoscopic image. For example, the metadata separated by the information separation unit 1532 may contain items other than the region information and background depth value, and a processing unit that handles these may be provided.
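The receiving path can be sketched as follows. The dictionary-based signal layout and the stubbed decoder are assumptions, but the split mirrors the description: audio-visual data goes to the decoder, and the metadata (region information and background depth value) goes on to drive depth information generation.

```python
def separate(signal):
    """Information separation: split the input signal into audio-visual
    data and the remaining metadata (layout is a stand-in, not a real
    broadcast container format)."""
    av_data = signal["av"]
    metadata = {k: v for k, v in signal.items() if k != "av"}
    return av_data, metadata

def receive(signal):
    """Reception-path sketch: decode (stubbed) and route the metadata."""
    av, meta = separate(signal)
    frame_2d, audio = av["video"], av["audio"]   # stubbed decoder output
    depth_inputs = (meta.get("region_info"), meta.get("background_depth"))
    return frame_2d, audio, depth_inputs
```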
 As described above, this embodiment makes it possible to broadcast video with a stereoscopic effect using a small amount of data. Conventionally, television broadcasting of stereoscopic video required both a left-eye video and a right-eye video; with this embodiment, an image from only a single viewpoint is needed, and a television broadcast with a stereoscopic effect can be realized by adding to it only the small amounts of data for the region information and background depth value.
 Because the present invention can generate and display stereoscopic video from a small amount of data, it is also useful when storing video information on storage media such as DVDs and Blu-ray discs, and for video streaming and download distribution over the Internet and the like.
 The functions of the units of the partial stereoscopic image generating apparatus 100 in FIG. 1, the partial stereoscopic image generation and display device 1300 in FIG. 13, and the partial stereoscopic image reception device 1503 in FIG. 16 may also be realized by recording a program for realizing these functions on a computer-readable recording medium and having a computer system read and execute the program recorded on that medium. The "computer system" here includes an OS and hardware such as peripheral devices.
 The "computer system" also includes a homepage providing environment (or display environment) when a WWW system is used.
 The "computer-readable recording medium" refers to portable media such as flexible disks, magneto-optical disks, ROMs, and CD-ROMs, and to storage devices such as hard disks built into computer systems. It further includes media that hold a program dynamically for a short time, such as a communication line used when a program is transmitted over a network such as the Internet or a communication line such as a telephone line, and media that hold a program for a fixed time, such as the volatile memory inside a computer system serving as the server or client in that case. The program may realize only some of the functions described above, or may realize them in combination with a program already recorded in the computer system.
 Although embodiments of this invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and design changes and the like within a scope not departing from the gist of this invention are also included.
 100…partial stereoscopic image generating apparatus
 101…image input unit
 102…region information input unit
 103…background depth input unit
 104…depth information generation unit
 105…image region division unit
 106…2D region processing unit
 107…3D region processing unit
 108…partial stereoscopic image generation unit
 109…shape storage unit
 110…image composition unit
 1300…partial stereoscopic image generation and display device
 1301…display unit
 1302…operation detection unit
 1303…2D image generation unit
 1304…region information selection unit
 1305…region information storage unit
 1306…background depth value storage unit
 1500…television broadcast system
 1501…partial stereoscopic image transmission device
 1502…broadcast network
 1503…partial stereoscopic image reception device
 1531…reception unit
 1532…information separation unit
 1533…decoder
 1534…audio information processing unit
 1535…speaker

Claims (9)

  1.  A stereoscopic image generating device comprising:
      an image acquisition unit that acquires image data of a two-dimensional image;
      a depth information generation unit that generates depth information indicating a depth value for each pixel for a partial region of the two-dimensional image; and
      an image composition unit that generates, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image that is perceived stereoscopically only in the partial region.
  2.  The stereoscopic image generating device according to claim 1, further comprising a shape storage unit that stores data representing three-dimensional shapes,
      wherein the depth information generation unit generates the depth information based on region information designating the region and a three-dimensional shape, and on the data stored in the shape storage unit.
  3.  The stereoscopic image generating device according to claim 2, further comprising a region information acquisition unit that generates the region information indicating a region and a three-dimensional shape designated by a user operation,
      wherein the depth information generation unit generates the depth information based on the region information acquired by the region information acquisition unit and the data stored in the shape storage unit.
  4.  The stereoscopic image generating device according to claim 2, further comprising:
      a region information storage unit that stores a plurality of pieces of the region information in advance; and
      a region information selection unit that selects region information from among the region information stored in the region information storage unit,
      wherein the depth information generation unit generates the depth information based on the region information selected by the region information selection unit and the data stored in the shape storage unit.
  5.  The stereoscopic image generating device according to claim 2, further comprising a reception unit that receives the data of the two-dimensional image and the region information,
      wherein the depth information generation unit generates the depth information based on the received region information and the data stored in the shape storage unit.
  6.  The stereoscopic image generating device according to claim 1, wherein the image composition unit comprises:
      an image region division unit that generates data of a two-dimensional region image obtained by extracting, from the two-dimensional image, the portion other than the partial region, and data of a three-dimensional region image obtained by extracting the partial region from the two-dimensional image;
      a two-dimensional region processing unit that generates, from the data of the two-dimensional region image, data of a background image in which the partial region has been interpolated;
      a three-dimensional region processing unit that generates three-dimensional image data for the partial region based on the data of the three-dimensional region image and the depth information; and
      a partial stereoscopic image generation unit that generates, based on the data of the background image and the three-dimensional image data for the partial region, data of a three-dimensional image that is perceived stereoscopically only in the partial region.
  7.  A stereoscopic image display device comprising:
      an image acquisition unit that acquires data of a two-dimensional image;
      a depth information generation unit that generates depth information indicating a depth value for each pixel for a partial region of the two-dimensional image;
      an image composition unit that generates, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image that is perceived stereoscopically only in the partial region; and
      a display unit that displays a stereoscopic image using the image data of the three-dimensional image.
  8.  A stereoscopic image generating method comprising:
      a first step of acquiring image data of a two-dimensional image;
      a second step of generating depth information indicating a depth value for each pixel for a partial region of the two-dimensional image; and
      a third step of generating, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image that is perceived stereoscopically only in the partial region.
  9.  A program for causing a computer to function as:
      an image acquisition unit that acquires image data of a two-dimensional image;
      a depth information generation unit that generates depth information indicating a depth value for each pixel for a partial region of the two-dimensional image; and
      an image composition unit that generates, based on the image data of the two-dimensional image and the depth information, image data of a three-dimensional image that is perceived stereoscopically only in the partial region.
PCT/JP2012/070670 2011-08-18 2012-08-14 Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program WO2013024847A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-178833 2011-08-18
JP2011178833A JP2013042414A (en) 2011-08-18 2011-08-18 Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program

Publications (1)

Publication Number Publication Date
WO2013024847A1 true WO2013024847A1 (en) 2013-02-21

Family

ID=47715164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/070670 WO2013024847A1 (en) 2011-08-18 2012-08-14 Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program

Country Status (2)

Country Link
JP (1) JP2013042414A (en)
WO (1) WO2013024847A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102249831B1 (en) 2014-09-26 2021-05-10 삼성전자주식회사 image generation apparatus and method for generating 3D panorama image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005065162A (en) * 2003-08-20 2005-03-10 Matsushita Electric Ind Co Ltd Display device, transmitting apparatus, transmitting/receiving system, transmitting/receiving method, display method, transmitting method, and remote controller


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915981A (en) * 2015-05-08 2015-09-16 寇懿 Three-dimensional hairstyle design method based on somatosensory sensor
CN112015357A (en) * 2020-08-12 2020-12-01 浙江迅实科技有限公司 Method for making 3D stereograph and product thereof
CN112015357B (en) * 2020-08-12 2023-05-05 浙江迅实科技有限公司 Manufacturing method of 3D stereograph and product thereof

Also Published As

Publication number Publication date
JP2013042414A (en) 2013-02-28

Similar Documents

Publication Publication Date Title
TWI523488B (en) A method of processing parallax information comprised in a signal
US20200302688A1 (en) Method and system for generating an image
TWI508521B (en) Displaying graphics with three dimensional video
WO2011135857A1 (en) Image conversion device
WO2012036120A1 (en) Stereoscopic image generation device, stereoscopic image display device, stereoscopic image adjustment method, program for executing stereoscopic image adjustment method on computer, and recording medium on which program is recorded
KR101538947B1 (en) The apparatus and method of hemispheric freeviewpoint image service technology
TW201043002A (en) Combining 3D image and graphical data
JPWO2012176431A1 (en) Multi-viewpoint image generation apparatus and multi-viewpoint image generation method
KR20180069781A (en) Virtual 3D video generation and management system and method
US20120075291A1 (en) Display apparatus and method for processing image applied to the same
JP2003284093A (en) Stereoscopic image processing method and apparatus therefor
JP5755571B2 (en) Virtual viewpoint image generation device, virtual viewpoint image generation method, control program, recording medium, and stereoscopic display device
US20170171534A1 (en) Method and apparatus to display stereoscopic image in 3d display system
JP6033625B2 (en) Multi-viewpoint image generation device, image generation method, display device, program, and recording medium
CN103039082A (en) Image processing method and image display device according to the method
WO2013108285A1 (en) Image recording device, three-dimensional image reproduction device, image recording method, and three-dimensional image reproduction method
WO2013024847A1 (en) Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program
CN106686367A (en) Display mode switching method and display control system of virtual reality (VR) display
JP2003284095A (en) Stereoscopic image processing method and apparatus therefor
JP2011151773A (en) Video processing apparatus and control method
JP5036088B2 (en) Stereoscopic image processing apparatus, stereoscopic image processing method, and program
TW201327470A (en) Method for adjusting depths of 3D image and method for displaying 3D image and associated device
JP2003284094A (en) Stereoscopic image processing method and apparatus therefor
JP2014203017A (en) Image processing device, image processing method, display, and electronic apparatus
US9547933B2 (en) Display apparatus and display method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12824131

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12824131

Country of ref document: EP

Kind code of ref document: A1