WO2010013382A1 - Video encoding device, video encoding method, video playback device, video playback method, video recording medium, and video data stream - Google Patents
Video encoding device, video encoding method, video playback device, video playback method, video recording medium, and video data stream
- Publication number
- WO2010013382A1 (PCT/JP2009/002614)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- sub
- display
- data
- picture
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/025—Systems for the transmission of digital non-picture data, e.g. of text during the active part of a television frame
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/178—Metadata, e.g. disparity information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/183—On-screen display [OSD] information, e.g. subtitles or menus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to a video playback apparatus and method for displaying stereoscopic video, a video encoding apparatus and encoding method for creating stereoscopic video, and further relates to a video recording medium and video data stream for holding video encoded data.
- For stereoscopic video (3D video), the most popular method is to display different images for the left and right eyes so that each eye views its own image. This method visualizes a stereoscopic image by giving the two images a parallax.
- At the time of shooting, two cameras arranged in the horizontal direction and separated by a distance corresponding to the interval between the eyes capture images simultaneously. During playback, the left eye is shown only the video captured by the left-eye camera and the right eye only the video captured by the right-eye camera, so that both eyes perceive the parallax.
- If additional video information such as sub-pictures and graphics cannot be expressed in the depth direction, the presentation effect is limited; it is therefore required that such additional video information can also be expressed in the depth direction.
- In the prior art, the subtitle display position can be set to an appropriate position in front of the video, but the depth is limited to a single point.
- An object of the present invention is to enable additional video information such as sub-pictures and graphics superimposed on a stereoscopic video to be expressed in the depth direction, in a method that visualizes the stereoscopic video by displaying different videos for the left and right eyes.
- It is another object of the present invention to reduce the amount of data for making additional video information such as sub-pictures and graphics at this time a stereoscopically viewable expression.
- It is another object of the present invention to simplify arithmetic processing when realizing a three-dimensional expression of video information such as sub-pictures and graphics in a video playback device.
- A further object is to reduce cost by saving the arithmetic processing performance required of the video playback device, and to improve the update speed of stereoscopically displayed video information under a given arithmetic processing performance.
- To achieve these objects, the video encoding device of the present invention generates video data to be played back by a video playback device that visualizes a stereoscopic video by displaying different videos for the left and right eyes. It comprises: video encoding means for generating a video encoded data stream representing the display videos that constitute the stereoscopic video, by encoding the video signals output from cameras that capture images at first and second viewpoints arranged in the horizontal direction with an interval corresponding to the binocular interval; sub-video encoding means for generating a sub-video encoded data stream by encoding the data of the display sub-videos of the first and second viewpoints, which are superimposed on the display videos of the first and second viewpoints, respectively; and stream multiplexing means for multiplexing the video encoded data stream generated by the video encoding means with the sub-video encoded data stream generated by the sub-video encoding means.
- In the sub-video encoding means, encoding is performed so that the data of one or more objects included in the display sub-video of the first viewpoint can be decoded independently. To display depth, the one or more objects included in the display sub-video of the second viewpoint are displayed with at least one of horizontal movement and horizontal expansion/contraction applied relative to each corresponding object displayed in the display sub-video of the first viewpoint; as the display sub-video data of the second viewpoint, data indicating a left-end movement width and a right-end movement width for each of the objects is generated.
- The video playback device of the present invention visualizes a stereoscopic video by decoding stereoscopic video data including an encoded sub-video and displaying separate videos for the left and right eyes. It comprises video decoding means for decoding the display videos of the first and second viewpoints that constitute the stereoscopic video, and sub-video decoding means for decoding the data of the display sub-videos of the plural viewpoints to be displayed superimposed on those display videos. The sub-video decoding means independently decodes the data of one or more objects included in the display sub-video of the first viewpoint, reads the left-end movement width and the right-end movement width generated for each of the objects as the display sub-video data of the second viewpoint, and displays the one or more objects of the second-viewpoint display sub-video with at least one of horizontal movement and horizontal expansion/contraction applied relative to each corresponding object displayed in the first-viewpoint display sub-video.
- According to the present invention, sub-video information such as sub-pictures and graphics superimposed on a stereoscopic video can be expressed in the depth direction, which increases the freedom of expression of stereoscopic video.
- According to the present invention, it is possible to reduce the amount of data needed to give such sub-video information (sub-pictures, graphics, etc.) a stereoscopically viewable representation.
- According to the present invention, the arithmetic processing required when a video playback device renders sub-video information (sub-pictures, graphics, etc.) stereoscopically is simplified; the arithmetic processing performance required of the device can be saved, reducing cost. Furthermore, the display update speed of stereoscopically displayed video information can be improved under a given arithmetic processing performance, so that, for example, fast-forward playback becomes possible while the video information remains stereoscopically displayed.
- FIG. 1 is a block diagram showing the video encoding device of Embodiment 1 of the present invention.
- FIG. 2 is a block diagram showing the video playback device of Embodiment 1 of the present invention.
- FIGS. 3(a) and (b) are diagrams showing the relationship between parallax and depth, for explaining the principle of the present invention.
- FIGS. 4(a) and (b) are diagrams showing the images for both eyes, for explaining the principle of the present invention.
- FIGS. 5(a) and (b) are diagrams showing the quantitative relationship between parallax and distance, for explaining the principle of the present invention.
- FIGS. 6(a) and (b) are diagrams showing an example of the subtitle arrangement used in Embodiment 1 of the present invention.
- FIGS. 7(a) and (b) are diagrams showing the structure of the binocular images of the subtitles shown in FIGS. 6(a) and (b).
- FIG. 8 is a diagram showing the data structure of the encoded video.
- FIGS. 9(a) and (b) are diagrams showing an example of the subtitle arrangement used in Embodiment 2 of the present invention.
- FIGS. 10(a) and (b) are diagrams showing the structure of the binocular images of the subtitles shown in FIGS. 9(a) and (b).
- FIGS. 11(a) and (b) are diagrams showing an example of the subtitle drawing method used in Embodiment 3 of the present invention.
- FIGS. 12(a) and (b) are diagrams showing the relationship between parallax and height, for explaining the principle of the present invention.
- FIG. 1 shows the configuration of a system including the video encoding apparatus according to Embodiment 1 of the present invention.
- This apparatus digitally encodes a captured stereoscopic video (hereinafter referred to as the main video), and also generates and digitally encodes a sub-video to be superimposed on the stereoscopic video during playback, that is, subtitles and graphics such as menus of choices for operating user equipment, samples, and guidance. It then generates a video data stream by multiplexing the two sets of encoded data.
- the main video is a video that can be three-dimensionally represented, and the sub-video superimposed and displayed on the main video is generated and encoded so that it can be represented in the depth direction and is stereoscopically viewed.
- the transmission / recording means 31 also forms part of the video encoding device.
- The main video is shot using two cameras, a left-eye video camera 11 and a right-eye video camera 12, which are horizontally spaced by a distance corresponding to the interval between the eyes and which shoot simultaneously.
- the captured video signal of each camera is input to the video data encoder 21, where it is digitally encoded to form a main video encoded data stream.
- a plurality of methods are known for digital encoding of stereoscopic video, but the present invention is not limited to a specific method.
- the sub-picture is created by the graphics generator 22 according to the content creator's specifications and output as digitized sub-picture data.
- This sub-video data includes, in addition to the image data of the sub-pictures and graphics displayed as the sub-video, information on the position in the depth direction at which each object included in the sub-video should be displayed.
- The sub-video data created by the graphics generator 22 is created in the shape seen from the viewer's viewpoint. For example, when the object to be displayed is a square, if it is not equidistant from the viewpoint but tilted in the depth direction, it appears as a trapezoid or an irregular quadrilateral; the data is created as the object appears, and position information describing how it is arranged in the depth direction is attached to it.
- the sub-picture data is input to the graphics data encoder 23 and encoded as a left-eye sub-picture and a right-eye sub-picture.
- When the left-eye and right-eye viewpoints are generalized as the first and second viewpoints, with, for example, the left eye assigned as the reference first viewpoint, the data of the objects included in the left-eye sub-video are encoded so that they can be decoded and displayed independently.
- the sub video of the second viewpoint is generated from the sub video of the first viewpoint as a reference.
- Here the second viewpoint is the right eye. Because of binocular parallax, an object included in the right-eye sub-video can express a sense of depth simply by being moved and expanded or contracted horizontally relative to the object displayed in the left-eye sub-video. Therefore, to represent the right-eye sub-video, it is only necessary to generate, as data associated with each object, movement widths indicating how far the display positions of its left end and right end should be shifted from their horizontal positions in the left-eye display; these are held, for example, as part of the data stream. This principle is described in detail later.
- the sub-picture encoded data generated by the graphics data encoder 23 in this way is input to the data stream multiplexer 25 together with the main video encoded data generated by the video data encoder 21.
- the data stream multiplexer 25 multiplexes two pieces of encoded data into a multiplexed encoded data stream.
- The main video and sub-video, which are time-designated so as to be displayed superimposed on one screen at the same time, are combined into a multiplexed encoded data stream that can be displayed without failures such as data underrun.
- the multiplexed encoded data stream is input to the data stream transmission / storage means 30.
- When the data stream transmission/storage means 30 has a transmission function, the transmission function represented by the transmission/recording means 31 modulates the multiplexed encoded data stream for transmission and sends it to the reception function, represented as the reception/reproduction means 33, at a remote location. Alternatively, the multiplexed encoded data stream is modulated for storage and recorded on the recording medium 32. It is sufficient for the apparatus to have either the transmission function or the recording function.
- FIG. 2 shows a configuration of a system including the video reproduction device according to the first embodiment of the present invention.
- This apparatus demodulates and decodes the multiplexed encoded data stream that was encoded by the video encoding apparatus described with reference to FIG. 1 and input to the data stream transmission/storage means 30, and reproduces it as a stereoscopic main video with a superimposed sub-video capable of expression in the depth direction. It comprises a data stream demultiplexer 45, a video data decoder (video decoding means) 41, a graphics data decoder (sub-video decoding means) 43, a right-eye video/graphics display combining means 52, a left-eye video/graphics display combining means 51, and a stereoscopic display 60.
- reception / reproduction means 33 also forms part of the video reproduction apparatus.
- When the data stream transmission/storage means 30 has a reception function, the reception function represented as the reception/reproduction means 33 receives and demodulates the multiplexed encoded data stream sent by the transmission function and inputs it to the data stream demultiplexer 45. Alternatively, the reproduction function reads and demodulates the multiplexed encoded data stream stored on the recording medium 32 and inputs it to the data stream demultiplexer 45. It is sufficient for the apparatus to have either the reception function or the reproduction function.
- the data stream demultiplexer 45 refers to the attribute information added to the stream from the multiplexed encoded data stream and separates and distributes the main video encoded data stream and the sub video encoded data stream.
- the main video encoded data stream is input to the video data decoder 41, and the sub video encoded data stream is input to the graphics data decoder 43.
- the video data decoder 41 decodes the main video encoded data and reproduces the main video data for the left eye and the right eye.
- the binocular video data thus decoded is sent to the left-eye video / graphics display combining unit 51 and the right-eye video / graphics display combining unit 52, respectively.
- the decoding in the video data decoder 41 is not limited to a specific video encoding method in the present invention, and any decoding corresponding to the method encoded by the video data encoder 21 may be used.
- the graphics data decoder 43 decodes the sub-picture encoded data stream and reproduces it as left-eye and right-eye sub-picture data.
- The decoding in the graphics data decoder 43 corresponds to the method used for encoding by the graphics data encoder 23. As described above, when the left eye is assigned as the reference first viewpoint, the object data included in the left-eye sub-video can be decoded independently, and is output as the left-eye sub-video data as it is.
- An object included in the right-eye sub-video of the second viewpoint is displayed moved horizontally and expanded or contracted relative to the object displayed in the left-eye sub-video. The decoder reads from the data the movement widths indicating how far to shift from the horizontal positions of the left-eye display and calculates the display positions. In this way, the stereoscopic effect caused by binocular parallax can be reproduced.
- Since the video signal is expressed by sequentially scanning horizontal scanning lines from the top to the bottom of the screen, it is extremely easy to move the display content horizontally on each scanning line representing the object. Horizontal expansion and contraction can likewise be realized by simple arithmetic, merely by varying the movement width for each point on a scanning line according to its position.
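The scanline operation described above can be sketched as follows. This is a minimal illustration, not an algorithm prescribed by the patent: the linear interpolation of the movement width between the object's two ends is an assumption of this sketch, and the function name is hypothetical.

```python
def shift_scanline(pixels, left_shift, right_shift, bg=0):
    # Reproduce one object's scanline for the right eye from its left-eye
    # scanline.  Each pixel is moved left by a width interpolated linearly
    # between the left-end and right-end movement widths, which performs
    # both the horizontal move and the stretch/shrink in a single pass.
    # (Linear interpolation is an assumption of this sketch.)
    w = len(pixels)
    out = [bg] * w
    for x, p in enumerate(pixels):
        t = x / (w - 1) if w > 1 else 0.0
        shift = left_shift + t * (right_shift - left_shift)
        nx = round(x - shift)  # new horizontal position on the same line
        if 0 <= nx < w:
            out[nx] = p
    return out
```

With equal end widths this is a pure horizontal move; with unequal widths the object is also stretched or compressed.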
- the binocular sub-picture data thus decoded is sent to the left-eye video / graphics display combining unit 51 and the right-eye video / graphics display combining unit 52, respectively.
- the left-eye video / graphics display combining unit 51 and the right-eye video / graphics display combining unit 52 superimpose the restored main video and sub-video in accordance with predetermined specifications, respectively, and display the video display signal on the stereoscopic display 60.
- the content creator adjusts the depth feeling due to the stereoscopic display of the main video and the depth feeling due to the stereoscopic display of the sub-picture when authoring.
- The sub-video can also be expressed with color information and transparency information, increasing the transparency of more distant parts so that they mix with the main video behind them; in this way the depth ordering relative to the main video can be expressed as well.
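The transparency mixing just described amounts to per-pixel alpha compositing; a minimal sketch follows (the blending formula and names are assumptions, since the patent does not fix one):

```python
def blend_pixel(main_rgb, sub_rgb, alpha):
    # Composite one sub-video pixel over the main video pixel behind it.
    # alpha in [0, 1] is the sub-pixel's opacity; giving more distant
    # parts of the sub-video a lower alpha (higher transparency) lets
    # them mix with the main video, expressing the depth ordering.
    return tuple(round(alpha * s + (1 - alpha) * m)
                 for s, m in zip(sub_rgb, main_rgb))
```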
- FIG. 3 (a) and 3 (b) are diagrams showing the relationship between parallax and depth, which is the principle of the present invention.
- FIG. 3A shows a plan view of the entire space to be imaged, including the camera, that is, the viewer's viewpoint
- FIG. 3B shows a side view thereof.
- As shown in FIGS. 3(a) and 3(b), the x-axis is the horizontal direction (positive to the right), the z-axis is the depth direction (positive toward the back), and the y-axis is the vertical direction (positive downward).
- the center of the line of sight of the left eye L and the right eye R is both directed to an infinite point ahead of the same direction as indicated by arrows (solid line and broken line) in the figure. Further, as shown in FIG. 3A, the horizontal field of view of the left eye L extends in the depth direction (solid line), and the horizontal field of view of the right eye R extends in the depth direction (broken line). ).
- Objects L1, L2, L3, L4, L5, L6, and L7 are located at distances d1, d2, d3, d4, d5, d6, and d7 on the line of sight of the left eye L, respectively. Further, there are objects R1, R2, R3, R4, R5, R6, and R7 at positions of distances d1, d2, d3, d4, d5, d6, and d7 on the line of sight of the right eye R, respectively.
- The objects L1, L2, L3, L4, L5, L6, and L7 are white bars extending upward from the vertical center, and the farther away they are, the longer they are. The objects R1, R2, R3, R4, R5, R6, and R7 are black bars extending downward from the vertical center, likewise longer the farther away they are. Consider how each of these objects looks when viewed from the viewpoints of the left eye L and the right eye R.
- FIGS. 4(a) and 4(b) are diagrams showing an example of the binocular images illustrating the principle of the present invention. FIG. 4(a) shows the image displayed for the left eye L.
- Object pairs equidistant from the viewpoint, L1 and R1, and L2 and R2, are placed at the same physical interval; in the image seen from the viewpoint, however, by perspective, the closer a pair is, the wider its interval is displayed.
- In the image of the left eye L, the objects L1, L2, L3, L4, L5, L6, and L7 are all visible at the horizontal center, while the objects R1, R2, R3, R4, R5, R6, and R7 appear shifted to the right of the horizontal center by ΔR1, ΔR2, ΔR3, ΔR4, ΔR5, ΔR6, and ΔR7, respectively.
- the image of the right eye R shown in FIG. 4B is generated using the image of the left eye L.
- The objects L1, L2, L3, L4, L5, L6, and L7, which the left eye L sees at the horizontal center, appear to the right eye R moved to the left by ΔL1, ΔL2, ΔL3, ΔL4, ΔL5, ΔL6, and ΔL7, respectively, compared with their horizontal positions in the left-eye image. The objects R1, R2, R3, R4, R5, R6, and R7, which the left eye L sees toward the right, all appear to the right eye R overlapping the horizontal center; that is, compared with their horizontal positions in the left-eye image, they appear moved to the left by ΔR1, ΔR2, ΔR3, ΔR4, ΔR5, ΔR6, and ΔR7.
- The movement width of an object's horizontal position is larger the closer the object is and smaller the farther it is, and a point at infinity stays in the same position without moving. If the horizontal movement width corresponding to each position is specified for each object, the right-eye image can be created from the left-eye image so as to express the binocular parallax and reproduce the sense of distance in the depth direction. That is, an image that can be viewed stereoscopically can be generated.
- FIGS. 5(a) and 5(b) are diagrams showing the quantitative relationship between parallax and distance, for explaining the principle of the present invention.
- FIG. 5A shows a plan view of the entire space to be imaged, including the viewer's viewpoint, as in FIG. 3A.
- the definition of the x-axis and the z-axis, the viewpoints of the left eye and right eye, the line of sight, and the field of view are the same.
- FIG. 5B shows an image for the left eye.
- 2 ⁇ is the horizontal angle of view of the camera
- d is the distance from the camera
- a is the binocular distance
- X0 is the horizontal viewing width
- Δx is the binocular parallax
- Px is the number of pixels in the horizontal direction of the screen, and ΔPx is the number of pixels corresponding to the binocular parallax Δx
- The relative length (referred to as the binocular parallax) Δx of the binocular interval a with respect to the horizontal visual field width X0, on a vertical plane at distance d in the depth direction from binocular cameras each having a horizontal angle of view 2θ, is, since X0 = 2d·tanθ:

  Δx = a / X0 = a / (2d·tanθ) … (1)

- Converting the binocular parallax Δx caused by the binocular interval a into the number of pixels ΔPx in the horizontal direction on the screen of the camera or display, ΔPx is as follows:

  ΔPx = Px · Δx = (Px · a) / (2d·tanθ) … (2)

- Conversely, from equation (2), the distance d in the depth direction is calculated from the horizontal angle of view 2θ of the camera, the binocular interval a, and the number of pixels ΔPx in the horizontal direction on the display screen corresponding to the binocular parallax Δx, as follows:

  d = (Px · a) / (2·ΔPx·tanθ) … (3)
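Equations (2) and (3) can be written directly in code. In this sketch the function names are illustrative, theta is half the horizontal angle of view (half the 2θ of the text) in radians, and a and d are assumed to share the same length unit:

```python
import math

def parallax_pixels(px, a, d, theta):
    # Eq. (2): on-screen binocular parallax in pixels for a part at
    # depth d, given a screen px pixels wide, binocular interval a,
    # and half horizontal angle of view theta (radians).
    return px * a / (2.0 * d * math.tan(theta))

def depth_from_parallax(px, a, dpx, theta):
    # Eq. (3): depth d recovered from a parallax of dpx pixels.
    return px * a / (2.0 * dpx * math.tan(theta))
```

The two functions are inverses of each other in d and ΔPx, reflecting that equation (3) is simply equation (2) solved for d.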
- the horizontal movement width relative to the position of the object when the image of the right eye R is generated using the image of the left eye L can be calculated quantitatively and can be specified in units of pixels.
- By specifying the calculated movement width ΔPx for each object, or for predetermined portions of each object, it becomes possible to create an image that expresses the sense of distance in the depth direction during playback.
- The distance d in the depth direction obtained by equation (3) depends only on the value of ΔPx of each part of the object. Therefore, for example, when two objects are arranged so as to overlap, it is possible to determine from the magnitudes of ΔPx which one is displayed in front in the overlapped portion: with opaque objects the object behind is hidden, so in the overlapped part the places where ΔPx is larger are displayed in front and the places where ΔPx is smaller are hidden.
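The overlap rule above reduces to comparing ΔPx values at each overlapped position; an illustrative sketch (names hypothetical):

```python
def visible_pixel(candidates):
    # candidates: (pixel_value, dpx) pairs for the opaque objects that
    # overlap at one screen position.  By eq. (3), a larger parallax
    # dpx means a smaller depth d, so the candidate with the largest
    # dpx is nearest and is displayed; the others are hidden behind it.
    return max(candidates, key=lambda c: c[1])[0]
```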
- The graphics generator 22 creates the left-eye image data, the depth-direction distance for each predetermined portion of the image, the viewing angle of the camera, and the screen size, and outputs them to the graphics data encoder 23.
- The graphics data encoder 23 calculates ΔPx using equation (2) for each predetermined portion of the image for which a distance is designated, and generates and encodes the image of the right eye R.
- FIGS. 6A and 6B show an example of caption arrangement used in the first embodiment.
- a plan view and a side view of the entire space including the viewer's viewpoint and the subject of the video are shown.
- the definition of x-axis, y-axis, and z-axis, left eye and right eye viewpoint, line of sight, and visual field range are the same.
- A rectangular subtitle [A] is arranged vertically at distance d5 from the viewpoint, and a rectangular subtitle [B] is arranged at a position extending from distance d5 to d7, with its right side inclined toward the back as viewed from the viewpoint. Vertically, the subtitle [A] is arranged in the upper part of the center and the subtitle [B] in the lower part of the center.
- As in FIGS. 3(a) and 3(b), consider how these two subtitles are seen from the viewpoints of both eyes, that is, how they should be displayed on the display screen.
- FIGS. 7A and 7B show the structure of the binocular image of the caption shown in FIGS. 6A and 6B.
- In the image of the left eye L shown in FIG. 7(a), the vertically placed rectangular subtitle [A] looks like a rectangle, and the rectangular subtitle [B] tilted toward the back looks like a trapezoid. The horizontal positions of the two subtitles are the same, x1, on the left side. The right sides coincide in the real objects, but in the image the right side of the subtitle [A] at distance d5 is x2, while that of the subtitle [B] at distance d7 is x3.
- In the image of the right eye R shown in FIG. 7(b), the vertically placed rectangular subtitle [A] again looks like a rectangle, and the rectangular subtitle [B] tilted toward the back looks like a trapezoid. The horizontal positions of the two subtitles are the same, (x1-Δx1), on the left side. The right sides coincide in the real objects, but in the image the right side of the subtitle [A] at distance d5 is (x2-Δx1), and that of the subtitle [B] at distance d7 is (x3-Δx3).
- Therefore, to generate the right-eye image from the left-eye image, the subtitle [A] need only be moved to the left by Δx1 as a whole, while for the subtitle [B] the left side is moved to the left by Δx1 and the right side by Δx3.
- Here Δx1 > Δx3: the movement width Δx3 is smaller because the right side is deeper, that is, at a larger distance.
- The width of the subtitle [A] is (x2-x1) for both the left eye L and the right eye R, but the width of the subtitle [B], (x3-x1) for the left eye L, becomes (x3-x1)-(Δx3-Δx1) for the right eye R, which is longer because of the binocular parallax, since Δx1 > Δx3.
- When the depth-direction distance differs between the left and right parts of an object to be displayed, the graphics data encoder 23, at encoding time, calculates the movement widths Δx1 and Δx3 for moving the left-end and right-end positions of the object and stores them in the sub-video encoded data, so that the right-eye image can be expressed on the basis of the left-eye image. Using this sub-video encoded data, the graphics data decoder 43, at decoding time, can easily reproduce the object displayed in the right-eye image from the left-eye image data and the movement widths Δx1 and Δx3.
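A worked sketch of this encoder-side calculation for a subtitle tilted in depth like [B], using equation (2) to obtain the two edge movement widths; the concrete parameter values and function names are illustrative, not taken from the patent:

```python
import math

def pixel_shift(px, a, d, theta):
    # Eq. (2): movement width in pixels for a part at depth d
    # (px: screen width in pixels, a: binocular interval,
    #  theta: half the horizontal angle of view, radians).
    return px * a / (2.0 * d * math.tan(theta))

def right_eye_edges(x1, x3, d_left, d_right, px, a, theta):
    # The left-eye subtitle spans x1..x3; its left edge lies at depth
    # d_left and its right edge at d_right.  Each edge of the right-eye
    # image moves left by the shift computed at that edge's own depth
    # (the dx1 and dx3 of the description).
    dx1 = pixel_shift(px, a, d_left, theta)
    dx3 = pixel_shift(px, a, d_right, theta)
    return x1 - dx1, x3 - dx3
```

Because the nearer left edge moves further left than the deeper right edge (Δx1 > Δx3), the subtitle appears slightly wider to the right eye, matching the (x3-x1)-(Δx3-Δx1) width derived above.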
- FIG. 8 shows the data structure of the encoded video in the example of FIGS. 6(a) and 6(b), that is, the overall structure of the video data stream that encodes the video.
- This entire video data stream is referred to as “stereoscopic video data / graphics data stream” VGS.
- The stereoscopic video data/graphics data stream VGS is digitized and encoded after being divided into predetermined “coding units” UOC. Multiplexing is performed in the data stream multiplexer 25 so that “video data” VDD consisting of main video encoded data and “graphics data” GRD consisting of sub-video encoded data are included in one coding unit UOC.
- While the data of one coding unit UOC is being reproduced and displayed, the data of the next coding unit UOC is read out, so that the display of the next coding unit UOC can follow without interruption once the display of the current one finishes.
- The data arrangement shown in FIG. 8 is only an example; for instance, when a large buffer memory is provided for the graphics data GRD, the graphics data GRD does not necessarily have to be present in every coding unit UOC.
- The graphics data GRD stores the data of all objects to be displayed as sub-pictures; its structure is shown in the figure.
- The "number of objects" NOB indicates the number of objects included in this coding unit UOC of the graphics data GRD.
- The graphics data of object #1 through object #N are stored in "object #1 graphics data" GRD-1 through "object #N graphics data" GRD-N.
- The "left-eye display graphics data" 104 represents the digitized data of the sub-picture for the left eye, and the "right-eye display graphics data" 106 represents the right-eye image with the left-eye sub-picture as the reference.
- The "right-eye display graphics data" 106 consists of a "left end shift width" 108 and a "right end shift width" 110. Both are the movement-width data needed, as described above, to reproduce the object displayed in the right-eye image from the object displayed in the left-eye image; in the examples of FIGS. 6 (a) and (b) they correspond to Δx1 and Δx3.
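As a rough sketch, the per-object layout of FIG. 8 could be modeled as below. The class and field names are illustrative only; the patent describes the fields but prescribes neither field sizes nor any in-memory representation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RightEyeDisplayGraphicsData:   # element 106
    left_end_shift: float            # "left end shift width" 108 (Dx1)
    right_end_shift: float           # "right end shift width" 110 (Dx3)

@dataclass
class ObjectGraphicsData:            # GRD-1 ... GRD-N
    left_eye_data: bytes             # "left-eye display graphics data" 104
    right_eye: RightEyeDisplayGraphicsData

@dataclass
class GraphicsData:                  # GRD inside one coding unit UOC
    objects: List[ObjectGraphicsData] = field(default_factory=list)

    @property
    def number_of_objects(self) -> int:   # NOB
        return len(self.objects)

grd = GraphicsData([
    ObjectGraphicsData(b"[A]", RightEyeDisplayGraphicsData(30.0, 30.0)),
    ObjectGraphicsData(b"[B]", RightEyeDisplayGraphicsData(30.0, 10.0)),
])
```

For a vertically placed object such as subtitle [A] the two shift fields hold the same value; for a tilted object such as subtitle [B] they differ.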
- FIGS. 9(a) and 9(b) show an example of the caption arrangement used in Embodiment 2.
- As in FIGS. 6(a) and 6(b), a plan view and a side view of the entire space to be imaged, including the viewer's viewpoint, are shown.
- the arrangement of the subtitle [A] is the same as the example shown in FIGS.
- A rectangular subtitle [C] is arranged at a position extending from distance d5 to d7, with its upper side tilted toward the back as viewed from the viewpoint.
- As can be seen from the side view, the subtitle [A] is arranged above the center and the subtitle [C] below the center.
- Consider how these two subtitles are seen from the viewpoints of both eyes, that is, how they should be displayed on the display screen.
- FIGS. 10A and 10B show the configuration of the binocular images of the caption shown in FIGS. 9A and 9B.
- As shown in FIG. 10 (a), with the left eye L the vertically placed rectangular subtitle [A] appears as a rectangle, while the rectangular subtitle [C] tilted toward the back appears as a trapezoid.
- The horizontal positions of the two captions are the same, x1 and x2, at the left and right ends of their respective lower sides at distance d5.
- As for the upper side, the caption [A] at distance d5 has the same positions x1 and x2 as its lower side, but for the caption [C], whose upper side is at distance d7, the position of the upper-left end is x4 and that of the upper-right end is x3.
- With the right eye R, the vertically placed rectangular subtitle [A] again appears as a rectangle, and the rectangular subtitle [C] tilted toward the back appears as a trapezoid.
- The horizontal positions of the two captions coincide at the lower-left and lower-right corners, (x1-Δx1) and (x2-Δx1).
- The subtitle [A] at distance d5 has its upper-left end at (x1-Δx1) and its upper-right end at (x2-Δx1), the same as its lower side, but the subtitle [C], whose upper side is at distance d7, has its upper-left end at (x4-Δx3) and its upper-right end at (x3-Δx3).
- Therefore, to generate the right-eye image, the subtitle [A] may simply be moved to the left by Δx1 as a whole. As for the subtitle [C], its lower-left and lower-right ends may both be moved to the left by Δx1, and its upper-left and upper-right ends both by Δx3.
- Here again, Δx1 > Δx3: the movement width Δx3 is smaller because the upper side is deeper, at a greater distance.
- the width of the caption [A] is (x2-x1) for both the left eye L and the right eye R, and does not change.
- The shape of the subtitle [C] is deformed between the left-eye and right-eye views because of binocular parallax, but its width is (x2-x1) along the lower side and (x3-x4) along the upper side for both the left eye L and the right eye R; the parts at the same distance do not change.
- When the distance in the depth direction differs between the upper and lower parts of an object to be displayed, the graphics data encoder 23 calculates, at encoding time, the movement widths Δx1 and Δx3 for moving the lower-end and upper-end positions of the object and stores them in the sub-picture encoded data; the right-eye image can then be expressed with the left-eye image as the reference. Using this sub-picture encoded data, the graphics data decoder 43 can, at decoding time, easily reproduce the object displayed in the right-eye image from the left-eye image data and the movement widths Δx1 and Δx3.
- Combined with the earlier example in which the depth differs between the left and right parts of an object, this can be summarized as follows: when the distance in the depth direction differs among the upper, lower, left, and right parts of an object to be displayed, the graphics data encoder 23 calculates, at encoding time, the respective movement widths for moving the positions of the object's upper-left, lower-left, upper-right, and lower-right ends and stores them in the sub-picture encoded data; the right-eye image can then be expressed with the left-eye image as the reference. Using this sub-picture encoded data, the graphics data decoder 43 can easily reproduce the object to be displayed in the right-eye image from the left-eye image data and the movement widths of the four corner positions.
- FIG. 11 shows the data structure of the encoded video in the examples of FIGS. 9 (a) and 9 (b). Most of the figure is the same as FIG. 8, so only the differing parts are described.
- In FIG. 8, the "right-eye display graphics data" 106 simply consisted of the two fields "left end shift width" 108 and "right end shift width" 110.
- In FIG. 11, to match the examples of FIGS. 9(a) and 9(b), the structure is more detailed: the "left end shift width" 108 is subdivided into the two fields "upper left end shift width" 112 and "lower left end shift width" 114, and the "right end shift width" 110 into the two fields "upper right end shift width" 116 and "lower right end shift width" 118.
- All of these are the movement-width data needed, as described above, to reproduce the object displayed in the right-eye image from the object displayed in the left-eye image.
- To apply this structure to the examples of FIGS. 6(a) and (b), the same value is simply placed in the "upper left end shift width" and the "lower left end shift width", and likewise the same value in the "upper right end shift width" and the "lower right end shift width".
- To apply it to the examples of FIGS. 10(a) and (b), the same value is set for the "upper left end shift width" and the "upper right end shift width", and the same value for the "lower left end shift width" and the "lower right end shift width". More generally, the graphics data encoder 23 calculates an appropriate value for each of the four fields according to the inclination of the object and sets it in each field.
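The patent leaves the computation of the four field values to the encoder. Under a simple parallel stereo-camera model (an assumption of this sketch, with baseline `b` and focal length `f` in pixel units, neither taken from the patent), the disparity of a point at depth `z` is b·f/z, which indeed shrinks with distance and so reproduces Δx1 > Δx3 for d5 < d7:

```python
def shift_width(z, baseline=65.0, focal=1000.0):
    """Horizontal shift (left-eye x minus right-eye x) of a point at depth z,
    under an assumed parallel stereo-camera model: disparity = b*f/z."""
    return baseline * focal / z

def corner_shift_fields(z_ul, z_ll, z_ur, z_lr):
    """Fill the four FIG. 11 fields from the depths of the object's corners."""
    return {"upper_left_shift": shift_width(z_ul),
            "lower_left_shift": shift_width(z_ll),
            "upper_right_shift": shift_width(z_ur),
            "lower_right_shift": shift_width(z_lr)}

# Subtitle [C] of FIG. 9: lower side at d5, upper side tilted back to d7
fields = corner_shift_fields(z_ul=6500.0, z_ll=5000.0, z_ur=6500.0, z_lr=5000.0)
```

For this FIG. 10-style tilt the upper pair of fields share one value and the lower pair another, exactly as the text prescribes; a FIG. 6-style tilt would instead equalize the left pair and the right pair.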
- FIGS. 12(a) and 12(b) are diagrams illustrating an example of the caption drawing method used in Embodiment 3 of the present invention.
- In FIG. 10 above, a method for generating the right-eye image from the left-eye image was shown, taking the subtitle [C] as an example of a graphic arranged inclined in the depth direction; here a different representation method is shown.
- In that example, the left-eye image is a trapezoid, and the x coordinates giving the horizontal positions of its vertices are x1, x2, x3, and x4.
- To simplify the positional representation of the right-eye image relative to the left-eye image, a rectangular drawing area is secured and the trapezoidal subtitle [C] is drawn inside it; in the right-eye image, the image on the drawing area is then deformed by deforming the rectangular drawing area as a whole. In the figure, this rectangular drawing area, whose x coordinates run from x1 to x2, is shown surrounded by an alternate long and short dash line, and the lower side of the rectangle is aligned with the lower side of the caption [C].
- As shown in FIG. 12(b), when the image for the right eye R is created based on the image for the left eye L, the drawing area, shown as the quadrilateral Qa surrounded by the alternate long and short dash line, is moved and deformed to the appropriate position.
- The trapezoidal subtitle [C] drawn on this area then matches the shape shown in FIG. 10(b).
- The horizontal positions of the corners of the drawing area, which was rectangular in the image for the left eye L, become (x1-Δx1) at the lower-left corner, (x2-Δx2) at the lower-right corner, (x1-Δx11) at the upper-left corner, and (x2-Δx12) at the upper-right corner. In this example, Δx1 and Δx2 are equal, and Δx11 and Δx12 are also equal.
- For the interior of the object, the drawing position is determined by calculating the shift widths of the left and right ends of the drawing area for each horizontal scanning line. If the rectangular drawing area is thus defined as one object and the horizontal shift width of each of its vertices is specified in order to generate the right-eye image from the left-eye image, it becomes easy to express where the target object exists, and hence easy to generate the image.
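The per-scanline computation described here can be sketched as linear interpolation of the edge shifts between the top and bottom of the drawing area. This is a minimal illustration; the variable names and coordinate conventions are assumptions, not from the patent.

```python
def scanline_edges(y, y_top, y_bottom, x_left, x_right,
                   dx_top_left, dx_bottom_left, dx_top_right, dx_bottom_right):
    """Right-eye left/right edge positions of the rectangular drawing area on
    horizontal scanning line y, interpolating the four vertex shift widths
    (Dx11, Dx1, Dx12, Dx2 in the text)."""
    t = (y - y_top) / (y_bottom - y_top)        # 0 at top edge, 1 at bottom edge
    dl = dx_top_left + t * (dx_bottom_left - dx_top_left)
    dr = dx_top_right + t * (dx_bottom_right - dx_top_right)
    return x_left - dl, x_right - dr            # right-eye view shifts left

# Halfway down a FIG. 12-style area: shifts are midway between top and bottom
l, r = scanline_edges(150, 100, 200, 100.0, 400.0, 10.0, 30.0, 10.0, 30.0)
```

Scanning every line from y_top to y_bottom with this function traces out the deformed quadrilateral into which the rectangular area is mapped.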
- To hold the shift widths of these four vertices, the structure shown in FIG. 11 described above can be applied.
- That is, the "upper left end shift width" is Δx11, the "lower left end shift width" is Δx1, the "upper right end shift width" is Δx12, and the "lower right end shift width" is Δx2.
- The gist of the present invention is that, when displaying objects located at different depths in the depth direction, the positions of the four vertices of the right-eye object are calculated from the positions of the four vertices, the upper-left, lower-left, upper-right, and lower-right ends, of the left-eye object. An example has been described in which a field for setting each shift width is provided so that the positions of the four vertices of the right-eye object can be calculated easily, but the expression method is not limited to a "shift width".
- For example, the positions of the "upper left corner" and "lower left corner" of the right-eye object may be expressed by "shift widths" as described above, while for the "upper right corner" and "lower right corner", instead of "shift widths", a field may be provided that sets the ratio of the horizontal lengths of the upper and lower sides of the object.
- FIGS. 13(a) and 13(b) are diagrams showing the relationship between parallax and height, which underlies the present invention.
- Like FIG. 3, which shows the relationship between parallax and depth underlying the present invention, FIGS. 13(a) and 13(b) are a plan view (a) and a side view (b) of the entire space to be imaged, but the arranged object differs.
- Assuming the general case, a vertical bar-like object E is placed at a distance d6 in the depth direction, at a position off the center of the line of sight.
- The images taken by the left-eye and right-eye cameras at the time of imaging are displayed, at playback time, on a screen placed, for example, at the distance d0 indicated as "display screen" in the figure.
- When this screen is viewed from the viewer's viewpoint, the left eye L is shown the image captured by the left-eye camera and the right eye R the image captured by the right-eye camera.
- The lines of sight when looking at both ends of the displayed object E from the left eye L and the right eye R are shown in the plan view (a) and the side view (b), respectively.
- The object E is displayed at the positions where these lines of sight intersect the display screen at distance d0.
- It can be seen that the directions of the lines of sight of the two eyes looking at the upper end of the object E are shifted horizontally by parallax in the plan view (a), but coincide in the vertical direction in the side view (b).
- Therefore, the object E displayed for the left eye and for the right eye may have the same height in both images.
- As described above, in the video encoding device and method of the present invention, a sub-picture can be configured to include a plurality of objects, and for each object a shift width for right-eye display is set for the horizontal display positions of its left and right ends and held in the sub-picture data.
- In the video reproducing device and method of the present invention, the sub-picture data is superimposed as-is on the left-eye video and displayed as the left-eye sub-picture, while for the right-eye sub-picture the horizontal display position of the sub-picture data is shifted by the predetermined width and superimposed on the right-eye video.
- Further, in the video recording medium and video data stream of the present invention, stereoscopic video data including the sub-picture data encoded as described above is held in each of them.
- In the embodiments above, each "shift width" is held as a fixed value in the data stream, but the playback device may also change the "shift width" of the read data stream with an adjustment function added to the playback device, thereby changing the distance in the depth direction at which the corresponding object is displayed. An object can thus be displayed at a distance desired by the user.
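One way a playback device could realize the adjustment function mentioned here is to scale every stored shift width by a user-controlled factor before compositing. This is a hypothetical sketch; the patent does not specify how the adjustment is implemented, and the field names are illustrative.

```python
def adjust_depth(shift_fields, depth_factor):
    """Scale all shift widths read from the data stream by a user-chosen
    factor: factor > 1 exaggerates the depth effect, factor < 1 flattens it,
    and factor == 0 removes the horizontal offset entirely."""
    return {name: width * depth_factor for name, width in shift_fields.items()}

# Fixed values held in the stream for one object (illustrative)
stored = {"left_end_shift": 30.0, "right_end_shift": 10.0}
flattened = adjust_depth(stored, 0.5)   # halve the perceived depth offset
```

Because the shift widths are the only depth-dependent quantities in the sub-picture data, a single multiplication per field is enough to move an object to the distance the user prefers.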
- The present invention is also applicable to the case where there is no main video and only the sub video exists.
- That is, the invention can be applied as a general encoding and playback device and method for making graphics expressible in the depth direction and viewable stereoscopically while reducing the amount of data, simplifying the arithmetic processing for stereoscopic representation, saving the arithmetic processing performance required of the video playback device to reduce cost, or improving the update speed of the stereoscopic video display under a given arithmetic processing performance.
- In that case, the left-eye video camera 11, the right-eye video camera 12, the video data encoder 21, and the data stream multiplexer 25 become unnecessary in the configuration of the video encoding device shown in FIG. 1. Similarly, in FIG. 2, the video data decoder 41, the data stream demultiplexer 45, the left-eye video/graphics display combining unit 51, and the right-eye video/graphics display combining unit 52 are no longer needed.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Library & Information Science (AREA)
- Human Computer Interaction (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
To display live-action video stereoscopically, two cameras are used at shooting time, placed horizontally apart by a distance corresponding to the interval between the two eyes, and shooting is performed simultaneously. At playback time, only the video shot by the left-eye camera is displayed so as to be visible to the left eye, and only the video shot by the right-eye camera to the right eye, so that parallax is perceived with both eyes and the video is recognized as stereoscopic. Various techniques have been disclosed so far: methods for showing a dedicated image to each eye, methods for making the resolution of such images appear higher, methods for reducing the amount of data representing such images, and so on.
It is also an object of the invention to reduce the amount of data needed to give additional video information, such as sub-pictures and graphics, a representation that can be viewed stereoscopically.
A further object is to simplify, in a video reproducing device, the arithmetic processing required to realize stereoscopic representation of video information such as sub-pictures and graphics. At the same time, it is an object to reduce cost by saving the arithmetic processing performance required of the video reproducing device, and to improve the update speed of stereoscopically displayed video information under a given arithmetic processing performance.
The video encoding device of the present invention generates video data to be reproduced by a video reproducing device that visualizes stereoscopic video by displaying separate videos for the left and right eyes, and comprises:
video encoding means for encoding video signals output by cameras that capture images from first and second viewpoints arranged horizontally at an interval corresponding to the interocular distance, thereby generating an encoded video data stream representing the display video constituting the stereoscopic video;
sub-video encoding means for encoding data of first- and second-viewpoint display sub-videos to be displayed superimposed on the first- and second-viewpoint display videos constituting the stereoscopic video, thereby generating an encoded sub-video data stream; and
stream multiplexing means for multiplexing the encoded video data stream generated by the video encoding means and the encoded sub-video data stream generated by the sub-video encoding means.
The device is characterized in that the sub-video encoding means encodes the data of one or more objects included in the first-viewpoint display sub-video so that they can be decoded independently,
expresses depth by displaying the one or more objects included in the second-viewpoint display sub-video after performing at least one of horizontal movement and expansion/contraction with respect to each corresponding object displayed as the first-viewpoint display sub-video, and
generates, as the data of the second-viewpoint display sub-video, data indicating a left-end movement width and a right-end movement width for each of the objects.
The video reproducing device of the present invention decodes stereoscopic video data including encoded sub-video and visualizes stereoscopic video by displaying separate videos for the left and right eyes, and comprises:
video decoding means for decoding the first- and second-viewpoint display videos constituting the stereoscopic video; and
sub-video decoding means for decoding data of display sub-videos of a plurality of viewpoints to be displayed superimposed on the first- and second-viewpoint display videos constituting the stereoscopic video.
The device is characterized in that the sub-video decoding means independently decodes the data of one or more objects included in the first-viewpoint display sub-video,
reads the left-end movement width and right-end movement width for each of the objects, generated as the data of the second-viewpoint display sub-video, and
displays the one or more objects included in the second-viewpoint display sub-video after performing at least one of horizontal movement and expansion/contraction with respect to each corresponding object displayed as the first-viewpoint display sub-video.
Further, according to the present invention, it has become possible to save the arithmetic processing performance required of the video reproducing device, and thus to reduce its cost.
Also, according to the present invention, it has become possible to improve the display update speed of stereoscopically displayed video information under that given arithmetic processing performance, so that, for example, fast-forward playback can be performed while the video information remains stereoscopically displayed.
Embodiment 1.
FIG. 1 shows the configuration of a system including the video encoding device of Embodiment 1 of the present invention. This device digitally encodes captured stereoscopic video (hereinafter called the main video), and also generates and digitally encodes sub-video to be displayed superimposed on this stereoscopic video at playback time, that is, video such as sub-pictures for subtitles and graphics that display choices, samples, guidance, and the like in response to the user's operation of the equipment, producing a video data stream in which this is multiplexed with the digitally encoded main video data. Here the main video is video capable of stereoscopic representation, and the sub-video superimposed on it is also generated and encoded so that it can express the depth direction and be viewed stereoscopically. The device comprises a video encoder (video encoding means) 21 connected to a left-eye video camera 11 and a right-eye video camera 12, a graphics generator 22, a graphics data encoder (sub-video encoding means) 23, and a data stream multiplexer (stream multiplexing means) 25. Of the data stream transmission/storage means 30, the transmitting/recording means 31 also forms part of the video encoding device.
This principle will be explained in detail later.
As already described, when the left eye is assigned to the first viewpoint used as the reference, the data of the objects included in the left-eye sub-video can be decoded independently, and is therefore output as-is as left-eye sub-video data.
The objects R1, R2, R3, R4, R5, R6, and R7, which appeared shifted toward the right in the horizontal direction with the left eye L, all appear overlapping at the horizontal center with the right eye R. That is, compared with their horizontal positions in the left-eye image, they appear at positions moved to the left by ΔR1, ΔR2, ΔR3, ΔR4, ΔR5, ΔR6, and ΔR7, respectively.
By specifying in this way the calculated movement width ΔPx for each object, or for predetermined parts of each object, it becomes possible to produce, at playback time, an image that expresses a sense of distance in the depth direction.
Therefore, for example, when two objects are arranged so as to overlap, which one comes to the front in the overlapping part at display time can be decided by the magnitude of ΔPx. With opaque objects the one behind is hidden; in the overlapping part, the part with the larger ΔPx is displayed in front and the part with the smaller ΔPx is hidden.
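The overlap rule described in this paragraph, larger ΔPx in front, can be sketched as a per-pixel selection. This is illustrative only: a real decoder would composite decoded object bitmaps, not the (id, ΔPx) tuples used here.

```python
def front_object(candidates):
    """Given (object_id, delta_px) pairs for opaque objects covering the same
    pixel, return the id displayed in front: the one with the largest DPx,
    i.e. the smallest distance in the depth direction."""
    return max(candidates, key=lambda c: c[1])[0]

# Two opaque subtitles overlap at a pixel; DPx = 30 is nearer than DPx = 10
winner = front_object([("subtitle_A", 30.0), ("subtitle_B", 10.0)])
```

Because ΔPx already encodes relative depth, no separate z-buffer is needed to resolve occlusion between sub-picture objects.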
FIGS. 6(a) and (b) show an example of the subtitle arrangement used in Embodiment 1. As in FIGS. 3(a) and (b), a plan view and a side view of the entire space to be imaged, including the viewer's viewpoint, are shown. The definitions of the x, y, and z axes, and the way the viewpoints, lines of sight, and fields of view of the left and right eyes are indicated, are also the same. Now, a rectangular subtitle [A] is placed vertically as viewed from the viewpoint at distance d5, and a rectangular subtitle [B] is placed at a position extending from distance d5 to d7 with its right side tilted toward the back as viewed from the viewpoint. As can be seen from the side view, the subtitle [A] is placed above the center and the subtitle [B] below the center. Referring to FIGS. 3(a) and (b), consider how these two subtitles look from the viewpoints of the two eyes, that is, how they should be displayed on the display screen.
As shown in FIG. 7(b), with the right eye R the vertically placed rectangular subtitle [A] appears as a rectangle, and the rectangular subtitle [B] tilted toward the back appears as a trapezoid. The horizontal positions of the two subtitles are the same, (x1-Δx1), on the left side. The right sides are the same in reality, but in the image the subtitle [A] at distance d5 is at (x2-Δx1) while the subtitle [B] at distance d7 is at (x3-Δx3).
As a result, the width of the subtitle [A] is (x2-x1) for both the left eye L and the right eye R and does not change, but the width of the subtitle [B] is (x3-x1) for the left eye L and (x3-x1)-(Δx3-Δx1) for the right eye R, appearing longer when viewed from the right eye R because of binocular parallax.
While the data of one coding unit UOC is being reproduced and displayed, the data of the next coding unit UOC is read out, so that after the display of one coding unit UOC finishes, the display of the next coding unit UOC can follow without interruption. However, the data arrangement shown in FIG. 8 is only an example; for instance, when a large buffer memory is provided for the graphics data GRD, the graphics data GRD does not necessarily have to be present in every coding unit UOC.
FIGS. 9(a) and (b) show an example of the subtitle arrangement used in Embodiment 2. As in FIGS. 6(a) and (b), a plan view and a side view of the entire space to be imaged, including the viewer's viewpoint, are shown. The arrangement of the subtitle [A] is the same as in the example of FIGS. 6(a) and (b). A rectangular subtitle [C] is placed at a position extending from distance d5 to d7 with its upper side tilted toward the back as viewed from the viewpoint. As can be seen from the side view, the subtitle [A] is placed above the center and the subtitle [C] below the center. As in the case of FIGS. 6(a) and (b), consider how these two subtitles look from the viewpoints of the two eyes, that is, how they should be displayed on the display screen.
As shown in FIG. 10(b), with the right eye R the vertically placed rectangular subtitle [A] appears as a rectangle, and the rectangular subtitle [C] tilted toward the back appears as a trapezoid. The horizontal positions of the two subtitles are the same at the lower-left and lower-right ends, (x1-Δx1) and (x2-Δx1). As for the upper side, the subtitle [A] at distance d5 has its upper-left end at (x1-Δx1) and its upper-right end at (x2-Δx1), the same as its lower side, but the subtitle [C] at distance d7 has its upper-left end at (x4-Δx3) and its upper-right end at (x3-Δx3).
As a result, the width of the subtitle [A] is (x2-x1) for both the left eye L and the right eye R and does not change. The shape of the subtitle [C] is deformed between the left-eye and right-eye views because of binocular parallax, but its width is (x2-x1) on the lower side and (x3-x4) on the upper side for both eyes, so the parts at the same distance do not change.
When the distance in the depth direction differs among the upper, lower, left, and right parts of an object to be displayed, the graphics data encoder 23 calculates, at encoding time, the respective movement widths for moving the positions of the object's upper-left, lower-left, upper-right, and lower-right ends and holds them in the sub-picture encoded data; the right-eye image can then be expressed with the left-eye image as the reference. Using this sub-picture encoded data, the graphics data decoder 43 can, at decoding time, easily reproduce the object displayed in the right-eye image from the left-eye image data and the movement widths of its four corner positions.
In FIG. 8, the "right-eye display graphics data" 106 simply consisted of the two fields "left end shift width" 108 and "right end shift width" 110. In FIG. 11, to match the examples of FIGS. 9(a) and (b), the structure is more detailed: the "left end shift width" 108 consists of the two fields "upper left end shift width" 112 and "lower left end shift width" 114, and the "right end shift width" 110 of the two fields "upper right end shift width" 116 and "lower right end shift width" 118.
To apply this to the examples of FIGS. 10(a) and (b), the same value is placed in the "upper left end shift width" and the "upper right end shift width", and the same value in the "lower left end shift width" and the "lower right end shift width". More generally, the graphics data encoder 23 calculates an appropriate value for each of the four fields according to the inclination of the object and sets it in each field.
FIGS. 12(a) and (b) are diagrams showing an example of the subtitle drawing method used in Embodiment 3 of the present invention. Here a representation method different from the drawing method shown in FIGS. 10(a) and (b) is presented. In FIG. 10 above, a method of generating the right-eye image from the left-eye image was shown, taking the subtitle [C] as an example of a graphic arranged inclined in the depth direction. In that example the left-eye image is a trapezoid, and the x coordinates giving the horizontal positions of its vertices were x1, x2, x3, and x4. To simplify the positional representation of the right-eye image with the left-eye image as the reference, it is easier to secure a rectangular drawing area, draw the trapezoidal subtitle [C] inside it, and, for the right-eye image, deform the image on the drawing area by deforming this rectangular drawing area as a whole.
FIGS. 13(a) and (b) are diagrams showing the relationship between parallax and height, which underlies the present invention. Like FIG. 3, which shows the relationship between parallax and depth underlying the present invention, FIGS. 13(a) and (b) are a plan view (a) and a side view (b) of the entire space to be imaged, but the arranged object differs. Assuming the general case, a vertical bar-shaped object E is placed at a distance d6 in the depth direction, at a position off the center of the line of sight. The images taken by the left-eye and right-eye cameras at imaging time are displayed, at playback time, on a screen placed, for example, at the distance d0 indicated as "display screen" in the figure. When this screen is viewed from the viewer's viewpoint, the left eye L is shown the image captured by the left-eye camera and the right eye R the image captured by the right-eye camera.
Also, in the video reproducing device and method of the present invention, the sub-picture data is superimposed as-is on the left-eye video and displayed as the left-eye sub-picture, while for the right-eye sub-picture the horizontal display position of the sub-picture data is shifted by a predetermined width and superimposed on the right-eye video for display.
Further, in the video recording medium and video data stream of the present invention, stereoscopic video data including the sub-picture data encoded as described above is held in each of them.
That is, the invention can be applied as a general encoding and playback device and method for making graphics expressible in the depth direction and viewable stereoscopically while reducing the amount of data, simplifying the arithmetic processing for stereoscopic representation, saving the arithmetic processing performance required of the video reproducing device to reduce cost, or improving the update speed of stereoscopic video display under a given arithmetic processing performance.
Claims (8)
- A video encoding device that generates video data for visualizing stereoscopic video by displaying separate videos for the left and right eyes, the device comprising: video encoding means for encoding video signals of first and second viewpoints, arranged horizontally at an interval corresponding to the interocular distance, to generate an encoded video data stream representing the display video constituting the stereoscopic video; sub-video encoding means for encoding data of first- and second-viewpoint display sub-videos to be displayed superimposed on the first- and second-viewpoint display videos constituting the stereoscopic video, to generate an encoded sub-video data stream; and stream multiplexing means for multiplexing the encoded video data stream generated by the video encoding means and the encoded sub-video data stream generated by the sub-video encoding means; wherein the sub-video encoding means encodes the data of one or more objects included in the first-viewpoint display sub-video so as to be independently decodable, expresses depth by displaying the one or more objects included in the second-viewpoint display sub-video after performing at least one of horizontal movement and expansion/contraction with respect to each corresponding object displayed as the first-viewpoint display sub-video, and generates, as the data of the second-viewpoint display sub-video, data indicating, for each of the objects, a left-end movement width and a right-end movement width in the horizontal direction on the display surface, corresponding to the distance from each viewpoint to the object.
- The video encoding device according to claim 1, wherein, as the left-end movement width and the right-end movement width for each of the objects, data is generated independently such that the left-end movement width indicates the respective movement widths of the upper-left end and the lower-left end, and the right-end movement width indicates the respective movement widths of the upper-right end and the lower-right end.
- A video encoding method for generating video data for visualizing stereoscopic video by displaying separate videos for the left and right eyes, the method comprising: a video encoding step of encoding video signals of first and second viewpoints, arranged horizontally at an interval corresponding to the interocular distance, to generate an encoded video data stream representing the display video constituting the stereoscopic video; a sub-video encoding step of encoding data of first- and second-viewpoint display sub-videos to be displayed superimposed on the first- and second-viewpoint display videos constituting the stereoscopic video, to generate an encoded sub-video data stream; and a stream multiplexing step of multiplexing the encoded video data stream generated by the video encoding step and the encoded sub-video data stream generated by the sub-video encoding step; wherein the sub-video encoding step encodes the data of one or more objects included in the first-viewpoint display sub-video so as to be independently decodable, expresses depth by displaying the one or more objects included in the second-viewpoint display sub-video after performing at least one of horizontal movement and expansion/contraction with respect to each corresponding object displayed as the first-viewpoint display sub-video, and generates, as the data of the second-viewpoint display sub-video, data indicating, for each of the objects, a left-end movement width and a right-end movement width in the horizontal direction on the display surface, corresponding to the distance from each viewpoint to the object.
- The video encoding method according to claim 3, wherein, as the left-end movement width and the right-end movement width for each of the objects, data is generated independently such that the left-end movement width indicates the respective movement widths of the upper-left end and the lower-left end, and the right-end movement width indicates the respective movement widths of the upper-right end and the lower-right end.
- A video reproducing device that decodes stereoscopic video data including sub-video encoded by the video encoding device according to claim 1 or 2, or by the video encoding method according to claim 3 or 4, and visualizes stereoscopic video by displaying separate videos for the left and right eyes, the device comprising: video decoding means for decoding the first- and second-viewpoint display videos constituting the stereoscopic video; and sub-video decoding means for decoding data of display sub-videos of a plurality of viewpoints to be displayed superimposed on the first- and second-viewpoint display videos constituting the stereoscopic video; wherein the sub-video decoding means independently decodes the data of one or more objects included in the first-viewpoint display sub-video, reads the left-end movement width and the right-end movement width for each of the objects generated as the data of the second-viewpoint display sub-video, and displays the one or more objects included in the second-viewpoint display sub-video after performing, with respect to each corresponding object displayed as the first-viewpoint display sub-video, at least one of horizontal movement and expansion/contraction on the display surface corresponding to the distance from each viewpoint to the object.
- A video reproducing method that decodes stereoscopic video data including sub-video encoded by the video encoding device according to claim 1 or 2, or by the video encoding method according to claim 3 or 4, and visualizes stereoscopic video by displaying separate videos for the left and right eyes, the method comprising: a video decoding step of decoding the first- and second-viewpoint display videos constituting the stereoscopic video; and a sub-video decoding step of decoding data of display sub-videos of a plurality of viewpoints to be displayed superimposed on the first- and second-viewpoint display videos constituting the stereoscopic video; wherein the sub-video decoding step independently decodes the data of one or more objects included in the first-viewpoint display sub-video, reads the left-end movement width and the right-end movement width for each of the objects generated as the data of the second-viewpoint display sub-video, and displays the one or more objects included in the second-viewpoint display sub-video after performing, with respect to each corresponding object displayed as the first-viewpoint display sub-video, at least one of horizontal movement and expansion/contraction on the display surface corresponding to the distance from each viewpoint to the object.
- A video recording medium storing stereoscopic video data including sub-video encoded by the video encoding device according to claim 1 or 2, or by the video encoding method according to claim 3 or 4.
- A video data stream transmitting stereoscopic video data including sub-video encoded by the video encoding device according to claim 1 or 2, or by the video encoding method according to claim 3 or 4.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020117004756A KR101340102B1 (ko) | 2008-07-31 | 2009-06-10 | 영상 부호화 장치, 영상 부호화 방법, 영상 재생 장치 및 영상 재생 방법 |
EP09802632.1A EP2306729B1 (en) | 2008-07-31 | 2009-06-10 | Video encoding device, video encoding method, video reproduction device, video recording medium, and video data stream |
CN2009801301535A CN102113324B (zh) | 2008-07-31 | 2009-06-10 | 视频编码装置、视频编码方法、视频再现装置、视频再现方法 |
JP2010522595A JP5449162B2 (ja) | 2008-07-31 | 2009-06-10 | 映像符号化装置、映像符号化方法、映像再生装置、及び映像再生方法 |
US12/994,063 US9357231B2 (en) | 2008-07-31 | 2009-06-10 | Video encoding device, video encoding method, video reproducing device, video reproducing method, video recording medium, and video data stream |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-198132 | 2008-07-31 | ||
JP2008198132 | 2008-07-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010013382A1 true WO2010013382A1 (ja) | 2010-02-04 |
Family
ID=41610098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/002614 WO2010013382A1 (ja) | 2008-07-31 | 2009-06-10 | 映像符号化装置、映像符号化方法、映像再生装置、映像再生方法、映像記録媒体、及び映像データストリーム |
Country Status (6)
Country | Link |
---|---|
US (1) | US9357231B2 (ja) |
EP (1) | EP2306729B1 (ja) |
JP (1) | JP5449162B2 (ja) |
KR (1) | KR101340102B1 (ja) |
CN (1) | CN102113324B (ja) |
WO (1) | WO2010013382A1 (ja) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110193879A1 (en) * | 2010-02-05 | 2011-08-11 | Lg Electronics Inc. | Electronic device and a method for providing a graphical user interface (gui) for broadcast information |
JP2011160431A (ja) * | 2009-06-11 | 2011-08-18 | Panasonic Corp | 半導体集積回路 |
US20110292174A1 (en) * | 2010-05-30 | 2011-12-01 | Lg Electronics Inc. | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle |
JP2011259317A (ja) * | 2010-06-10 | 2011-12-22 | Sony Corp | 立体画像データ送信装置、立体画像データ送信方法、立体画像データ受信装置および立体画像データ受信方法 |
WO2012032997A1 (ja) * | 2010-09-06 | 2012-03-15 | ソニー株式会社 | 立体画像データ送信装置、立体画像データ送信方法および立体画像データ受信装置 |
WO2012036120A1 (ja) * | 2010-09-15 | 2012-03-22 | シャープ株式会社 | 立体画像生成装置、立体画像表示装置、立体画像調整方法、立体画像調整方法をコンピュータに実行させるためのプログラム、及びそのプログラムを記録した記録媒体 |
WO2012050366A3 (en) * | 2010-10-12 | 2012-06-21 | Samsung Electronics Co., Ltd. | 3d image display apparatus and display method thereof |
JP2012220874A (ja) * | 2011-04-13 | 2012-11-12 | Nikon Corp | 撮像装置およびプログラム |
EP2600615A1 (en) * | 2010-08-25 | 2013-06-05 | Huawei Technologies Co., Ltd. | Method, device and system for controlling graph-text display in three-dimension television |
JP2013520924A (ja) * | 2010-02-24 | 2013-06-06 | トムソン ライセンシング | 立体映像用の字幕付け |
EP2421268A3 (en) * | 2010-08-16 | 2014-10-08 | LG Electronics Inc. | Method for processing images in display device outputting 3-dimensional contents and display using the same |
JP2015517236A (ja) * | 2012-04-10 | 2015-06-18 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | 表示オブジェクトの表示位置を提供し、3次元シーン内の表示オブジェクトを表示するための方法および装置 |
CN111971955A (zh) * | 2018-04-19 | 2020-11-20 | 索尼公司 | 接收装置、接收方法、发送装置和发送方法 |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5390016B2 (ja) * | 2010-03-24 | 2014-01-15 | パナソニック株式会社 | 映像処理装置 |
US8982151B2 (en) * | 2010-06-14 | 2015-03-17 | Microsoft Technology Licensing, Llc | Independently processing planes of display data |
JP2012124653A (ja) * | 2010-12-07 | 2012-06-28 | Canon Inc | 符号化装置、符号化方法およびプログラム |
US8566694B2 (en) * | 2011-04-04 | 2013-10-22 | Xerox Corporation | Multi-dimensional documents for parallel content display on a single screen for multiple viewers |
US9485494B1 (en) * | 2011-04-10 | 2016-11-01 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US20120281073A1 (en) * | 2011-05-02 | 2012-11-08 | Cisco Technology, Inc. | Customization of 3DTV User Interface Position |
WO2012153447A1 (ja) * | 2011-05-11 | 2012-11-15 | パナソニック株式会社 | 画像処理装置、映像処理方法、プログラム、集積回路 |
WO2013080544A1 (ja) | 2011-11-30 | 2013-06-06 | パナソニック株式会社 | 立体画像処理装置、立体画像処理方法、および立体画像処理プログラム |
UA125468C2 (uk) | 2012-04-13 | 2022-03-23 | ДЖ.І. ВІДІЕУ КЕМПРЕШН, ЛЛСі | Кодування картинки з малою затримкою |
CN115442625A (zh) * | 2012-06-29 | 2022-12-06 | Ge视频压缩有限责任公司 | 视频数据流、编码器、编码视频内容的方法以及解码器 |
CN106982367A (zh) * | 2017-03-31 | 2017-07-25 | 联想(北京)有限公司 | 视频传输方法及其装置 |
WO2018225518A1 (ja) * | 2017-06-07 | 2018-12-13 | ソニー株式会社 | 画像処理装置、画像処理方法、プログラム、およびテレコミュニケーションシステム |
US10778993B2 (en) * | 2017-06-23 | 2020-09-15 | Mediatek Inc. | Methods and apparatus for deriving composite tracks with track grouping |
US10873733B2 (en) * | 2017-06-23 | 2020-12-22 | Mediatek Inc. | Methods and apparatus for deriving composite tracks |
CN107959846B (zh) * | 2017-12-06 | 2019-12-03 | 苏州佳世达电通有限公司 | 影像显示设备及影像显示方法 |
US11509878B2 (en) * | 2018-09-14 | 2022-11-22 | Mediatek Singapore Pte. Ltd. | Methods and apparatus for using track derivations for network based media processing |
EP3644604A1 (en) * | 2018-10-23 | 2020-04-29 | Koninklijke Philips N.V. | Image generating apparatus and method therefor |
CN109561263A (zh) * | 2018-11-23 | 2019-04-02 | 重庆爱奇艺智能科技有限公司 | 在vr设备的3d视频中实现3d字幕效果 |
US11589032B2 (en) * | 2020-01-07 | 2023-02-21 | Mediatek Singapore Pte. Ltd. | Methods and apparatus for using track derivations to generate new tracks for network based media processing applications |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11155155A (ja) * | 1997-11-19 | 1999-06-08 | Toshiba Corp | 立体映像処理装置 |
JP2004274125A (ja) | 2003-03-05 | 2004-09-30 | Sony Corp | 画像処理装置および方法 |
JP2005535203A (ja) * | 2002-07-31 | 2005-11-17 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | デジタルビデオ信号を符号化する方法及び装置 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NO160374C (no) | 1986-10-17 | 1989-04-12 | Protan As | Fremgangsmaate for modifisering av alginater (polyuronider) slik at de faar endrede fysikalske egenskaper. |
JP3194258B2 (ja) | 1992-11-12 | 2001-07-30 | 日本電気株式会社 | 画像の符号化方式 |
US5619256A (en) * | 1995-05-26 | 1997-04-08 | Lucent Technologies Inc. | Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions |
US6757422B1 (en) * | 1998-11-12 | 2004-06-29 | Canon Kabushiki Kaisha | Viewpoint position detection apparatus and method, and stereoscopic image display system |
EP1501316A4 (en) * | 2002-04-25 | 2009-01-21 | Sharp Kk | METHOD FOR GENERATING MULTIMEDIA INFORMATION, AND DEVICE FOR REPRODUCING MULTIMEDIA INFORMATION |
US20060203085A1 (en) * | 2002-11-28 | 2006-09-14 | Seijiro Tomita | There dimensional image signal producing circuit and three-dimensional image display apparatus |
EP1617684A4 (en) * | 2003-04-17 | 2009-06-03 | Sharp Kk | THREE-DIMENSIONAL IMAGE CREATION DEVICE, THREE-DIMENSIONAL IMAGE REPRODUCING DEVICE, THREE-DIMENSIONAL IMAGE PROCESSING DEVICE, THREE-DIMENSIONAL IMAGE PROCESSING PROGRAM, AND RECORDING MEDIUM CONTAINING THE SAME |
JP2004357156A (ja) * | 2003-05-30 | 2004-12-16 | Sharp Corp | 映像受信装置および映像再生装置 |
JP2005073049A (ja) | 2003-08-26 | 2005-03-17 | Sharp Corp | 立体映像再生装置および立体映像再生方法 |
WO2006111893A1 (en) * | 2005-04-19 | 2006-10-26 | Koninklijke Philips Electronics N.V. | Depth perception |
JP4364176B2 (ja) | 2005-06-20 | 2009-11-11 | シャープ株式会社 | 映像データ再生装置及び映像データ生成装置 |
JP4081772B2 (ja) * | 2005-08-25 | 2008-04-30 | ソニー株式会社 | 再生装置および再生方法、プログラム、並びにプログラム格納媒体 |
JP2007180981A (ja) * | 2005-12-28 | 2007-07-12 | Victor Co Of Japan Ltd | 画像符号化装置、画像符号化方法、及び画像符号化プログラム |
MY162861A (en) * | 2007-09-24 | 2017-07-31 | Koninl Philips Electronics Nv | Method and system for encoding a video data signal, encoded video data signal, method and system for decoding a video data signal |
JP2009135686A (ja) * | 2007-11-29 | 2009-06-18 | Mitsubishi Electric Corp | 立体映像記録方法、立体映像記録媒体、立体映像再生方法、立体映像記録装置、立体映像再生装置 |
-
2009
- 2009-06-10 US US12/994,063 patent/US9357231B2/en not_active Expired - Fee Related
- 2009-06-10 EP EP09802632.1A patent/EP2306729B1/en not_active Not-in-force
- 2009-06-10 JP JP2010522595A patent/JP5449162B2/ja not_active Expired - Fee Related
- 2009-06-10 WO PCT/JP2009/002614 patent/WO2010013382A1/ja active Application Filing
- 2009-06-10 KR KR1020117004756A patent/KR101340102B1/ko not_active IP Right Cessation
- 2009-06-10 CN CN2009801301535A patent/CN102113324B/zh not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11155155A (ja) * | 1997-11-19 | 1999-06-08 | Toshiba Corp | 立体映像処理装置 |
JP2005535203A (ja) * | 2002-07-31 | 2005-11-17 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | デジタルビデオ信号を符号化する方法及び装置 |
JP2004274125A (ja) | 2003-03-05 | 2004-09-30 | Sony Corp | 画像処理装置および方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2306729A4 * |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011160431A (ja) * | 2009-06-11 | 2011-08-18 | Panasonic Corp | Semiconductor integrated circuit |
US8593511B2 (en) | 2009-06-11 | 2013-11-26 | Panasonic Corporation | Playback device, integrated circuit, recording medium |
CN102164257A (zh) * | 2010-02-05 | 2011-08-24 | LG Electronics Inc. | Electronic device and method for providing a graphical user interface for broadcast information |
US20110193879A1 (en) * | 2010-02-05 | 2011-08-11 | Lg Electronics Inc. | Electronic device and a method for providing a graphical user interface (gui) for broadcast information |
KR101737832B1 (ko) * | 2010-02-05 | 2017-05-29 | LG Electronics Inc. | UI providing method and digital broadcast receiver |
EP2355495A3 (en) * | 2010-02-05 | 2012-05-30 | Lg Electronics Inc. | An electronic device and a method for providing a graphical user interface (gui) for broadcast information |
JP2013520924A (ja) * | 2010-02-24 | 2013-06-06 | Thomson Licensing | Subtitling for stereoscopic video |
US20110292174A1 (en) * | 2010-05-30 | 2011-12-01 | Lg Electronics Inc. | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle |
US20140375767A1 (en) * | 2010-05-30 | 2014-12-25 | Lg Electronics Inc. | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle |
US8866886B2 (en) * | 2010-05-30 | 2014-10-21 | Lg Electronics Inc. | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle |
US9578304B2 (en) | 2010-05-30 | 2017-02-21 | Lg Electronics Inc. | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle |
JP2011259317A (ja) * | 2010-06-10 | 2011-12-22 | Sony Corp | Stereoscopic image data transmission device, stereoscopic image data transmission method, stereoscopic image data reception device, and stereoscopic image data reception method |
US9288482B2 (en) | 2010-08-16 | 2016-03-15 | Lg Electronics Inc. | Method for processing images in display device outputting 3-dimensional contents and display device using the same |
EP2421268A3 (en) * | 2010-08-16 | 2014-10-08 | LG Electronics Inc. | Method for processing images in display device outputting 3-dimensional contents and display using the same |
EP2600615A1 (en) * | 2010-08-25 | 2013-06-05 | Huawei Technologies Co., Ltd. | Method, device and system for controlling graph-text display in three-dimension television |
EP2600615A4 (en) * | 2010-08-25 | 2013-06-05 | Huawei Tech Co Ltd | METHOD, DEVICE AND SYSTEM FOR CONTROLLING THE GRAPHIC TEXT DISPLAY IN THREE-DIMENSIONAL TELEVISION |
CN102714746A (zh) * | 2010-09-06 | 2012-10-03 | Sony Corporation | Stereoscopic image data transmission device, stereoscopic image data transmission method, and stereoscopic image data reception device |
JP2012060267A (ja) * | 2010-09-06 | 2012-03-22 | Sony Corp | Stereoscopic image data transmission device, stereoscopic image data transmission method, and stereoscopic image data reception device |
WO2012032997A1 (ja) * | 2010-09-06 | 2012-03-15 | Sony Corporation | Stereoscopic image data transmission device, stereoscopic image data transmission method, and stereoscopic image data reception device |
WO2012036120A1 (ja) * | 2010-09-15 | 2012-03-22 | Sharp Corporation | Stereoscopic image generation device, stereoscopic image display device, stereoscopic image adjustment method, program for causing a computer to execute the stereoscopic image adjustment method, and recording medium on which the program is recorded |
US9224232B2 (en) | 2010-09-15 | 2015-12-29 | Sharp Kabushiki Kaisha | Stereoscopic image generation device, stereoscopic image display device, stereoscopic image adjustment method, program for causing computer to execute stereoscopic image adjustment method, and recording medium on which the program is recorded |
JP2012065066A (ja) * | 2010-09-15 | 2012-03-29 | Sharp Corp | Stereoscopic image generation device, stereoscopic image display device, stereoscopic image adjustment method, program for causing a computer to execute the stereoscopic image adjustment method, and recording medium on which the program is recorded |
WO2012050366A3 (en) * | 2010-10-12 | 2012-06-21 | Samsung Electronics Co., Ltd. | 3d image display apparatus and display method thereof |
JP2012220874A (ja) * | 2011-04-13 | 2012-11-12 | Nikon Corp | Imaging device and program |
JP2015517236A (ja) * | 2012-04-10 | 2015-06-18 | Huawei Technologies Co., Ltd. | Method and device for providing a display position of a display object and for displaying the display object in a three-dimensional scene |
CN111971955A (zh) * | 2018-04-19 | 2020-11-20 | Sony Corporation | Reception device, reception method, transmission device, and transmission method |
Also Published As
Publication number | Publication date |
---|---|
KR101340102B1 (ko) | 2013-12-10 |
JP5449162B2 (ja) | 2014-03-19 |
EP2306729A1 (en) | 2011-04-06 |
EP2306729A4 (en) | 2011-12-14 |
US20110069153A1 (en) | 2011-03-24 |
EP2306729B1 (en) | 2013-08-14 |
JPWO2010013382A1 (ja) | 2012-01-05 |
US9357231B2 (en) | 2016-05-31 |
CN102113324A (zh) | 2011-06-29 |
KR20110045029A (ko) | 2011-05-03 |
CN102113324B (zh) | 2013-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5449162B2 (ja) | Video encoding device, video encoding method, video playback device, and video playback method | |
JP4942784B2 (ja) | Multimedia information generation device and multimedia information playback device | |
JP5820276B2 (ja) | Combining 3D images and graphical data | |
JP4693900B2 (ja) | Image processing device | |
JP4861309B2 (ja) | Ghost artifact reduction for rendering 2.5D graphics | |
KR101545009B1 (ko) | Image encoding method for stereoscopic rendering | |
JP4630150B2 (ja) | Stereoscopic image recording device and program | |
JP4174001B2 (ja) | Stereoscopic image display device, recording method, and transmission method | |
EP1587330B1 (en) | Device for generating image data of multiple viewpoints, and device for reproducing these image data | |
JP4252105B2 (ja) | Image data creation device and image data playback device | |
WO2004093467A1 (ja) | 3D image creation device, 3D image playback device, 3D image processing device, 3D image processing program, and recording medium on which the program is recorded | |
KR20120049292A (ko) | Combination of 3D video and auxiliary data | |
CN102047669B (zh) | Video signal with depth information | |
JP2010049607A (ja) | Content playback device and method | |
JP2010226500A (ja) | Stereoscopic image display device and stereoscopic image display method | |
JP2006352539A (ja) | Wide-field-of-view video system | |
KR101314601B1 (ko) | Content transmission device, content display device, content transmission method, and content display method | |
JP2004207772A (ja) | Stereoscopic image display device, recording method, and transmission method | |
US9723290B2 (en) | Method for generating, transmitting and receiving stereoscopic images and relating devices | |
KR20100128233A (ко) | Image processing method and device | |
JP4133683B2 (ja) | Stereoscopic image recording device, stereoscopic image recording method, stereoscopic image display device, and stereoscopic image display method | |
JP2013090170A (ja) | Stereoscopic video playback device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980130153.5 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09802632 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010522595 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12994063 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009802632 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20117004756 Country of ref document: KR Kind code of ref document: A |