WO2013073455A1 - Image data transmission device, image data transmission method, image data reception device, and image data reception method - Google Patents
Image data transmission device, image data transmission method, image data reception device, and image data reception method
- Publication number
- WO2013073455A1 (PCT/JP2012/079064)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- information
- cropping
- interpretation
- video stream
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/167—Synchronising or controlling image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals image signals comprising non-image signal components, e.g. headers or format information
- H04N13/178—Metadata, e.g. disparity information
Definitions
- the present technology relates to an image data transmission device, an image data transmission method, an image data reception device, and an image data reception method, and in particular to an image data transmission device and the like in an image transmission/reception system in which cropping information is transmitted together with image data from the transmission side and the reception side uses the cropping information to perform a cut-out process on the image data.
- Patent Document 1 proposes a transmission method using a television broadcast radio wave of stereoscopic image data.
- stereoscopic image data including left-eye image data and right-eye image data is transmitted, and stereoscopic image display using binocular parallax is performed in the television receiver.
- FIG. 22 shows the relationship between the display positions of the left and right images of an object on the screen and the playback position of the stereoscopic image (3D image) in stereoscopic image display using binocular parallax.
- for object A, the left and right lines of sight intersect in front of the screen surface, so the playback position of its stereoscopic image is in front of the screen surface.
- DPa represents a horizontal disparity vector related to the object A.
- for object B, the left and right lines of sight intersect on the screen surface, so the playback position of the stereoscopic image is on the screen surface.
- for object C, the left and right lines of sight intersect behind the screen surface, so the playback position is behind the screen surface.
- DPc represents a horizontal disparity vector related to the object C.
- FIG. 23A shows a side-by-side method
- FIG. 23B shows a top-and-bottom method.
- a case where the pixel format is 1920 × 1080 is shown.
- the side-by-side method transmits the pixel data of the left-eye image data in the first half in the horizontal direction, and the pixel data of the right-eye image data in the second half in the horizontal direction.
- in this case, the left-eye image data and the right-eye image data are each thinned out to 1/2 of the pixel data in the horizontal direction, and the horizontal resolution is halved with respect to the original signal.
- the top-and-bottom method transmits the data of each line of the left-eye image data in the first half in the vertical direction, and the data of each line of the right-eye image data in the second half in the vertical direction.
- in this case, the lines of the left-eye image data and the right-eye image data are each thinned out to 1/2, and the vertical resolution is halved with respect to the original signal.
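As an illustration only (this sketch is not part of the patent), the two packing schemes can be expressed in Python, with nested lists standing in for pixel arrays and plain every-other-column/line decimation standing in for the 1/2 thinning:

```python
def pack_side_by_side(left, right):
    """Pack two W x H eye images into one W x H side-by-side frame:
    thin each eye to half horizontal resolution (every other column),
    then place the halves left/right."""
    half_l = [row[::2] for row in left]
    half_r = [row[::2] for row in right]
    return [l + r for l, r in zip(half_l, half_r)]

def pack_top_and_bottom(left, right):
    """Pack two W x H eye images into one W x H top-and-bottom frame:
    thin each eye to half vertical resolution (every other line),
    then stack the halves top/bottom."""
    return left[::2] + right[::2]

# tiny 4 x 4 demonstration frames
L = [[f"L{y}{x}" for x in range(4)] for y in range(4)]
R = [[f"R{y}{x}" for x in range(4)] for y in range(4)]
sbs = pack_side_by_side(L, R)
tab = pack_top_and_bottom(L, R)
```

A real encoder would normally low-pass filter before decimating; bare sample dropping is used here only to make the frame layout visible.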
- FIG. 24A schematically shows processing related to two-dimensional image data in a 1920 × 1080 pixel format.
- on the transmission side, in order to perform encoding for each 16 × 16 block, 8 lines of blank data are added, and encoding is performed as image data of 1920 pixels × 1088 lines.
- on the reception side, image data of 1920 pixels × 1088 lines is obtained after decoding, of which 8 lines are blank data.
- 1920-pixel × 1080-line image data including the substantial image data is cut out based on the cropping information included in the video data stream.
- Display image data for a two-dimensional television receiver (hereinafter referred to as “2DTV” as appropriate) is generated.
- FIG. 24B schematically shows processing related to stereoscopic image data (three-dimensional image data) in a side-by-side format with a 1920 × 1080 pixel format. Also in this case, on the transmission side, in order to perform encoding for each 16 × 16 block, 8 lines of blank data are added and encoding is performed as image data of 1920 pixels × 1088 lines.
- on the reception side, image data of 1920 pixels × 1088 lines is obtained after decoding.
- 1920 pixels × 1080 lines of image data including the substantial image data is cut out based on the cropping information included in the video data stream.
- the image data is divided into left and right halves, and scaling processing is performed on each to generate left-eye and right-eye display image data for a stereoscopic television receiver (hereinafter referred to as "3DTV" as appropriate).
- FIG. 24(c) schematically shows processing relating to top-and-bottom stereoscopic image data (three-dimensional image data) in a 1920 × 1080 pixel format. Also in this case, on the transmission side, in order to perform encoding for each 16 × 16 block, encoding is performed as image data of 1920 pixels × 1088 lines by adding 8 lines of blank data.
- on the reception side, image data of 1920 pixels × 1088 lines is obtained after decoding.
- 1920 pixels × 1080 lines of image data including the substantial image data is cut out based on the cropping information included in the video data stream. Then, the image data is divided into upper and lower halves and subjected to scaling processing to generate left-eye and right-eye display image data for 3DTV.
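The conventional receiver flow just described can be sketched as follows (illustrative Python, not from the patent; a 4-pixel-wide frame with 2 blank padding lines stands in for 1920 × 1088 with 8 blank lines, and nearest-neighbour line doubling stands in for the scaling processing):

```python
def crop(frame, width, lines):
    """Cut the top-left width x lines region out of a decoded frame,
    dropping the blank padding lines added for 16 x 16 block encoding."""
    return [row[:width] for row in frame[:lines]]

def split_and_scale_tab(frame):
    """3DTV handling of a top-and-bottom frame: split it into upper and
    lower halves and double each half vertically (nearest-neighbour
    line doubling stands in for real scaling)."""
    h = len(frame)
    top, bottom = frame[:h // 2], frame[h // 2:]
    double = lambda img: [row for line in img for row in (line, line)]
    return double(top), double(bottom)

# 4-pixel-wide analogue of the 1920 x 1088 case: 8 substantial lines
# plus 2 blank padding lines
decoded = [[f"p{y}"] * 4 for y in range(8)] + [[None] * 4] * 2
active = crop(decoded, 4, 8)                  # substantial image data only
left_eye, right_eye = split_and_scale_tab(active)
```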
- to avoid this unnatural image display, it is conceivable to make the cropping information included in the video data stream information for cutting out only the image data of one of the left eye and the right eye, for example, the left eye.
- the processing of 2DTV and 3DTV is as follows.
- FIG. 25(a) schematically shows processing in 2DTV on side-by-side stereoscopic image data (three-dimensional image data) in a 1920 × 1080 pixel format.
- image data of 1920 pixels × 1088 lines is obtained after decoding, of which 8 lines are blank data.
- the left-eye image data of 960 pixels × 1080 lines is cut out of the 1920 pixels × 1080 lines of image data including the substantial image data.
- scaling processing is performed on the left-eye image data, and display image data for 2DTV is generated. In this case, two-dimensional display (2D display) is performed correctly.
- FIG. 25(b) schematically shows processing in 3DTV for side-by-side stereoscopic image data (three-dimensional image data) in a 1920 × 1080 pixel format.
- image data of 1920 pixels × 1088 lines is obtained after decoding, of which 8 lines are blank data.
- the left-eye image data of 960 pixels × 1080 lines is cut out of the 1920 pixels × 1080 lines of image data including the substantial image data.
- scaling processing is performed on the left-eye image data, and image data of 1920 pixels × 1080 lines is generated.
- this image data is the same as the 2DTV display image data described above. Since the 3DTV handles it as side-by-side data, the image data is further divided into left and right halves, and scaling processing is performed on each to generate left-eye and right-eye display image data for 3DTV. In this case, the left-eye image and the right-eye image are the two halves of a single image divided left and right, so stereoscopic display (3D display) is not performed correctly.
- FIG. 26(a) schematically shows processing in 2DTV on top-and-bottom stereoscopic image data (three-dimensional image data) in a 1920 × 1080 pixel format.
- image data of 1920 pixels × 1088 lines is obtained after decoding, of which 8 lines are blank data.
- the left-eye image data of 1920 pixels × 540 lines is extracted from the 1920 pixels × 1080 lines of image data including the substantial image data.
- scaling processing is performed on the left-eye image data, and display image data for 2DTV is generated. In this case, two-dimensional display (2D display) is performed correctly.
- FIG. 26(b) schematically shows processing in 3DTV for top-and-bottom stereoscopic image data (three-dimensional image data) in a 1920 × 1080 pixel format.
- image data of 1920 pixels × 1088 lines is obtained after decoding, of which 8 lines are blank data.
- the left-eye image data of 1920 pixels × 540 lines is extracted from the 1920 pixels × 1080 lines of image data including the substantial image data.
- scaling processing is performed on the left-eye image data, and image data of 1920 pixels × 1080 lines is generated.
- this image data is the same as the 2DTV display image data described above. Since the 3DTV handles it as top-and-bottom data, the image data is further divided into upper and lower halves and subjected to scaling processing to generate left-eye and right-eye display image data for 3DTV. In this case, the left-eye image and the right-eye image are the two halves of a single image divided top and bottom, so stereoscopic display (3D display) is not performed correctly.
- the purpose of the present technology is to enable the receiving side to generate display image data correctly by appropriately performing the cut-out process using the cropping information.
- the concept of this technology resides in an image data transmission device including an image data transmission unit that transmits a container of a predetermined format including a video stream that contains image data and has cropping information inserted in its header portion.
- the image data transmission device further includes an information insertion unit that inserts interpretation information for the parameter values of the cropping information in a layer higher than the video stream.
- a container having a predetermined format including a video stream including image data and having cropping information inserted in a header portion is transmitted by the image data transmission unit.
- the container may be a transport stream (MPEG-2 TS) adopted in the digital broadcasting standard.
- the container may be MP4 used for Internet distribution or the like, or a container of other formats.
- the information insertion unit inserts the interpretation information of the parameter value of the cropping information in a layer higher than the video stream.
- the container may be a transport stream, and the information insertion unit may insert the interpretation information under the program map table or the event information table.
- the information insertion unit describes interpretation information in a descriptor inserted under the program map table or the event information table.
- the video stream is, for example, H.264/AVC or HEVC encoded data.
- the cropping information is defined in the sequence parameter set of the video stream
- the information insertion unit may describe the interpretation information in a descriptor inserted under the program map table or the event information table.
- for example, the image data is stereoscopic image data in which left-eye image data and right-eye image data are divided and arranged in the horizontal direction or the vertical direction in the same frame, so-called frame-compatible stereoscopic image data.
- the interpretation information indicates that the parameter value of the cropping information should be interpreted as it is when the image data is two-dimensional image data.
- when the image data is frame-compatible stereoscopic image data, the interpretation information may indicate that the parameter values of the cropping information are to be interpreted such that the cropping area is doubled in the horizontal or vertical direction.
- the interpretation information specifies the interpretation of the parameter value of the cropping information.
- in this way, the interpretation information for the parameter values of the cropping information is inserted into a layer higher than the video stream. Therefore, regardless of whether the image data is two-dimensional image data or frame-compatible stereoscopic image data, the receiving side can appropriately interpret the parameter values of the cropping information using the interpretation information, and can generate display image data correctly by appropriately performing the cut-out (cropping) process using the cropping information.
- in the present technology, for example, the image data may be two-dimensional image data or stereoscopic image data in which left-eye image data and right-eye image data are divided and arranged in the horizontal direction or the vertical direction in the same frame, and the interpretation information, changed in correspondence with the image data after switching, may be inserted into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data.
- another concept of this technology resides in an image data reception device including: an image data reception unit that receives a container of a predetermined format including a video stream that contains image data and has cropping information inserted in its header portion, interpretation information for the parameter values of the cropping information being inserted in a layer higher than the video stream; an information acquisition unit that acquires the interpretation information from the container; a decoding unit that decodes the video stream of the container to obtain the image data and the cropping information; and an image data processing unit that interprets the parameter values of the cropping information based on the interpretation information and cuts out image data of a predetermined region from the image data to generate display image data.
- the image data receiving unit receives a container of a predetermined format having a video stream including image data and having cropping information inserted in the header portion, for example, a transport stream.
- the interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream.
- the interpretation information is acquired from the container by the information acquisition unit.
- the video stream included in the container is decoded by the decoding unit, and image data and cropping information are acquired.
- the parameter value of the cropping information is interpreted based on the interpretation information by the image data processing unit, and image data of a predetermined area is cut out from the image data to generate display image data.
- in the present technology, a container of a predetermined format having a video stream in which the cropping information is inserted in the header portion is received, and interpretation information for the cropping information is inserted in a layer higher than the video stream. Therefore, regardless of whether the image data is two-dimensional image data or frame-compatible stereoscopic image data, the cropping information can be interpreted appropriately using the interpretation information, and display image data can be generated correctly by appropriately performing the cut-out process using the cropping information.
- the image data is two-dimensional image data or stereoscopic image data in which left-eye image data and right-eye image data are divided and arranged in the horizontal direction or the vertical direction in the same frame.
- the interpretation information changed corresponding to the image data is inserted into a layer higher than the video stream at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data.
- the image data processing unit may, from the image data switching timing, interpret the parameter values of the cropping information based on the interpretation information that was inserted at a timing preceding the switching timing and changed in correspondence with the image data after switching.
- thereby, the image data can be cut out appropriately by interpreting the parameter values of the cropping information suitably for the image data after switching, immediately from the switching timing. Further, even if acquisition of the interpretation information is not synchronized with the switching timing of the image data, unnatural image display can be avoided.
- the image data for display can be generated correctly by appropriately performing the clipping process using the cropping information on the receiving side.
- FIG. 1 shows a configuration example of an image transmission / reception system 10 as an embodiment.
- the image transmission / reception system 10 includes a broadcasting station 100 and a receiver (3DTV) 200.
- the broadcast station 100 transmits a transport stream TS having a video stream including image data on a broadcast wave.
- the image data included in the video stream is two-dimensional image data, or so-called frame-compatible stereoscopic image data in which left-eye image data and right-eye image data are divided and arranged in the horizontal or vertical direction in the same frame.
- the transmission format of the stereoscopic image data is, for example, the side-by-side (Side By Side) method (see FIG. 23(a)) or the top-and-bottom (Top & Bottom) method (see FIG. 23(b)).
- the pixel format of the image data is 1920 × 1080.
- the broadcasting station 100 encodes this image data for each 16 × 16 block. Therefore, the broadcasting station 100 adds 8 lines of blank data and performs encoding as 1920 pixel × 1088 line image data.
- the cropping information is inserted in the header part of the video stream.
- when the image data is two-dimensional image data, this cropping information is information for cutting out the 1920 pixel × 1080 line image data including the substantial image data from the decoded 1920 pixel × 1088 line image data.
- when the image data is frame-compatible stereoscopic image data, this cropping information is information for cutting out the substantial left-eye image data or right-eye image data from the decoded 1920 pixel × 1088 line image data.
- for example, in side-by-side stereoscopic image data, it is information for cutting out image data of 960 pixels × 1080 lines; in top-and-bottom stereoscopic image data, it is information for cutting out image data of 1920 pixels × 540 lines.
- the video data stream is, for example, H.264/AVC (Advanced Video Coding) encoded data, and the cropping information is defined in its SPS (Sequence Parameter Set).
- FIGS. 2A and 2B show examples of the data structure of access units in the video data stream.
- pictures are defined in units called access units.
- FIG. 2A shows the structure of the top access unit of GOP (Group Of Pictures), and
- FIG. 2B shows the structure of the access unit other than the top of the GOP.
- the cropping information is inserted into the SPS (Sequence Parameter Set) portion present in the top access unit of the GOP.
- FIG. 3 shows the structure (Syntax) of the cropping information defined in the SPS. In this SPS, presence / absence of cropping information is indicated by flag information of “frame_cropping_flag”.
- the cropping information is information that designates a rectangular area as a cutout area of image data.
- “frame_crop_left_offset” indicates the horizontal start position, that is, the left end position.
- “frame_crop_right_offset” indicates the horizontal end position, that is, the right end position.
- “frame_crop_top_offset” indicates the vertical start position, that is, the upper end position.
- “frame_crop_bottom_offset” indicates the vertical end position, that is, the lower end position. Each is indicated as an offset value from the upper-left position.
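Turning the four offsets into a cut-out rectangle can be sketched as below (illustrative Python; the function names are ours, and the sketch follows the text's simple pixel-position reading, whereas the actual H.264 specification counts samples cropped away from each edge in chroma-scaled units):

```python
def crop_rectangle(left_offset, right_offset, top_offset, bottom_offset):
    """Build the cut-out rectangle (x0, y0, x1, y1), following the text's
    reading that each offset is a position measured from the upper-left
    corner of the decoded frame."""
    return left_offset, top_offset, right_offset, bottom_offset

def apply_crop(frame, rect):
    """Cut the rectangle out of a decoded frame given as a list of rows."""
    x0, y0, x1, y1 = rect
    return [row[x0:x1] for row in frame[y0:y1]]

# 2D example from the text: 1920 x 1088 decoded frame, keep 1920 x 1080
rect = crop_rectangle(0, 1920, 0, 1080)
frame = [[0] * 1920 for _ in range(1088)]
active = apply_crop(frame, rect)   # blank padding lines are dropped
```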
- “Frame Packing Arrangement SEI message” is inserted in the SEIs part of the access unit.
- this SEI includes type information indicating the transmission format of the stereoscopic image data.
- interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream.
- this interpretation information is inserted, for example, under a program map table (PMT: Program Map Table).
- this interpretation information is described in, for example, a descriptor inserted under the video elementary loop of the program map table.
- This descriptor is, for example, an existing AVC video descriptor (AVC video descriptor) or a newly defined cropping interpretation descriptor (Cropping_interpretation_descriptor).
- This interpretation information indicates that the parameter value of the cropping information should be specially interpreted when the image data is frame compatible stereoscopic image data.
- the interpretation information indicates that when the image data is two-dimensional image data, the parameter value of the cropping information should be interpreted as it is. This interpretation information is inserted at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data.
- the receiver 200 receives the transport stream TS transmitted from the broadcasting station 100 on a broadcast wave.
- the receiver 200 acquires the interpretation information of the parameter value of the cropping information inserted in the higher layer than the video stream as described above from the transport stream TS.
- the receiver 200 decodes the video stream to obtain image data and cropping information.
- the receiver 200 interprets the parameter value of the cropping information based on the interpretation information, cuts out image data of a predetermined region from the image data, and generates display image data.
- when the image data is two-dimensional image data, the cropping information is information for cutting out the 1920 pixel × 1080 line image data including the substantial image data from the decoded 1920 pixel × 1088 line image data.
- in this case, the receiver 200 interprets the parameter values of the cropping information as they are, cuts out the 1920 pixel × 1080 line image data including the substantial image data from the decoded 1920 pixel × 1088 line image data, and generates image data for two-dimensional image display.
- when the image data is frame-compatible stereoscopic image data, the cropping information is information for cutting out the substantial left-eye image data or right-eye image data from the decoded 1920 pixel × 1088 line image data.
- in this case, the receiver 200 interprets the parameter values of the cropping information so that the cropping area is doubled in the horizontal direction or the vertical direction. Then, the receiver 200 cuts out the 1920 pixel × 1080 line image data including the substantial image data from the decoded 1920 pixel × 1088 line image data, subjects each of the left-eye and right-eye image data portions to scaling processing, and generates left-eye and right-eye image data for stereoscopic image display.
- the interpretation information is inserted at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data.
- from the image data switching timing, the receiver 200 interprets the parameter values of the cropping information based on the interpretation information that was inserted at a timing preceding the switching timing and changed in correspondence with the image data after switching. That is, immediately from the switching timing, the receiver 200 cuts out the image data by interpreting the cropping information suitably for the image data after switching, and generates display image data.
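One way to model this timing behaviour is sketched below (illustrative Python; the explicit activation time attached to each piece of interpretation information is our modelling assumption, since the text only states that the changed information is inserted ahead of the switch and applied from the switching timing):

```python
import bisect

class InterpretationSelector:
    """Keeps (activation_time, flag) pairs received in advance and returns
    the interpretation in force at a given frame time. The activation time
    models the 2D/3D switching timing the information is sent ahead of."""

    def __init__(self, initial_flag=True):
        self._times = [float("-inf")]
        self._flags = [initial_flag]   # True: interpret crop params as-is (2D)

    def receive(self, activation_time, flag):
        # interpretation info may arrive well before its activation time
        i = bisect.bisect(self._times, activation_time)
        self._times.insert(i, activation_time)
        self._flags.insert(i, flag)

    def flag_for(self, frame_time):
        # latest interpretation whose activation time has been reached
        i = bisect.bisect_right(self._times, frame_time) - 1
        return self._flags[i]

sel = InterpretationSelector()
sel.receive(100, False)   # sent ahead of a 2D -> 3D switch at t = 100
```

Because the changed information is stored ahead of time, frames before t = 100 still use the as-is interpretation, and the special interpretation takes effect exactly from the switching timing.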
- FIG. 4 schematically shows the reception process for side-by-side stereoscopic image data in a 1920 × 1080 pixel format. After decoding, image data of 1920 pixels × 1088 lines is obtained, of which 8 lines are blank data.
- when two-dimensional image display is performed, the cropping information (the white circles indicate the offset positions) is interpreted as it is. Therefore, based on this cropping information, for example, the 960 pixels × 1080 lines of left-eye image data is cut out of the 1920 pixels × 1080 lines of image data including the substantial image data.
- the left eye image data is subjected to horizontal scaling processing to generate image data for two-dimensional image display. In this case, the two-dimensional image is correctly displayed.
- when stereoscopic image display is performed, the cropping information (the white circles indicate the offset positions) is interpreted so that the cropping area is doubled in the horizontal direction (the hatched circles indicate the changed offset positions). Therefore, based on this cropping information, image data of 1920 pixels × 1080 lines including the substantial image data is cut out. Then, since it is side-by-side stereoscopic image data, the cut-out image data is divided into left and right halves, each half is subjected to horizontal scaling processing, and left-eye and right-eye image data for stereoscopic image display are generated. In this case, the stereoscopic image is correctly displayed.
- FIG. 5 schematically shows the reception process for top-and-bottom stereoscopic image data in a 1920 × 1080 pixel format. After decoding, image data of 1920 pixels × 1088 lines is obtained, of which 8 lines are blank data.
- when two-dimensional image display is performed, the cropping information (the white circles indicate the offset positions) is interpreted as it is. Therefore, based on this cropping information, for example, the left-eye image data of 1920 pixels × 540 lines is extracted from the 1920 pixels × 1080 lines of image data including the substantial image data. The left-eye image data is then subjected to vertical scaling processing to generate image data for two-dimensional image display. In this case, the two-dimensional image is correctly displayed.
- when stereoscopic image display is performed, the cropping information (the white circles indicate the offset positions) is interpreted so that the cropping area is doubled in the vertical direction (the hatched circles indicate the changed offset positions). Therefore, based on this cropping information, image data of 1920 pixels × 1080 lines including the substantial image data is cut out. Since this is top-and-bottom stereoscopic image data, the cut-out image data is divided into upper and lower halves, each half is subjected to vertical scaling processing, and left-eye and right-eye image data for stereoscopic image display are generated. In this case, the stereoscopic image is correctly displayed.
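The two interpretations used in FIGS. 4 and 5 can be condensed into one illustrative Python function (the names and the rectangle convention are ours, not the patent's):

```python
def interpret_cropping(rect, frame_packing):
    """Interpret the signalled crop rectangle (x0, y0, x1, y1) according
    to the interpretation information: as-is for 2D image data, area
    doubled horizontally or vertically for frame-compatible data."""
    x0, y0, x1, y1 = rect
    if frame_packing == "side_by_side":    # double in the horizontal direction
        return x0, y0, x0 + 2 * (x1 - x0), y1
    if frame_packing == "top_and_bottom":  # double in the vertical direction
        return x0, y0, x1, y0 + 2 * (y1 - y0)
    return rect                            # 2D: parameter values used as-is

# side-by-side: SPS signals the 960 x 1080 left-eye half
full_sbs = interpret_cropping((0, 0, 960, 1080), "side_by_side")
# top-and-bottom: SPS signals the 1920 x 540 upper half
full_tab = interpret_cropping((0, 0, 1920, 540), "top_and_bottom")
```

In both stereoscopic cases, the doubled rectangle recovers the full 1920 × 1080 frame, which is then split and scaled into the two eye images.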
- FIG. 6 illustrates a configuration example of the transmission data generation unit 110 that generates the above-described transport stream TS in the broadcast station 100.
- the transmission data generation unit 110 includes a data extraction unit (archive unit) 111, a video encoder 112, an audio encoder 113, and a multiplexer 114.
- a data recording medium 111a is detachably attached to the data extraction unit 111, for example.
- the data recording medium 111a is a disk-shaped recording medium, a semiconductor memory, or the like.
- image data of a plurality of programs transmitted by the transport stream TS is recorded.
- the image data of each program is, for example, two-dimensional image data or frame-compatible stereoscopic image data (hereinafter simply referred to as “stereoscopic image data”).
- the transmission format of the stereoscopic image data is, for example, a side-by-side method, a top-and-bottom method, or the like (see FIGS. 23A and 23B).
- the data extraction unit 111 sequentially extracts and outputs image data and audio data of the transmission target program from the data recording medium 111a.
- the video encoder 112 encodes the image data output from the data extraction unit 111 with H.264/AVC (Advanced Video Coding) to obtain encoded video data.
- the video encoder 112 generates a video stream (video elementary stream) including the encoded video data, using a stream formatter (not shown) provided in the subsequent stage.
- the video encoder 112 inserts cropping information into the header portion of this video stream.
- the cropping information is inserted into the SPS (Sequence Parameter Set) portion present in the head access unit of the GOP (see FIG. 2A).
- the audio encoder 113 performs encoding such as MPEG-2 Audio AAC on the audio data output from the data extraction unit 111 to generate an audio stream (audio elementary stream).
- the multiplexer 114 packetizes and multiplexes the elementary streams generated by the video encoder 112 and the audio encoder 113 to generate a transport stream (multiplexed data stream) TS.
- the multiplexer 114 inserts the interpretation information of the parameter value of the cropping information in a layer higher than the video stream.
- the multiplexer 114 inserts interpretation information corresponding to the image data after the switching at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data.
- this interpretation information is described in, for example, a descriptor inserted under the video elementary loop of the program map table.
- This descriptor is, for example, an existing AVC video descriptor (AVC video descriptor) or a newly defined cropping interpretation descriptor (Cropping_interpretation_descriptor).
- FIG. 7 shows a configuration example of the transport stream TS.
- flag information of “cropping_normal_interpretation_flag” as interpretation information of a parameter value of cropping information is described in an existing AVC video descriptor.
- the transport stream TS includes the PES packet “Video PES1” of the video stream. If the image data included in this video stream is stereoscopic image data, as described above, a “Frame Packing Arrangement SEI message” is inserted in the SEIs portion of the access unit. This SEI includes type information indicating the transmission format of the stereoscopic image data.
- the transport stream TS includes a PMT (Program Map Table) as PSI (Program Specific Information). The PSI is information describing to which program each elementary stream included in the transport stream belongs. The transport stream also includes an EIT (Event Information Table) as SI (Service Information) for managing each event.
- the PMT includes a program descriptor (Program Descriptor) that describes information related to the entire program.
- the PMT includes an elementary loop having information related to each elementary stream. In this configuration example, there is a video elementary loop (Video ES loop), in which information such as the stream type and the packet identifier (PID) is arranged for the video stream.
- flag information of “cropping_normal_interpretation_flag” is described in “AVC_video_descriptor” included in a video elementary loop (Video ES loop).
- FIG. 8A shows another configuration example of the transport stream TS.
- This configuration example is an example in which flag information of “cropping_normal_interpretation_flag” as interpretation information of the parameter value of the cropping information is described in a newly defined cropping interpretation descriptor.
- the flag information of “cropping_normal_interpretation_flag” is described in “Cropping_interpretation_descriptor” inserted in the video elementary loop (Video ES loop). Although the detailed description is omitted, the rest of this configuration example is the same as the configuration example shown in FIG. 7.
- “Cropping_interpretation_descriptor” may be inserted under the EIT as shown in FIG. 8B.
- FIG. 9 shows a structural example (Syntax) of “AVC_video_descriptor”. This descriptor itself is already defined in the H.264/AVC standard. Here, 1-bit flag information of “cropping_normal_interpretation_flag” is newly defined in this descriptor.
- this flag information indicates whether the parameter value of the cropping information defined by the SPS (Sequence Parameter Set) in the head access unit of the GOP should be applied as it is, or whether it should be specially interpreted.
- when this flag information is “0”, it indicates that the parameter value of the cropping information should be specially interpreted.
- in this case, the receiver replaces the parameter on the left side with the value on the right side in (1) or (2) below.
- a cropping position is set accordingly, and cropping is performed based on that position.
- whether (1) or (2) applies can be determined, for example, by whether or not the value interpreted under (1) falls within the range of the picture size (picture size).
- (1) frame_crop_right_offset → frame_crop_right_offset * 2, frame_crop_left_offset → 0
- (2) frame_crop_bottom_offset → frame_crop_bottom_offset * 2, frame_crop_top_offset → 0
- if none of the above applies, the receiver interprets the parameter value of the cropping information defined by the SPS as it is and applies the cropping.
- when this flag information is “1”, the receiver interprets the parameter value of the cropping information defined by the SPS as it is, and performs cropping.
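- the substitution above can be sketched in code. The following is a minimal illustration, not from the source: the names are invented, the offsets are treated as plain pixel coordinates, and the choice between (1) and (2) follows the picture-size test described above.

```python
def reinterpret_cropping(c, horizontal_size):
    """Apply interpretation (1) or (2) when cropping_normal_interpretation_flag is "0".

    c holds the four frame_crop_*_offset values. Interpretation (1) is chosen
    when the doubled right offset still falls within the picture size;
    otherwise interpretation (2) is applied.
    """
    out = dict(c)
    if c["frame_crop_right_offset"] * 2 <= horizontal_size:
        # (1) side-by-side: double the cropping area horizontally
        out["frame_crop_right_offset"] = c["frame_crop_right_offset"] * 2
        out["frame_crop_left_offset"] = 0
    else:
        # (2) top-and-bottom: double the cropping area vertically
        out["frame_crop_bottom_offset"] = c["frame_crop_bottom_offset"] * 2
        out["frame_crop_top_offset"] = 0
    return out
```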
- FIG. 11 shows a structural example (Syntax) of “Cropping_interpretation_descriptor”.
- the 8-bit field of “descriptor_tag” indicates that this descriptor is “Cropping_interpretation_descriptor”.
- the 8-bit field of “descriptor_length” indicates the number of subsequent data bytes. In this descriptor, 1-bit flag information of “cropping_normal_interpretation_flag” described above is described.
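- as a sketch, this descriptor could be parsed as follows. The tag value and the bit position of the flag within the payload are assumptions for illustration; only the tag/length/payload layout follows standard MPEG-2 descriptor structure.

```python
def parse_cropping_interpretation_descriptor(buf: bytes):
    """Parse a Cropping_interpretation_descriptor-style byte string (sketch)."""
    descriptor_tag = buf[0]        # 8-bit descriptor_tag
    descriptor_length = buf[1]     # 8-bit number of subsequent data bytes
    payload = buf[2:2 + descriptor_length]
    # cropping_normal_interpretation_flag: assumed here to occupy the MSB
    # of the first payload byte (the bit position is not fixed by the text)
    flag = (payload[0] >> 7) & 0x01
    return descriptor_tag, descriptor_length, flag
```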
- the operation of the transmission data generation unit 110 shown in FIG. 6 will be briefly described.
- the image data (two-dimensional image data or stereoscopic image data) of the transmission target program sequentially output from the data extraction unit 111 is supplied to the video encoder 112.
- in the video encoder 112, the image data is encoded by H.264/AVC (Advanced Video Coding), and encoded video data is obtained.
- a video stream (video elementary stream) including the encoded video data is generated by a stream formatter (not shown) provided in the subsequent stage.
- the video encoder 112 inserts cropping information into the header portion of this video data stream. That is, in this case, cropping information is inserted into the SPS (Sequence Parameter Set) portion present in the head access unit of the GOP (see FIGS. 2 and 3). Further, in the video encoder 112, when the image data is stereoscopic image data, a “Frame Packing Arrangement SEI message” is inserted in the SEIs portion of the access unit (see FIG. 2). This SEI includes type information indicating the transmission format of the stereoscopic image data.
- the audio data corresponding to the image data is also output from the data extracting unit 111.
- This audio data is supplied to the audio encoder 113.
- the audio encoder 113 performs encoding such as MPEG-2 Audio AAC on the audio data, and generates an audio stream (audio elementary stream) including the encoded audio data.
- the video stream generated by the video encoder 112 is supplied to the multiplexer 114.
- the audio stream generated by the audio encoder 113 is supplied to the multiplexer 114.
- in the multiplexer 114, the elementary streams supplied from each encoder are packetized and multiplexed to generate a transport stream (multiplexed data stream) TS.
- the multiplexer 114 inserts the interpretation information of the parameter value of the cropping information in the upper layer of the video data stream.
- interpretation information corresponding to the image data after the switching is inserted at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data.
- flag information of “cropping_normal_interpretation_flag” as interpretation information is described in, for example, a descriptor inserted under the video elementary loop of the program map table (see FIGS. 7, 8, 9, and 11).
- interpretation information of the parameter value of the cropping information is inserted into a layer higher than the video stream. Therefore, regardless of whether the image data is two-dimensional image data or stereoscopic image data, the receiving side can appropriately interpret the parameter value of the cropping information based on this interpretation information, appropriately perform the clipping process (cropping) using the cropping information, and correctly generate the display image data.
- the interpretation information corresponding to the image data after the switching is inserted into a layer higher than the video stream at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data. Therefore, on the receiving side, the interpretation information changed corresponding to the image data after switching can be acquired before the switching timing. The parameter value of the cropping information can therefore be interpreted suitably for the image data after switching immediately from the switching timing, the image data can be cut out (cropped) correctly, and unnatural image display caused by the switching of the image data can be avoided.
- FIG. 12 shows a configuration example of the receiver (3DTV) 200.
- the receiver 200 includes a CPU 201, a flash ROM 202, a DRAM 203, an internal bus 204, a remote control receiver (RC receiver) 205, and a remote control transmitter (RC transmitter) 206.
- the receiver 200 includes an antenna terminal 210, a digital tuner 211, a demultiplexer 213, a video decoder 214, view buffers 217L and 217R, an audio decoder 218, and a channel processor 219.
- the CPU 201 controls the operation of each unit of receiver 200.
- the flash ROM 202 stores control software and data.
- the DRAM 203 constitutes a work area for the CPU 201.
- the CPU 201 develops software and data read from the flash ROM 202 on the DRAM 203 and activates the software to control each unit of the receiver 200.
- the RC receiving unit 205 receives a remote control signal (remote control code) transmitted from the RC transmitter 206 and supplies it to the CPU 201.
- the CPU 201 controls each unit of the receiver 200 based on this remote control code.
- the CPU 201, flash ROM 202 and DRAM 203 are connected to the internal bus 204.
- the antenna terminal 210 is a terminal for inputting a television broadcast signal received by a receiving antenna (not shown).
- the digital tuner 211 processes the television broadcast signal input to the antenna terminal 210 and outputs a predetermined transport stream TS corresponding to the user's selected channel.
- the transport stream TS has a video stream including image data, and cropping information is inserted in the header portion.
- the image data is two-dimensional image data or stereoscopic image data.
- the flag information “cropping_normal_interpretation_flag” as the interpretation information of the parameter value of the cropping information is inserted in a higher layer than the video stream.
- this interpretation information is described in, for example, a descriptor inserted under the program map table or the event information table.
- This descriptor is, for example, an existing AVC video descriptor or a newly defined cropping interpretation descriptor.
- the interpretation information corresponding to the image data after the switching is inserted in a higher layer than the video stream at a timing preceding the switching timing of the two-dimensional image data and the stereoscopic image data.
- the demultiplexer 213 extracts video and audio streams from the transport stream TS output from the digital tuner 211.
- the demultiplexer 213 extracts information such as a program map table (PMT) from the transport stream TS and supplies the information to the CPU 201.
- This information includes the flag information of “cropping_normal_interpretation_flag” as interpretation information of the parameter value of the cropping information as described above.
- the CPU 201 interprets the parameter value of the cropping information based on the flag information, and controls the image data cut-out process (cropping) from the image data after decoding.
- the video decoder 214 performs a process reverse to that of the video encoder 112 of the transmission data generator 110 described above. That is, the video decoder unit 214 performs a decoding process on the encoded image data included in the video stream extracted by the demultiplexer 213 to obtain decoded image data.
- in the transmission data generation unit 110 of the broadcasting station 100, encoding is performed for each 16 × 16 block, so that 8 lines of blank data are added and the image data is encoded as image data of 1920 pixels × 1088 lines. Therefore, the video decoder unit 214 acquires 1920-pixel × 1088-line image data, to which 8 lines of blank data are added, as the decoded image data.
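- the 1088-line figure follows from rounding the picture height up to the 16 × 16 macroblock grid; a small check (function name illustrative, not from the source):

```python
import math

def coded_size(width, height, mb=16):
    """Round picture dimensions up to the macroblock grid used for encoding."""
    return math.ceil(width / mb) * mb, math.ceil(height / mb) * mb

# 1080 lines round up to 1088, i.e. 8 lines of blank data are appended
```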
- the video decoder unit 214 extracts header information of the video data stream, and supplies this header information to the CPU 201.
- cropping information is included in the SPS portion of the head access unit of the GOP.
- a “Frame Packing Arrangement SEI message” including the type information is inserted in the SEIs portion of the access unit.
- the CPU 201 controls image data cut-out processing (cropping) from the decoded image data based on the cropping information and the SEI.
- the video decoder unit 214 performs image data cut-out processing (cropping) from the decoded image data under the control of the CPU 201, and appropriately performs scaling to generate image data for display.
- the video decoder 214 performs the following process when the image data is two-dimensional image data. That is, the video decoder unit 214 cuts out 1920-pixel × 1080-line image data including the substantial image data from the decoded 1920-pixel × 1088-line image data, and generates image data SV for two-dimensional image display.
- when the image data is stereoscopic image data and the two-dimensional display mode is set, the video decoder 214 performs the following processing. That is, the video decoder unit 214 cuts out the left-eye image data or the right-eye image data from the 1920-pixel × 1080-line image data including the substantial image data within the decoded 1920-pixel × 1088-line image data. Furthermore, the video decoder unit 214 scales the cut-out image data to generate image data SV for two-dimensional image display (see the 2D display mode in FIGS. 4 and 5).
- when the image data is stereoscopic image data and the stereoscopic display mode is set, the video decoder 214 performs the following processing. That is, the video decoder unit 214 cuts out 1920-pixel × 1080-line image data including the substantial image data from the decoded 1920-pixel × 1088-line image data.
- the video decoder unit 214 divides the cut-out image data into left and right, or up and down, scales each part, and generates left-eye image data SL and right-eye image data SR for stereoscopic image display (see the 3D display mode in FIGS. 4 and 5). In this case, side-by-side stereoscopic image data is divided into left and right, and top-and-bottom stereoscopic image data is divided up and down.
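- the divide-and-scale step can be sketched as follows; this is an illustration only, using naive pixel/line doubling in place of a real scaler, with frames modeled as 2D lists of pixel values (all names invented).

```python
def split_and_scale(frame, side_by_side=True):
    """Divide a cropped frame into two views and restore each to full size.

    Uses naive doubling as a stand-in for real scaling (illustrative only).
    """
    if side_by_side:
        w = len(frame[0]) // 2
        halves = ([row[:w] for row in frame], [row[w:] for row in frame])
        # double every pixel horizontally
        return tuple([[p for p in row for _ in (0, 1)] for row in h]
                     for h in halves)
    h = len(frame) // 2
    halves = (frame[:h], frame[h:])
    # double every line vertically
    return tuple([row for row in half for _ in (0, 1)] for half in halves)
```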
- the view buffer 217L temporarily accumulates the 1920-pixel × 1080-line two-dimensional image data SV or left-eye image data SL generated by the video decoder unit 214 and outputs it to an image output unit such as a display.
- the view buffer 217R temporarily accumulates the 1920-pixel × 1080-line right-eye image data SR generated by the video decoder 214 and outputs it to an image output unit such as a display.
- the audio decoder 218 performs processing opposite to that of the audio encoder 113 of the transmission data generation unit 110 described above. That is, the audio decoder 218 performs a decoding process on the encoded audio data included in the audio stream extracted by the demultiplexer 213 to obtain decoded audio data.
- the channel processing unit 219 generates audio data SA of each channel for realizing, for example, 5.1ch surround for the audio data obtained by the audio decoder 218, and outputs the audio data SA to an audio output unit such as a speaker.
- the flowchart in FIG. 13 shows an example of the cropping control process of the CPU 201.
- the CPU 201 executes the processing of this flowchart for each picture.
- the CPU 201 starts processing in step ST1, and thereafter proceeds to the process of step ST2.
- in step ST2, the CPU 201 determines whether or not the 3D display mode is set. Note that the user can set the 3D display mode or the 2D display mode by operating the RC transmitter 206.
- when the 3D display mode is set, in step ST3 the CPU 201 determines whether or not “cropping_normal_interpretation_flag”, which is the interpretation information of the parameter value of the cropping information, is “0”. This flag information is set to “0” when the image data is stereoscopic image data in a 3D service that takes 2D compatibility into consideration.
- when the flag information is “0”, in step ST4 the CPU 201 determines whether or not the SEI of “Frame Packing Arrangement SEI message” is detected. This SEI exists when the image data is stereoscopic image data. When detecting this SEI, the CPU 201 determines in step ST5 whether (frame_crop_right_offset − frame_crop_left_offset) is equal to 1/2 of the horizontal size (horizontal_size) of the picture.
- in the case of side-by-side stereoscopic image data, the condition of step ST5 is satisfied. Therefore, when the condition of step ST5 is satisfied, the CPU 201 proceeds to the process of step ST6. In step ST6, the CPU 201 interprets the cropping information so as to double the cropping area in the horizontal direction, and performs the cropping control process.
- CPU 201 ends the process in step ST7 after the process of step ST6.
- when the condition of step ST5 is not satisfied, in step ST8 the CPU 201 determines whether (frame_crop_bottom_offset − frame_crop_top_offset) is equal to 1/2 of the vertical size (vertical_size) of the picture.
- in the case of top-and-bottom stereoscopic image data, the condition of step ST8 is satisfied. Therefore, when the condition of step ST8 is satisfied, the CPU 201 proceeds to the process of step ST9. In step ST9, the CPU 201 interprets the cropping information so as to double the cropping area in the vertical direction, and performs the cropping control process.
- otherwise, in step ST10 the CPU 201 performs the cropping control process with the parameter value of the cropping information interpreted as it is. After the process in step ST10, the CPU 201 ends the process in step ST7.
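- the decision flow of FIG. 13 can be condensed into a single function; this is a sketch with illustrative names (not from the source), returning how the SPS cropping parameters should be interpreted.

```python
def cropping_control(display_3d, flag, sei_detected, c,
                     horizontal_size, vertical_size):
    """Decide the cropping interpretation per the FIG. 13 flow (sketch)."""
    if display_3d and flag == 0 and sei_detected:              # steps ST2-ST4
        if (c["frame_crop_right_offset"] - c["frame_crop_left_offset"]
                == horizontal_size // 2):                      # step ST5
            return "double_horizontal"   # side-by-side (step ST6)
        if (c["frame_crop_bottom_offset"] - c["frame_crop_top_offset"]
                == vertical_size // 2):                        # step ST8
            return "double_vertical"     # top-and-bottom (step ST9)
    return "as_is"                                             # step ST10
```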
- FIG. 14 shows an example of the operation of flag information of “cropping_normal_interpretation_flag” described in the AVC video descriptor (AVC_video_descriptor) under the PMT inserted in the system layer.
- the maximum insertion cycle of PMT is 100 msec. For this reason, the PMT insertion timing and the video frame timing do not necessarily match.
- the 3D display mode is assumed.
- the image data is switched from the two-dimensional image data to the stereoscopic image data at the timing Tb.
- the AVC video descriptor, in which the flag information of “cropping_normal_interpretation_flag” corresponding to the image data after switching is described, is acquired at timing Ta preceding timing Tb.
- the CPU 201 does not specially interpret the parameter value of the cropping information until timing Tb; it interprets the value as it is and performs the cropping control process. Therefore, until timing Tb, the video decoder unit 214 correctly generates the image data SV for two-dimensional image display.
- the SEI of “Frame Packing Arrangement SEI message” is detected.
- the type information of the stereoscopic image data included in this SEI is “3”, and it can be seen that the stereoscopic image data is side-by-side type stereoscopic image data.
- the CPU 201 specially interprets the parameter value of the cropping information from this timing Tb, and performs a cropping control process. Therefore, from the timing Tb, the video decoder unit 214 correctly generates the image data SL and SR for stereoscopic image display.
- the image data is switched from the stereoscopic image data to the two-dimensional image data at the timing Td.
- the AVC video descriptor, in which the flag information of “cropping_normal_interpretation_flag” corresponding to the image data after switching is described, is acquired at timing Tc preceding timing Td.
- “cropping_normal_interpretation_flag” is always set to “0” during this period so that correct display is performed even when channel switching occurs at timing Td; the receiver side interprets the parameter value of the cropping information accordingly and can determine the display range.
- the SEI of “Frame Packing Arrangement SEI message” is not detected.
- the CPU 201 interprets the parameter value of the cropping information as it is from this timing Td, and performs the cropping control process. Therefore, from the timing Td, the video decoder 214 correctly generates image data SV for two-dimensional image display.
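- in effect, the descriptor only arms the special interpretation, which takes effect per picture only while the “Frame Packing Arrangement SEI message” is present; this gating is what keeps the display correct across the Ta/Tb and Tc/Td boundaries. A compressed sketch (illustrative names, not from the source):

```python
def interpret_specially(flag_from_descriptor, sei_in_picture):
    """The special interpretation applies only when both signals agree."""
    return flag_from_descriptor == 0 and sei_in_picture

# around the 2D -> 3D -> 2D switches: the flag is already "0" at Ta,
# but the SEI appears only from Tb and disappears again at Td
timeline = [("Ta", 0, False), ("Tb", 0, True), ("Td", 0, False)]
states = [interpret_specially(flag, sei) for _, flag, sei in timeline]
```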
- a television broadcast signal input to the antenna terminal 210 is supplied to the digital tuner 211.
- in the digital tuner 211, the television broadcast signal is processed, and a predetermined transport stream TS corresponding to the user's selected channel is output.
- in the demultiplexer 213, video and audio elementary streams are extracted from the transport stream TS obtained by the digital tuner 211.
- the demultiplexer 213 extracts information such as a program map table (PMT) from the transport stream TS, and supplies the information to the CPU 201.
- This information also includes flag information of “cropping_normal_interpretation_flag” as interpretation information of the parameter value of the cropping information.
- the video stream extracted by the demultiplexer 213 is supplied to the video decoder unit 214.
- in the video decoder unit 214, decoding processing is performed on the encoded image data included in the video stream, and decoded image data (two-dimensional image data or stereoscopic image data) is obtained.
- This image data is 1920-pixel × 1088-line image data to which 8 lines of blank data are added.
- in the video decoder unit 214, header information of the video data stream is extracted, and this header information is supplied to the CPU 201. This header information includes the cropping information and the SEI of “Frame Packing Arrangement SEI message”.
- the CPU 201 performs cropping control on the video decoder unit 214 based on the cropping information, the interpretation information of its parameter values, and the SEI including the type information of the stereoscopic image data. In this case, when the image data is two-dimensional image data, or when it is stereoscopic image data in the 2D display mode, the CPU 201 interprets the parameter value of the cropping information as it is. Further, when the image data is stereoscopic image data in the 3D display mode, the CPU 201 interprets the cropping information so that the cropping area is doubled in the horizontal direction or the vertical direction.
- image data cut-out processing is performed based on the interpreted cropping information from the decoded image data under the control of the CPU 201. Further, the video decoder unit 214 appropriately scales the cut-out image data to generate display image data.
- when the image data is two-dimensional image data, the video decoder unit 214 performs the following processing. That is, the video decoder unit 214 cuts out 1920-pixel × 1080-line image data including the substantial image data from the decoded 1920-pixel × 1088-line image data, and generates image data SV for two-dimensional image display.
- when the image data is stereoscopic image data and the two-dimensional display mode is set, the video decoder unit 214 performs the following processing. That is, the video decoder unit 214 cuts out the left-eye image data or the right-eye image data from the 1920-pixel × 1080-line image data including the substantial image data within the decoded 1920-pixel × 1088-line image data. Further, the video decoder unit 214 scales the cut-out image data and generates image data SV for two-dimensional image display.
- when the image data is stereoscopic image data and the stereoscopic display mode is set, the video decoder unit 214 performs the following processing. That is, the video decoder 214 cuts out 1920-pixel × 1080-line image data including the substantial image data from the decoded 1920-pixel × 1088-line image data. Further, the video decoder unit 214 divides the cut-out image data into left and right, or up and down, scales each part, and generates left-eye image data SL and right-eye image data SR for stereoscopic image display.
- the image data SV for two-dimensional image display or the left-eye image data SL for stereoscopic image display generated by the video decoder unit 214 is output to an image output unit such as a display through the view buffer 217L.
- the right-eye image data SR for stereoscopic image display generated by the video decoder unit 214 is output to an image output unit such as a display through the view buffer 217R.
- the audio stream extracted by the demultiplexer 213 is supplied to the audio decoder 218.
- the audio decoder 218 performs a decoding process on the encoded audio data included in the audio stream to obtain decoded audio data.
- This audio data is supplied to the channel processing unit 219.
- the channel processing unit 219 generates audio data SA for each channel for realizing 5.1ch surround, for example, for the audio data.
- the audio data SA is output to an audio output unit such as a speaker.
- as described above, the CPU 201 appropriately interprets the cropping information inserted into the header portion of the video stream, based on the interpretation information of the parameter value of the cropping information inserted in a layer higher than the video stream.
- the CPU 201 controls the image data clipping process (cropping) in the video decoder unit 214. Therefore, regardless of whether the image data is two-dimensional image data or stereoscopic image data, the video decoder unit 214 can appropriately perform image data cut-out processing, and display image data can be generated correctly.
- the CPU 201 obtains the interpretation information changed corresponding to the image data after switching, prior to the switching timing of the image data.
- the interpretation of the parameter value of the cropping information based on this interpretation information is applied from the point at which the image data is actually switched. Therefore, the parameter value of the cropping information can be interpreted suitably for the image data after switching immediately from the switching timing, and the image data can be cut out appropriately. Further, even if the acquisition of the interpretation information is not synchronized with the switching timing of the image data, unnatural image display can be avoided.
- a legacy 2D receiver (2DTV) also receives the transport stream TS from the broadcasting station 100 in the image transmission / reception system 10 shown in FIG. 1.
- the legacy 2D receiver skips the interpretation information of the parameter value of the cropping information inserted in the higher layer than the video stream. Therefore, the interpretation information does not affect the cropping process in the 2D receiver.
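- this skipping is possible because MPEG-2 descriptor loops are length-delimited: a receiver that does not recognize a descriptor_tag simply steps over descriptor_length bytes. A minimal loop walker illustrating the mechanism (the tag values used in the test are arbitrary):

```python
def iter_descriptors(loop: bytes):
    """Yield (tag, payload) pairs from a PMT/EIT descriptor loop."""
    i = 0
    while i + 2 <= len(loop):
        tag, length = loop[i], loop[i + 1]
        yield tag, loop[i + 2:i + 2 + length]
        i += 2 + length  # unknown tags are stepped over by their length
```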
- FIG. 15 shows a configuration example of the transport stream TS.
- This configuration example is an example in which mode information of “cropping_interpretation_mode” as interpretation information of a parameter value of cropping information is described in an existing AVC video descriptor.
- the transport stream TS includes the PES packet “Video PES” of the video stream. If the image data included in this video stream is stereoscopic image data, as described above, a “Frame Packing Arrangement SEI message” is inserted in the SEIs portion of the access unit. This SEI includes type information indicating the transmission format of the stereoscopic image data.
- the transport stream TS includes a PMT (Program Map Table) as PSI (Program Specific Information). This PSI is information describing to which program each elementary stream included in the transport stream belongs. Further, the transport stream includes an EIT (Event Information Table) as SI (Service Information) for managing each event.
- the PMT includes a program descriptor (Program Descriptor) that describes information related to the entire program.
- the PMT includes an elementary loop having information related to each elementary stream. In this configuration example, there is a video elementary loop (Video ES loop).
- mode information of “cropping_interpretation_mode” is described in “AVC_video_descriptor” included in the video elementary loop (Video ES loop).
- FIG. 16A shows another configuration example of the transport stream TS.
- This configuration example is an example in which mode information of “cropping_interpretation_mode” as interpretation information of a parameter value of cropping information is described in a newly defined cropping interpretation descriptor.
- mode information of “cropping_interpretation_mode” is described in “Cropping_interpretation_descriptor” inserted in a video elementary loop (Video ES loop).
- “Cropping_interpretation_descriptor” may be inserted under the EIT as shown in FIG. 16B.
- FIG. 17 shows a structural example (Syntax) of “AVC_video_descriptor”. This descriptor itself is already defined in the H.264/AVC standard. Here, 2-bit mode information of “cropping_interpretation_mode” is newly defined in this descriptor.
- This mode information specifies how to interpret the parameter value of the cropping information defined by the SPS (Sequence Parameter Set) in the head access unit of the GOP, as shown in the prescribed content (semantics) of FIG. 18. “01” specifies that the value of frame_crop_right_offset is to be interpreted as doubled; this is for side-by-side stereoscopic image data. “10” specifies that the value of frame_crop_bottom_offset is to be interpreted as doubled; this is for top-and-bottom stereoscopic image data. “11” specifies that the parameter value of the cropping information is to be interpreted as it is.
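- under these semantics, the mode value directly selects which offset is read as doubled; a sketch (names illustrative; the treatment of the remaining value “00” is an assumption, since the text does not define it):

```python
def apply_cropping_interpretation_mode(mode, c):
    """Reinterpret SPS cropping offsets per the 2-bit cropping_interpretation_mode."""
    out = dict(c)
    if mode == 0b01:                                   # side-by-side
        out["frame_crop_right_offset"] = c["frame_crop_right_offset"] * 2
    elif mode == 0b10:                                 # top-and-bottom
        out["frame_crop_bottom_offset"] = c["frame_crop_bottom_offset"] * 2
    # "11" (and, by assumption, "00") leaves the parameters as they are
    return out
```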
- FIG. 19 shows a structural example (Syntax) of “Cropping_interpretation_descriptor”.
- the 8-bit field of “descriptor_tag” indicates that this descriptor is “Cropping_interpretation_descriptor”.
- the 8-bit field of “descriptor_length” indicates the number of subsequent data bytes.
- in this descriptor, the 2-bit mode information of “cropping_interpretation_mode” described above is described.
- under the control of the CPU 201, the video decoder unit 214 of the receiver 200 performs the same processing even when the mode information of “cropping_interpretation_mode” is used instead of the flag information of “cropping_normal_interpretation_flag”.
- when the image data is two-dimensional image data, the video decoder unit 214 performs the following processing. That is, the video decoder unit 214 cuts out 1920-pixel × 1080-line image data including the substantial image data from the decoded 1920-pixel × 1088-line image data, and generates image data SV for two-dimensional image display.
- when the image data is stereoscopic image data and the two-dimensional display mode is set, the video decoder unit 214 performs the following processing. That is, the video decoder unit 214 cuts out the left-eye image data or the right-eye image data from the 1920-pixel × 1080-line image data including the substantial image data within the decoded 1920-pixel × 1088-line image data. Then, the video decoder unit 214 scales the cut-out image data and generates image data SV for two-dimensional image display.
- when the image data is stereoscopic image data and the stereoscopic display mode is set, the video decoder unit 214 performs the following processing. That is, the video decoder 214 cuts out 1920-pixel × 1080-line image data including the substantial image data from the decoded 1920-pixel × 1088-line image data. Then, the video decoder unit 214 divides the cut-out image data into left and right, or up and down, scales each part, and generates left-eye image data SL and right-eye image data SR for stereoscopic image display.
- the flowchart of FIG. 20 shows an example of the cropping control process of the CPU 201 when the mode information of “cropping_interpretation_mode” is used.
- the CPU 201 executes the processing of this flowchart for each picture.
- the CPU 201 starts processing in step ST11, and thereafter proceeds to the process of step ST12.
- in step ST12, the CPU 201 determines whether or not the 3D display mode is set. Note that the user can set the 3D display mode or the 2D display mode by operating the RC transmitter 206.
- when the 3D display mode is set, the CPU 201 determines in step ST13 whether or not the mode information of “cropping_interpretation_mode” is “01”.
- when the mode information is “01”, the CPU 201 determines in step ST14 whether or not the SEI of “Frame Packing Arrangement SEI message” is detected. This SEI exists when the image data is stereoscopic image data.
- when detecting this SEI, the CPU 201 determines in step ST15 whether (frame_crop_right_offset − frame_crop_left_offset) is equal to 1/2 of the horizontal size (horizontal_size) of the picture.
- in the case of side-by-side stereoscopic image data, the condition of step ST15 is satisfied. Therefore, when the condition of step ST15 is satisfied, the CPU 201 proceeds to the process of step ST16. In step ST16, the CPU 201 interprets the cropping information so as to double the cropping area in the horizontal direction, and performs the cropping control process.
- CPU 201 ends the process in step ST17 after the process of step ST16.
- when the SEI is not detected in step ST14, or when the condition of step ST15 is not satisfied, in step ST18 the CPU 201 performs the cropping control process with the parameter value of the cropping information interpreted as it is. After the process in step ST18, the CPU 201 ends the process in step ST17.
- when the mode information is not “01” in step ST13, the CPU 201 proceeds to the process of step ST19.
- in step ST19, the CPU 201 determines whether or not the mode information of “cropping_interpretation_mode” is “10”. When the mode information is “10”, the CPU 201 determines in step ST20 whether or not the SEI of “Frame Packing Arrangement SEI message” is detected.
- when detecting this SEI, the CPU 201 determines in step ST21 whether (frame_crop_bottom_offset − frame_crop_top_offset) is equal to 1/2 of the vertical size (vertical_size) of the picture.
- in the case of top-and-bottom stereoscopic image data, the condition of step ST21 is satisfied, so the CPU 201 proceeds to step ST22. In step ST22, the CPU 201 interprets the cropping information so that the cropping area is doubled in the vertical direction, and performs the cropping control process.
- CPU 201 ends the process in step ST17 after the process of step ST22.
- when the mode information is not “10” in step ST19, when the SEI is not detected in step ST20, or when the condition of step ST21 is not satisfied, the CPU 201 proceeds to step ST18.
- in step ST18, the CPU 201 performs the cropping control process with the parameter values of the cropping information as they are. After the process in step ST18, the CPU 201 ends the process in step ST17.
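The branch conditions of the flowchart above can be summarized in a short sketch. This is an illustration only: the function and argument names are assumptions, not taken from the specification, and the offsets are treated directly in luma samples following the document's own formulation rather than strict H.264 cropping-unit semantics.

```python
def decide_crop_region(mode, fpa_sei_present, crop, horizontal_size, vertical_size):
    """Return the (width, height) of the region to cut out, following the
    decision flow of FIG. 20. Names are illustrative, not from the spec."""
    width = crop["frame_crop_right_offset"] - crop["frame_crop_left_offset"]
    height = crop["frame_crop_bottom_offset"] - crop["frame_crop_top_offset"]

    # Steps ST13-ST16: mode "01", FPA SEI present, and the cropped width is
    # half the picture -> side-by-side, so double the area horizontally.
    if mode == "01" and fpa_sei_present and width == horizontal_size // 2:
        return (2 * width, height)

    # Steps ST19-ST22: mode "10", FPA SEI present, and the cropped height is
    # half the picture -> top-and-bottom, so double the area vertically.
    if mode == "10" and fpa_sei_present and height == vertical_size // 2:
        return (width, 2 * height)

    # Step ST18: otherwise use the cropping parameters as they are.
    return (width, height)
```

For example, a 1920x1080 side-by-side frame signalled with frame_crop_right_offset = 960 would have its cropping area doubled to the full 1920 width, which the decoder then splits into the left eye and right eye images.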
- FIG. 21 shows an example of operating mode information of “cropping_interpretation_mode” described in the AVC video descriptor (AVC_video_descriptor) under the PMT inserted in the system layer.
- the maximum insertion cycle of PMT is 100 msec. For this reason, the PMT insertion timing and the video frame timing do not necessarily match.
- the 3D display mode is assumed.
- the image data is switched from the two-dimensional image data to the stereoscopic image data at the timing Tb.
- the AVC video descriptor in which the mode information of “cropping_interpretation_mode” corresponding to the image data after the switching is described is acquired at timing Ta preceding timing Tb.
- until the timing Tb, the CPU 201 does not interpret the value of frame_crop_right_offset as doubled but interprets it as it is, and performs the cropping control process. Therefore, until the timing Tb, the video decoder unit 214 correctly generates the image data SV for two-dimensional image display.
- the SEI of “Frame Packing Arrangement SEI message” is detected.
- the type information of the stereoscopic image data included in this SEI is “3”, and it can be seen that the stereoscopic image data is side-by-side type stereoscopic image data.
- from the timing Tb, the CPU 201 interprets the value of frame_crop_right_offset as doubled and performs the cropping control process. Therefore, from the timing Tb, the video decoder unit 214 correctly generates the image data SL and SR for stereoscopic image display.
- the image data is switched from the stereoscopic image data to the two-dimensional image data at the timing Td.
- the AVC video descriptor in which the mode information of “cropping_interpretation_mode” corresponding to the image data after switching is described is acquired at timing Tc preceding timing Td.
- the CPU 201 continues to interpret the value of frame_crop_right_offset as doubled until the timing Td, and performs the cropping control process. Therefore, until the timing Td, the video decoder unit 214 correctly generates the image data SL and SR for stereoscopic image display. This can be achieved by storing in the receiver that “cropping_interpretation_mode” was “01” or “10” in the previous state.
- it is also possible to always set “cropping_interpretation_mode” to “01” or “10”, leave the interpretation of the parameter values of the cropping information to the receiver side, and let the receiver determine the display range.
- the SEI of “Frame Packing Arrangement SEI message” is not detected.
- from this timing Td, the CPU 201 interprets the parameter values of the cropping information as they are and performs the cropping control process. Therefore, from the timing Td, the video decoder unit 214 correctly generates the image data SV for two-dimensional image display.
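The per-picture behaviour around the switching timings Tb and Td can be sketched as follows. This is a hypothetical illustration (the class and method names are assumptions): the mode latched from the PMT descriptor takes effect only together with the presence of the Frame Packing Arrangement SEI, and the remembered previous state keeps the doubled interpretation alive between Tc and Td, as described above.

```python
class CroppingController:
    """Sketch of the receiver-side control described above (names assumed)."""

    def __init__(self):
        self.mode = "00"       # latest cropping_interpretation_mode from the PMT
        self.prev_mode = "00"  # remembered previous state (see the text)

    def on_pmt(self, mode):
        # A PMT carrying the new mode may arrive early (timings Ta and Tc),
        # since the PMT insertion cycle is up to 100 msec.
        self.prev_mode, self.mode = self.mode, mode

    def effective_right_offset(self, fpa_sei_present, frame_crop_right_offset):
        # Double only while the FPA SEI is actually present in the picture;
        # the remembered previous mode covers the span between Tc and Td.
        if fpa_sei_present and "01" in (self.mode, self.prev_mode):
            return 2 * frame_crop_right_offset  # stereoscopic pictures
        return frame_crop_right_offset          # two-dimensional pictures
```

In this sketch, the early descriptor at Ta leaves the 2D pictures before Tb untouched because the SEI is still absent; from Tb the SEI appears and the offset is doubled; from Td the SEI disappears and the value is again used as it is.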
- the receiver 200 can perform the same operation as in the above-described embodiment. That is, even in this case, the same effect as that of the above-described embodiment can be obtained.
- in the above-described embodiment, an example in which the image data is encoded with H.264/AVC has been shown.
- the image data may be subjected to other encoding such as MPEG2 video, for example.
- the image data may be subjected to other encoding such as HEVC (High Efficiency Video Coding).
- stereoscopic image data type information is inserted into, for example, a picture header.
- the image transmission / reception system 10 including the broadcast station 100 and the receiver 200 is shown.
- the configuration of the image transmission / reception system to which the present technology can be applied is not limited thereto.
- the receiver 200 may have a configuration of a set top box and a monitor connected via a digital interface such as HDMI (High-Definition Multimedia Interface).
- in the above-described embodiment, an example in which the container is a transport stream (MPEG-2 TS) has been shown.
- the present technology can be similarly applied to a system configured to be distributed to receiving terminals using a network such as the Internet.
- in Internet distribution, content is often delivered in containers of MP4 or other formats.
- containers of various formats such as transport stream (MPEG-2 TS) adopted in the digital broadcasting standard and MP4 used in Internet distribution correspond to the container.
- the present technology can also take the following configurations.
- (1) An image data transmission device including: an image data transmission unit that transmits a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and an information insertion unit that inserts interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
- (2) The image data transmission device according to (1), wherein the interpretation information indicates that the parameter value of the cropping information should be specially interpreted when the image data is stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame.
- (3) The image data transmission device according to (2), wherein the interpretation information indicates that the parameter value of the cropping information should be interpreted so that the cropping area is doubled in the horizontal direction or the vertical direction.
- (4) The image data transmission device according to any one of (1) to (3), wherein the image data is two-dimensional image data or stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame, and the information insertion unit inserts the interpretation information, changed in correspondence with the image data after switching, into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data.
- (5) The image data transmission device according to any one of (1) to (4), wherein the container is a transport stream, and the information insertion unit inserts the interpretation information under a program map table or an event information table.
- (6) The image data transmission device according to (5), wherein the information insertion unit describes the interpretation information in a descriptor inserted under the program map table or the event information table.
- (7) The image data transmission device according to (6), wherein the video stream is H.264/AVC or HEVC encoded data, the cropping information is defined in a sequence parameter set of the video stream, and the information insertion unit describes the interpretation information in the descriptor inserted under the program map table or the event information table.
- (8) An image data transmission method including: an image data transmission step of transmitting a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and an information insertion step of inserting interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
- (9) An image data reception device including: an image data reception unit that receives a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion, wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream, the device further including: an information acquisition unit that acquires the interpretation information from the container; a decoding unit that decodes the video stream of the container to obtain the image data and the cropping information; and an image data processing unit that interprets the parameter value of the cropping information based on the interpretation information and cuts out image data of a predetermined region from the image data to generate image data for display.
- (10) The image data reception device according to (9), wherein the image data is two-dimensional image data or stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame, the interpretation information changed in correspondence with the image data after switching is inserted into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data, and the image data processing unit interprets, from the switching timing of the image data, the parameter value of the cropping information based on the interpretation information that was inserted at the timing preceding the switching timing and changed in correspondence with the image data after the switching.
- (11) An image data reception method including: an image data reception step of receiving a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion, wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream, the method further including: an information acquisition step of acquiring the interpretation information from the container; a decoding step of decoding the video stream of the container to obtain the image data and the cropping information; and an image data processing step of interpreting the parameter value of the cropping information based on the interpretation information and cutting out image data of a predetermined region from the image data to generate image data for display.
- the main feature of the present technology is that, when a container of a predetermined format having a video stream in which the cropping information is inserted in the header portion is transmitted, interpretation information of the parameter value of the cropping information is inserted into a layer higher than the video stream.
- DESCRIPTION OF SYMBOLS
- 10 ... Image transmission/reception system
- 100 ... Broadcast station
- 110 ... Transmission data generation unit
- 111 ... Data extraction unit
- 111a ... Data recording medium
- 112 ... Video encoder
- 113 ... Audio encoder
- 114 ... Multiplexer
- 200 ... Receiver
- 201 ... CPU
- 202 ... Flash ROM
- 203 ... DRAM
- 204 ... Internal bus
- 205 ... Remote control reception unit (RC reception unit)
- 206 ... Remote control transmitter (RC transmitter)
- 210 ... Antenna terminal
- 211 ... Digital tuner
- 213 ... Demultiplexer
- 214 ... Video decoder unit
- 217L, 217R ... View buffer
- 218 ... Audio decoder
- 219 ... Channel processing unit
Abstract
Description
The concept of the present technology resides in an image data transmission device including:
an image data transmission unit that transmits a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and
an information insertion unit that inserts interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
Another concept of the present technology resides in an image data reception device including:
an image data reception unit that receives a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion,
wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream,
the device further including: an information acquisition unit that acquires the interpretation information from the container;
a decoding unit that decodes the video stream of the container to obtain the image data and the cropping information; and
an image data processing unit that interprets the parameter value of the cropping information based on the interpretation information and cuts out image data of a predetermined region from the image data to generate image data for display.
1. Embodiment
2. Modification
[Image Transmission/Reception System]
FIG. 1 shows a configuration example of an image transmission/reception system 10 as an embodiment. This image transmission/reception system 10 includes a broadcast station 100 and a receiver (3DTV) 200. The broadcast station 100 transmits a transport stream TS having a video stream that includes image data on a broadcast wave.
FIG. 6 shows a configuration example of the transmission data generation unit 110 that generates the above-described transport stream TS in the broadcast station 100. This transmission data generation unit 110 includes a data extraction unit (archive unit) 111, a video encoder 112, an audio encoder 113, and a multiplexer 114.
(2) frame_crop_left_offset = 0
(4) frame_crop_top_offset = 0
FIG. 12 shows a configuration example of the receiver (3DTV) 200. This receiver 200 includes a CPU 201, a flash ROM 202, a DRAM 203, an internal bus 204, a remote control reception unit (RC reception unit) 205, and a remote control transmitter (RC transmitter) 206.
Control of cropping (image data cutout processing) in the video decoder unit 214 by the CPU 201 will be described. The CPU 201 controls cropping in the video decoder unit 214 based on the cropping information, the interpretation information of its parameter values, and further the SEI that contains the type information of the stereoscopic image data.
In the above-described embodiment, an example was shown in which the flag information “cropping_normal_interpretation_flag” is described as the interpretation information in a descriptor inserted under the video elementary loop of the program map table. Instead of this flag information, the mode information “cropping_interpretation_mode” described in detail below may be described in the descriptor as the interpretation information.
(1) An image data transmission device including: an image data transmission unit that transmits a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and an information insertion unit that inserts interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
(2) The image data transmission device according to (1), wherein the interpretation information indicates that the parameter value of the cropping information should be specially interpreted when the image data is stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame.
(3) The image data transmission device according to (2), wherein the interpretation information indicates that the parameter value of the cropping information should be interpreted so that the cropping area is doubled in the horizontal direction or the vertical direction.
(4) The image data transmission device according to any one of (1) to (3), wherein the image data is two-dimensional image data or stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame, and the information insertion unit inserts the interpretation information, changed in correspondence with the image data after switching, into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data.
(5) The image data transmission device according to any one of (1) to (4), wherein the container is a transport stream, and the information insertion unit inserts the interpretation information under a program map table or an event information table.
(6) The image data transmission device according to (5), wherein the information insertion unit describes the interpretation information in a descriptor inserted under the program map table or the event information table.
(7) The image data transmission device according to (6), wherein the video stream is H.264/AVC or HEVC encoded data, the cropping information is defined in a sequence parameter set of the video stream, and the information insertion unit describes the interpretation information in the descriptor inserted under the program map table or the event information table.
(8) An image data transmission method including: an image data transmission step of transmitting a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and an information insertion step of inserting interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
(9) An image data reception device including: an image data reception unit that receives a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion, wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream, the device further including: an information acquisition unit that acquires the interpretation information from the container; a decoding unit that decodes the video stream of the container to obtain the image data and the cropping information; and an image data processing unit that interprets the parameter value of the cropping information based on the interpretation information and cuts out image data of a predetermined region from the image data to generate image data for display.
(10) The image data reception device according to (9), wherein the image data is two-dimensional image data or stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame, the interpretation information changed in correspondence with the image data after switching is inserted into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data, and the image data processing unit interprets, from the switching timing of the image data, the parameter value of the cropping information based on the interpretation information that was inserted at the timing preceding the switching timing and changed in correspondence with the image data after the switching.
(11) An image data reception method including: an image data reception step of receiving a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion, wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream, the method further including: an information acquisition step of acquiring the interpretation information from the container; a decoding step of decoding the video stream of the container to obtain the image data and the cropping information; and an image data processing step of interpreting the parameter value of the cropping information based on the interpretation information and cutting out image data of a predetermined region from the image data to generate image data for display.
Claims (11)
- An image data transmission device including: an image data transmission unit that transmits a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and an information insertion unit that inserts interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
- The image data transmission device according to claim 1, wherein the interpretation information indicates that the parameter value of the cropping information should be specially interpreted when the image data is stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame.
- The image data transmission device according to claim 2, wherein the interpretation information indicates that the parameter value of the cropping information should be interpreted so that the cropping area is doubled in the horizontal direction or the vertical direction.
- The image data transmission device according to claim 1, wherein the image data is two-dimensional image data or stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame, and the information insertion unit inserts the interpretation information, changed in correspondence with the image data after switching, into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data.
- The image data transmission device according to claim 1, wherein the container is a transport stream, and the information insertion unit inserts the interpretation information under a program map table or an event information table.
- The image data transmission device according to claim 5, wherein the information insertion unit describes the interpretation information in a descriptor inserted under the program map table or the event information table.
- The image data transmission device according to claim 6, wherein the video stream is H.264/AVC or HEVC encoded data, the cropping information is defined in a sequence parameter set of the video stream, and the information insertion unit describes the interpretation information in the descriptor inserted under the program map table or the event information table.
- An image data transmission method including: an image data transmission step of transmitting a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion; and an information insertion step of inserting interpretation information of the parameter value of the cropping information into a layer higher than the video stream.
- An image data reception device including: an image data reception unit that receives a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion, wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream, the device further including: an information acquisition unit that acquires the interpretation information from the container; a decoding unit that decodes the video stream of the container to obtain the image data and the cropping information; and an image data processing unit that interprets the parameter value of the cropping information based on the interpretation information and cuts out image data of a predetermined region from the image data to generate image data for display.
- The image data reception device according to claim 9, wherein the image data is two-dimensional image data or stereoscopic image data in which left eye image data and right eye image data are divided and arranged in the horizontal direction or the vertical direction within the same frame, the interpretation information changed in correspondence with the image data after switching is inserted into a layer higher than the video stream at a timing preceding the switching timing between the two-dimensional image data and the stereoscopic image data, and the image data processing unit interprets, from the switching timing of the image data, the parameter value of the cropping information based on the interpretation information that was inserted at the timing preceding the switching timing and changed in correspondence with the image data after the switching.
- An image data reception method including: an image data reception step of receiving a container of a predetermined format having a video stream that includes image data and has cropping information inserted in its header portion, wherein interpretation information of the parameter value of the cropping information is inserted in a layer higher than the video stream, the method further including: an information acquisition step of acquiring the interpretation information from the container; a decoding step of decoding the video stream of the container to obtain the image data and the cropping information; and an image data processing step of interpreting the parameter value of the cropping information based on the interpretation information and cutting out image data of a predetermined region from the image data to generate image data for display.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20120848868 EP2651138A4 (en) | 2011-11-18 | 2012-11-09 | IMAGE DATA TRANSMISSION DEVICE, IMAGE DATA TRANSMISSION METHOD, IMAGE DATA RECEIVING DEVICE, AND IMAGE DATA RECEIVING METHOD |
BR112013017736A BR112013017736A2 (pt) | 2011-11-18 | 2012-11-09 | dispositivo e métodos de transmissão e de recepção de dados de imagem |
US13/979,293 US20140049606A1 (en) | 2011-11-18 | 2012-11-09 | Image data transmission device, image data transmission method, image data reception device, and image data reception method |
KR1020137017582A KR20140095012A (ko) | 2011-11-18 | 2012-11-09 | 화상 데이터 송신 장치, 화상 데이터 송신 방법, 화상 데이터 수신 장치 및 화상 데이터 수신 방법 |
CN2012800051619A CN103329545A (zh) | 2011-11-18 | 2012-11-09 | 图像数据发送装置、图像数据发送方法、图像数据接收装置和图像数据接收方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011253350A JP2013110540A (ja) | 2011-11-18 | 2011-11-18 | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 |
JP2011-253350 | 2011-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013073455A1 true WO2013073455A1 (ja) | 2013-05-23 |
Family
ID=48429515
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/079064 WO2013073455A1 (ja) | 2011-11-18 | 2012-11-09 | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20140049606A1 (ja) |
EP (1) | EP2651138A4 (ja) |
JP (1) | JP2013110540A (ja) |
KR (1) | KR20140095012A (ja) |
CN (1) | CN103329545A (ja) |
BR (1) | BR112013017736A2 (ja) |
WO (1) | WO2013073455A1 (ja) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6528683B2 (ja) * | 2013-07-12 | 2019-06-12 | ソニー株式会社 | 再生装置、再生方法 |
CN117459724A (zh) * | 2014-01-08 | 2024-01-26 | 索尼公司 | 解码设备和编码设备 |
US9948952B2 (en) | 2014-02-21 | 2018-04-17 | Lg Electronics Inc. | Broadcast signal transmitting device and broadcast signal receiving device |
KR20160123216A (ko) * | 2014-02-21 | 2016-10-25 | 엘지전자 주식회사 | 3d 방송 신호를 처리하는 방법 및 장치 |
JP6331882B2 (ja) * | 2014-08-28 | 2018-05-30 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
WO2018180511A1 (ja) * | 2017-03-27 | 2018-10-04 | ソニー株式会社 | 画像生成装置および画像生成方法、並びに画像再生装置および画像再生方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001094931A (ja) * | 1993-04-02 | 2001-04-06 | Sony Corp | 映像信号伝送方法、映像信号記録媒体、映像信号記録装置及び映像信号再生装置 |
JP2005006114A (ja) | 2003-06-12 | 2005-01-06 | Sharp Corp | 放送データ送信装置、放送データ送信方法および放送データ受信装置 |
JP2011130133A (ja) * | 2009-12-16 | 2011-06-30 | Canon Inc | 立体映像処理装置及び立体映像処理装置の制御方法 |
JP2011130468A (ja) * | 2009-07-10 | 2011-06-30 | Panasonic Corp | 記録媒体、再生装置、及び集積回路 |
JP2011199889A (ja) * | 2011-05-16 | 2011-10-06 | Panasonic Corp | 映像信号処理装置、及び、映像信号処理方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3250333B2 (ja) * | 1993-04-02 | 2002-01-28 | ソニー株式会社 | 映像信号処理方法、映像信号記録方法、映像信号再生方法、映像信号処理装置、映像信号記録装置及び映像信号再生装置 |
CN101842840B (zh) * | 2007-11-01 | 2012-03-07 | 松下电器产业株式会社 | 记录媒体、再现装置、记录装置、再现方法及记录方法 |
KR20110088334A (ko) * | 2010-01-28 | 2011-08-03 | 삼성전자주식회사 | 3차원 멀티미디어 서비스를 제공하기 위한 데이터스트림 생성 방법 및 장치, 3차원 멀티미디어 서비스를 제공하기 위한 데이터스트림 수신 방법 및 장치 |
JP5577823B2 (ja) * | 2010-04-27 | 2014-08-27 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
US20120075436A1 (en) * | 2010-09-24 | 2012-03-29 | Qualcomm Incorporated | Coding stereo video data |
- 2011
  - 2011-11-18 JP JP2011253350A patent/JP2013110540A/ja not_active Abandoned
- 2012
  - 2012-11-09 WO PCT/JP2012/079064 patent/WO2013073455A1/ja active Application Filing
  - 2012-11-09 EP EP20120848868 patent/EP2651138A4/en not_active Withdrawn
  - 2012-11-09 KR KR1020137017582A patent/KR20140095012A/ko not_active Application Discontinuation
  - 2012-11-09 BR BR112013017736A patent/BR112013017736A2/pt not_active IP Right Cessation
  - 2012-11-09 CN CN2012800051619A patent/CN103329545A/zh active Pending
  - 2012-11-09 US US13/979,293 patent/US20140049606A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001094931A (ja) * | 1993-04-02 | 2001-04-06 | Sony Corp | 映像信号伝送方法、映像信号記録媒体、映像信号記録装置及び映像信号再生装置 |
JP2005006114A (ja) | 2003-06-12 | 2005-01-06 | Sharp Corp | 放送データ送信装置、放送データ送信方法および放送データ受信装置 |
JP2011130468A (ja) * | 2009-07-10 | 2011-06-30 | Panasonic Corp | 記録媒体、再生装置、及び集積回路 |
JP2011130133A (ja) * | 2009-12-16 | 2011-06-30 | Canon Inc | 立体映像処理装置及び立体映像処理装置の制御方法 |
JP2011199889A (ja) * | 2011-05-16 | 2011-10-06 | Panasonic Corp | 映像信号処理装置、及び、映像信号処理方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2651138A4 * |
Also Published As
Publication number | Publication date |
---|---|
KR20140095012A (ko) | 2014-07-31 |
BR112013017736A2 (pt) | 2016-10-11 |
EP2651138A1 (en) | 2013-10-16 |
JP2013110540A (ja) | 2013-06-06 |
US20140049606A1 (en) | 2014-02-20 |
CN103329545A (zh) | 2013-09-25 |
EP2651138A4 (en) | 2015-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5577823B2 (ja) | 送信装置、送信方法、受信装置および受信方法 | |
US9756380B2 (en) | Broadcast receiver and 3D video data processing method thereof | |
KR101648455B1 (ko) | 방송 송신기, 방송 수신기 및 3d 비디오 데이터 처리 방법 | |
JP5594002B2 (ja) | 画像データ送信装置、画像データ送信方法および画像データ受信装置 | |
WO2013105401A1 (ja) | 送信装置、送信方法、受信装置および受信方法 | |
US9485490B2 (en) | Broadcast receiver and 3D video data processing method thereof | |
WO2013108531A1 (ja) | 受信装置、受信方法および電子機器 | |
WO2013161442A1 (ja) | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 | |
KR20140134212A (ko) | 송신 장치, 송신 방법 및 수신 장치 | |
WO2013073455A1 (ja) | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 | |
WO2012070364A1 (ja) | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 | |
US9693033B2 (en) | Transmitting apparatus, transmitting method, receiving apparatus and receiving method for transmission and reception of image data for stereoscopic display using multiview configuration and container with predetermined format | |
WO2014050447A1 (ja) | 送信装置、送信方法、受信装置および受信方法 | |
WO2013054775A1 (ja) | 送信装置、送信方法、受信装置および受信方法 | |
KR20140000136A (ko) | 화상 데이터 송신 장치, 화상 데이터 송신 방법, 화상 데이터 수신 장치 및 화상 데이터 수신 방법 | |
JP2012199897A (ja) | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 | |
JP2012235308A (ja) | 画像データ送信装置、画像データ送信方法、画像データ受信装置および画像データ受信方法 | |
WO2014042034A1 (ja) | 送信装置、送信方法、受信装置および受信方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20137017582 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012848868 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12848868 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13979293 Country of ref document: US |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112013017736 Country of ref document: BR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 112013017736 Country of ref document: BR Kind code of ref document: A2 Effective date: 20130710 |