CN106416239A - Methods and apparatus for delivering content and/or playing back content - Google Patents
- Publication number
- CN106416239A (application CN201580028645.9A)
- Authority
- CN
- China
- Prior art keywords: content, image, stream, module, environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N13/194 — Transmission of image signals
- H04N13/00 — Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/161 — Encoding, multiplexing or demultiplexing different image signal components
- H04N13/243 — Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N13/344 — Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N13/366 — Image reproducers using viewer tracking
- H04N21/21805 — Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
- H04N21/2187 — Live feed
- H04N21/23 — Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/23439 — Reformatting operations of video signals for generating different versions
- H04N21/2402 — Monitoring of the downstream path of the transmission network, e.g. bandwidth available
- H04N21/2662 — Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
- H04N21/44218 — Detecting physical presence or behaviour of the user
- H04N21/6405 — Multicasting
- H04N21/6587 — Control parameters, e.g. trick play commands, viewpoint selection
- H04N21/816 — Monomedia components thereof involving special video data, e.g. 3D video
- H04N21/8456 — Structuring of content by decomposing the content in the time domain, e.g. in time segments
Abstract
Content delivery and playback methods and apparatus are described. The methods and apparatus are well suited for delivery and playback of content corresponding to a 360 degree environment, and can be used to support streaming and/or real time delivery of content, e.g., 3D content, corresponding to an event such as a sports game, e.g., while the event is ongoing or after the event is over. Portions of the environment are captured by cameras located at different positions. The content captured from different locations is encoded and made available for delivery. A playback device selects the content to be received based on a user's head position. Streams may be prioritized and selected for delivery based on the user's current field of view and/or direction of head rotation. Static images or synthesized images can be used and combined with content from one or more streams, e.g., for background, sky and/or ground portions.
Description
Technical field
The present invention relates to content delivery and/or playback, for example, the playback of stereoscopic image content.
Background
Display devices which are intended to provide an immersive experience normally allow a user to turn his head and experience a corresponding change in the scene which is displayed. Head mounted displays sometimes support 360 degree viewing in that a user can turn while wearing the head mounted display, with the scene being displayed changing as the user's head position changes.

With such devices a user should be presented with a scene that was captured in front of a camera position when looking forward, and a scene that was captured behind the camera position when the user turns completely around. While a user may turn his head to the rear, at any given moment the user's field of view is normally limited to 120 degrees or less, due to the nature of a human's limited ability to perceive a wide field of view at any given time.
In order to support 360 degrees of viewing angle, a 360 degree scene may be captured using multiple cameras, with the images being combined to generate the 360 degree scene which is to be made available for viewing.
It should be appreciated that a 360 degree view includes considerably more image data than the simple forward view which is normally captured and encoded for ordinary television and many other video applications, where a user has no opportunity to change the viewing angle used to determine the image to be displayed at a particular point in time.
Given transmission constraints, e.g., network data constraints, associated with streamed content, it may not be possible to stream a full 360 degree view in full high definition video to all customers seeking to receive and interact with the content. This is particularly the case where the content is stereoscopic content including image content intended to correspond to left and right eye views to allow for a 3D viewing effect.
In view of the above discussion it should be appreciated that there is a need for methods and apparatus for supporting streaming and/or playback of content in a manner which allows an individual user to alter his viewing position, e.g., by turning his or her head, and to see the desired portion of the environment. It would be desirable if the user could be provided the option of changing his/her head position, and thus viewing direction, while remaining within data streaming constraints that may apply due to bandwidth or other delivery-related constraints. While not necessary for all embodiments, it is desirable that at least some embodiments allow multiple users at different locations to receive streams at the same time and view whatever distinct portions of the environment they desire, irrespective of what portion or portions other users are viewing.
Summary of the invention
Methods and apparatus for supporting delivery, e.g., streaming, of video or other content corresponding to a 360 degree viewing area are described. The methods and apparatus of the present invention are particularly well suited for streaming of stereoscopic and/or other image content where data transmission constraints may make delivery of 360 degrees of content difficult to support at the maximum quality level, e.g., using best quality coding and the highest supported frame rate. However, the methods are not limited to stereoscopic content.
In various embodiments a 3D model of, and/or 3D dimensional information corresponding to, an environment from which video content will be obtained is generated and/or accessed. Camera positions in the environment are documented. Multiple distinct camera positions may be present within the environment. For example, distinct end goal camera positions and one or more mid-field camera positions may be supported and used to capture real time camera feeds.

The 3D model and/or other 3D information is stored in a server or an image capture device used to stream video to one or more users.
The 3D model is provided to a user playback device, e.g., a customer premises device, which has image rendering and synthesis capability. The customer premises device generates a 3D representation of the environment which is displayed to the user of the customer premises device, e.g., via a head mounted display.
In various embodiments, less than the full 360 degree environment is streamed to an individual customer premises device at any given time. The customer premises device indicates, based on user input, which camera feed is to be streamed. The user may select the venue and/or camera position via an input device which is part of, or attached to, the customer premises device.
In some embodiments a 180 degree video stream is transmitted, e.g., live, in real time or near real time, to the customer playback device from a server and/or video camera responsible for streaming the content. The playback device monitors a user's head position, and the playback device thus knows the viewing area the user of the playback device is viewing within the 3D environment being generated by the playback device. The customer premises device presents video, when available for a portion of the 3D environment being viewed, with the video content replacing, or being displayed as an alternative to, the simulated 3D environment which would be presented in the absence of the video content. As the user of the playback device turns his or her head, the portions of the environment presented to the user may come from video content supplied, e.g., streamed, to the playback device, with other portions being synthetically generated from the 3D model and/or from previously supplied image content which was captured at a different time than the video content.
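The per-portion choice between streamed video and the synthetic model described above can be sketched roughly as follows. Python is used purely for illustration; the portion names and data structures are assumptions for the sketch, not interfaces defined by the patent.

```python
# Sketch: for each environment portion the user can currently see, present
# streamed video when it is available and otherwise fall back to content
# generated from the synthetic 3D model. The portion names ("front", etc.)
# and the string source labels are illustrative assumptions.

def select_sources(visible_portions, streamed_portions):
    """Map each visible environment portion to the source used to render it."""
    sources = {}
    for portion in visible_portions:
        if portion in streamed_portions:
            sources[portion] = "video"      # real captured content is preferred
        else:
            sources[portion] = "synthetic"  # fall back to the 3D model
    return sources

# A user looking forward and slightly to the rear-left while only the front
# 180 degrees is being streamed:
choice = select_sources(["front", "left_rear"], {"front"})
```

The same selection would be re-evaluated as the head position changes, so a portion can switch between video and synthetic content from one frame to the next.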
Thus the playback device can display video, e.g., supplied via streaming while a game, concert or other event is still ongoing, corresponding to, e.g., the front 180 degree camera view, with the rear and/or side portions of the 3D environment being generated either fully synthetically or from image content of the side or background regions of the environment captured at a different time.
While a user may select between camera positions by signaling a change of position to the server providing the streamed content, the server providing the streamed content may also provide information useful for generating the synthetic environment for the portions of the 3D environment which are not being streamed.
For example, in some embodiments multiple rear and side views are captured at different times, e.g., prior to streaming a portion of the content or from an earlier point in time. The images are buffered in the playback device. The server providing the content can, and in some embodiments does, signal to the playback device which of a set of non-real-time scenes or images is to be used for synthesis of the environment portions which are not being supplied in the video stream. For example, an image of concert attendees seated and another image of concert attendees standing behind the camera position may be supplied to, and stored in, the playback device. The server may signal which set of stored image data should be used at a particular point in time. Thus, when the crowd is standing, the server may signal that the image corresponding to a standing crowd should be used for the background 180 degree view during image synthesis, while when the crowd is seated the server may indicate to the customer premises device that it should use an image, or image synthesis information, corresponding to a seated crowd when synthesizing the side or rear portions of the 3D camera environment.
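Under the assumptions that the buffered images are keyed by a crowd-state name and that the server's signal carries such a name (neither of which the patent specifies), this signaled background selection might be sketched as:

```python
# Sketch: the playback device buffers several previously captured background
# images (e.g., crowd seated vs. crowd standing) and uses whichever one the
# server signals at a given time. The image payload strings, signal values,
# and the seated default are illustrative placeholders.

buffered_backgrounds = {
    "crowd_standing": "standing_crowd_image_data",
    "crowd_seated": "seated_crowd_image_data",
}

def background_for_signal(signal, buffers, default="crowd_seated"):
    """Return the buffered background image selected by the server's signal."""
    return buffers.get(signal, buffers[default])

# The server signals that the crowd has stood up:
bg = background_for_signal("crowd_standing", buffered_backgrounds)
```

Because only a short signal (rather than a new image) is sent when the crowd's state changes, this keeps the streamed data focused on the front view.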
In at least some embodiments, the direction of the camera at each of the one or more positions in the 3D environment is tracked during image capture. Markers and/or identifying points in the environment can be used to facilitate alignment and/or other mapping of the captured images, e.g., live images, to the previously modeled and/or mapped 3D environment which is to be simulated by the customer premises device.
Blending of the synthetic environment portions and the real (streamed video) portions provides an immersive video experience. Environments can, and sometimes are, measured or modeled using 3D photometry to create the 3D information used to simulate the environment when video is not available, e.g., where the environment was not previously modeled.
The use of fiducial markers at determined locations in the real world space aids in calibration and alignment of the video with the previously generated 3D model.
Position tracking of each camera is implemented as video is captured. Camera position information relative to the venue, e.g., X, Y, Z and yaw in degrees, is mapped (so that the direction in which each camera is pointing is known). This allows for easy detection of what portion of the environment a captured image corresponds to and, when communicated to the playback device along with the captured video, allows the playback device to automatically overlay the captured video onto the synthetic environment generated by the playback device during presentation, e.g., playback, to the user. The streamed content can be limited to less than a 360 degree view, e.g., a captured 180 degree view of the area in front of the camera position. As the viewer looks around, the viewer will see the simulated background (rather than a black void) when turned to the rear, and the video when turned to the front.
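A minimal sketch of using the tracked camera yaw to decide whether the viewer is currently looking into the streamed sector or into the simulated background follows; the angle convention (yaw in degrees, wrapping at 360) and the function names are illustrative assumptions, not details fixed by the patent.

```python
# Sketch: compare the user's head yaw against the tracked yaw of the camera
# supplying the streamed (e.g., 180 degree) view. If the view direction falls
# inside the streamed sector, video is shown; otherwise the synthetic
# background is shown.

def angular_difference(a, b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def looking_at_streamed_view(user_yaw, camera_yaw, streamed_fov=180):
    """True when the user's view direction lies within the streamed sector."""
    return angular_difference(user_yaw, camera_yaw) <= streamed_fov / 2

# Camera facing 0 degrees: a user looking 60 degrees to the side still sees
# video, while a user turned fully around sees the simulated background.
front = looking_at_streamed_view(60, 0)
rear = looking_at_streamed_view(180, 0)
```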
The synthetic environment can be, and in some embodiments is, interactive. In some embodiments multiple actual viewers, e.g., users of different customer premises devices, are included in the simulated environment so that a user can watch the game with his/her friends in the virtual 3D environment, with it appearing as if the users are actually present at the stadium.

Images of the users may be, and in some embodiments are, captured by cameras included in or attached to the customer premises devices, supplied to the server, and provided to the other users, e.g., members of a group, for use in generating the simulated environment. The user images need not be, but may be, real time images.
The methods can be used to encode and provide content in real time or near real time, but are not limited to such real time applications. Given the ability to support real time and near real time encoding and streaming to multiple users, the methods and apparatus described herein are well suited for streaming scenes of sporting events, concerts and/or other venues where individuals like to view an event and not only observe the stage or field, but are also able to turn and appreciate views of the environment, e.g., the stadium or crowd. By supporting 360 degree viewing and 3D, the methods and apparatus of the present invention are well suited for use with head mounted displays intended to provide the user with a 3D immersive experience, with the freedom to turn and observe the scene from different viewing angles as might be the case if the user were present in the environment and turned the user's head to the left, right or rear.
Methods and apparatus for communicating image content, e.g., content corresponding to a 360 degree field of view, are described. In various embodiments the field of view corresponds to different portions of the environment, e.g., a front portion, at least one rear portion, a top portion and a bottom portion. In some embodiments the left rear and right rear portions, e.g., of the rear, are generated and/or communicated separately. The playback device monitors a user's head position and generates images, e.g., stereoscopic images corresponding to the portion of the environment the user would see at a given time, which are then displayed to the user. In the case of stereoscopic playback, separate left eye and right eye images are generated. The generated images can, and in some embodiments do, correspond to one or more scenes, e.g., environment portions.
At the start of playback, the user's forward looking head level position is set to default to correspond to the forward scene portion. As the user turns his/her head and/or raises or lowers his or her head, other portions of the environment may come into the user's field of view.
On many playback devices, bandwidth and image decoding capability are limited by the processing capability of the device and/or by the bandwidth available for receiving image content. In some embodiments, the playback device determines which portion of the environment corresponds to the user's main field of view. The device then selects that portion to be received at a high rate, e.g., at full resolution, with the corresponding stream designated, from a prioritization perspective, as the primary stream. Content from one or more other streams providing content corresponding to other portions of the environment may also be received, but usually at a lower data rate. Content delivery for a particular stream may be initiated by the playback device, e.g., by sending a signal used to trigger content delivery. Such a signal may be used to join a multicast group supplying the content corresponding to a portion of the environment, or to initiate delivery of a switched digital video stream. Where the content is broadcast and no request or other signal such as a multicast group join is needed, the device may begin receiving the content simply by tuning to the channel on which the content is available.
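The head-position-driven stream selection described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; all names, angle ranges and rate labels are hypothetical.

```python
# Hypothetical sketch: a playback device picks a primary stream from the
# user's head angle and marks the remaining streams for low-rate reception.

FULL_RATE = "full_resolution"
LOW_RATE = "reduced_rate"

# Environment portions and the head-angle range (degrees) each covers,
# with 0 degrees being the forward-looking position (illustrative split).
PORTIONS = {
    "front": (-90, 90),
    "right_rear": (90, 180),
    "left_rear": (-180, -90),
}

def portion_for_angle(head_angle_deg):
    """Map a head angle in [-180, 180) to the portion the user is facing."""
    for name, (lo, hi) in PORTIONS.items():
        if lo <= head_angle_deg < hi:
            return name
    return "front"

def select_streams(head_angle_deg):
    """Return a dict mapping each portion to the rate at which its stream
    should be received: full rate for the primary view, reduced otherwise."""
    primary = portion_for_angle(head_angle_deg)
    return {p: (FULL_RATE if p == primary else LOW_RATE) for p in PORTIONS}
```

In a real system the low-rate entries would correspond to joining additional multicast groups or tuning secondary channels at reduced quality.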
Since the user is normally expected to be primarily interested in the forward view portion of the environment, because that is where the main action is usually taking place, particularly when the content corresponds to a sporting event, rock concert, fashion show or various other events, in some embodiments the forward view portion of the environment is given data transmission priority. In at least some embodiments, images corresponding to the forward viewing position are streamed at a higher rate than one or more other portions of the 360 degree environment. Images corresponding to other portions of the environment are sent at a lower data rate or are sent as static images. For example, one or more static images of the top, e.g., sky, and the bottom, e.g., ground, may be sent.
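The data-rate prioritization just described can be sketched as a simple weighted split of the available bandwidth. A minimal sketch under assumed weights; the portion names, weights and rates are illustrative, not from the patent.

```python
def allocate_bandwidth(total_kbps, priorities):
    """Split the available bandwidth across environment portions in
    proportion to their priority weights. Portions with weight 0 receive
    no streaming bandwidth and fall back to stored still images."""
    weight_sum = sum(priorities.values())
    alloc = {}
    for portion, w in priorities.items():
        alloc[portion] = int(total_kbps * w / weight_sum) if w else 0
    return alloc

# Forward view dominates; top (sky) and bottom (ground) are served as
# static images rather than streamed video.
weights = {"front": 6, "left_rear": 1, "right_rear": 1, "top": 0, "bottom": 0}
```

Here a budget of 8000 kbps would give the front portion 6000 kbps, each rear portion 1000 kbps, and nothing to the still-image portions.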
In some embodiments, multiple statically captured images are sent for one or more portions of the environment, e.g., a rear view portion or a sky portion. In some embodiments, control information is transmitted indicating which of the static images for a portion of the environment should be used at a given time. Where static images for a portion of the environment are sent, they may be sent in encoded form and then stored in memory in decoded form for combining with other image content. In this way, the decoding resources required during an event can be reduced, since multiple streams need not be decoded in parallel at the same frame rate. The static images may be sent before the main event content is streamed. Alternatively, a few images may be sent and stored for different portions of the environment in case they are needed during playback, e.g., in anticipation of the user's head position changing from the forward viewing position. The static or infrequently changing images may be encoded and sent as part of the content stream supplying the main, e.g., forward, view direction content, or may be sent as a separate content stream.
The static images corresponding to the rear portion may, and sometimes do, be images captured before the event, while the content corresponding to the forward portion of the environment may, and in many cases does, include content captured and streamed in real time, e.g., while the event is ongoing.
Consider, for example, a case in which two different rear view scenes are communicated to and stored in the playback device. One scene may correspond to a crowd in a standing position while another image may correspond to a crowd in a seated position. Control information may, and in some embodiments does, indicate whether the seated or the standing crowd image is to be used at a given time should the user turn his or her head to a position where the rear portion of the environment is visible.
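The control-information mechanism for picking between stored rear-view images can be sketched as a small timed lookup. The track format, timestamps and image keys below are hypothetical, intended only to illustrate the idea of control data selecting a stored still at a given time.

```python
# Hypothetical stored rear-view stills, already decoded into memory.
stored_rear_images = {
    "crowd_standing": "decoded standing-crowd image data",
    "crowd_seated": "decoded seated-crowd image data",
}

def rear_image_for_time(control_track, t):
    """control_track: list of (start_time_s, image_key) pairs sorted by
    time. Return the stored image in effect at time t, i.e. the last
    entry whose start time is <= t."""
    current = control_track[0][1]
    for start, key in control_track:
        if start <= t:
            current = key
        else:
            break
    return stored_rear_images[current]

# e.g. the crowd stands up 95 seconds in and sits back down at 130s.
track = [(0, "crowd_seated"), (95, "crowd_standing"), (130, "crowd_seated")]
```

When the user turns toward the rear, the renderer would composite the returned still with the streamed forward content rather than decode a second live stream.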
Similarly, multiple images of the sky may be sent to the playback device and stored there in encoded or decoded form. In some embodiments, which image of the sky portion is to be used at a given time is communicated in control information. In other embodiments, which sky scene is to be used is determined automatically based on the luminance of one or more images corresponding to the forward scene area, so that the sky portion remains consistent with, or close to, the selected forward environment scene. For example, a bright forward scene area may be detected and used to control selection of a bright sky image with few clouds. Similarly, in some embodiments the detection of a dark forward environment area results in the use of a dark, overcast sky image.
Where an image for a portion of the environment in the field of view is not available, the scene portion may be synthesized, e.g., from information or content available for other portions of the environment. For example, if a rear image portion is not available, content from the left and/or right side of the forward scene area may be copied and used to fill in the missing rear portion of the environment. In addition to copying, blurring and/or other image processing operations may be used in some embodiments to fill in missing portions of the environment. Alternatively, in some embodiments drawing information is supplied in the content stream and the playback device generates fully synthetic images for the missing portions. As with video game content, such content may be fairly realistic and may include various image effects and/or content generated from drawing rules and/or other image generation rules stored in the playback device.
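A very simple version of the copy-based fill described above can be sketched by treating the scene as a list of pixel columns. This is a stand-in for the copy/blur operations the text mentions; the edge fraction and mirroring choice are assumptions.

```python
def fill_missing_rear(front_columns):
    """Synthesize a missing rear portion by duplicating the left and right
    edges of the forward scene. front_columns is a list of pixel columns;
    a quarter of the width is taken from each edge and mirrored so the
    seams line up with the adjacent forward content."""
    n = len(front_columns)
    left_edge = front_columns[: n // 4]
    right_edge = front_columns[-(n // 4):]
    # Mirror each edge so the copied content is continuous at the seam.
    return right_edge[::-1] + left_edge[::-1]
```

A production implementation would additionally blur or blend the copied region so the synthesized rear portion is not mistaken for real captured content.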
According to some embodiments, an exemplary method of operating a playback system includes: determining a head position of a viewer, said head position corresponding to a current field of view; receiving a first content stream providing content corresponding to a first portion of an environment; generating one or more output images corresponding to the current field of view based on at least some of the received content included in said first content stream and i) stored content corresponding to a second portion of said environment or ii) a synthetic image simulating the second portion of said environment; and outputting or displaying a first output image, said first output image being one of the one or more generated output images. According to some embodiments, an exemplary content playback system includes: a viewer head position determination module configured to determine a head position of a viewer, said head position corresponding to a current field of view; a content stream receive module configured to receive a first content stream providing content corresponding to a first portion of an environment; an output image generation module configured to generate one or more output images corresponding to the current field of view based on at least some of the received content included in said first content stream and i) stored content corresponding to a second portion of said environment or ii) a synthetic image simulating the second portion of said environment; and at least one of: an output module configured to output said first output image, or a display module configured to display said first output image.
Numerous variations and embodiments are possible and are discussed in the detailed description which follows.
Brief Description of the Drawings
Fig. 1 shows an exemplary system implemented in accordance with some embodiments of the invention, which can be used to capture and stream content for subsequent display to one or more users together with one or more synthesized portions of the environment.
Fig. 2A shows an exemplary stereoscopic scene, e.g., a full 360 degree stereoscopic scene which has not yet been partitioned.
Fig. 2B shows an exemplary stereoscopic scene which has been partitioned into 3 exemplary scene portions in accordance with one exemplary embodiment.
Fig. 2C shows an exemplary stereoscopic scene which has been partitioned into 4 scene portions in accordance with one exemplary embodiment.
Fig. 3 shows an exemplary process of encoding an exemplary 360 degree stereoscopic scene in accordance with one exemplary embodiment.
Fig. 4 shows an example of how an input image portion is encoded using a variety of encoders to generate different encoded versions of the same input image portion.
Fig. 5 shows stored encoded portions of an input stereoscopic scene that has been partitioned into 3 portions.
Fig. 6 is a flow chart showing the steps of an exemplary method of streaming content in accordance with an exemplary embodiment implemented using the system of Fig. 1.
Fig. 7 shows an exemplary content delivery system, in accordance with features of the invention, including encoding capability that can be used to encode and stream content.
Fig. 8 shows an exemplary content playback system that can be used to receive, decode and display the content streamed by the system of Fig. 7.
Fig. 9 shows a diagram of an exemplary camera rig with 3 camera pairs mounted in 3 different mounting positions, together with a calibration target that can be used to calibrate the camera rig.
Fig. 10 shows a diagram giving a more focused view of the camera rig with the 3 camera pairs mounted in it.
Fig. 11 shows a detailed illustration of an exemplary camera rig implemented in accordance with one exemplary embodiment.
Fig. 12 shows an exemplary 360 degree scene environment, e.g., a 360 degree scene area, which can be partitioned into different viewing areas/portions corresponding to the different camera positions of the respective cameras that capture the different portions of the 360 degree scene.
Fig. 13 includes three diagrams representing different portions of the exemplary 360 degree scene area of Fig. 12, each of which can be captured by a different camera corresponding to, and/or positioned toward, a viewing area/portion of the exemplary 360 degree scene area.
Fig. 14A is a first part of a flow chart showing the steps of an exemplary method of operating a playback device in accordance with an exemplary embodiment of the invention.
Fig. 14B is a second part of the flow chart showing the steps of the exemplary method of operating a playback device in accordance with an exemplary embodiment of the invention.
Fig. 14 comprises the combination of Figs. 14A and 14B.
Fig. 15 is a flow chart showing the steps of a stream selection subroutine in accordance with an exemplary embodiment.
Fig. 16 is a flow chart showing the steps of a stream prioritization subroutine in accordance with an exemplary embodiment.
Fig. 17 is a flow chart showing the steps of a rendering subroutine in accordance with an exemplary embodiment.
Fig. 18 shows an exemplary table including stream information corresponding to a plurality of content streams.
Fig. 19 shows an exemplary playback system implemented in accordance with the invention.
Fig. 20A is a first part of a flow chart of an exemplary method of operating a content playback system in accordance with an exemplary embodiment.
Fig. 20B is a second part of the flow chart of the exemplary method of operating a content playback system in accordance with the exemplary embodiment.
Fig. 20C is a third part of the flow chart of the exemplary method of operating a content playback system in accordance with the exemplary embodiment.
Fig. 20D is a fourth part of the flow chart of the exemplary method of operating a content playback system in accordance with the exemplary embodiment.
Fig. 20E is a fifth part of the flow chart of the exemplary method of operating a content playback system in accordance with the exemplary embodiment.
Fig. 20 comprises the combination of Figs. 20A, 20B, 20C, 20D and 20E.
Fig. 21 is a diagram of an exemplary content playback system, e.g., a content playback device or a computer system coupled to a display, in accordance with an exemplary embodiment.
Fig. 22 is a diagram of an exemplary assembly of modules which may be included in the exemplary content playback system of Fig. 21.
Fig. 23 is a diagram showing an exemplary stream selection module which may be used in the playback system of Fig. 19 in accordance with some embodiments.
Fig. 24 is a diagram showing an exemplary stream prioritization module which may be implemented as part of the stream selection module of Fig. 23 or as a separate module.
Detailed Description
Fig. 1 shows an exemplary system 100 implemented in accordance with some embodiments of the invention. The system 100 supports content delivery, e.g., imaging content delivery, to one or more customer devices, e.g., playback devices/content players, located at customer premises. The system 100 includes an exemplary image capturing device 102, a content delivery system 104, a communications network 105, and a plurality of customer premises 106, ..., 110. The image capturing device 102 supports the capture of stereoscopic imagery, capturing and processing imaging content in accordance with features of the invention. The communications network 105 may be, e.g., a hybrid fiber-coaxial (HFC) network, a satellite network, and/or the internet.
The content delivery system 104 includes an encoding apparatus 112 and a content streaming device/server 114. The encoding apparatus 112 may, and in some embodiments does, include one or more encoders for encoding image data in accordance with the invention. The encoders may be used in parallel to encode different portions of a scene and/or to encode a given portion of a scene to generate encoded versions having different data rates. Using multiple encoders in parallel can be particularly useful when real-time or near real-time streaming is to be supported.
The content streaming device 114 is configured to stream, e.g., transmit, the encoded content, so as to deliver the encoded image content to one or more customer devices, e.g., over the communications network 105. Via the network 105, the content delivery system 104 can send and/or exchange information with the devices located at the customer premises 106, 110, as represented in the figure by the link 120 traversing the communications network 105.
While the encoding apparatus 112 and the content delivery server 114 are shown as separate physical devices in the Fig. 1 example, in some embodiments they are implemented as a single device which encodes and streams content. The encoding process may be a 3D, e.g., stereoscopic, image encoding process in which information corresponding to the left and right eye views of a scene portion is encoded and included in the encoded image data so that 3D image viewing can be supported. The particular encoding method used is not critical to the application, and a wide range of encoders may be used as, or to implement, the encoding apparatus 112.
Each customer premises 106, 110 may include a plurality of devices/players, e.g., playback systems used to decode and play back/display the imaging content streamed by the content streaming device 114. Customer premises 1 106 includes a decoding apparatus/playback device 122 coupled to a display device 124, while customer premises N 110 includes a decoding apparatus/playback device 126 coupled to a display device 128. In some embodiments, the display devices 124, 128 are head mounted stereoscopic display devices. In some embodiments, the playback device 122/126 and the headmounted device 124/128 together form a playback system.
In various embodiments, the decoding apparatus 122, 126 present the imaging content on the corresponding display devices 124, 128. The decoding apparatus/players 122, 126 may be devices capable of decoding the imaging content received from the content delivery system 104, generating imaging content using the decoded content, and rendering the imaging content, e.g., 3D image content, on the display devices 124, 128. Any of the decoding apparatus/playback devices 122, 126 may be used as the decoding apparatus/playback device 800 shown in Fig. 8, and a system/playback device such as those shown in Figs. 8 and 19 may be used as any of the decoding apparatus/playback devices 122, 126.
Fig. 2A shows an exemplary stereoscopic scene 200, e.g., a full 360 degree stereoscopic scene which has not yet been partitioned. The stereoscopic scene may be, and normally is, the result of combining image data captured from multiple cameras, e.g., video cameras, often mounted on a single video capture platform or camera mount.
Fig. 2B shows a partitioned version 250 of the exemplary stereoscopic scene 200, in which the scene has been partitioned into 3 (N=3) exemplary portions in accordance with one exemplary embodiment: a front 180 degree portion, a left rear 90 degree portion, and a right rear 90 degree portion.
Fig. 2C shows another partitioned version 280 of the exemplary stereoscopic scene 200, which has been partitioned into 4 (N=4) portions in accordance with one exemplary embodiment.
While Figs. 2B and 2C show two exemplary partitions, it should be appreciated that other partitions are possible. For example, the scene 200 may be partitioned into twelve (n=12) 30 degree portions. In one such embodiment, rather than individually encoding each portion, multiple portions are grouped together and encoded as a group. Different groups of portions may be encoded and streamed to the user, with the size of each group being the same in terms of the total degrees of scene covered, but corresponding to different portions of the image which may be streamed depending on the user's head position, e.g., the viewing angle as measured on a scale of 0 to 360 degrees.
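The grouping of equal-sized slices around the viewer's head angle can be sketched as follows. The slice count, group size and indexing scheme are illustrative assumptions, not taken from the patent.

```python
def parts_for_head_angle(head_angle_deg, n_parts=12, group_size=6):
    """With the 360 degree scene cut into n_parts equal slices (30 degrees
    each for n_parts=12), return the indices of a fixed-size group of
    slices centred on the viewer's head angle. Every group covers the
    same total number of degrees (group_size * 360 / n_parts), but which
    slices it contains shifts with the head position."""
    part_width = 360 / n_parts
    centre = int(head_angle_deg % 360 // part_width)
    half = group_size // 2
    return [(centre + i) % n_parts for i in range(-half, half + (group_size % 2))]
```

For a forward-facing viewer (0 degrees) this yields the six slices wrapping around slice 0; as the head turns, the window of encoded-and-streamed slices slides with it.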
Fig. 3 shows an exemplary process of encoding an exemplary 360 degree stereoscopic scene in accordance with one exemplary embodiment. The input to the method 300 shown in Fig. 3 includes 360 degree stereoscopic image data captured by, e.g., multiple cameras arranged to capture a 360 degree view of the scene. The stereoscopic image data, e.g., stereoscopic video, may be in any of a variety of known formats and, in most embodiments, includes left and right eye image data used to allow for a 3D experience. While the method is particularly well suited for stereoscopic video, the techniques and methods described herein can also be applied to 2D images, e.g., of a 360 degree or smaller scene area.
In step 304, the scene data 302 is partitioned into data corresponding to different scene areas, e.g., N scene areas corresponding to different viewing directions. For example, in an embodiment such as the one shown in Fig. 2B, the 360 degree scene area is partitioned into three partitions: a left rear 90 degree portion, a front 180 degree portion, and a right rear 90 degree portion. The different portions may have been captured by different cameras, but this is not necessary; in fact, the 360 degree scene may be constructed from data captured from multiple cameras before being partitioned into the N scene areas as shown in Figs. 2B and 2C.
In step 306, the data corresponding to the different scene portions is encoded in accordance with the invention. In some embodiments, each scene portion is independently encoded by multiple encoders to support multiple possible bit rate streams for each portion. In step 308, the encoded scene portions are stored, e.g., in the content delivery server 114 of the content delivery system 104, for streaming to customer playback devices.
Fig. 4 is a drawing 400 showing an example of how an input image portion, e.g., a 180 degree front portion of a scene, is encoded using a variety of encoders to generate different encoded versions of the same input image portion.
As shown in drawing 400, the input scene portion 402, e.g., the 180 degree front portion of a scene, is supplied to a plurality of encoders for encoding. In this example there are K different encoders, which encode the input data using different resolutions and different encoding techniques to generate encoded data supporting different data rate streams of image content. The K encoders include a high definition (HD) encoder 1 404, a standard definition (SD) encoder 2 406, a reduced frame rate SD encoder 3 408, ..., and a high compression reduced frame rate SD encoder K 410.
The HD encoder 1 404 is configured to perform full high definition (HD) encoding to produce a high bit rate HD encoded image 412. The SD encoder 2 406 is configured to perform low resolution standard definition encoding to produce an SD encoded version 2 414 of the input image. The reduced frame rate SD encoder 3 408 is configured to perform reduced frame rate low resolution SD encoding to produce a reduced rate SD encoded version 3 416 of the input image. The reduced frame rate may be, e.g., half the frame rate used by the SD encoder 2 406 for encoding. The high compression reduced frame rate SD encoder K 410 is configured to perform reduced frame rate low resolution SD encoding with high compression to produce a high compression reduced rate SD encoded version K 420 of the input image.
It should thus be appreciated that control of spatial and/or temporal resolution can be used to produce data streams of different data rates, and that control of other encoder settings, such as the level of data compression, may also be used, alone or in addition to control of spatial and/or temporal resolution, to produce data streams corresponding to a scene portion at one or more desired data rates.
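The bank of K encoder configurations described for Fig. 4 can be sketched as a table of profiles applied to one scene portion. The resolutions, frame rates and compression settings below are illustrative assumptions, not values from the patent.

```python
# Sketch of K encoder profiles trading off resolution, frame rate and
# compression to hit different data rates for the same scene portion.
ENCODER_PROFILES = [
    {"name": "HD",               "resolution": (1920, 1080), "fps": 60, "crf": 20},
    {"name": "SD",               "resolution": (1280, 720),  "fps": 60, "crf": 23},
    {"name": "SD_reduced_fps",   "resolution": (1280, 720),  "fps": 30, "crf": 23},
    {"name": "SD_high_compress", "resolution": (1280, 720),  "fps": 30, "crf": 32},
]

def encode_all_versions(scene_portion, encode_fn):
    """Run every profile over one scene portion and return a dict of
    encoded versions keyed by profile name, i.e. one stored set of
    encoded versions as in Fig. 5. encode_fn stands in for a real
    encoder call taking (portion, profile)."""
    return {p["name"]: encode_fn(scene_portion, p) for p in ENCODER_PROFILES}
```

Running this once per scene portion (front, left rear, right rear) yields the three stored sets of K versions from which the server later picks per-client streams.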
Fig. 5 shows stored encoded portions 500 of an input stereoscopic scene that has been partitioned into 3 exemplary portions. The stored encoded portions may be stored in the content delivery system 104, e.g., as data/information in memory. The stored encoded portions 500 of the stereoscopic scene include 3 different sets of encoded portions, with each set corresponding to a different scene area and each set including a plurality of different encoded versions of the corresponding scene portion. Each encoded version is a version of encoded video data and thus represents multiple frames which have been encoded. It should be appreciated that each encoded version 510, 512, 516 corresponds to video spanning multiple periods of time, and that when streaming, the portion, e.g., the frames, corresponding to the period of time being played will be used for transmission purposes.
As illustrated and discussed above with regard to Fig. 4, each scene portion, e.g., the front and rear scene portions, may be encoded using a plurality of different encoders to produce K different versions of the same scene portion. The outputs of the encoders corresponding to a given input scene are grouped together as a set and stored. The first set of encoded scene portions 502 corresponds to the front 180 degree scene portion and includes encoded version 1 510 of the front 180 degree scene, encoded version 2 512 of the front 180 degree scene, ..., and encoded version K 516 of the front 180 degree scene. The second set of encoded scene portions 504 corresponds to scene portion 2, e.g., the 90 degree left rear scene portion, and includes encoded version 1 520 of the 90 degree left rear scene portion, encoded version 2 522 of the 90 degree left rear scene portion, ..., and encoded version K 526 of the 90 degree left rear scene portion. Similarly, the third set of encoded scene portions 506 corresponds to scene portion 3, e.g., the 90 degree right rear scene portion, and includes encoded version 1 530 of the 90 degree right rear scene portion, encoded version 2 532 of the 90 degree right rear scene portion, ..., and encoded version K 536 of the 90 degree right rear scene portion.
The various stored encoded portions of the 360 degree scene can be used to generate a variety of different bit rate streams for transmission to customer playback devices.
Fig. 6 is a flow chart 600 showing the steps of an exemplary method of providing image content in accordance with an exemplary embodiment. In some embodiments, the method of flow chart 600 is implemented using the capture system shown in Fig. 1.
The method starts in step 602, e.g., with the delivery system being powered on and initialized. The method proceeds from start step 602 to step 604. In step 604, the content delivery system 104, e.g., the server 114 within the system 104, receives a request for content, e.g., a request for a previously encoded program or, in some cases, a live event being encoded and streamed in real or near real time, e.g., while the event is still ongoing.
In response to the request, in step 606, the server 114 determines the data rate available for delivery. The data rate may be determined from information included in the request indicating the supported data rates and/or from other information, such as network information indicating the maximum bandwidth available for delivering content to the requesting device. It should be appreciated that the available data rate may vary depending on network loading and may change during the period of time in which the content is streamed. Changes may be reported by the user device or detected from messages or signals indicating that packets are being dropped or delayed beyond an expected amount of time, indicating that the network is having difficulty supporting the data rate being used and that the currently available data rate is lower than the data rate originally determined to be available.
Operation proceeds from step 606 to step 608, where the current head position of the user device from which the request for content was initiated, e.g., the head position at the time of the request, is initialized to the 0 degree position. In some embodiments, the 0 degree or forward looking position may be re-initialized, with the playback device signaling that re-initialization is to occur. Over time, the user's head position and/or changes in the user's head position, e.g., relative to the original head position, are reported to the content delivery system 104, and the updated position is used, as will be discussed below, in making content delivery decisions.
Operation proceeds from step 608 to step 610, in which portions of the 360 degree scene corresponding to the requested content are sent to initialize the playback device. In at least some embodiments, the initialization involves sending a full 360 degree set of scene data, e.g., N portions where the 360 degree scene is partitioned into N portions.
As a result of the initialization in step 610, the playback device will have scene data corresponding to each of the different portions of the 360 degree possible viewing area. Accordingly, if the user of the playback device suddenly turns to the rear, at least some data will be available to display to the user, even if it is not as current as the portion the user was viewing before turning his or her head.
Operation proceeds from step 610 to step 612 and to step 622. Step 622 corresponds to a global scene update path, which is used to ensure that the playback device receives an updated version of the entire 360 degree scene at least once per global update period. Having been initialized in step 610, the global update process is delayed in wait step 622 for a predetermined period of time. Then, in step 624, a 360 degree scene update is performed. The dashed arrow 613 represents the communication of information regarding which scene portions were communicated to the playback device during the wait period corresponding to step 622. In step 624, the entire 360 degree scene may be transmitted. However, in some embodiments not all portions are transmitted in step 624. In some embodiments, portions of the scene which were updated during the wait period 622 are omitted from the update performed in step 624, since they were already refreshed during the normal streaming process which sends at least some portions of the scene based on the user's head position.
Operation proceeds from step 624 back to wait step 622, where a wait is performed before the next global update. It should be appreciated that different global refresh rates can be supported by adjusting the wait period used in step 622. In some embodiments, the content server selects the wait period, and thus the global reference period, based on the type of scene content being provided. In the case of a sporting event, where the main action is in the forward facing area and one of the reasons for a refresh is a possible change in outdoor lighting conditions, the wait period may be relatively long, e.g., on the order of a minute or a few minutes. In the case of a rock concert, where the action and activity in the crowd change frequently, e.g., as different songs are performed, the global refresh rate may be, and sometimes is, higher than for a sporting event, since a user may want to turn around to see the crowd's reaction and to get a feel for what is happening in the crowd in addition to watching the action in the forward viewing area.
In some embodiments, the global reference period is changed as a function of the portion of the presentation being streamed. For example, during the game portion of a sporting event the global refresh rate may be relatively low, but at a touchdown moment, or during a timeout or intermission, when a person at the event or viewing the event via the playback device is more likely to turn his or her head away from the main forward area, the global reference rate may be, and in some embodiments is, increased by reducing the wait, e.g., the refresh period control, used in step 622.
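The content- and segment-dependent choice of global refresh wait described in the two paragraphs above can be sketched as a small lookup. The content types, segment names and wait values in seconds are illustrative assumptions.

```python
def global_refresh_wait(content_type, segment="play"):
    """Choose the wait (in seconds) used between full 360 degree scene
    refreshes as a function of content type and the current programme
    segment. Longer waits mean a lower global refresh rate."""
    if content_type == "sports":
        # Long waits during play; refresh faster during timeouts and
        # intermissions, when viewers are more likely to look around.
        return 15 if segment in ("timeout", "intermission") else 60
    if content_type == "concert":
        # Crowd activity changes often, so refresh more frequently.
        return 20
    return 30  # default for other content types
```

The server would re-evaluate this value as the event moves between segments, shortening the step 622 wait exactly when rear-view glances become likely.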
While the global refresh process has been described with reference to steps 622 and 624, the normal supply of portions of the scene is also performed. As should be appreciated, the normal refresh of a scene or scene portion will, data rate permitting, occur at the supported video frame rate for at least one portion. Thus, at least one frame portion, e.g., the portion the user's head is indicated to be facing, will be supplied at the full video streaming frame rate, assuming the available data rate is sufficient.
In step 612, the scene portions to be provided are selected based on the indicated head position, e.g., viewing angle, of the user. The selected portions are transmitted, e.g., streamed, to the playback device, e.g., on a periodic basis. In some embodiments, the rate at which the data corresponding to these portions is streamed depends on the video frame rate. For example, at least one selected portion will be streamed at the full frame rate supported. While at least one scene portion is selected in step 612, normally multiple scene portions are selected, e.g., the scene portion the user is facing as well as the next nearest scene portion. Additional scene portions may also be selected and supplied if the available data rate is sufficient to support the communication of multiple frame portions.
With the scene portions to be streamed selected in step 612, operation proceeds to step 614, in which encoded versions of the selected stream portions are chosen, e.g., based on the available data rate and the viewing position of the user. For example, a full-rate, high-resolution version of the scene portion the user is facing, as indicated by the currently reported head position, can and normally will be streamed. One or more scene portions to the left and/or right of the current head position may be selected to be streamed at a lower resolution, at a lower temporal rate, or using another encoding approach that reduces the amount of bandwidth needed to transmit the scene areas not currently being viewed. The choice of encoded version for an adjacent scene portion will depend on the amount of bandwidth remaining after a high-quality version of the currently viewed scene portion has been transmitted. While scene portions that are not currently being viewed may be sent as a lower-resolution encoded version, or as an encoded version with a greater temporal gap between frames, a full-resolution, high-quality version may be sent periodically or frequently if sufficient bandwidth is available.
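One way the version selection of step 614 could work in practice is sketched below; the version names, bit rates, and greedy budget policy are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of step 614: pick an encoded version per selected portion
# under a data-rate budget, with the portion facing the user (listed
# first) getting the best version the budget allows.

VERSIONS = [  # (name, required_rate_mbps), best quality first
    ("full_rate_high_res", 8.0),
    ("reduced_frame_rate", 4.0),
    ("low_res", 2.0),
]

def choose_versions(selected_portions, available_rate_mbps):
    """Map each portion to the best encoding that still fits the budget."""
    choices = {}
    remaining = available_rate_mbps
    for portion in selected_portions:  # primary portion is listed first
        for name, rate in VERSIONS:
            if rate <= remaining:
                choices[portion] = name
                remaining -= rate
                break
    return choices

# At 14 Mbps: the facing portion streams at full rate, one neighbour at
# a reduced frame rate, the other at low resolution.
print(choose_versions([0, 1, 2], 14.0))
```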
In step 616, the selected encoded versions of the selected scene portions are sent to the playback device which requested the content. Thus, in step 616, the encoded content corresponding to one or more portions, e.g., stereoscopic video content corresponding to multiple sequential frames, is streamed to the playback device.
Operation proceeds from step 616 to step 618, in which information indicating the current head position of the user is received. This information may be sent from the playback device periodically and/or in response to detecting a change in head position. In addition to changes in head position, changes in the available data rate can also affect what content is streamed. Operation proceeds from step 618 to step 620, in which the current data rate available for delivering content to the playback device is determined. The content delivery system can thus detect changes in the amount of bandwidth available to support streaming to the requesting device.
Operation proceeds from step 620 back to step 612, and streaming continues until the content has been fully delivered, e.g., the program or event ends, or until a signal is received from the playback device requesting the content indicating that the session is to be terminated, or until an expected signal, such as a head position update, fails to arrive from the playback device, indicating that the playback device is no longer communicating with the content server 114.
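The loop of steps 612 through 620 and its termination conditions might be sketched as below. This is a simplified event-driven model of ours, not the patent's implementation; the timeout threshold and all names are assumptions.

```python
# Minimal sketch of the delivery loop (steps 612-620): keep streaming,
# consume head-position updates, and stop when the content ends, the
# client asks to terminate, or expected position updates stop arriving.

def delivery_loop(events):
    """Consume an ordered event list; return the reason streaming stopped."""
    missed_updates = 0
    for event in events:
        if event == "head_update":
            missed_updates = 0          # client is still communicating
        elif event == "tick":
            missed_updates += 1         # no update arrived this period
            if missed_updates >= 3:     # assumed timeout threshold
                return "timeout"
        elif event == "terminate":
            return "client_terminated"
        elif event == "content_end":
            return "fully_delivered"
    return "exhausted"

print(delivery_loop(["head_update", "tick", "tick", "tick"]))  # timeout
```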
From the scene data delivered in the manner described above, the playback device will have at least some data corresponding to each scene portion available for display in the event the user quickly rotates his or her head. It should be appreciated that users rarely turn their heads completely around in a short period of time, since for many people this is an uncomfortable change of viewing position. Accordingly, while the complete 360-degree scene may not be transmitted at all times, a high-quality version of the scene portion(s) most likely to be viewed at any given time can be streamed and made available to the user.
The content delivery system 104 can support a large number of concurrent users, because the encoding process allows the N portions of a scene to be transmitted to, and processed by, different users in different ways without the content having to be encoded separately for each individual user. Thus, while a number of parallel encoders may be used to support real-time encoding, and thereby allow real-time or near-real-time streaming of sporting or other events, the number of encoders used tends to be far smaller than the number of playback devices to which the content is streamed.
While the portions of content are described as portions corresponding to a 360-degree view, it should be appreciated that the scenes may represent (and in some embodiments do represent) a flattened version of a space that also has a vertical dimension. The playback device can map the scene portions using a model of the 3D environment (e.g., a space) and adjust for vertical viewing positions. Thus, the 360 degrees discussed herein refer to the head position relative to the horizontal, as if the user changed his viewing angle to the left or right while keeping his gaze level.
Fig. 7 shows an exemplary content delivery system 700 with encoding capability that can be used to encode and stream content in accordance with features of the invention.
The system may be used to perform encoding, storage, and transmission and/or content output in accordance with features of the invention. In some embodiments, the system 700, or the elements therein, perform operations corresponding to the process shown in Fig. 6. The content delivery system 700 may be used as the system 104 of Fig. 1. While the system shown in Fig. 7 is used for the encoding, processing, and streaming of content, it should be appreciated that the system 700 may also include the ability to decode and, e.g., display processed and/or encoded image data to an operator.
The system 700 includes a display 702, an input device 704, an input/output (I/O) interface 706, a processor 708, a network interface 710, and a memory 712. The various components of the system 700 are coupled together via a bus 709, which allows data to be communicated between the components of the system 700.
The memory 712 includes various modules, e.g., routines, which when executed by the processor 708 control the system 700 to implement the partitioning, encoding, storage, and streaming/transmission and/or output functions in accordance with the invention.
The memory 712 includes various modules, e.g., routines, which when executed by the processor 708 control the computer system 700 to implement the immersive stereoscopic video acquisition, encoding, storage, and transmission and/or output methods in accordance with the invention. The memory 712 includes a control routine 714, a partitioning module 716, encoder(s) 718, a streaming controller 720, received input images 732 (e.g., a 360-degree stereoscopic video of a scene), encoded scene portions 734, and timing information 736. In some embodiments the modules are implemented as software modules. In other embodiments the modules are implemented in hardware, e.g., as individual circuits, with each module being implemented as a circuit for performing the function to which the module corresponds. In still other embodiments the modules are implemented using a combination of software and hardware.
The control routine 714 includes device control routines and communications routines to control the operation of the system 700. The partitioning module 716 is configured to partition a received stereoscopic 360-degree version of a scene into N scene portions in accordance with features of the invention.
The encoder(s) 718 may, and in some embodiments do, include multiple encoders configured to encode received image content, e.g., a 360-degree version of a scene and/or one or more scene portions, in accordance with features of the invention. In some embodiments the encoder(s) include multiple encoders, with each encoder configured to encode a stereoscopic scene and/or partitioned scene portions to support a given bit-rate stream. Thus, in some embodiments each scene portion can be encoded using multiple encoders to support multiple different bit-rate streams for each scene. The output of the encoder(s) 718 is the encoded scene portions 734, which are stored in the memory for streaming to customer devices, e.g., playback devices. The encoded content can be streamed to one or more different devices via the network interface 710.
The streaming controller 720 is configured to control the streaming of encoded content, e.g., to deliver encoded image content to one or more customer devices over the communications network 105. In various embodiments, the steps of the flowchart 600 are implemented by elements of the streaming controller 720. The streaming controller 720 includes a request processing module 722, a data rate determination module 724, a current head position determination module 726, a selection module 728, and a streaming control module 730. The request processing module 722 is configured to process requests for image content received from customer playback devices. In various embodiments, a request for content is received via a receiver in the network interface 710. In some embodiments the request for content includes information indicating the identity of the requesting playback device. In some embodiments the request for content may include the data rate supported by the customer playback device and the current head position of the user, e.g., the position of a head mounted display. The request processing module 722 processes the received request and supplies the retrieved information to other elements of the streaming controller 720 for further action. While the request for content may include data-rate information and current head position information, in various embodiments the data rate supported by the playback device can be determined from network tests and other network information exchanged between the system 700 and the playback device.
The data rate determination module 724 is configured to determine the available data rates that can be used to stream image content to customer devices; e.g., since multiple encoded scene portions are supported, the content delivery system 700 can support streaming content to customer devices at multiple data rates. The data rate determination module 724 is further configured to determine the data rate supported by a playback device requesting content from the system 700. In some embodiments the data rate determination module 724 is configured to determine the available data rate for image content delivery based on network measurements.
The current head position determination module 726 is configured to determine the current viewing angle and/or current head position of the user, e.g., the position of a head mounted display, from information received from the playback device. In some embodiments the playback device periodically sends current head position information to the system 700, and the current head position determination module 726 receives and processes that information to determine the current viewing angle and/or current head position.
The selection module 728 is configured to determine, based on the user's current viewing angle/head position information, which portions of the 360-degree scene are to be streamed to the playback device. The selection module 728 is further configured to select, based on the available data rate, the encoded versions of the determined scene portions to support the streaming of the content.
The streaming control module 730 is configured to control the streaming of image content, e.g., portions of a 360-degree stereoscopic scene, at various supported data rates in accordance with features of the invention. In some embodiments the streaming control module 730 is configured to control the streaming of the N portions of a 360-degree stereoscopic scene to the playback device requesting the content, in order to initialize a scene memory in the playback device. In various embodiments the streaming control module 730 is configured to send the selected encoded versions of the determined scene portions, e.g., periodically, at a determined rate. In some embodiments the streaming control module 730 is further configured to send a 360-degree scene update to the playback device according to a time interval, e.g., once per minute. In some embodiments, sending a 360-degree scene update includes sending N scene portions, or N - X scene portions, of the complete 360-degree stereoscopic scene, where N is the total number of portions into which the complete 360-degree stereoscopic scene has been partitioned and X represents the selected scene portions recently sent to the playback device. In some embodiments the streaming control module 730 waits a predetermined time after initially sending the N scene portions for initialization before sending the 360-degree scene update. In some embodiments, timing information controlling the sending of the 360-degree scene updates is included in the timing information 736. In some embodiments the streaming control module 730 is further configured to identify scene portions that have not been transmitted to the playback device during a refresh interval, and to transmit updated versions of the identified scene portions that were not transmitted to the playback device during the refresh interval.
In various embodiments the streaming control module 730 is configured to communicate at least a sufficient number of the N portions to the playback device on a periodic basis to allow the playback device to fully refresh a 360-degree version of the scene at least once during each refresh period.
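The N - X refresh behaviour described above can be sketched as follows; this is our hedged illustration of the described rule, and the function and parameter names are assumptions.

```python
# Sketch of a 360-degree scene update: resend the full scene while
# omitting the X portions that were (re)sent most recently, so that each
# refresh period the playback device can rebuild a complete 360-degree
# version of the scene.

def portions_for_refresh(n_total, recently_sent):
    """Return the N - X portion indices to include in a scene update."""
    recent = set(recently_sent)
    return [i for i in range(n_total) if i not in recent]

# With N = 3 portions and portion 0 just streamed, the update carries
# the remaining portions 1 and 2.
print(portions_for_refresh(3, [0]))  # [1, 2]
```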
Fig. 8 shows a playback system 800 implemented in accordance with the invention, which can be used to receive, decode, store, and display image content received from a content delivery system such as the one shown in Figs. 1 and 7. The system 800 can be implemented as a single playback device 800' that includes a display 802, or as a combination of elements, such as an external display (e.g., a head mounted display 805) coupled to a computer system 800'.
In at least some embodiments, the playback system 800 includes a 3D head mounted display. The head mounted display may be implemented using an OCULUS RIFT™ VR (virtual reality) headset, which may include the head mounted display 805. Other head mounted displays may also be used. In some embodiments a head mounted helmet or other head mounted device is used, in which one or more display screens are used to display content to the left and right eyes of the user. By displaying different images to the left and right eyes on a single screen, with the head mounted device configured to expose different portions of the single screen to different eyes, a single display can be used to display left- and right-eye images that are perceived separately by the viewer's left and right eyes. In some embodiments a cell phone screen is used as the display of the head mounted display device. In at least some such embodiments, a cell phone is inserted into the head mounted device and the cell phone is used to display images.
The playback system 800 has the ability to decode received encoded image data, e.g., left- and right-eye images and/or mono (single) images corresponding to different portions of an environment or scene, and to generate 3D image content for display to the customer, e.g., by rendering and displaying distinct left- and right-eye views which are perceived by the user as a 3D image. In some embodiments the playback system 800 is located at a customer premises location, such as a home or office, but it may be located at an image capture site as well. The system 800 can perform signal reception, decoding, display, and/or other operations in accordance with the invention.
The system 800 includes a display 802, a display device interface 803, an input device 804, an input/output (I/O) interface 806, a processor 808, a network interface 810, and a memory 812. The various components of the system 800 are coupled together via a bus 809, which allows data to be communicated between the components of the system 800, and/or by other connections or through a wireless interface. While in some embodiments the display 802 is included as an optional element, as indicated by the dashed box, in some embodiments an external display device 805, e.g., a head mounted stereoscopic display device, can be coupled to the playback device via the display device interface 803.
For example, in a case in which a cell phone processor is used as the processor 808 and the cell phone generates and displays images in a head mounted device, the system may include the processor 808, the display 802, and the memory 812 as part of the head mounted device. The processor 808, display 802, and memory 812 may all be part of the cell phone. In other embodiments of the system 800, the processor 808 may be part of a gaming system such as an XBOX or PS4, with the display 805 mounted in a head mounted device and coupled to the gaming system. Whether the processor 808 or the memory 812 is located in the device worn on the head is not critical and, as can be appreciated, while in some cases it may be convenient to co-locate the processor in the headgear, from a power, heat, and weight perspective it may be desirable, in at least some cases, to have the processor 808 and memory 812 coupled to the headgear that includes the display.
While various embodiments contemplate a head mounted display 805 or 802, the methods and apparatus can also be used with non-head-mounted displays that can support 3D images. Accordingly, while in many embodiments the system 800 includes a head mounted display, it can also be implemented with a non-head-mounted display.
The memory 812 includes various modules, e.g., routines, which when executed by the processor 808 control the playback device 800 to perform decoding and output operations in accordance with the invention. The memory 812 includes a control routine 814, a request-for-content generation module 816, a head position and/or viewing angle determination module 818, a decoder module 820, and a stereoscopic image rendering module 822, also referred to as a 3D image generation module, as well as data/information including received encoded image content 824, decoded image content 826, a 360-degree decoded scene buffer 828, and generated stereoscopic content 830.
The control routine 814 includes device control routines and communications routines to control the operation of the device 800. The request generation module 816 is configured to generate requests for content to be sent to the content delivery system for providing content. In various embodiments, the request for content is sent via the network interface 810. The head position and/or viewing angle determination module 818 is configured to determine the current viewing angle and/or current head position of the user, e.g., the position of the head mounted display, and to report the determined position and/or viewing angle information to the content delivery system 700. In some embodiments the playback device 800 periodically sends current head position information to the system 700.
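Since the description allows head position reports that are periodic and/or change-triggered, the reporting decision of module 818 might look like the sketch below; the period and threshold values, like the function name, are assumptions of ours.

```python
# Illustrative sketch of the playback device's reporting behaviour:
# send the current head position when a reporting period elapses or
# when the viewing angle has changed noticeably since the last report.

def should_report(elapsed_s, angle_delta_deg, period_s=1.0, threshold_deg=5.0):
    """Report on a timer or when the viewing angle changed noticeably."""
    return elapsed_s >= period_s or abs(angle_delta_deg) >= threshold_deg

print(should_report(0.2, 12.0))  # True: large head movement triggers a report
print(should_report(0.2, 1.0))   # False: small change and timer not yet due
```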
The decoder module 820 is configured to decode encoded image content 824 received from the content delivery system 700 to produce decoded image data 826. The decoded image data 826 may include a decoded stereoscopic scene and/or decoded scene portions.
The 3D image rendering module 822 generates 3D images in accordance with features of the invention, e.g., using the decoded image content 826, for display to the user on the display 802 and/or the display device 805, e.g., left- and right-eye images which are displayed in a manner that will be perceived as a 3D image. The generated stereoscopic image content 830 is the output of the 3D image generation module 822. Thus, the rendering module 822 renders the 3D image content 830 to the display. In some embodiments the display device 805 may be part of a 3D display device such as an Oculus Rift. The operator of the playback device 800 may control one or more parameters and/or select operations to be performed, e.g., select to display the 3D scene, via the input device 804.
Fig. 9 shows a drawing representing an exemplary camera assembly 900, also sometimes referred to as a camera rig or camera array, having three camera pairs 902, 904, 906 mounted at three different mounting positions, together with a calibration target 915 that may be used to calibrate the camera assembly 900. The camera rig 900 is used to capture image content in accordance with some embodiments of the invention. In some embodiments the camera rig 900 is used as the image capture apparatus 102 of Fig. 1. The camera rig 900 includes a support structure (shown in Fig. 11) which holds the cameras in the indicated positions, and three pairs 902, 904, 906 of stereoscopic cameras (901, 903), (905, 907), (909, 911), for a total of six cameras. The support structure includes a base 1120, also referred to herein as a mounting plate (see element 1120 shown in Fig. 11), which supports the cameras and to which the plates on which the cameras are mounted can be secured. The support structure may be made of plastic, metal, or a composite material such as graphite or fiberglass, and is represented by the triangular lines, which also serve to show the spacing between, and the relationship of, the cameras. The center point at which the dashed lines intersect represents a center nodal point around which the camera pairs 902, 904, 906 can be rotated in some, but not necessarily all, embodiments. The center nodal point corresponds, in some embodiments, to a steel rod or threaded center mount, e.g., of a tripod base, around which the camera support frame 912 represented by the triangular lines can be rotated. The support frame may be a plastic housing in which the cameras are mounted, or a tripod structure.
In Fig. 9, each pair of cameras 902, 904, 906 corresponds to a different camera pair position. The first camera pair 902 corresponds to a 0-degree, forward-facing position. This position normally corresponds to the main scene area of interest, e.g., a field upon which a sports game is being played, a stage, or some other area where the main action is likely to occur. The second camera pair 904 corresponds to a 120-degree camera position and is used to capture the right rear viewing area. The third camera pair 906 corresponds to a 240-degree viewing position (with respect to the 0-degree position) and to the left rear viewing area. Note that the three camera positions are 120 degrees apart. Each camera viewing position includes one camera pair in the Fig. 9 embodiment, with each camera pair including a left camera and a right camera used to capture images. The left camera captures what are sometimes referred to as left-eye images and the right camera captures what are sometimes referred to as right-eye images. The images may be part of a view sequence and/or still images captured at one or more times. Normally, at least the front camera position corresponding to camera pair 902 will be populated with high-quality video cameras. The other camera positions may be populated with a single camera, or with a high-quality video camera or a lower-quality video camera used to capture still or mono images. In some embodiments the second and third camera positions are left unpopulated and the support plate on which the cameras are mounted is rotated, allowing the first camera pair 902 to capture images corresponding to all three camera positions, but at different times. In some such embodiments, the left and right rear images are captured and stored earlier, and video of the forward camera position is then captured during an event. The captured images may be encoded and streamed in real time, e.g., while an event is still ongoing, to one or more playback devices.
The first camera pair 902 shown in Fig. 9 includes a left camera 901 and a right camera 903. The left camera 901 has a first lens assembly 920 secured to the first camera, and the right camera 903 has a second lens assembly 920' secured to the right camera 903. The lens assemblies 920, 920' include lenses which allow a wide-angle field of view to be captured. In some embodiments each lens assembly 920, 920' includes a fisheye lens. Thus, each of the cameras 901, 903 can capture a 180-degree field of view, or approximately 180 degrees. In some embodiments less than 180 degrees is captured, but in some embodiments there is still at least some overlap in the images captured from adjacent camera pairs. In the Fig. 9 embodiment, camera pairs are located at each of the first (0-degree), second (120-degree), and third (240-degree) camera mounting positions, with each pair capturing at least 120 degrees of the environment, or more, but in many cases each camera pair captures 180 degrees of the environment, or approximately 180 degrees.
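The overlap between adjacent camera pairs mentioned above follows from simple geometry, which can be checked as below; this arithmetic is our illustration, not a figure from the patent.

```python
# Quick check of the capture geometry: camera pairs mounted at 0, 120,
# and 240 degrees overlap with their neighbours whenever each pair's
# field of view exceeds the 120-degree mounting separation.

def adjacent_overlap_deg(fov_deg, separation_deg):
    """Angular overlap between two cameras separated by separation_deg."""
    return max(0.0, fov_deg - separation_deg)

# 180-degree lenses spaced 120 degrees apart overlap by 60 degrees,
# while exactly 120-degree capture leaves no overlap.
print(adjacent_overlap_deg(180.0, 120.0))  # 60.0
print(adjacent_overlap_deg(120.0, 120.0))  # 0.0
```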
The second 904 and third 906 camera pairs are the same as, or similar to, the first camera pair 902, but are located at the 120-degree and 240-degree camera mounting positions with respect to the front 0-degree position. The second camera pair 904 includes a left camera 905 with left lens assembly 922 and a right camera 907 with right lens assembly 922'. The third camera pair 906 includes a left camera 909 with left lens assembly 924 and a right camera 911 with right lens assembly 924'.
In Fig. 9, D represents the inter-axial distance of the first stereoscopic camera pair 901, 903. In the Fig. 9 example, D is 117 mm, which is the same as, or similar to, the distance between the pupils of an average person's left and right eyes. The dashed line 950 in Fig. 9 depicts the distance from the panoramic array's center point to the entrance pupil of the right camera lens 920' (a.k.a. the nodal offset). In one embodiment corresponding to the Fig. 9 example, the distance indicated by reference numeral 950 is 315 mm, but other distances are possible.
In one particular embodiment, the footprint of the camera rig 900 is relatively small, with a horizontal area of 640 mm² or less. Such a small size allows the camera rig to be placed in an audience, e.g., at a seating position where a fan or spectator would normally be located or positioned. Thus, in some embodiments the camera rig is placed in an audience area, allowing a viewer to have the sense of being a member of the audience, where such an effect is desired. The footprint in some embodiments corresponds to the size of the base to which the support structure, which in some embodiments includes a center pole, is mounted, or on which the support tower is positioned. It should be appreciated that in some embodiments the camera rig can rotate around the center point of the base, which corresponds to the center point between the three pairs of cameras. In other embodiments the cameras are fixed and do not rotate around the center of the camera array.
The camera rig can capture objects that are relatively close as well as objects that are distant. In one particular embodiment, the minimum imaging distance of the camera array is 649 mm, but other distances are possible and this distance is not critical.
The distance from the center of the camera assembly to the intersection point 951 of the views of the first and third camera parts represents an exemplary calibration distance which can be used to calibrate images captured by the first and second cameras. It should be noted that the target 915 may be placed at a known distance from the camera pairs, located at or slightly beyond the area of maximum distortion. The calibration target includes a known fixed calibration pattern. The calibration target can be, and is, used to calibrate the size of images captured by the cameras of a camera pair. Such calibration is possible because the size and position of the calibration target 915 are known relative to the cameras capturing an image of the calibration target 915.
Figure 10 is a more detailed diagram 1000 of the camera array shown in Fig. 9. While the camera rig is again shown with six cameras, in some embodiments the camera rig is populated with only two cameras, e.g., the camera pair 902. As shown, there is a 120-degree separation between each camera pair mounting position. Consider, for example, the case in which the center between each camera pair corresponds to the direction of the camera mounting position: in that case, the first camera mounting position corresponds to 0 degrees, the second camera mounting position corresponds to 120 degrees, and the third camera mounting position corresponds to 240 degrees. Thus, each camera mounting position is separated by 120 degrees. This can be seen if the center line extending out through the center of each camera pair 902, 904, 906 were extended and the angles between the lines measured.
In the Fig. 10 example, the camera pairs 902, 904, 906 can, and in some embodiments do, rotate around the center point of the camera rig, allowing different views to be captured at different times without having to alter the position of the camera rig base. That is, the cameras can be rotated around the center support of the rig and allowed to capture different scenes at different times, making 360-degree scene capture possible using the rig shown in Fig. 10 while it is populated with only two cameras. Given the cost of stereoscopic cameras, such a configuration is particularly desirable from a cost perspective, and is well suited for many applications where it may be desirable to show a background captured from the same viewpoint but at a different time than the front scene, which may include the main action during a sporting event or other event. Consider that objects may be placed behind the camera, e.g., during the event, which it would be preferable not to show during the main event. In such a scenario, the rear images can be, and sometimes are, captured before the main event and made available, together with the real-time captured images of the main event, to provide a 360-degree set of image data.
Fig. 11 shows a detailed diagram of an exemplary camera rig 1100 implemented in accordance with one exemplary embodiment. As should be appreciated from Fig. 11, the camera rig 1100 includes three pairs of cameras 1102, 1104, and 1106, which in some but not all embodiments are stereoscopic cameras. Each camera pair includes two cameras in some embodiments. The camera pairs 1102, 1104, and 1106 are the same as or similar to the camera pairs 902, 904, 906 discussed above with regard to Figs. 9-10. In some embodiments the camera pairs 1102, 1104, and 1106 are mounted on a support structure 1120 of the camera rig 1100. In some embodiments the three pairs of cameras (six cameras) 1102, 1104, and 1106 are mounted on the support structure 1120 via respective camera mounting plates. The support structure 1120 includes three mounting positions for mounting stereoscopic camera pairs, with each mounting position corresponding to a different 120-degree viewing area. In the illustrated embodiment of Fig. 11, the first pair of stereoscopic cameras 1102 is mounted in the first of the three mounting positions, e.g., the front-facing position, and corresponds to the front 120-degree viewing area. The second pair of stereoscopic cameras 1104 is mounted in the second of the three mounting positions, e.g., a background position rotated 120 degrees clockwise with respect to the front position, and corresponds to a different 120-degree viewing area. The third pair of stereoscopic cameras 1106 is mounted in the third of the three mounting positions, e.g., a background position rotated 240 degrees clockwise with respect to the front position, and corresponds to another 120-degree viewing area. While the three camera mounting positions on the camera rig 1100 are offset 120 degrees with respect to each other, in some embodiments each camera mounted in the camera rig has a field of view of approximately 180 degrees. In some embodiments this expanded field of view is achieved by the camera devices using fisheye lenses.
While not all of the mounting plates are visible in the illustrated figure, the camera mounting plate 1110 used to mount the camera pair 1102 is shown. The mounting plates for the cameras have slots for screws which pass through slots in the base 1120 and into threaded holes entered from the bottom of the mounting plate. This allows a camera pair's mounting plate to be adjusted by loosening the screws, which are accessible from the bottom, and then tightening the screws to secure the camera pair mounting plate to the support structure. Individual camera positions can also be adjusted and then locked down after adjustment. In some embodiments, individual cameras can be adjusted/secured to the mounting plate from the top, and the camera mounting plates can be adjusted/secured from the bottom.
In various embodiments the camera rig 1100 includes a base 1122, with the support structure 1120 rotatably mounted on the base 1122. Thus, in various embodiments the camera assembly on the support structure 1120 can be rotated 360 degrees around an axis passing through the center of the base. In some embodiments the base 1122 may be part of a tripod or another mounting device. The support structure may be made of plastic, metal, or a composite material such as graphite or fiberglass. The camera pairs can, in some embodiments, be rotated around a center point sometimes referred to as a center nodal point.
In addition to the elements discussed above, in some embodiments the camera rig 1100 also includes two simulated ears 1130, 1132. These simulated ears 1130, 1132 imitate human ears and in some embodiments are made of silicone molded in the shape of a human ear. The simulated ears 1130, 1132 include microphones, with the two ears separated from each other by a distance equal to, or approximately equal to, the separation between the ears of an average human. The microphones mounted in the simulated ears 1130, 1132 are mounted on the front-facing camera pair 1102, but could alternatively be mounted on the support structure, e.g., the platform 1120. The simulated ears 1130, 1132 are positioned perpendicular to the front surface of the camera pair 1102, in a manner similar to how human ears are positioned relative to the front, eye-facing surface of a human head. Holes in the sides of the simulated ears 1130, 1132 serve as audio entry passages to the sides of the simulated ears, with the ear-and-hole combination operating to direct audio toward the microphone mounted in each simulated ear, much like a human ear directs audio sounds toward the eardrum included in a human ear. The microphones in the left and right simulated ears 1130, 1132 provide stereo sound capture similar to the stereo sound a person, if located at the position of the camera rig 1100, would perceive via his or her left and right ears.
While Figure 11 shows one configuration of an exemplary camera rig 1100 with three stereoscopic camera pairs, it should be appreciated that other variations are possible and within the scope. For example, in one implementation the camera rig 1100 includes a single camera pair, e.g., a pair of stereoscopic cameras that can be rotated around the central point of the camera rig, thereby allowing different portions of a 360 degree scene to be captured at different times. Thus, a single camera pair may be mounted on the central support of the rig and rotated around it, being allowed to capture different scenes at different times, thereby allowing 360 degree scene capture.
Figure 12 shows an exemplary 360 degree scene environment 1200, e.g., a 360 degree scene area, which can be divided into different viewing areas/portions corresponding to the different camera positions of the respective cameras that capture the different portions of the 360 degree scene. In the illustrated example, the 360 degree scene area 1200 is divided into three 180 degree zones corresponding to the portions captured by three different cameras/camera pairs, e.g., cameras mounted on camera rig 1100 and positioned as shown in Figures 9, 10 and 11. The 0 degree mark on the 360 degree scene 1200 may be considered the center of the scene. In some embodiments that do not use fisheye lenses, the field of view of each camera is approximately 120 degrees, thereby allowing each camera to capture a scene area of approximately 120 degrees. In such embodiments, the boundaries between the different 120 degree scene portions are shown in the figure using solid black lines dividing the 360 degree scene into three portions of 120 degrees each. In embodiments in which the cameras are equipped with fisheye lenses, the field of view of each camera expands to approximately 180 degrees (plus or minus 5 degrees), thereby allowing each camera to capture a scene area of approximately 180 degrees (plus or minus 5 degrees).
The first zone (zone 1), covering 90 degrees to the left and 90 degrees to the right of the 0 degree mark and corresponding to the 180 degree front scene area extending from 270 through 0 to 90, can be captured by a first camera pair, e.g., front-facing camera pair 1102, which is positioned to capture the front scene area and is equipped with fisheye lenses allowing the camera pair to have a field of view of approximately 180 degrees. The second zone (zone 2) corresponds to the 180 degree right-rear scene area extending from 30 to 210, which can be captured by a second camera pair, e.g., camera pair 1104, positioned to capture the right-rear scene area and equipped with fisheye lenses. The third zone (zone 3) corresponds to the 180 degree left-rear scene area extending from 150 to 330, which can be captured by a third camera pair, e.g., camera pair 1106, positioned to capture the left-rear scene area and equipped with fisheye lenses. Legend 1250 includes information identifying the different line patterns used to indicate the zone boundaries, e.g., marking the start and end of the scene area covered by each zone. As can be appreciated from the figure, there is substantial overlap between the scene areas covered by the three different zones captured by the different cameras. In the example shown in Figure 12, the overlap between zone 1 and zone 2 is 60 degrees, i.e., the scene area from 30 to 90; the overlap between zone 2 and zone 3 is also 60 degrees, i.e., the scene area from 150 to 210; and the overlap between zone 3 and zone 1 is 60 degrees, i.e., the scene area from 270 to 330. While the overlap in the illustrated example is 60 degrees, it should be appreciated that different overlaps are possible. In some embodiments, the overlap between two different scene coverage zones is between 30 degrees and 60 degrees.
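The zone geometry of Figure 12 can be sketched numerically. The following is an illustrative sketch only; the function names and the representation of zones by their center angles are assumptions for illustration, not details from the patent.

```python
# Sketch of the Figure 12 geometry: three 180-degree capture zones whose
# centers are offset by 120 degrees (front = 0, right-rear = 120, left-rear = 240).

def zone_span(center_deg, fov_deg=180):
    """Return the (start, end) angles of a capture zone, modulo 360."""
    half = fov_deg / 2
    return ((center_deg - half) % 360, (center_deg + half) % 360)

def overlap_deg(center_a, center_b, fov_deg=180):
    """Degrees of scene covered by both zones, given their center angles."""
    # Angular distance between the two zone centers, folded into [0, 180].
    diff = abs(center_a - center_b) % 360
    diff = min(diff, 360 - diff)
    return max(0, fov_deg - diff)

print(zone_span(0))         # (270.0, 90.0) -> zone 1 of Figure 12
print(overlap_deg(0, 120))  # 60 -> matches the 60-degree overlap in the text
```

With 180 degree zones spaced 120 degrees apart, each pair of adjacent zones overlaps by 180 - 120 = 60 degrees, which is exactly the overlap described for zones 1/2, 2/3, and 3/1 above.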
In some embodiments, a content provider publishes, e.g., multicasts, to consumer playback devices content streams including content corresponding to the different portions of the 360 degree scene area captured by the different cameras. In some embodiments, multiple differently encoded versions of the content corresponding to the different scene portions are multicast by the content provider, and a playback device that supports and/or prefers a particular version can select the appropriate content stream to decode and play back. In accordance with one aspect of some embodiments, the playback device tracks the current head position indicating the user's current field of view, and determines which one or more of the available content streams, which include content corresponding to portions of the 360 degree scene area, to select for receipt and use in playback. For example, if the user's head position indicates the user is looking straight ahead, the playback device decodes the stream communicating the front 180 degrees of the 360 degree scene; but when it is detected that the user's head position and viewing angle have changed, the playback device decodes the stream for the portion of the 360 degree scene area appropriate to the user's current viewing angle (e.g., right rear, left rear, rear). In some embodiments, the stream including the content corresponding to a first portion of the 360 degree scene area (e.g., the front 180 degrees) includes the scene areas captured by the left and right cameras of the front camera pair used to capture the front portion of the 360 degree scene.
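The head-position-driven selection described above can be sketched as follows. This is a minimal sketch under stated assumptions: the zone names and the nearest-zone-center rule are illustrative, not taken from the patent.

```python
def select_zone(head_yaw_deg):
    """Pick the capture zone whose center angle is nearest the user's head yaw.
    Zone centers follow Figure 12: front = 0, right-rear = 120, left-rear = 240."""
    centers = {"front": 0, "right_rear": 120, "left_rear": 240}

    def angular_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(centers, key=lambda name: angular_dist(head_yaw_deg % 360, centers[name]))

print(select_zone(0))     # front      (looking straight ahead)
print(select_zone(100))   # right_rear (head turned past the zone midpoint)
```

Because adjacent zones overlap by 60 degrees, the nearest-center rule always selects a stream that fully covers a roughly 180 degree field of view around the chosen direction.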
Figure 13 shows an example representing the different portions of the exemplary 360 degree scene area of Figure 12, which may be captured by the different cameras that can be positioned on exemplary camera rig 1100 to cover the viewing areas/portions of the exemplary 360 degree scene area. The example of Figure 13 includes drawings 1300, 1320 and 1350 illustrating the different portions of the exemplary 360 degree scene area. However, as should be appreciated from the figures, in some embodiments the different portions of the scene captured by the different cameras overlap at least partially. A different shading pattern is used in each of drawings 1300, 1320, 1350 to show the portion of the scene area corresponding to the respective camera position. In some embodiments, the different scene portions shown in drawings 1300, 1320, 1350 are communicated via different content streams providing content corresponding to different view directions. While the example scene portions in Figure 13 are shown covering viewing areas of approximately 180 degrees, in some embodiments the scene portions may cover between 120 degrees and 180 degrees.
Drawing 1300 shows a first exemplary scene portion 1305 of the 360 degree scene 1200. The first exemplary scene portion 1305 corresponds to the front view direction and covers a viewing area of 180 degrees, or approximately 180 degrees, of the 360 degree scene environment. The first scene portion 1305 may be captured by a first camera pair, e.g., camera pair 902 or 1102, located at the 0 degree camera position. The area covered by the first exemplary scene portion 1305 is shown in drawing 1300 using a diagonal-line pattern. The first exemplary scene portion 1305 may be communicated by a first stream communicating frames of content corresponding to the first, e.g., front, view direction.
Drawing 1320 shows a second exemplary scene portion 1307 of the 360 degree scene 1200 (which includes the portions 1307' and 1307'' shown in drawing 1320). The second exemplary scene portion 1307 corresponds to the right-rear view direction and covers a 180 degree, or approximately 180 degree, viewing area of the 360 degree scene environment extending from 30 to 210. The second scene portion 1307 may be captured by a second camera pair, e.g., camera pair 904, located at the 120 degree camera position shown in Figures 9-10. The area covered by the second exemplary scene portion 1307 is shown in drawing 1320 using a horizontal-line pattern. Consider drawings 1300 and 1320. Note the overlapping portion 1308 of the scene area between the first and second scene portions 1305 and 1307. The overlapping portion 1308 shows the part of the captured scene area of scene area 1200 that is common to the first and second scene portions 1305, 1307. In some embodiments, the overlap between the first and second scene portions 1305, 1307 is between 30 degrees and 60 degrees, with a variation of plus or minus 2-3 degrees. In the example shown in Figure 13, the overlapping portion 1308 is 60 degrees, e.g., the area from 30 to 90. Thus, in some embodiments, at least a portion of the scene areas corresponding to the different view directions provided by different content streams and/or captured by different cameras overlaps. In some other embodiments, there is no overlap between the scene areas corresponding to the different view directions captured by different cameras. In some embodiments, the second exemplary scene portion 1307 may be communicated by a second stream communicating frames of content corresponding to the second view direction.
Drawing 1350 shows a third exemplary scene portion 1309 of the 360 degree scene 1200 (including the portions 1309' and 1309'' shown in drawing 1350). The third exemplary scene portion 1309 corresponds to the left-rear view direction and covers a 180 degree, or approximately 180 degree, viewing area of the 360 degree scene environment extending from 150 to 330. The third scene portion 1309 may be captured by a third camera pair, e.g., camera pair 906, located at the 240 degree camera position shown in Figures 9-10. The area covered by the third exemplary scene portion 1309 is shown in drawing 1350 using a vertical-line pattern. Consider drawings 1320 and 1350. Note the overlapping portions 1310, 1310', which together constitute the overlapping area between the second and third scene portions 1307 and 1309. The combined area covered by overlapping portions 1310, 1310' shows the part of the captured scene area of scene area 1200 that is common to the second and third scene portions 1307, 1309. In some embodiments, the overlap between the second and third scene portions 1307, 1309 is between 30 degrees and 60 degrees, with a variation of plus or minus 2-3 degrees. In the example shown in Figure 13, the overlapping portions 1310, 1310' together cover approximately 60 degrees, e.g., the area from 150 to 210. Now consider drawings 1300 and 1350 further. Note the overlapping portion 1312, which indicates the overlapping area between the first and third scene portions 1305 and 1309. In some embodiments, the third exemplary scene portion 1309 may be communicated by a third stream communicating frames of content corresponding to the third view direction.
While Figure 13 shows one example, presented in order to explain some aspects of the invention, it should be appreciated that other variations are possible and within the scope of the disclosure.
Figure 14, which comprises the combination of Figures 14A and 14B, is a flowchart 1400 illustrating the steps of an exemplary method of operating a playback system in accordance with an exemplary embodiment of the invention. The system may be the playback system 800 shown in Figure 8 or the playback system shown in any of the other figures of the application. The exemplary method starts in step 1402, in which a playback device, e.g., the playback device 1900 of Figure 19 or the playback device of any of the other figures, is powered on and initialized. For purposes of discussion, consider that the playback system includes a computer system 1900' coupled to a head mounted display device 1905, the head mounted display device 1905 including a display on which image content is presented, e.g., with different images being presented to the left and right eyes of the user in the case of stereoscopic content. While the computer system 1900' is shown as being external to the head mounted device including the display, the computer system 1900' could be incorporated into the head mounted display rather than being external to it.
Operation proceeds from start step 1402 to step 1404. In step 1404, the playback system 1900 receives information regarding a plurality of content streams and/or initialization data, e.g., as part of a program guide. The received information may be of the type shown in Figure 18 and include information indicating which content streams are, or will be, available, along with information that can be used to receive the streams, such as a multicast group identifier or another identifier that can be used to request content or tune to the content. For example, a multicast address associated with a content stream may be included in the received information, as may a program identifier that can be used to request the content when the content is supplied via switched digital video. In the case of broadcast content, the received information may, and sometimes does, include tuning information indicating the channel and/or frequency to which the playback device should tune in order to receive a particular content stream.
The information received in step 1404 may include information for one or more programs. For a given program, e.g., a sporting event, concert, etc., different streams may be available corresponding to different view directions relative to the camera position at the environment to which the content corresponds. The camera position corresponds to the viewing position during playback. Thus, the viewing angle of the user during playback partly determines which portion of the environment represented in the content can be received. Different portions of the environment may be communicated in different streams. For each portion of the environment, e.g., each portion of a 360 degree environment, one or more streams corresponding to different data rates may be listed in the provided information. The top and bottom of the environment may also be provided. In some embodiments, the content of each stream is stereoscopic content, in which different information is provided for the left and right eye images, thereby allowing different images to be displayed to the user to provide a desired 3D effect. In some embodiments, the top and bottom of a spherical environment are provided as monoscopic images, in which the left and right eye views are identical, and thus only one image rather than two needs to be provided.

Given the finite bandwidth available for streaming content, the information regarding the data rates of the streams and the portions of the environment to which they correspond can, and in some embodiments does, get used by the playback system to prioritize which streams to receive. The prioritization and selection of which streams to receive at a given time can, and in some embodiments does, depend on the head position of the user and/or the user's current or past direction of head rotation.
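One plausible way to combine head position with rotation direction, as the paragraph above describes, is to rank the stream covering the current view first and the neighbouring zone in the direction of head rotation second. The ring ordering and function names below are illustrative assumptions, not the patent's method.

```python
def prioritize_streams(current_zone, rotation_dir):
    """Order zone streams for reception: the current-view stream first, then
    the neighbouring zone in the direction the head is turning."""
    ring = ["front", "right_rear", "left_rear"]   # clockwise order of zone centers
    i = ring.index(current_zone)
    step = 1 if rotation_dir == "clockwise" else -1
    return [ring[i], ring[(i + step) % 3], ring[(i - step) % 3]]

print(prioritize_streams("front", "clockwise"))
# ['front', 'right_rear', 'left_rear']
```

A playback device could then request the highest available data rate for the first stream in the list and lower rates, or nothing, for the rest, consistent with the bandwidth prioritization described above.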
In the case of streaming stereoscopic content, given limited bandwidth and/or data constraints, the selection of the appropriate stream or streams to receive is important to a satisfying, high-quality viewing experience. The information received in step 1404, which may be the same as or similar to the information shown in Figure 18 (e.g., stream information 1405), is stored in memory and used for the selection of one or more streams to be received at a particular point in time, and for initiating delivery of the selected streams, e.g., by joining a multicast group corresponding to a selected stream, tuning to a channel supplying a selected stream, and/or indicating to a network device that a desired stream is to be supplied to the playback device via a switched digital video channel over which the requested stream is then delivered.
Operation proceeds from step 1404 to step 1406, in which the user's current head position is detected during initialization. The user, aware that the head position detected during the initialization phase will be assumed to be the look-forward position, normally holds his head at a comfortable level facing forward during step 1406.
Operation proceeds from step 1406 to step 1408. In step 1408, the user's head position 1407 detected in step 1406 is treated as the forward (0 degree) environment viewing position, and the view displayed when the user's head is in this position will correspond to the 0 degree environment position, i.e., the forward position of the camera used to capture the images that are then encoded and included in the content stream corresponding to the particular portion of the environment. In the case of a sporting event, this position will normally correspond to the main action area in the environment, e.g., the stage in the case of one or more streams corresponding to a concert, or the center of the field in the case of a stream corresponding to a sporting event. Thus, in step 1408, the user's viewing position is set to be interpreted as the zero degree viewing position, e.g., the forward portion of the scene area. It should be noted that the portions of the 360 degree view correspond to horizontal viewing positions, with different portions becoming visible as the user rotates his/her head. By moving his or her head up or down, the user can see a sky portion and/or a ground portion, alone or in combination with one or more of the other portions. Because the main scene area is divided into portions along the 360 degree rotation, assuming a horizontal head position, the streams corresponding to these portions are normally allocated more bandwidth, while the top/bottom scene portions can be presented using static images or images that change infrequently.
Operation proceeds from step 1408 to step 1410, in which an environment depth map is received. The depth map defines the surface of the 3D environment onto which the images of the content streams are to be mapped. In the absence of a received depth map, a sphere is the default assumed shape of the environment, with the inner surface of the sphere being the surface onto which the images of the environment are to be mapped during rendering prior to display. By providing and using a depth map, a more realistic experience is achieved, because the images of the content stream will be mapped onto a surface that more faithfully reproduces the shape of the environment being modeled and the surfaces in it. Thus, the depth map received in step 1410 corresponds to the environment to which the content to be received, as selected by the user, also corresponds. The environment map received in step 1410, or the default map in the event no map is received, is stored as environment map 1411 for subsequent use when rendering images.
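The default spherical assumption described above amounts to projecting each 2D image coordinate onto the inner surface of a unit sphere. The following sketch assumes an equirectangular frame layout, which is a common convention but not stated in the text; the function name is illustrative.

```python
import math

def equirect_to_sphere(u, v):
    """Map normalized equirectangular image coordinates (u, v in [0, 1]) onto
    a point on a unit sphere, the default environment shape assumed when no
    depth map is received."""
    theta = u * 2 * math.pi          # longitude, 0 .. 2*pi
    phi = v * math.pi                # latitude from the top, 0 .. pi
    x = math.sin(phi) * math.cos(theta)
    y = math.cos(phi)
    z = math.sin(phi) * math.sin(theta)
    return (x, y, z)

# The left edge of the image at mid-height maps to the sphere's 0-degree
# forward direction; every mapped point lies on the unit sphere.
print(equirect_to_sphere(0.0, 0.5))
```

A received depth map would replace the unit radius with per-direction distances, deforming the sphere toward the real shape of the captured environment.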
Operation proceeds from step 1410 to step 1412. In step 1412, one or more UV maps to be used to map 2D image content onto at least a portion of the 3D surface are received. In one embodiment, at least one UV map is received for each portion of the environment that can be represented by a different individual image. In some such embodiments, when the images streamed via the content streams are captured by different cameras, e.g., left and right eye cameras, different UV maps can, and sometimes do, get supplied for the different cameras. Thus, in the example of Figure 14A, while a first UV map corresponding to a first portion of the environment, e.g., the forward portion, is received in step 1414, a second UV map corresponding to a second portion of the environment, e.g., the left-rear portion, is received in step 1416, and a third UV map corresponding to a third portion of the environment, e.g., the right-rear portion, is received in step 1417. UV maps corresponding to the top and bottom of the environment are received in steps 1418 and 1420, respectively. If these portions are of the same size, the same UV map can be used for several of them. However, in some embodiments different UV maps are used for images captured by different cameras.
Thus, in one such embodiment, for each portion of the environment for which stereoscopic image data, e.g., left and right eye images, is supplied, a separate UV map may be received and stored for each of the left and right eye images, so that each UV map can take into account the particular characteristics of the camera assembly used to capture the specific left or right eye image content. Each UV map provides a map used to map the two-dimensional image of the content stream corresponding to that map onto a corresponding portion of the surface of the 3D environment. In this way, an image captured by a camera can be transmitted as a 2D image and then mapped, as a texture, onto the surface, or a portion of the surface, of the 3D model.
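The texture-mapping step just described boils down to each mesh vertex carrying a UV coordinate that selects a texel in the decoded 2D frame. The sketch below illustrates that lookup with a toy 2x2 "frame"; the data and names are invented for illustration, not from the patent.

```python
def sample_texture(image, uv):
    """Look up the texel a UV coordinate refers to, as done when a decoded 2D
    frame is applied as a texture to a portion of the 3D model.
    'image' is a rows x cols grid of texel values; uv is (u, v) in [0, 1]."""
    h = len(image)
    w = len(image[0])
    u, v = uv
    col = min(int(u * w), w - 1)   # clamp so u = 1.0 stays in range
    row = min(int(v * h), h - 1)
    return image[row][col]

# A tiny 2x2 "frame"; each vertex of a mesh portion carries one UV coordinate.
frame = [[10, 20],
         [30, 40]]
vertex_uvs = [(0.0, 0.0), (0.9, 0.0), (0.0, 0.9), (0.9, 0.9)]
print([sample_texture(frame, uv) for uv in vertex_uvs])   # [10, 20, 30, 40]
```

Supplying separate UV maps for left and right eye images, as the text notes, simply means the same mesh vertices carry different (u, v) values for each eye's texture.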
With the 3D model information and UV maps received, images that can be used as defaults in the absence of other content for a scene portion can be received and stored. In step 1422, content, e.g., one or more images, corresponding to the first, second, third and fifth scene portions is received and stored. In some embodiments, multiple alternative images corresponding to a portion of the environment, e.g., a background portion or a sky portion, are received in step 1422 and stored. Control information may be received indicating which of the multiple default images stored for a given portion is to be used at a given point in time during an event. For example, in some embodiments an image of the crowd seated and a background-area image of the crowd standing are stored as two default images, with control information used to indicate which background image is to be used for a given portion of the event. For example, during a standing-ovation portion of a play or concert, the standing-crowd image would be signaled as the background image to be displayed if the user turns toward the background direction. However, during the main portion of the event, when the crowd is normally seated, the control information would signal that the seated-crowd default image should be used if the user rotates his/her head toward the background. The control information may be signaled separately from the content streams, or may be included with the content stream for a portion of the environment different from the portion to which the one or more default images relate. For example, the content stream corresponding to the forward direction may provide images corresponding to the forward direction, e.g., left and right eye images, along with control information indicating which default images should be used for the sky, the ground, the right background portion, and the left background portion at various times during the event. As an alternative, the playback device may determine which background or sky portion to use based on the luminance and/or the similarity of one or more characteristics of the foreground images at a particular point in time. For example, when the foreground images are dark, this can be detected and a heavily overcast sky image automatically selected; and when the foreground images are brighter, this can likewise be detected and a less cloudy, brighter sky image automatically selected from the available received and stored default sky images.
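The brightness-driven alternative just described can be sketched as a nearest-match lookup. The sky names, luma values, and function name below are illustrative assumptions; the patent only describes the darker-foreground/darker-sky behaviour in general terms.

```python
def pick_default_sky(foreground_luma, sky_images):
    """Choose the stored default sky image whose average brightness best
    matches the foreground, per the alternative described above.
    'sky_images' maps an image name to its average luma (0-255)."""
    return min(sky_images, key=lambda name: abs(sky_images[name] - foreground_luma))

skies = {"overcast": 60, "partly_cloudy": 140, "bright": 220}
print(pick_default_sky(50, skies))    # overcast (dark foreground scene)
print(pick_default_sky(200, skies))   # bright   (brightly lit foreground)
```

In practice the comparison could use more features than mean luma (e.g., color temperature), but the selection principle is the same: match the default portion to the characteristics of the received foreground content.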
In step 1422, the default images corresponding to the different environment portions are normally received and stored in encoded form. In step 1424, one or more of the received images are decoded, and in step 1426 the decoded content is stored in one or more frame buffers. In this way, the default images can be decoded and stored in decoded form, so that they need not be decoded again during playback when they are needed for rendering. Because a default image may be used multiple times, decoding and storing it can reduce the decoding requirements that would otherwise arise from having to decode the image at, or shortly before, rendering time. Given that processing resources may be scarce, the pre-decoding of default images and their storage in decoded form improves the use of processor resources compared to embodiments in which an image is decoded immediately before display and the decoded image is then deleted, e.g., from memory, once it is no longer needed.
While the same decoded default image, e.g., of the sky, may be used multiple times, it can be processed prior to being combined with other received image content, so that it more closely matches the other images with which it is combined to generate the portion of the environment being viewed. For example, in some embodiments a decoded default image is subjected to a luminance adjustment based on the images with which it is combined, or the edges of the default image are blurred when it is combined with an image corresponding to another portion of the environment. Thus, in at least some embodiments, the luminance and/or color characteristics of an image are filtered or modified during use, so that they more closely resemble the same characteristics of the images with which the image is combined.
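The luminance adjustment and edge blurring just described can be illustrated on 1D luma samples. This is a deliberately minimal sketch; real implementations operate on full images, and the specific scaling and averaging choices below are assumptions.

```python
def match_brightness(default_img, target_mean):
    """Scale a decoded default image so its mean luma matches the content it
    will be combined with. Values are 0-255 luma samples."""
    mean = sum(default_img) / len(default_img)
    scale = target_mean / mean if mean else 1.0
    return [min(255, round(p * scale)) for p in default_img]

def blur_edge(a_edge, b_edge):
    """Average the seam pixels of two images so the transition is softened."""
    return [(p + q) / 2 for p, q in zip(a_edge, b_edge)]

print(match_brightness([100, 100, 200], 80))   # [60, 60, 120] -> mean now 80
print(blur_edge([100, 120], [140, 160]))       # [120.0, 140.0]
```

Both operations run on already-decoded frame-buffer content, which is one more reason the pre-decoding of default images described above pays off.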
With the initialization data and default images having been stored for future use, operation proceeds to step 1428, in which the set of one or more streams to be received, e.g., the set of currently selected streams, is initialized. Each stream may provide stereoscopic or monoscopic image data. Corresponding audio may optionally also be received in a selected stream, but is more commonly received in one or more separate audio streams. While the description will focus on the reception of video streams, it should be appreciated that audio streams will also normally be received and decoded by the playback device, and the audio may include stereo audio. In exemplary step 1428, the set of currently selected streams is set equal to a first stream communicating content corresponding to the forward/front portion of the environment. This is because, at startup, the initial position is set to the forward viewing position and thus, as a result of initialization, the user would expect to see the front scene area at the start.
Operation proceeds from step 1428 to step 1429. In step 1429, resource allocation information is received. The resource allocation information may be in the form of bandwidth and/or data rate allocation control information. In some embodiments, the information received in step 1429 includes information regarding how much bandwidth or data communication capacity is to be allocated to one or more communication streams corresponding to different portions of the environment. The information may be expressed in terms of bandwidth or data rate, but it should be appreciated that data rate generally tracks bandwidth. For example, taking into account the type of data coding used for communication over the bandwidth, the amount of data that can be received may vary with the amount of bandwidth.
The information received in step 1429 may indicate the relative maximum amount of available communication capacity to be allocated to the reception of images corresponding to a particular portion of the environment. For example, it may indicate that at most 80% of the bandwidth, or supportable data rate, should be allocated to a primary stream, e.g., the forward data stream, with the remaining 20% of the bandwidth allocated to one or more other streams. The allocation of resources to different directions can, and in some embodiments does, vary according to the image content in the corresponding portions of the environment and/or detected audience feedback. For example, in some embodiments, during an intermission occurring during the event to which the content corresponds, the information received in step 1429 may indicate that an increased amount of resources should be allocated to receiving images corresponding to one or both rear portions of the environment. This is because, during an intermission, a user is more likely to rotate his or her head and look away from the main field or stage, and it may be desirable to provide some video for the rear portions so that there appears to be action occurring in the audience during the intermission. For example, images of people buying hot dogs at a baseball game or changing seats can be, and in some embodiments are, transmitted so that the background appears active during the intermission while being static at other times. Similarly, the images of billboards in the background may change during the intermission, for advertising and/or entertainment purposes. It may therefore be desirable to trigger the playback device to allocate more resources to receiving background portions during an intermission than during other portions of the event. The control information received in step 1429 can, and sometimes does, differ during the main portion of an event from that received during an intermission or other discrete portion of the event. In at least some embodiments, during the main event the control information received in step 1429 causes more bandwidth and/or data rate to be allocated to the main, e.g., forward, area of the environment relative to the rear portions; but during an intermission or other discrete portion, an increase in the data rate allocated to one or both rear portions can be forced.
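The 80/20 example above lends itself to a simple sketch. The 80/20 main-event split comes from the text; the 50/50 intermission split, the phase names, and the function name are illustrative assumptions.

```python
def allocate_bandwidth(total_kbps, phase):
    """Split available capacity between the forward stream and the rear
    streams, shifting toward the rear during an intermission as described."""
    if phase == "intermission":
        shares = {"front": 0.5, "rear": 0.5}   # boost rear/background delivery
    else:
        shares = {"front": 0.8, "rear": 0.2}   # favour the main action area

    return {part: total_kbps * s for part, s in shares.items()}

print(allocate_bandwidth(10000, "main"))          # {'front': 8000.0, 'rear': 2000.0}
print(allocate_bandwidth(10000, "intermission"))  # {'front': 5000.0, 'rear': 5000.0}
```

In the embodiments described, the phase and the share values would be driven by the control information received in step 1429 rather than hard-coded.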
The split of resources allocated to the given directions may be based on the content present in the portions of the environment, measurements of audience attention, and/or the portion of the event underway, e.g., main portion, intermission portion, post-performance portion. In some embodiments, the control information provided in step 1429 specifies maximum and/or minimum amounts of bandwidth or data rate to be allocated to the reception of images corresponding to one or more portions of the environment, e.g., while the event is underway. In some embodiments, this information indicates that no bandwidth or data should be allocated to the reception of the ground or sky image portions during the event, in which case those portions will be populated using static images when needed. The bandwidth/data rate allocation control information may change over time, with different information being received at different times. The control information may, for example, be embedded in a content stream and/or sent separately as a distinct set of control information.
Operation proceeds from step 1429 to step 1430, in which delivery of the content of streams in the selected stream set that are not yet being received is initiated. This may involve joining the multicast group corresponding to a selected stream, sending a message to a network device requesting delivery of the selected stream, and/or tuning to the broadcast channel on which the selected stream or streams are transmitted. On the first pass through step 1430, this will involve initiating delivery of the content stream corresponding to the forward portion of the environment, since that is the initial viewing portion set to be selected for delivery. However, as the user's head position changes, e.g., as the user rotates his/her head to the left or right, the set of selected streams can, and normally will, change. For example, if the user turns his head to the left so that the front area and the left-rear area are each partially in the field of view, the set of selected streams will change so that content corresponding to the front area and the left-rear portion is received. If the left-rear portion was not being received and is now selected, delivery of the stream content corresponding to the left-rear portion will be initiated in step 1430. If maximum data rate streams cannot be supported in both directions, a lower data rate forward stream may be selected, in which case delivery of both the lower data rate forward stream and the left-rear content stream will be initiated. Streams outside the selected set are terminated before the streams in the newly selected set are received. The termination and initiation of streams is performed in a smooth manner so that gaps in time and/or noticeable changes in the received content are minimized, with blurring and/or filtering used to reduce noticeable changes in image quality or source when stream switching occurs. For example, blurring can be applied across the portions of images that are stitched together as part of the rendering or display process.
With content delivery of the current set of selected streams having been initiated, operation proceeds from step 1430 to step 1432. In step 1432, content is received from the streams in the selected content stream set. This can, and in various embodiments does, involve receiving content corresponding to a highest priority stream, e.g., a stream providing the content corresponding to the majority of the field of view, along with streams providing content corresponding to one or more other portions of the environment, e.g., streams providing image content corresponding to a small portion of the field of view. A stream providing a small fraction of the content for the current field of view may be referred to as a secondary stream. In an embodiment in which a single stream provides the content for the full field of view, 20% or less of the available bandwidth/supportable receive data rate may be reserved for receiving one or more secondary or lower priority streams, e.g., streams which provide content outside the field of view in case the user turns in a direction outside the field of view. In the case where the field of view is split roughly evenly between regions corresponding to two different content streams, each stream may be allocated approximately half of the available data rate/receive data rate, since the streams contribute in a substantially even manner and the user is unlikely to change position quickly enough to view regions of the environment outside the regions for which the two streams provide images.
With content reception having been initiated, content, e.g., images, is received from the selected set of streams in step 1432. In the case of the primary, e.g., highest priority, stream, the content will normally be stereoscopic content, with both left and right eye image content being received in the stream. For lower priority streams and/or streams allocated a low data rate, monoscopic images may be received, with a single image being received for display to both the left and right eyes. The forward scene portion is usually received as stereoscopic content, but one or more rear portions may be provided as a monoscopic image stream.
In step 1432, encoded image content is normally received in the streams. In step 1434, the received content is decoded, and then in step 1438, which is reached via connecting node A 1436, the decoded content is stored in one or more frame buffers. In some embodiments, a decoded image buffer is maintained for each portion of the environment. While only a portion of a received image may ultimately be displayed, the full received frame is normally decoded and buffered. A decoded buffered image can, and in some embodiments does, remain in memory until it is replaced by a more recent image for the same scene portion. Thus, at any given time a decoded image is available for each portion of the environment for use, as needed, in rendering the final output image based on the current field of view. Because the content of a decoded image remains in memory until it is replaced by a more recent decoded image, the decoding of images corresponding to each portion of the 360 degree environment need not occur during each frame time. Thus, while a frame rate of 30 frames per second may be supported, 150 frames need not be decoded in each frame period, e.g., one frame for each of the top, bottom, front, left rear and right rear portions; instead, a lower number of frames equal to or slightly above the frame rate to be supported can be decoded, with some portions of the image being sourced from previously decoded still images or from previously decoded image portions that are updated at a lower rate than the portion corresponding to the main field of view.
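A per-portion decoded frame buffer of the kind described, in which each portion of the 360 degree environment retains its most recent decoded image until a newer one arrives, might be sketched as follows. The class and method names are illustrative, not from the patent:

```python
class DecodedFrameBuffers:
    """Keep the most recent decoded image for each environment portion.

    Portions updated at a low rate (or supplied only as still images)
    simply retain their last decoded frame, so a full 360-degree render
    is possible every frame without decoding every portion every frame.
    """

    def __init__(self):
        self._buffers = {}  # portion name -> (frame_number, image)

    def store(self, portion, frame_number, image):
        # Replace the buffered image only if this frame is newer.
        old = self._buffers.get(portion)
        if old is None or frame_number >= old[0]:
            self._buffers[portion] = (frame_number, image)

    def latest(self, portion):
        entry = self._buffers.get(portion)
        return None if entry is None else entry[1]
```

A renderer can then pull `latest()` for each portion intersecting the field of view, regardless of how recently each portion was updated.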
Thus, following the update in step 1438, currently decoded frames are available for rendering an image based on the user's current field of view.
Operation proceeds from step 1438 to step 1440. In step 1440, content is rendered for display using: the decoded content available from the frame buffer(s); an environment map which defines one or more surfaces onto which image portions are to be applied, e.g., as textures; and a UV map providing information on how the 2D decoded images are to be applied to the model of the 3D surface. The environment model can be in the form of a 3D mesh, with the points in the UV map corresponding to vertices of the mesh model defining the surfaces of the environment onto which the images are applied.
As part of the rendering, when the content from a single stream does not fully occupy the user's field of view, image portions corresponding to content received from different streams will be combined to generate an image of the environment corresponding to the user's field of view. Filtering or blurring can be, and in some embodiments is, applied across the seams where images are joined to form the composite image corresponding to the user's field of view. This tends to reduce how noticeable the seams are to the user. Additionally, in some embodiments, the luminance of the image portions contributing to the composite image is adjusted to reduce luminance differences between the image portions being combined to form the composite image, with the luminance values of the forward view being given priority over the luminance values of the rear, top or side portions when the luminance adjustment is performed.
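One way to implement the luminance adjustment described, with the forward view's brightness treated as the reference and the other portions scaled toward it before stitching, is sketched below under the simplifying assumption that each portion is a flat list of luminance samples; real implementations would operate on image arrays:

```python
def match_luminance(portions, reference="front"):
    """Scale each portion's luminance toward the reference portion's mean.

    portions maps a portion name to a list of luminance samples.
    The forward (reference) portion is left untouched, giving its
    brightness values priority over rear/top/side portions.
    """
    ref = portions[reference]
    ref_mean = sum(ref) / len(ref)
    adjusted = {reference: list(ref)}
    for name, pixels in portions.items():
        if name == reference:
            continue
        mean = sum(pixels) / len(pixels)
        gain = ref_mean / mean if mean else 1.0
        adjusted[name] = [p * gain for p in pixels]
    return adjusted
```

A darker rear portion is brightened to match the forward portion's mean, reducing the visibility of the seam between them.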
In step 1442, the rendered image, which may be a composite of one or more image portions of images received from different content streams, is stored, displayed or transmitted. This represents the generation and display of one or more frames of content based on the current set of selected content streams.
Over time, the maximum supportable data rate may change due to changes in communication channel conditions or network problems. This can be detected in step 1443 and taken into consideration when selecting which content streams should be received and processed. In step 1443, the maximum supportable data rate and/or bandwidth is determined, as represented by data 1444, for use in subsequent steps. In step 1445, the user's current head position is detected. This can be done using a position sensor included on the headgear incorporating the head mounted display.
Operation proceeds from step 1445 to step 1446, in which it is determined whether the user's head position has changed. If the user's head position has not changed, operation proceeds to step 1447, in which it is checked whether the currently available maximum bandwidth or maximum supportable data rate has changed, e.g., since the last time stream selection was performed. If no change in the maximum supportable data rate or head position is detected, the previous stream selection remains valid and no change is made to the selected set of content streams. Thus, the playback system will continue to receive the content corresponding to the user's current field of view, which remains unchanged. In the case where no change is detected in step 1447, operation returns to step 1429 via connecting node B 1456.
However, if a change is detected in step 1446 or 1447, operation proceeds to stream selection step 1448, which involves a call to a stream selection subroutine. In this way, the detected change in head position and/or supportable data rate can be taken into consideration, and the selection of streams reconsidered in view of the supportable data rate available for receiving data, e.g., image content, and/or the user's head position.
Once streams have been selected via the stream selection subroutine, operation proceeds to step 1450, in which it is checked whether the newly selected streams differ from the current set of selected streams. If the newly selected set is the same as the currently selected set of streams being used, no change needs to be made to the streams being received, the currently selected stream set remains unchanged in step 1452, and operation proceeds to step 1429 via connecting node B 1456. However, if the newly selected stream set differs from the currently selected stream set, then in step 1454 the currently selected stream set is updated to reflect the change, e.g., the currently selected set is set equal to the newly selected stream set chosen by the stream selection subroutine.
Operation proceeds from step 1454 to step 1455, in which reception of streams not in the updated currently selected stream set is terminated. This may involve the playback system signaling that it no longer wishes to be a member of a multicast group corresponding to a stream which is no longer to be received, or taking another action so that resources, e.g., a tuner which was being used to receive the terminated stream, can be used for another purpose, e.g., reception of one or more newly selected streams.
Operation proceeds from step 1455 to step 1429 via connecting node B 1456. Then, in step 1430, reception of any newly selected streams is initiated and the received content is used to render one or more images. Thus, over time, as the user changes his or her head position and/or the supportable data rate changes, the selected streams may change as well.
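The reconciliation performed across steps 1450 through 1455, terminating reception of dropped streams and initiating reception of newly selected ones, reduces to a set difference. The sketch below uses hypothetical join/leave callbacks standing in for a real multicast or tuner API:

```python
def update_stream_set(current, selected, join, leave):
    """Reconcile the set of streams being received with a new selection.

    current, selected: sets of stream identifiers.
    join(stream), leave(stream): callbacks that start or terminate
    reception, e.g., by joining or leaving a multicast group.
    Returns the new current set.
    """
    for stream in current - selected:
        leave(stream)   # step 1455: stop streams no longer selected
    for stream in selected - current:
        join(stream)    # step 1430: start newly selected streams
    return set(selected)
```

Streams present in both sets are untouched, which keeps switching smooth: only the delta is started or stopped.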
Figure 15 is a flow chart showing the steps of a stream selection subroutine in accordance with an exemplary embodiment, which may be used as the stream selection subroutine called in step 1448 of the method of Figure 14.
The stream selection subroutine 1500 begins in step 1502 when the subroutine is called, e.g., by step 1448 of Figure 14. Operation proceeds from start step 1502 to stream selection step 1504. Inputs to stream selection step 1504 include bandwidth and/or data rate constraints 1503. These may include information on the maximum portion of the available bandwidth or total receive data rate which may be allocated to one or more scene portions, streams and/or stream directions. The constraints may also include minimum bandwidth or data allocations for scene portions, streams and/or viewing directions. Thus, a constraint can limit the maximum resources allocated to receiving content corresponding to a particular direction and/or mandate a minimum amount of resources for a direction, so that the portion corresponding to that direction is updated at least infrequently or at particular times, e.g., during mid-event breaks in the action or when action occurs in a portion of the environment such as a background region.
The maximum supportable data rate and/or maximum available bandwidth 1444 is another input to selection step 1504. This information 1444 indicates the maximum bandwidth available for receiving content and/or the maximum supportable data rate which can be used to support reception of a content stream or combination of content streams. The user's current head position 1407 and information about the available streams 1405, e.g., stream guide information such as that shown in Figure 18, are also inputs to step 1504.
In stream selection step 1504, one or more content streams corresponding to a program or event, e.g., in some cases an ongoing real time event, are selected. The selection of the one or more streams is based on the user's current head position, stream information such as stream bandwidth or stream data rate requirements, and/or information about the maximum supportable data rate or maximum supportable bandwidth. Bandwidth and/or data rate constraints 1503, which may be on a per viewing direction or per stream priority basis, can be, and sometimes are, taken into consideration and used in performing the stream selection of step 1504.
Exemplary stream selection step 1504 includes step 1506, in which the content streams are prioritized based on the user's current and/or past head position. This may involve a call to a stream prioritization subroutine such as that shown in Figure 16.
In some embodiments, streams corresponding to the same environmental direction are allocated the same priority. Thus, multiple streams providing content corresponding to the same portion of the environment and/or viewing direction can, and sometimes do, get allocated the same priority. While corresponding to the same viewing direction, the streams may have different data rates, with some streams providing high resolution stereoscopic content at a high frame rate while lower data rate streams provide, in some cases, monoscopic image content and/or low resolution images and/or support a low frame (image) rate. Thus, while a particular direction may be treated as high priority and all streams providing content corresponding to the high priority direction receive the same priority, in some embodiments a selection is then made among those streams based on the amount of bandwidth available for receiving content corresponding to the particular direction.
Following the prioritization of the content streams, operation proceeds from step 1506 to step 1508. In step 1508, the maximum bandwidth and/or data rate to be used for the stream(s) having the highest priority is determined. This determination can be made based on the bandwidth or other constraints 1503, which may indicate a maximum and/or minimum amount of the available reception resources, or portion thereof, to be allocated to the highest priority stream. In some embodiments, the minimum bandwidth/data rate allocation for the highest priority stream is 50% or more, but other allocations are possible.
In step 1510, the maximum bandwidth and/or data rate to be used for each stream having a lower priority is determined. In some embodiments, at least 20% of the data rate or bandwidth is used for secondary or lower priority streams.
With the data rates to be used for streams of different priorities having been determined in steps 1508 and 1510, operation proceeds to step 1512, in which a check is made, based on the maximum bandwidth and/or data rate for the highest priority stream, to determine whether a highest priority stream can be supported. If any one of the streams corresponding to the highest priority can be supported, the decision in step 1512 is "yes" and operation proceeds to step 1514, where the best quality stream corresponding to the highest priority is selected. This normally involves selecting the highest data rate stream from the set of streams which have been assigned the highest priority. For example, if the streams of the forward direction have been assigned the highest priority, then, given the data rate available for the highest priority content stream, the highest supportable data rate forward content stream will be selected.
Operation proceeds from step 1514 to step 1516. In step 1516, it is determined whether a second highest priority stream can be supported. In some embodiments, this involves determining how much bandwidth/data rate remains available after the highest priority stream has been selected and, based on the received constraints, how much of that bandwidth/data rate can be used for the second highest priority stream. If no constraint is placed on the second highest priority stream, the full amount of the remaining bandwidth/data reception capability can be used for it. If it is determined in step 1516 that a second highest priority stream can be supported, operation proceeds to step 1518, in which a second highest priority stream is selected, e.g., from the set of one or more streams which have been assigned the second highest priority. Step 1518 can, and in some embodiments does, involve selecting the highest supportable data rate stream having the second highest priority. For example, if the second highest priority corresponds to the right rear portion of the environment, step 1518 will involve selecting the highest supportable data rate stream corresponding to the right rear portion of the environment.
While in most cases the highest priority and secondary streams will be supported, after selecting the two highest priority streams there may remain sufficient bandwidth to receive some content corresponding to another portion of the environment, e.g., a portion not being viewed. Operation proceeds to step 1520 from step 1518, or directly from step 1516 if the second highest priority stream cannot be supported.
In step 1520, a check is made, e.g., using the bandwidth/data reception resources remaining available after the first and/or second priority streams have been selected, to determine whether a third highest priority stream can be supported. If it is determined in step 1520 that a third highest priority stream can be supported, operation proceeds to step 1522, in which a third highest priority stream is selected, e.g., using the remaining supportable bandwidth/data rate. Operation then proceeds from step 1522 to step 1524; if, given the available bandwidth and/or receivable data and/or bandwidth allocation constraints, a third highest priority stream cannot be supported, operation proceeds directly from step 1520 to step 1524.
In step 1524, a check is made to determine whether any bandwidth, e.g., capacity to receive data, remains available for receiving additional content after the other stream selections have been made. If additional bandwidth remains, operation proceeds to step 1526, in which one or more lower priority streams are selected to use the remaining available bandwidth/data rate. Operation proceeds from step 1526 to step 1530. If no additional bandwidth is available, operation proceeds from step 1524 to return step 1530.
Return step 1530 causes processing to return to the point at which the stream selection subroutine 1500 was called, e.g., with the newly selected stream set having been determined by routine 1500.
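The selection logic of Figure 15 amounts to a greedy pass over the priority-ordered stream groups: for each priority, take the highest data rate variant that fits the remaining budget and any per-priority cap. The data structures and the `max_share` cap below are assumptions made for illustration:

```python
def select_streams(groups, budget, max_share=None):
    """Greedy stream selection in priority order (sketch of Figure 15).

    groups: list of stream groups, highest priority first; each group
        is a list of (stream_id, data_rate) variants for one direction.
    budget: maximum total supportable receive data rate.
    max_share: optional per-priority caps as fractions of the budget,
        e.g., [0.5] caps the highest priority stream at 50% of budget.
    Returns the list of selected stream ids.
    """
    selected = []
    remaining = budget
    for i, variants in enumerate(groups):
        cap = remaining
        if max_share is not None and i < len(max_share):
            cap = min(cap, max_share[i] * budget)
        # Take the highest data rate variant that fits, if any.
        fitting = [v for v in variants if v[1] <= cap]
        if fitting:
            best = max(fitting, key=lambda v: v[1])
            selected.append(best[0])
            remaining -= best[1]
    return selected
```

With a 10 Mbps budget and no caps, a 6 Mbps forward stream and a 4 Mbps right rear stream are chosen, leaving nothing for a third priority; capping the top priority at 50% instead selects the 3 Mbps forward variant and frees bandwidth for a third stream.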
Figure 16 is a flow chart showing the steps of a stream prioritization subroutine in accordance with an exemplary embodiment. Diagram 1600 shows an exemplary stream prioritization routine that may be used, e.g., called, by one or more of the other routines or subroutines described herein. For example, step 1506 of Figure 15 can be implemented by a call to subroutine 1600. The stream prioritization subroutine 1600 begins in step 1602 when the routine is called. Operation proceeds from start step 1602 to step 1604. In step 1604, the user's current field of view is determined based on the user's detected head position. Given that the user's field of view is less than 180 degrees, the current field of view may correspond to a portion of the environment captured from a single camera position, e.g., the forward looking camera position. However, as the user moves his head, e.g., rotates left or right and/or looks up or down, the user's field of view may correspond to portions of the environment captured by cameras located at different camera positions. For example, when looking at a portion of the environment where images captured by different cameras meet or overlap, the user's field of view may correspond to content which will be communicated in two different streams. In some embodiments, the stream providing the images corresponding to the largest portion of the field of view will normally be given the highest priority.
With the user's field of view having been determined in step 1604, operation proceeds to step 1605, in which the streams communicating content corresponding to the user's current field of view are identified, the content being, e.g., monoscopic images or stereoscopic image pairs including left and right eye images. Operation then proceeds to step 1606, in which the size of the portion or portions, e.g., image portions, of the available scene area corresponding to the user's current field of view obtainable from the identified streams is determined. Thus, at the end of step 1606, information is available regarding which streams provide image content corresponding to the current field of view and the relative sizes of the portions, and this information can be used to rank, e.g., prioritize, the streams.
Operation proceeds from step 1606 to step 1608. In step 1608, priorities are assigned to the one or more streams providing image content corresponding to the user's current field of view. The prioritization, e.g., ranking, is based on the size of the portion(s) of the user's field of view which a stream provides. For example, a stream providing image content corresponding to 80% of the user's field of view will be ranked higher than a stream providing image content corresponding to 15% of the user's field of view, while a stream providing the remaining 5%, e.g., a top or bottom portion, will be assigned a third priority, lower than the highest priority assigned to the stream providing the image corresponding to the 80% portion of the field of view.
Step 1608 can, and in some embodiments does, include step 1610 and/or step 1612. In step 1610, the highest priority is assigned to the stream providing the largest portion of the field of view. Step 1610 can include designating the stream providing the largest portion of the field of view as the primary stream. Step 1612 includes assigning the next highest priority or priorities to streams contributing portions of the field of view outside the portion contributed by the highest priority stream. These streams can be prioritized in step 1612 based on the size of the portion they contribute to the current field of view, with streams contributing smaller portions being assigned lower priorities.
It should be appreciated that multiple streams, e.g., with different data rates, may contribute the same portion to the field of view, albeit potentially at different resolutions or frame rates, and such streams may be assigned the same priority. For example, the streams corresponding to the forward view may be assigned the same priority; the streams providing the left rear view may be assigned the same priority, e.g., a priority different from that assigned to the forward streams; and the streams providing the right rear view may be assigned the same priority, e.g., a priority different from those assigned to the forward or left rear view streams.
Thus, in step 1608 the streams contributing to the field of view are ranked, i.e., prioritized. The prioritization can be represented by listing the streams in a ranked list, with the primary stream being assigned the highest priority and the other streams being assigned lower priorities.
Not all streams may correspond to the field of view. For example, the top portion or one or two other scene portions may be outside the field of view, and the streams providing those views may therefore not have been prioritized in step 1608. Operation proceeds from step 1608 to step 1614. In step 1614, it is determined whether any remaining streams are to be prioritized. If there are no remaining streams to prioritize, e.g., because they all correspond to the user's current field of view, operation proceeds to return step 1630. However, if it is determined in step 1614 that one or more streams remain to be prioritized, operation proceeds to step 1616.
In step 1616, priorities are assigned to one or more additional streams communicating content, e.g., streams corresponding to portions outside the current field of view. In some embodiments, the prioritization performed in step 1616 is based on the proximity of the content provided by the stream being prioritized to the content visible in the current field of view and/or on the user's current or past head rotation direction. For example, if a stream provides image content corresponding to a portion of the environment in close proximity to the current field of view, in some embodiments it will be assigned a higher priority than a stream providing content corresponding to an image portion further from the user's current field of view. Similarly, given that content in the direction of head rotation is more likely to quickly come into the user's field of view than content in the direction opposite the user's detected head rotation, a stream providing image content in the direction of the user's head rotation may be given a higher priority than a stream providing content in the direction away from the detected head rotation.
In at least one embodiment, step 1616 includes step 1618, in which a check is made to determine whether the change in head position indicates a head rotation, e.g., a rotation to the left or right as opposed to a head tilt up or down. If no head rotation is detected in step 1618, operation proceeds to step 1620, in which, in some embodiments, the streams are prioritized based on which portion of image data they provide relative to the user's current field of view. When the top and bottom portions as well as a left or right rear portion are outside the field of view, the streams providing the top and/or bottom portions may by default be assigned a lower priority than the streams providing the left or right rear portions. Operation proceeds from step 1620 to return step 1630.
If it is determined in step 1618 that a user head rotation has been detected, operation proceeds to step 1622. In step 1622, the direction of the head rotation is determined, e.g., whether the user's head is turning to the left or to the right. This allows the direction of head rotation to be taken into consideration, since the next portion of the environment to enter the user's field of view is normally more likely to lie in the direction of head rotation than away from it.
Operation proceeds from step 1622 to step 1624, in which priorities are assigned to one or more content streams, e.g., streams outside the field of view, based on the direction of the head rotation. In at least one embodiment, step 1624 includes step 1626, in which the next lowest unused priority is assigned to a content stream providing content corresponding to a portion of the environment in the direction of the head rotation. For example, if the user rotates his head to the right while viewing the forward portion of the environment, the stream providing right rear content outside the field of view will be assigned a higher priority than the stream providing left rear content, which is also outside the field of view. Operation proceeds from step 1624 to step 1628, where any remaining streams which have not yet been prioritized are assigned priorities lower than those of the streams which have already been assigned priorities, indicating their lower importance.
Operation proceeds from step 1628 to return step 1630. By the time return step 1630 is reached, the content streams have been prioritized, e.g., ranked or ordered, according to priority.
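The Figure 16 scheme can be summarized as: rank in-view streams by their field-of-view contribution, then rank out-of-view streams, preferring the detected head rotation direction (or defaulting top/bottom below left/right rear when no rotation is detected). The input representation below is an assumption for illustration:

```python
def prioritize_streams(fov_share, direction_of, rotation=None):
    """Rank streams per the Figure 16 scheme (sketch).

    fov_share: stream_id -> fraction of the current field of view the
        stream contributes (0.0 if entirely outside it).
    direction_of: stream_id -> environment direction, e.g., "left",
        "right", "top", "bottom", "front".
    rotation: detected head rotation direction ("left"/"right") or None.
    Returns stream ids ordered highest priority first.
    """
    in_view = [s for s, f in fov_share.items() if f > 0]
    out_view = [s for s in fov_share if s not in in_view]
    # Steps 1608-1612: largest field-of-view contribution first.
    in_view.sort(key=lambda s: -fov_share[s])

    def out_key(s):
        d = direction_of.get(s)
        if rotation is not None:
            # Step 1626: favor the direction of head rotation.
            return (0 if d == rotation else 1, s)
        # Step 1620: default - top/bottom rank below left/right rear.
        return (1 if d in ("top", "bottom") else 0, s)

    out_view.sort(key=out_key)
    return in_view + out_view
```

A left head rotation thus promotes an out-of-view left rear stream above an out-of-view top stream, matching the example in the text.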
Figure 17 is a flow chart showing the steps 1700 of a rendering subroutine in accordance with an exemplary embodiment.
The rendering subroutine 1700 may be called by one or more routines when image rendering is needed. In the case of stereoscopic content, a separate image is rendered for each of the user's left and right eyes. In the case of monoscopic content, a single image is rendered and used for both the user's left and right eyes. Rendering often involves combining image content from one or more streams. Thus, while some portions of the environment may be provided as monoscopic content, other portions may be provided as stereoscopic content, in which case different left and right eye images may be rendered with some of the content being stereoscopic and other content being monoscopic. However, when at least a portion of the environment is presented as stereoscopic content, a separate image is generated for each of the left and right eye images.
The rendering routine 1700 begins in start step 1702 and proceeds to rendering step 1706. The inputs to rendering step 1706 include the environment map 1411, decoded image content 1703 corresponding to one or more views, and one or more UV maps 1704 to be used to map the decoded image(s) or image portion(s) onto the surface defined by the environment map 1411. As discussed above, in the case where a more complicated geometry is not provided, the environment map 1411 may default to a sphere, with images being mapped onto the interior surface of the sphere.
In some embodiments, rendering step 1706 includes step 1708, which includes generating at least one image corresponding to the user's current field of view using the content generated by decoding images included in one or more content streams corresponding to the user's current field of view, the environment map, and at least one UV map. In the case of stereoscopic, e.g., 3D, image content, the rendering results in left and right eye images being generated in a form suitable for display. In some embodiments, the difference between the rendered left and right eye images causes the user to perceive the images in 3D.
Operation proceeds from step 1706 to step 1710, a return step that causes the rendered image(s) to be returned to the calling program or routine, to be supplied to a display device, stored and/or output.
The rendering subroutine 1700 may be called each time a new version of a frame for the field of view is to be displayed. Thus, rendering generally occurs at a rate consistent with the image, e.g., frame, display rate.
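As a concrete illustration of the texture lookup underlying step 1706, the sketch below maps a viewing direction on the default spherical environment model to a (u, v) coordinate in a decoded 2D image. The equirectangular image layout is an assumption made for this example; the patent only requires that some UV map relate the 2D image to the 3D surface:

```python
def sphere_direction_to_uv(yaw_deg, pitch_deg):
    """Map a direction on the unit sphere to an equirectangular UV pair.

    yaw_deg: rotation about the vertical axis, 0 = forward,
        range [-180, 180).
    pitch_deg: elevation, -90 (straight down) to +90 (straight up).
    Returns (u, v) in [0, 1] for sampling the decoded 2D image.
    """
    u = (yaw_deg + 180.0) / 360.0
    v = (90.0 - pitch_deg) / 180.0
    return u, v
```

In a mesh-based implementation, this computation would be done once per vertex and stored as the UV map, rather than evaluated per pixel at render time.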
Figure 18 shows an exemplary table 1800 including stream information corresponding to a plurality of content streams. In some embodiments, the stream information included in exemplary table 1800 is received as part of guide information, e.g., a program guide, thereby providing the playback system with information about content streams it may select to receive. Legend 1840 includes information indicating the meaning of the various letter abbreviations used for the information included in table 1800.
The information included in table 1800 can be used to access the corresponding content streams. As will be discussed, in some embodiments, for a plurality of available content streams, the stream information includes at least one of: a multicast address of a multicast group which can be joined to receive a given corresponding content stream, information which can be used to request access to a switched digital video channel used to provide a given content stream, or channel tuning information which can be used to control a tuner of the playback system to tune to a broadcast channel on which a given content stream is broadcast.
In table 1800, each row corresponds to an individual content stream communicating content, with the content stream to which a row corresponds being identified by the stream identifier shown in the corresponding entry in column 1812. Each entry in column 1804 identifies the program content communicated by the individual content stream of that row. As should be appreciated from table 1800, a first group of rows 1820 corresponds to the program content "FOOTBALL", which may indicate a program/event title, as shown in the corresponding entries in column 1804. There may be multiple such groups corresponding to various different programs/events. Each group includes content streams, with each content stream corresponding to a viewing direction and supporting a given data rate, as will be discussed. For simplicity, only two groups are illustrated in the figure, with the second group of rows 1822 being only partially shown, simply to illustrate the concept. The second group of rows 1822 corresponds to the program content "HI", as indicated by the corresponding entries in column 1804.
Each entry in column 1806 indicates the portion of the scene area, e.g., area 1200 of the 360-degree scene area, communicated by the corresponding content stream. Thus the first three rows in group 1820 (each row corresponding to a different content stream) communicate the front scene portion (e.g., mapping to zone 1 shown in Figure 12, covering the 270° to 90° viewing area). The next three rows in group 1820 (each row corresponding to a different content stream) communicate the rear right scene portion (e.g., mapping to zone 2 shown in Figure 12, covering the 30° to 210° viewing area). The last three rows in group 1820 (each row corresponding to a different content stream) communicate the rear left scene portion (e.g., mapping to zone 3 shown in Figure 12, covering the 150° to 330° viewing area).
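The three zones above are each 180° wide and overlap, and the front zone wraps through the 0°/360° boundary. A minimal sketch of the membership test this implies, with illustrative helper names not taken from the patent:

```python
# The three overlapping 180-degree zones of Figure 12 / table 1800 (column 1806).
# Zone bounds may wrap past 0/360, so membership is tested modulo 360.
ZONES = {
    "front":      (270, 90),   # covers 270..360 and 0..90
    "rear_right": (30, 210),
    "rear_left":  (150, 330),
}

def zone_covers(zone, angle):
    """Return True if the viewing angle (degrees) lies inside the zone, handling wraparound."""
    start, end = ZONES[zone]
    angle %= 360
    if start <= end:
        return start <= angle <= end
    return angle >= start or angle <= end  # arc wraps through 0

def zones_for(angle):
    """All zones whose content includes the given viewing angle."""
    return [z for z in ZONES if zone_covers(z, angle)]
```

For example, a viewer looking straight ahead (0°) falls only in the front zone, while 60° falls in both the front and rear-right zones because the zones overlap by 60°.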
Each entry in column 1808 indicates the data rate supported by the corresponding content stream. Each entry in column 1810 indicates the identifier/address of the multicast group which can be joined to receive the corresponding content stream identified by the stream identifier shown in the corresponding entry in column 1812. Each entry in column 1814 includes a stream descriptor for the corresponding content stream identified by the stream identifier shown in the corresponding entry in column 1812. Each entry in column 1816 includes access information, e.g., tuner parameters and/or other access parameters, which can be used to access or request the corresponding content stream.
As should be appreciated from exemplary table 1800, in the illustrated example there are multiple, e.g., three, different versions of each content stream corresponding to a given viewing direction available for use in playback, with each version of the content stream supporting a different data rate. Thus, in accordance with features of the invention, the playback system can select one or more streams to use in playback based on one or more factors, e.g., supported bandwidth, data rate, user head position, etc., as discussed in detail with regard to Figures 14-17.
To more clearly understand how the information in table 1800 may be used by the playback system to select and/or access one or more content streams, consider the first row in group 1820 and the first entry in each of columns 1804, 1806, 1808, 1810, 1812, 1814 and 1816. The first entry of column 1804 indicates the event/program "FOOTBALL" communicated by a first content stream identified by the stream identifier S1D1 included in column 1812. The corresponding entry in column 1806 indicates that the first stream corresponds to content for the front scene portion (e.g., the 270° to 90° viewing area). This viewing area information is used by the playback system to identify the one or more streams communicating content corresponding to the current head position of the user/viewer, the current head position corresponding to a current field of view. Continuing the example, the corresponding first entry in column 1808 indicates that the first content stream supports and/or requires data rate D1. The corresponding entry in column 1810 indicates that the first content stream can be accessed by joining multicast group M1, where M1 indicates a multicast group address and/or an identifier which maps to the address. The corresponding entry in column 1814 includes the stream descriptor "V1C1D1F1" corresponding to the first content stream, which indicates the camera viewing angle (V1) to which the first stream corresponds, the codec type (C1), the supported data rate (D1), and the frame rate (F1) to which the first stream corresponds. The corresponding entry in the last column, 1816, indicates access tuner parameters and/or other access parameters (shown as A123) which can be used to access or request the first content stream.
With such information regarding the available content streams as discussed above available for use, e.g., at the playback system 1900, the playback system can select and access one or more content streams for use in playback in accordance with features of the invention. For a better understanding, consider a simple example in which the playback system determines that the user's head position indicates that the user is looking at the front portion of the 360-degree scene. In such a case, in one embodiment, the playback system selects at least one content stream communicating the front scene portion. Depending on various other factors discussed with regard to Figures 14-17, e.g., available bandwidth, supported data rates, stream bandwidth and/or data rate constraints, the playback system may select one stream communicating the front scene portion from among the three different available streams (S1D1, S1D2, S1D3). If the constraints allow, the playback system will select the highest-quality stream, e.g., stream S1D1, from among the plurality of content streams corresponding to the front scene portion. The information provided in table 1800 facilitates selection of the appropriate streams for playback, since at least some of the information that may be used in making the selection is provided by the stream information 1800. Following stream selection, the playback system can again use the stream information 1800 to initiate content delivery (e.g., content reception), e.g., by joining the multicast group (e.g., M1) corresponding to the selected stream or by using the access information to obtain the content stream.
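The walk-through above amounts to a lookup over table-1800-style records: restrict to streams covering the viewer's current scene portion, then take the highest data rate the constraints allow. A hedged sketch, where the record layout and the concrete rate numbers are illustrative assumptions rather than values from the patent:

```python
# Records mirror table 1800: (stream_id, scene_portion, data_rate_bps, multicast_group).
# The numeric rates here are made up for illustration.
STREAMS = [
    ("S1D1", "front", 8_000_000, "M1"),
    ("S1D2", "front", 4_000_000, "M2"),
    ("S1D3", "front", 2_000_000, "M3"),
]

def select_stream(portion, available_bw):
    """Highest-rate stream for the given scene portion that fits the bandwidth, or None."""
    candidates = [s for s in STREAMS
                  if s[1] == portion and s[2] <= available_bw]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s[2])
```

With ample bandwidth the highest-quality version (S1D1) is chosen; under a tighter constraint the selection falls back to a lower-rate version of the same portion.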
Figure 19 shows a playback system 1900 implemented in accordance with the present invention, which can be used to receive, decode, store and display image content received from a content delivery system. The system 1900 can be implemented as a single playback device 1900' which includes a display 1902, or as a combination of elements, such as an external display, e.g., a head mounted display 1905, coupled to a computer system 1900'.
In at least some embodiments the playback system 1900 includes a 3D head mounted display. The head mounted display may be implemented using an OCULUS RIFT™ VR (virtual reality) headset which may include the head mounted display 1905. Other head mounted displays may also be used. In some embodiments a head mounted helmet or other head mounted device, in which one or more display screens are used to display content to the user's left and right eyes, is used as device 1905. By displaying different images to the left and right eyes on a single screen, with the head mounted device configured to expose different portions of the single screen to different eyes, a single display can be used to display left and right eye images which will be perceived separately by the viewer's left and right eyes. In some embodiments a cell phone screen is used as the display of the head mounted display device. In at least some such embodiments a cell phone is inserted into the head mounted device and the cell phone is used to display images. In some embodiments the display device 1905 may be part of a 3D display apparatus such as the Oculus Rift.
The playback system 1900 has the ability to decode received encoded image data, e.g., left and right eye images corresponding to different portions of an environment or scene and/or monoscopic (single) images, and to generate 3D image content for display to the customer, e.g., by rendering and displaying different left and right eye views which are perceived by the user as a 3D image. In some embodiments the playback system 1900 is located at a customer premises location such as a home or office, but it may be located at an image capture site as well. The system 1900 can perform signal reception, decoding, display and/or other operations in accordance with the invention.
The system 1900 includes a display 1902, a display device interface 1903, an input device 1904, an input/output (I/O) interface 1906, a processor 1908, a network interface 1910 and a memory 1912. The various components of the system 1900 are coupled together via a bus 1909 which allows data to be communicated between the components of the system 1900, and/or by other connections or through a wireless interface. While the display 1902 is included as an optional element in some embodiments, as indicated by the use of the dashed box, in some embodiments an external display device 1905, e.g., a head mounted stereoscopic display device, can be coupled to the playback device via the display device interface 1903.
For example, in the case where a cell phone processor is used as the processor 1908 and the cell phone generates and displays images in a head mounted device, the system may include the processor 1908, display 1902 and memory 1912 as part of the head mounted device. The processor 1908, display 1902 and memory 1912 may all be part of the cell phone. In other embodiments of the system 1900, the processor 1908 may be part of a gaming system such as an XBOX or PS4, with the display 1905 being mounted in a head mounted device and coupled to the gaming system. Whether the processor 1908 or memory 1912 is located in the device worn on the head is not critical, and, as can be appreciated, while in some cases it may be convenient to co-locate the processor in the headgear, from a power, heat and weight perspective it may, in at least some cases, be desirable to have the processor 1908 and memory 1912 coupled to the headgear which includes the display.
While various embodiments contemplate a head mounted display 1905 or 1902, the methods and apparatus can also be used with non-head-mounted displays which can support 3D images. Thus, while in many embodiments the system 1900 includes a head mounted display, it can also be implemented with a non-head-mounted display.
Via the input device 1904, an operator of the playback system 1900 may control one or more parameters and/or select operations to be performed, e.g., select to display a 3D scene. Via the I/O interface 1906, the system 1900 can be coupled to external devices and/or exchange signals and/or information with other devices. In some embodiments, via the I/O interface 1906, the system 1900 can receive images captured by various cameras which may be part of a camera rig such as camera rig 900.
The processor 1908, e.g., a CPU, executes the routines 1914 and uses the various modules to control the playback system 1900 to operate in accordance with the invention. The processor 1908 is responsible for controlling the overall general operation of the playback system 1900. In various embodiments the processor 1908 is configured to perform the functions which have been discussed as being performed by the playback device.
Via the network interface 1910, the system 1900 communicates and/or receives signals and/or information (e.g., including images and/or video content) to/from various external devices over a communications network, e.g., such as communications network 105. The network interface 1910 includes a receiver 1911 and a transmitter 1913, via which the receive and transmit operations are performed. In some embodiments the system receives one or more selected content streams via the network interface 1910 from a content provider. In some embodiments the system 1900 receives, via the receiver 1911 of interface 1910, one or more selected content streams to use for playback. The received content streams may be received as encoded data, e.g., encoded scene portions 1952. The receiver 1911 is further configured to receive stream information 1946 and/or initialization data, e.g., as part of a program guide. The system 1900 further receives, e.g., via the receiver 1911, bandwidth and/or data rate allocation control information 1948, which includes bandwidth constraints for different viewing directions, i.e., individual bandwidth constraints specifying a maximum bandwidth to be used for receiving one or more content streams providing content corresponding to the viewing direction to which the individual bandwidth constraint corresponds. In some embodiments the receiver 1911 is further configured to receive at least one environmental map, e.g., a 3D depth map defining a 3D surface, and one or more UV maps to be used for mapping image content onto at least a portion of the 3D surface, e.g., during an initialization phase or at other times. In some embodiments the receiver 1911 receives a first UV map corresponding to a first portion of the scene environment, a second UV map corresponding to a second portion of the scene environment, a third UV map corresponding to a third portion, a fourth UV map corresponding to a fourth portion, and a fifth UV map corresponding to a fifth portion of the scene environment. In some embodiments, during initialization, the system 1900 receives, e.g., via the receiver of interface 1910, content, e.g., images, corresponding to one or more of the first, second, third, fourth and fifth portions of the scene.
The memory 1912 includes various modules, e.g., routines, which when executed by the processor 1908 control the playback system 1900 to perform decoding and output operations in accordance with the invention. The memory 1912 includes control routines 1914, a head position determination module 1916, a current viewing position initialization module 1918, a decoder module 1920, a current selected stream initialization module 1922, a content delivery initiation module 1924, a frame buffer 1926, a frame buffer update module 1928, an image rendering module 1930, also referred to as an image generation module, an available bandwidth and/or supported data rate determination module 1932, a head position change determination module 1934, an available bandwidth and/or supported data rate change determination module 1936, a stream selection module 1938, a selected stream set change determination module 1940, a selected stream set update module 1942, and a stream termination module 1944, and includes data/information including received stream information 1946, received bandwidth and/or data rate allocation information 1948, a determined current maximum available bandwidth and/or supported data rate 1950, received encoded image content 1952, a received environmental map 1954, received UV maps 1956, decoded image content 1958, and generated 3D content 1960.
The control routines 1914 include device control routines and communications routines to control the operation of the system 1900. The head position determination module 1916 is configured to determine a current head position of the user, e.g., the position of the head mounted display. The head position determination module 1916 may be integrated with, and/or work in conjunction with, a position sensor, which may, for example, be included on the headgear of the head mounted display. The current viewing position initialization module 1918 is configured to initialize a current viewing position of the user, e.g., during an initialization phase, by setting the detected current head position of the user to be the forward (0 degree) environmental viewing position.
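The initialization step described for module 1918 can be sketched as follows: whatever direction the user happens to face at startup is taken as the forward (0 degree) reference, and subsequent head readings are reported relative to it. The class and method names are illustrative assumptions, not from the patent:

```python
# Minimal sketch of current-viewing-position initialization (module 1918).
class ViewingPosition:
    def __init__(self, initial_head_yaw_deg):
        # The head yaw detected at initialization becomes the 0-degree forward reference.
        self._zero = initial_head_yaw_deg % 360

    def relative_yaw(self, head_yaw_deg):
        """Current viewing angle in degrees, relative to the forward (0 degree) reference."""
        return (head_yaw_deg - self._zero) % 360
```

A viewer initialized while facing 37° who then turns to 127° is thus viewing the 90° region of the environment, i.e., positions are always interpreted relative to the calibrated forward direction.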
The decoder module 1920 is configured to decode encoded image content 1952 received from the content delivery system 700 to produce decoded image data 1958. The decoded image data 1958 may include a decoded stereoscopic scene and/or decoded scene portions. In some embodiments the decoded content is stored in one or more frame buffers 1926. The current selected stream initialization module 1922 is configured to initialize the current set of selected one or more content streams to be received. The current selected stream initialization module 1922 is configured to set the current selected stream set to a first stream communicating content corresponding to the forward/front portion of the environment/scene.
The content delivery initiation module 1924 is configured to initiate delivery of the selected content streams. In some embodiments the content delivery initiation module 1924 initiates delivery of content streams in the selected set which are not already being received. In some embodiments the content delivery initiation module 1924 is configured to send a request signal to join a multicast group corresponding to a selected content stream, e.g., a multicast group corresponding to a content stream communicating content corresponding to the current selected stream set. In some other embodiments the content delivery initiation module 1924 is configured to generate and send a request to a device in the network, requesting delivery of a switched digital channel on which the selected content stream is communicated.
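The multicast-join variant described above maps directly onto the standard `IP_ADD_MEMBERSHIP` socket option. A hedged sketch: the group address is a placeholder, and the helper names are assumptions, but the `ip_mreq` packing and socket option are the conventional POSIX/Python mechanism:

```python
import socket
import struct

def multicast_membership_request(group_addr, iface_addr="0.0.0.0"):
    """Pack the ip_mreq structure (group address + local interface) used with IP_ADD_MEMBERSHIP."""
    return struct.pack("4s4s",
                       socket.inet_aton(group_addr),
                       socket.inet_aton(iface_addr))

def join_content_stream(sock, group_addr):
    """Ask the kernel to join the multicast group carrying a selected content stream."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    multicast_membership_request(group_addr))
```

Leaving the group again (the stream-termination case discussed later) would use `IP_DROP_MEMBERSHIP` with the same packed structure.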
The frame buffer update module 1928 is configured to update the frame buffer 1926 with updated content, e.g., as updated content communicated by the selected set of content streams is received and decoded.
The image rendering module 1930 generates 3D images in accordance with features of the invention, e.g., using the decoded image content 1958, e.g., left and right eye images displayed in a manner which will be perceived as a 3D image, for display to the user on the display 1902 and/or the display device 1905. In some embodiments the image rendering module 1930 is configured to render content for display using the decoded image content 1958 corresponding to the user's current viewing area, the environmental map 1954 and the UV maps. Thus, in some embodiments, the image rendering module 1930 is configured to perform the functions discussed with regard to the steps shown in Figure 17. The generated image content 1960 is the output of the 3D image generation module 1930; thus the rendering module 1930 renders the 3D image content 1960 to the display. In some embodiments the image rendering module 1930 is configured to output one or more generated images, e.g., to a display device or to another device. The generated images may be output via the network interface 1910 and/or the display device interface 1903.
The available bandwidth and/or supported data rate determination module 1932 is configured to determine the current maximum available bandwidth and/or current maximum supported data rate available at a given time, e.g., for receiving content streams. Since the available bandwidth and/or supported data rate may change over time, e.g., due to changes in communications channel conditions or network issues, in some embodiments the determination module 1932 performs monitoring and/or determinations on an ongoing basis to detect changes in the available bandwidth and/or supported data rate. The determined current maximum supported data rate and/or bandwidth 1950 is the output of the determination module 1932 and can be updated as needed.
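The ongoing monitoring performed by modules 1932/1936 can be sketched as a small tracker: it holds the latest determined maximum rate (playing the role of item 1950) and reports whether a new measurement constitutes a change. The names are illustrative assumptions:

```python
# Sketch of bandwidth/data-rate tracking and change detection (modules 1932/1936).
class SupportedRateTracker:
    def __init__(self, initial_rate_bps):
        self.current_max = initial_rate_bps  # plays the role of determined value 1950

    def update(self, measured_rate_bps):
        """Record a new measurement; return True if it differs from the stored maximum."""
        changed = measured_rate_bps != self.current_max
        self.current_max = measured_rate_bps
        return changed
```

A detected change would then trigger re-running stream selection, just as a detected head position change does.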
The head position change determination module 1934 is configured to determine whether the user's head position has changed, e.g., by checking and comparing changes in the output of the head position determination module 1916. The available bandwidth and/or supported data rate change determination module 1936 is configured to detect whether there has been any change in the available bandwidth and/or supported data rate as compared to the current maximum available bandwidth and/or current maximum supported data rate determined by the determination module 1932.
The stream selection module 1938 is configured to select, based on the current head position of the user, which of a plurality of content streams to receive at a given time for use in playback. Based on changes in the user's current head position and/or other factors, the stream selection module 1938 may select different streams at different times. The current selected stream set 1961 is the output of the stream selection module 1938 and indicates the currently selected set of content streams to receive. In some embodiments the stream selection module 1938 includes a plurality of sub-modules configured to perform various functions as part of the stream selection operation. Figure 23 illustrates the stream selection module 1938, and the various modules included therein, in greater detail, and it is discussed below.
The selected stream set change determination module 1940 is configured to determine whether the current selected stream set 1961 has changed, e.g., because the selection module has selected one or more additional content streams and/or because one or more streams being received have been terminated/stopped. The selected stream set update module 1942 is configured to update the current selected stream set 1961 when there is a change to the selected stream set, e.g., an addition of a content stream to, or a termination of a content stream from, the selected stream set 1961, to reflect any changes to the selected stream set. The stream termination module 1944 is configured to terminate/stop receiving one or more content streams which were previously being received but are no longer in the current selected stream set 1961, e.g., because the current selected stream set 1961 has been updated due to a change in stream selection.
The stream information 1946 includes information regarding a plurality of content streams which may be available for reception and use in playback. The information included in the stream information 1946 is the same as or similar to that shown in Figure 18 and discussed previously. The received bandwidth and/or data rate allocation control information 1948 includes information regarding bandwidth constraints for different viewing directions and/or constraints indicating data rates for content streams providing content corresponding to various different viewing directions. The determined current maximum supported data rate and/or bandwidth 1950 indicates the maximum supported data rate and/or bandwidth determined at a given time by the playback system 1900.
The received environmental map 1954 includes a 3D depth map of the environment defining a 3D surface. In some embodiments, one or more such depth maps corresponding to the environment of interest may be received by the playback system 1900. The received UV maps 1956 include one or more UV maps corresponding to portions of the environment/scene of interest. The decoded data 1958 includes data decoded by the decoder 1920 in accordance with the invention. The decoded data 1958 includes content comprising the scene, or scene portions, of the environment communicated by the selected stream set.
In some embodiments the various modules discussed above are implemented as software modules. In other embodiments the modules are implemented in hardware, e.g., as individual circuits, with each module being implemented as a circuit for performing the function to which the module corresponds. In still other embodiments the modules are implemented using a combination of software and hardware.
While shown in the Figure 19 example as being included in the memory 1912, the modules shown as included in the playback device 1900 can, and in some embodiments are, implemented fully in hardware within the processor 1908, e.g., as individual circuits. The modules can, and in some embodiments are, implemented fully in hardware, e.g., as individual circuits corresponding to the different modules. In other embodiments some of the modules are implemented, e.g., as circuits, within the processor 1908, with other modules being implemented, e.g., as circuits, external to and coupled to the processor 1908. As should be appreciated, the level of integration of modules on the processor, and/or with some modules being external to the processor, may be one of design choice. Alternatively, rather than being implemented as circuits, all or some of the modules may be implemented in software and stored in the memory 1912 of the system 1900, with the modules controlling the operation of the system 1900 to implement the functions corresponding to the modules when the modules are executed by a processor, e.g., the processor 1908. In still other embodiments, various modules are implemented as a combination of hardware and software, e.g., with another circuit external to the processor providing input to the processor 1908, which then, under software control, operates to perform a portion of a module's function.
Figure 23 illustrates in greater detail the stream selection module 1938 used in the playback system 1900 and the various modules included therein. The stream selection module is configured to select one or more content streams in accordance with the methods of the present invention, e.g., as discussed in detail with regard to Figures 14-16. In some embodiments the stream selection module is configured to select which of a plurality of content streams to receive based on the user's head position, the stream information 1946 and/or the maximum supported data rate. In some embodiments the stream selection module 1938 includes a stream prioritization module 2306 configured to prioritize content streams based on the user's head position. The output of the stream prioritization module 2306 is, e.g., a prioritized list of content streams with assigned priorities. The stream prioritization module 2306 is discussed in greater detail below with reference to Figure 24.
The stream selection module 1938 further includes a highest-priority stream maximum bandwidth and/or data rate determination module 2308 configured to determine, e.g., based on bandwidth and/or data rate constraints, the maximum bandwidth and/or data rate to be used for the stream having the highest priority, and a lower-priority stream maximum bandwidth and/or data rate determination module 2310 configured to determine the maximum bandwidth and/or data rate to be used for each stream having a lower priority. In some embodiments the determination modules 2308, 2310 perform the respective determinations using the bandwidth control information 1948 and the output of the stream prioritization module 2306. Thus the stream selection module 1938 may include one or more stream bandwidth determination modules configured to determine the bandwidth for at least one content stream based on bandwidth constraints, e.g., constraints communicated to the playback system from a network device/server.
The stream selection module 1938 further includes a module 2312 configured to determine whether the highest-priority stream can be supported, based on the determined maximum bandwidth and/or data rate for the highest-priority stream and based on the available bandwidth and/or supported data rate, and a module 2314 configured to select the highest data rate stream of the highest priority which can be supported. In some embodiments the selection module 2314 is configured to select one content stream from a plurality of content streams which are assigned the highest priority, each of the content streams assigned the highest priority providing content corresponding to the same viewing direction, as part of being configured to select from among a plurality of content streams having the same priority. In some embodiments the module 2314 is configured to select from among a plurality of content streams having the same priority, e.g., the highest priority, based on the determined amount of available bandwidth. Thus, in some embodiments, when multiple streams having the same priority are available, e.g., some having higher data rate requirements and others having lower data rate requirements, the selection module 2314 selects the highest-quality stream, e.g., the high data rate stream, if the available bandwidth and/or supported data rate and the bandwidth constraints allow such a selection.
The stream selection module 1938 further includes a module 2316 configured to determine whether the second-highest-priority stream can be supported, based on the determined maximum bandwidth and/or data rate for the second-highest-priority stream and based on the available bandwidth (e.g., total available or remaining available) and/or supported data rate, a module 2318 configured to select the highest data rate stream of the second-highest priority which can be supported, a module 2320 configured to determine whether the third-highest-priority stream can be supported, based on the determined maximum bandwidth and/or data rate for the third-highest-priority stream and based on the available bandwidth (e.g., total available or remaining available) and/or supported data rate, and a module 2322 configured to select the highest data rate stream of the third-highest priority which can be supported. Thus, in some embodiments, the stream selection module 1938 is configured to select one or more content streams which have been assigned the highest priority, e.g., by the prioritization module 2306.
The stream selection module 1938 further includes an additional capacity/bandwidth availability determination module 2324 configured to determine whether there is any remaining or additional bandwidth available for receiving additional content streams, e.g., after one or more higher-priority streams have been selected for reception. In some embodiments the stream selection module 1938 further includes a module 2326 configured to select one or more lower-priority streams which can be supported, based on the determined maximum bandwidth and/or data rate for the one or more lower-priority streams and based on the available bandwidth and/or supported data rate.
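Taken together, modules 2308-2326 describe a greedy walk down the prioritized list: for each priority level, pick the highest-rate version that fits both its per-priority cap and the bandwidth still remaining, then move on. A hedged sketch under that reading, with the data layout and cap/rate values being illustrative assumptions:

```python
# Sketch of priority-ordered stream selection (modules 2312-2326).
# prioritized: list ordered highest priority first; each element is the list
# of (stream_id, rate) versions of one stream. caps: per-priority bandwidth caps.
def select_streams(prioritized, caps, total_bw):
    chosen, remaining = [], total_bw
    for versions, cap in zip(prioritized, caps):
        fitting = [(sid, r) for sid, r in versions if r <= cap and r <= remaining]
        if fitting:
            sid, rate = max(fitting, key=lambda v: v[1])  # best supportable version
            chosen.append(sid)
            remaining -= rate
    return chosen
```

Each priority level thus gets the best version the constraints allow, and lower-priority streams are only admitted out of whatever bandwidth the higher-priority selections leave over.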
Figure 24 shows the stream prioritization module 2306, which may be implemented, e.g., as part of the stream selection module 1938 (e.g., as a sub-module thereof) or as an individual module. The stream prioritization module 2306 is configured to prioritize content streams based on the head position of the user. Once the content streams have been prioritized, the stream selection module 1938 can perform stream selection from the prioritized content streams. In some embodiments the stream prioritization module 2306 includes a current field of view identification module 2404 configured to identify, based on the user's current head position, the user's current field of view indicating a portion of the scene area the user is viewing, and a current field of view stream identification module 2404 configured to identify streams communicating content corresponding to portions of the scene area which correspond to the user's current field of view. The output of the current field of view stream identification module 2404, e.g., a list of the identified streams, can in some embodiments be stored in the memory 1912, and this list can be updated as the user's head position, and thus the field of view, changes. Thus, in various embodiments, in order to prioritize the various available content streams, the user's current field of view corresponding to the head position is identified first, and then the streams communicating content corresponding to the field of view are identified.
In some embodiments the stream prioritization module 2306 further includes a module 2406 configured to determine the size of the portion of the scene area corresponding to the user's current field of view which is available from each of the identified streams, and a priority assignment/allocation module 2408 configured to assign priorities to one or more streams providing portions of the scene area corresponding to the user's current field of view, based on the size of the portion each stream provides. In some embodiments the priority assignment/allocation module 2408 includes a module 2410 configured to assign the highest priority to the stream providing the largest portion of the field of view, e.g., designating the stream providing the largest portion of the scene corresponding to the current field of view as a primary stream. In some embodiments the priority assignment/allocation module 2408 further includes a module 2412 configured to assign the next highest priorities to the remaining streams based on the size of the field of view portion each remaining stream provides, and to designate the remaining streams accordingly (e.g., as secondary, tertiary, etc.), with, e.g., a stream providing a larger portion of the field of view being given a higher priority and designation than a stream providing a smaller portion of the scene corresponding to the current field of view.
In certain embodiments, stream prioritization module 2306 also includes module 2414, and it is configured to determine whether exist
Residual stream will be prioritized, and for example, provides the stream of the content corresponding to the scene areas outside present viewing field.
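The size-based priority assignment performed by modules 2408, 2410 and 2412 can be sketched roughly as follows. The function name, the one-dimensional degree-based model of scene coverage, and the stream identifiers are illustrative assumptions for this sketch, not part of the described embodiments:

```python
# Minimal sketch of size-based stream prioritization: streams covering
# more of the current field of view get higher priority (1 = highest,
# i.e., the "primary" stream). Scene coverage is modeled as degree ranges.

def prioritize_streams(streams, current_fov):
    """streams: dict mapping stream id -> (start_deg, end_deg) coverage.
    current_fov: (start_deg, end_deg) of the user's current field of view.
    Returns a list of (stream_id, priority) ordered by priority."""
    fov_start, fov_end = current_fov

    def visible_portion(coverage):
        start, end = coverage
        overlap = min(end, fov_end) - max(start, fov_start)
        return max(0, overlap)

    # Keep only streams that actually intersect the field of view,
    # then rank them by how much of the view each one supplies.
    in_view = [(sid, visible_portion(cov)) for sid, cov in streams.items()]
    in_view = [(sid, size) for sid, size in in_view if size > 0]
    in_view.sort(key=lambda item: item[1], reverse=True)
    return [(sid, rank + 1) for rank, (sid, _) in enumerate(in_view)]

streams = {"front": (0, 120), "right_rear": (120, 240), "left_rear": (240, 360)}
ranked = prioritize_streams(streams, current_fov=(90, 180))
```

With a 90-to-180-degree field of view, the "right_rear" stream supplies 60 of the 90 visible degrees and is designated primary, while "front" supplies 30 degrees and becomes secondary; "left_rear" is excluded because it supplies no part of the view.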
In some embodiments, stream prioritization module 2306 also includes a module 2416 configured to prioritize, e.g., assign priorities to, one or more additional streams providing content outside the user's current field of view, based on the proximity of the image content to the current field of view or on the direction of head rotation. In some embodiments, module 2416 is configured to prioritize the one or more additional streams communicating content corresponding to portions outside said current field of view based on the proximity of the communicated image content to the current field of view, with a content stream communicating image content in close proximity to the current field of view being assigned a higher priority than a content stream communicating content outside of and farther away from the current field of view.
In some embodiments, module 2416 includes a head rotation determination module 2418 configured to determine whether a head rotation of the user has been detected, e.g., as part of a change in the user's head position. In some, but not all, embodiments, when the user looks up toward the sky or ceiling, or down toward the ground, the head position changes but such head motion is not treated as a head rotation. In some embodiments, module 2416 is configured to prioritize one or more additional content streams based on the direction of the user's head rotation, with a content stream providing image content outside the current field of view but in the direction of head rotation being assigned a higher priority than another content stream providing image content outside the current field of view and in a direction away from the direction of head rotation. In some such embodiments, module 2416 also includes a module 2420 configured to assign the next lower priority, e.g., after the higher priorities have been assigned to the streams providing content corresponding to the field of view, and a stream designation, e.g., tertiary, to a stream providing content corresponding to scene portions outside the current field of view, for example the top or bottom of the scene environment. In some embodiments, when it is determined that there is no head rotation, head rotation determination module 2418 provides a control input to module 2420 so that priorities are assigned to the additional streams.
In some embodiments, module 2416 also includes a head rotation direction determination module 2422 configured to determine the direction of rotation of the user's head relative to a previous head position, e.g., to the left or to the right. In some embodiments, module 2416 also includes a module 2424 configured to take the direction of head rotation into account when assigning priorities to one or more additional streams communicating content corresponding to portions outside the current field of view. In some embodiments, module 2424 includes a module 2426 configured to assign the next lower priority, e.g., the next available priority below those already assigned, and a designation, e.g., tertiary stream, to a stream providing content corresponding to a portion of the scene in the direction of head rotation. It should thus be appreciated that if a head rotation is detected, in some embodiments the priority assignment of the streams is performed based on the direction of the head rotation. In some embodiments, module 2416 also includes an additional module 2428 configured to assign lower priorities to any remaining streams under consideration.
Although illustrated in the Figure 19 embodiment as a single processor, e.g., a computer, it should be appreciated that processor 1908 may be implemented as one or more processors, e.g., computers. When implemented in software, the modules include code which, when executed by processor 1908, configures processor 1908 to implement the function corresponding to the module. In embodiments where the various modules shown in Figures 19, 23 and 24 are stored in memory 1912, memory 1912 is a computer program product comprising a computer-readable medium, the computer-readable medium including code, e.g., individual code for each module, for causing at least one computer, e.g., processor 1908, to implement the functions to which the modules correspond.

Completely hardware-based or completely software-based modules may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit-implemented modules, may be used to implement the functions. As should be appreciated, the modules illustrated in Figures 19, 23 and 24 control and/or configure system 1900, or elements therein such as processor 1908, to perform the functions of the corresponding steps of the method illustrated and/or described with respect to flowchart 1400 of Figure 14, and to perform the functions of the corresponding steps illustrated in Figures 15-17.
Figure 20, comprising the combination of Figure 20A, Figure 20B, Figure 20C, Figure 20D and Figure 20E, is a flowchart 2000 of an exemplary method of operating a content playback system in accordance with various exemplary embodiments. In accordance with various embodiments, the content playback system is, for example, a content playback device or a computer system coupled to a display.

Operation of the exemplary method starts in step 2002, in which the content playback system is powered on and initialized. Operation proceeds from step 2002 to step 2004, in which the content playback system receives a first image corresponding to a first rear view portion of the environment. Operation proceeds from step 2004 to step 2006, in which the content playback system stores said received first image corresponding to said first rear view portion of said environment. Operation proceeds from step 2006 to step 2008, in which the content playback system receives one or more additional images corresponding to said first rear view portion of said environment, including at least a second image corresponding to said first rear view portion of said environment. Operation proceeds from step 2008 to step 2010, in which the content playback system stores said received one or more additional images corresponding to said first rear view portion of said environment. Operation proceeds from step 2010 to step 2012.
In step 2012, the content playback system receives a first image corresponding to a second rear view portion of the environment. Operation proceeds from step 2012 to step 2014, in which the content playback system stores said received first image corresponding to said second rear view portion of said environment. Operation proceeds from step 2014 to step 2016, in which the content playback system receives one or more additional images corresponding to said second rear view portion of said environment, including at least a second image corresponding to said second rear view portion of said environment. Operation proceeds from step 2016 to step 2018, in which the content playback system stores said received one or more additional images corresponding to said second rear view portion of said environment. Operation proceeds from step 2018 to step 2020.
In step 2020, the content playback system receives one or more images corresponding to a sky view portion of the environment. Operation proceeds from step 2020 to step 2022, in which the content playback system stores said received one or more images corresponding to said sky view portion of said environment. Operation proceeds from step 2022 to step 2024, in which the content playback system receives one or more images corresponding to a ground view portion of the environment. Operation proceeds from step 2024 to step 2026, in which the content playback system stores said received one or more images corresponding to said ground view portion of said environment. In some embodiments, sky view and ground view refer to the up and down directions relative to the perspective of the nominal head direction of the viewer, and apply to both indoor and outdoor environments.

In some embodiments, e.g., depending on the particular embodiment, some but not necessarily all of the images corresponding to the first rear view portion, second rear view portion, sky view portion and ground view portion are received.
Operation proceeds from step 2026 to step 2030, via connecting node A 2028 to step 2034, and via connecting node B 2036 to steps 2038, 2040, 2042, 2044, 2046, 2048, 2050 and 2052. Returning to step 2030: in step 2030, the content playback system determines a head position of a viewer, said head position corresponding to a current field of view. Operation proceeds from step 2030 to step 2032, in which the content playback system determines the current field of view for said viewer based on the determined head position. Operation proceeds from step 2032 back to step 2030. Steps 2030 and 2032 are performed repeatedly, e.g., on an ongoing basis, with the current field of view being updated, e.g., refreshed. The determined current field of view is available for use in generating output images.
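The repeated head-position-to-field-of-view determination of steps 2030 and 2032 can be sketched minimally as follows; the function name, the assumption that head position is a single yaw angle in degrees, and the default 120-degree view width are illustrative assumptions only:

```python
def current_field_of_view(head_yaw_deg, fov_width_deg=120):
    """Return (start, end) of the current field of view, in degrees,
    centered on the viewer's head yaw. Angles are wrapped into [0, 360),
    so a view near 0 degrees wraps around the 360/0 boundary."""
    start = (head_yaw_deg - fov_width_deg / 2) % 360
    end = (head_yaw_deg + fov_width_deg / 2) % 360
    return start, end
```

Calling this on each head-position update, e.g., on an ongoing basis, yields a refreshed field of view that downstream steps can use when generating output images.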
In step 2034, the content playback system receives a first content stream providing content corresponding to a first portion, e.g., a forward portion view, of the environment.
In step 2038, the content playback system receives control information indicating which of a plurality of previously communicated images corresponding to said first rear view portion of said environment should be displayed during a playback time, the playback time being measured with respect to playback times indicated in said first content stream. In step 2040, the content playback system receives image selection information indicating which of the plurality of images corresponding to the first rear view portion of the environment should be used during a portion of said event.

In step 2042, the content playback system receives control information indicating which of a plurality of previously communicated images corresponding to said second rear view portion of said environment should be displayed during a playback time, the playback time being measured with respect to playback times indicated in said first content stream. In step 2044, the content playback system receives image selection information indicating which of the plurality of images corresponding to the second rear view portion of the environment should be used during a portion of said event.

In step 2046, the content playback system receives control information indicating which of a plurality of previously communicated images corresponding to said sky view portion of said environment should be displayed during a playback time, the playback time being measured with respect to playback times indicated in said first content stream. In step 2048, the content playback system receives image selection information indicating which of the plurality of images corresponding to the sky view portion of the environment should be used during a portion of said event.

In step 2050, the content playback system receives control information indicating which of a plurality of previously communicated images corresponding to said ground view portion of said environment should be displayed during a playback time, the playback time being measured with respect to playback times indicated in said first content stream. In step 2052, the content playback system receives image selection information indicating which of the plurality of images corresponding to the ground view portion of the environment should be used during a portion of said event.
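One plausible way to apply such control information is to look up, for the current playback time, which previously communicated image should be displayed. The schedule format, the function name and the image identifiers below are hypothetical illustrations, not a format defined by the described embodiments:

```python
# Hypothetical sketch: each entry tells the playback system which previously
# communicated image of a view portion to display starting at a given playback
# time, measured relative to the first content stream.

def select_stored_image(control_info, playback_time):
    """control_info: list of (start_time, image_id) sorted by start_time.
    Returns the image id whose time interval contains playback_time."""
    selected = None
    for start_time, image_id in control_info:
        if playback_time >= start_time:
            selected = image_id
        else:
            break
    return selected

rear_view_schedule = [(0, "crowd_seated"), (45, "crowd_standing"), (90, "crowd_seated")]
```

For example, at playback time 50 the schedule above selects the "crowd_standing" image of the rear view portion, and the earlier image is selected again from time 90 onward.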
Operation proceeds from step 2032, step 2034, and steps 2038, 2040, 2042, 2044, 2046, 2048, 2050 and 2052 (which may be performed in parallel), via connecting node C 2054 and connecting node D 2056, to step 2058.

In some embodiments, the control information of steps 2038, 2040, 2042, 2044, 2046, 2048, 2050 and 2052 is sent, e.g., piecewise, slightly ahead of the corresponding first stream content of step 2034 for which the control information is to be used. In some other embodiments, a block of control information is received before, or concurrently with, the start of reception of the first content stream.
In step 2058, the content playback system generates one or more output images corresponding to the current field of view based on at least one of: received content from the first content stream corresponding to the first portion view, e.g., forward portion view, of the environment; stored received images corresponding to the first rear view portion of the environment; stored received images corresponding to the second rear view portion of the environment; stored received images corresponding to the sky view portion of the environment; stored received images corresponding to the ground view portion of the environment; or a synthesized image for a portion of the current field of view for which an image is unavailable. Step 2058 includes steps 2060, 2062, 2064, 2066, 2068, 2076 and 2078.
In step 2060, the content playback system determines, based on the current field of view, a set of view portions to be used in generating the one or more output images, e.g., view portions for which data is available. Some exemplary determined sets include, for example: {}, {front view portion}, {first rear view portion}, {second rear view portion}, {sky view portion}, {ground view portion}, {front view portion, sky view portion}, {front view portion, ground view portion}, {front view portion, first rear view portion}, {front view portion, second rear view portion}, {front view portion, first rear view portion, sky view portion}, {front view portion, second rear view portion, sky view portion}, {front view portion, first rear view portion, ground view portion}, {front view portion, second rear view portion, ground view portion}, {first rear view portion, sky view portion}, {first rear view portion, ground view portion}, {first rear view portion, second rear view portion}, {first rear view portion, second rear view portion, sky view portion}, {first rear view portion, second rear view portion, ground view portion}, {second rear view portion, sky view portion}, and {second rear view portion, ground view portion}.
Operation proceeds from step 2060 to step 2062. In step 2062, the content playback system determines whether the following two conditions are satisfied: (i) the set of view portions determined in step 2060 includes only the first view portion, and (ii) there is no portion of the current field of view outside the first view portion. If it is determined that the determined set includes only the first view portion and that no portion of the current field of view lies outside the first view portion, operation proceeds from step 2062 to step 2064; otherwise, operation proceeds from step 2062 to step 2066.

In step 2064, the content playback system generates one or more output images corresponding to the current field of view based on content received from the first content stream.

In step 2066, the content playback system determines whether there are any portions of the current field of view for which an image is unavailable. If the content playback system determines that there is at least one portion of the current field of view for which an image is unavailable, operation proceeds from step 2066 to step 2076; otherwise, operation proceeds from step 2066 to step 2068.
In step 2068, the content playback system generates one or more output images corresponding to the current field of view based on the determined set of view portions to be used in generating the one or more output images. Step 2068 may, and sometimes does, include step 2070, in which the content playback system generates the one or more output images corresponding to the current field of view based on at least some received content included in the first content stream and stored content corresponding to a second portion of said environment. In some embodiments, step 2070 includes one or both of steps 2072 and 2074. In step 2072, the content playback system selects an image corresponding to a second portion view of the environment based on received image selection information. Operation proceeds from step 2072 to step 2074. In step 2074, the content playback system combines content obtained from said first content stream, captured at a second point in time, with a first image corresponding to a first point in time, said first point in time and second point in time being different.

In some embodiments, the first image is a first image of the second portion of the environment, the second portion being one of the first rear view portion and the second rear view portion of the environment. In some such embodiments, the first point in time corresponds to a time preceding the second point in time. In some such embodiments, the first point in time precedes the time of a live event during which the images included in the first content stream are captured.
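The combining of steps 2072 and 2074 — live stream content from one point in time joined with an earlier-captured stored image selected via image selection information — can be sketched as follows. Images are modeled here simply as lists of pixel rows, and all names are illustrative assumptions:

```python
def generate_output_image(stream_content, stored_images, view_set, image_selection):
    """stream_content: rows of the live front-view content from the first
    content stream, captured at the current playback time (time t2).
    stored_images: per-portion dicts of stored images captured earlier (t1).
    view_set: portion names to use; image_selection: per-portion image id
    taken from the received image selection information."""
    output = []
    for portion in sorted(view_set):
        if portion == "front":
            output.extend(stream_content)      # live content, time t2
        else:
            # stored image captured at an earlier time t1 (t1 != t2),
            # chosen per the received image selection information
            output.extend(stored_images[portion][image_selection[portion]])
    return output

stream_rows = [["f", "f"], ["f", "f"]]
stored = {"first_rear": {"imgA": [["r", "r"]]}}
selection = {"first_rear": "imgA"}
out = generate_output_image(stream_rows, stored, {"front", "first_rear"}, selection)
```

The resulting image mixes rows from the stored rear-view image with rows from the live stream, mirroring the different-points-in-time combination described above.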
Returning to step 2076: step 2076 is performed for each portion for which an image is unavailable. In step 2076, the content playback system synthesizes an image for a portion of said current field of view for which an image is unavailable. Operation proceeds from step 2076 to step 2078. In step 2078, the content playback system generates one or more output images corresponding to the current field of view based on the determined set of view portions of the environment to be used in generating the one or more output images and/or one or more synthesized images. The output images generated by step 2078 can include: a fully synthesized image; an image including content from a synthesized image and content from the first content stream; an image including content from a synthesized image, content from the first content stream, and content from a stored image; and an image including content from a synthesized image and content from a stored image. In various embodiments, step 2078 may, and sometimes does, include one or both of steps 2080 and 2082.

In step 2080, the content playback system generates one or more output images corresponding to the current field of view based on at least some received content included in the first content stream and a synthesized image simulating a portion, e.g., a second portion, of the environment. In step 2082, the content playback system combines a synthesized image with at least a portion of a received image to generate an image corresponding to the current field of view.
It should be appreciated that the current field of view can, and generally does, change over time. In response to a change in the current field of view, different sets of view portions to be used in generating output images may be determined in step 2060, different images may need to be synthesized in step 2076, e.g., corresponding to different portions of the field of view for which no image is available, and, at different times, different stored images may be identified, based on the received control information, for use in generating the combined output images at those different times.

Operation proceeds from step 2058, via connecting node E 2084, to step 2086, in which the content playback system outputs and/or displays the one or more generated output images. Step 2086 includes step 2088, in which the content playback system outputs and/or displays a first output image, said first output image being one of the one or more generated output images.
In some embodiments, a generated output image corresponding to the current field of view, e.g., one generated in step 2070, can, and sometimes does, include information from a first portion of the environment, a second portion of the environment, and a third portion of the environment. In some embodiments, the first portion of the environment, corresponding to the first content stream, is a front view portion; the second portion of the environment is one of a first rear view portion, e.g., a right rear view portion, and a second rear view portion, e.g., a left rear view portion, of the environment; and the third portion of the environment is one of a sky view portion and a ground view portion of the environment. In some such embodiments, the content corresponding to said first portion includes real-time content captured and streamed to said playback system while an event is ongoing, and the content corresponding to said second and third portions comprises non-real-time images.
In various embodiments, combining content to generate an output image corresponding to the current field of view includes performing one or more of filtering, blurring, luminance changes and/or color changes in a boundary region, e.g., in the boundary region between any two of: an image obtained from the first content stream corresponding to the front view portion of the environment, a stored image corresponding to the first rear view portion of the environment, a stored image corresponding to the second rear view portion of the environment, a stored image corresponding to the sky view portion of the environment, a stored image corresponding to the ground view portion of the environment, and a synthesized image corresponding to a region of the current field of view for which no image exists.
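One simple form such boundary-region processing could take is a linear cross-fade across the seam between two adjacent images; the function below is a minimal sketch under the assumption that each image strip is a list of per-column grayscale brightness values, which is not a representation specified by the described embodiments:

```python
def blend_seam(left, right, width):
    """Cross-fade the last `width` values of `left` into the first `width`
    values of `right` to soften the visible seam where two image portions,
    e.g., front view and first rear view, meet."""
    out = left[:-width]
    for i in range(width):
        w = (i + 1) / (width + 1)          # weight of the right-hand image
        out.append(left[-width + i] * (1 - w) + right[i] * w)
    out += right[width:]
    return out
```

The blended columns step gradually from the left image's brightness toward the right image's, which is one way of realizing the luminance-change smoothing mentioned above.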
In some embodiments, there are some portions of the environment not covered by the first portion, e.g., the front view portion, corresponding to the first content stream, and for which received images of one or more additional portions have been stored. In various embodiments, images are synthesized for those uncovered portions of the environment. For example, in one embodiment, there may be no stored image corresponding to the sky view portion, and an image is synthesized when the current field of view includes a part of the sky view. In another example, there may be a dead spot, e.g., an uncovered region, between the first rear view portion and the second rear view portion. In some embodiments, synthesizing an image includes repeating a portion of an image corresponding to an adjacent area in the environment, e.g., a portion of a stored image or a portion of an image obtained from the received first content stream.
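The repeat-adjacent-content form of synthesis just described can be sketched in a few lines; the strip-of-values model and function name are illustrative assumptions:

```python
def synthesize_fill(adjacent_strip, gap_width):
    """Fill an uncovered region, e.g., a dead spot between the first and
    second rear view portions, by repeating content taken from the adjacent
    area (a portion of a stored image or of the first content stream)."""
    fill = []
    while len(fill) < gap_width:
        fill.extend(adjacent_strip)        # tile the adjacent content
    return fill[:gap_width]                # trim to the gap's exact width
```

Tiling a three-column neighbor strip across a five-column gap, for instance, repeats its columns until the gap is filled.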
In some embodiments, the first image content received in step 2034 is stereoscopic image content including left-eye images and right-eye images, e.g., left-eye and right-eye image pairs are received. In some such embodiments, the received and stored images corresponding to the first rear view portion, second rear view portion, sky view portion and ground view portion include, e.g., paired left-eye and right-eye images. Thus, when one or more output images corresponding to the current field of view are generated, e.g., in step 2070, a left-eye image from the first content stream corresponding to the first view portion, e.g., front view portion, is combined with stored left-eye images corresponding to one or more other portions of the environment, and a right-eye image from the first content stream corresponding to the first view portion, e.g., front view portion, is combined with stored right-eye images corresponding to the one or more other portions of the environment.

In some other embodiments, the received and stored images corresponding to the first rear view portion, second rear view portion, sky view portion and ground view portion include either a left-eye image or a right-eye image from an original image pair, or include a single (mono) image, e.g., from a single camera operated individually to capture a view portion. Thus, in such embodiments, when one or more output images corresponding to the current field of view are generated, e.g., in step 2070, the left-eye image from the first content stream, e.g., corresponding to the front view portion, and the corresponding right-eye image from the first content stream are both combined with the same stored image from another view portion.

In yet other embodiments, some stored images include left-eye and right-eye image pairs, while other stored images are single images. For example, the received images stored corresponding to the first rear view portion may include left-eye and right-eye image pairs, while the received images stored corresponding to the sky view portion may include single images, e.g., mono images, that are not paired.
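The mixed stereo/mono combination rule above — a stored stereo pair contributes its matching eye view, while a single mono image is reused for both eyes — can be sketched as follows; the data shapes and identifiers are hypothetical:

```python
def combine_for_eye(eye, stream_pair, stored):
    """eye: "left" or "right". stream_pair: {"left": ..., "right": ...}
    frames from the first content stream. stored: per-portion stored imagery,
    either a {"left": ..., "right": ...} pair or a single mono image that is
    used for both eyes."""
    parts = [stream_pair[eye]]
    for portion in sorted(stored):
        image = stored[portion]
        if isinstance(image, dict):        # a stereo pair was stored
            parts.append(image[eye])
        else:                              # single (mono) stored image
            parts.append(image)
    return parts

stream_pair = {"left": "front_L", "right": "front_R"}
stored = {"first_rear": {"left": "rear_L", "right": "rear_R"}, "sky": "sky_mono"}
```

Here the left-eye output draws the left member of the stored rear-view pair, the right-eye output draws the right member, and both outputs share the single stored sky image.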
In some embodiments, the first portion of the environment, e.g., corresponding to the received first content stream, is a front view portion; the second portion of the environment is a rear view portion, e.g., a first rear view portion corresponding to the area behind and to the right of the viewer or a second rear view portion corresponding to the area behind and to the left of the viewer; and the third portion of the environment is a sky view portion or a ground view portion. In some such embodiments, images are received at different rates corresponding to the first, second and third portions, with more images being received for an event corresponding to said first portion than for said second portion. In some such embodiments, more images are received corresponding to said second portion than corresponding to said third portion.
In accordance with some embodiments, an exemplary method of operating a content playback system includes: determining a head position of a viewer, said head position corresponding to a current field of view; receiving a first content stream providing content corresponding to a first portion of an environment; generating one or more output images corresponding to the current field of view based on at least some received content included in said first content stream and i) stored content corresponding to a second portion of said environment or ii) a synthesized image simulating a second portion of said environment; and outputting or displaying a first output image, said first output image being one of the one or more generated output images. In some embodiments, the content playback system is a content playback device. In some embodiments, the content playback system is a computer system coupled to a display.

In various embodiments, the method also includes: receiving a first image corresponding to said second portion of said environment; and storing said first image corresponding to said second portion of said environment.
In some embodiments, said first image of said second portion of said environment corresponds to a first point in time, and generating one or more output images corresponding to the current field of view includes combining content obtained from said first content stream, captured at a second point in time, with said first image corresponding to said first point in time, said first point in time and second point in time being different. In some such embodiments, said first point in time corresponds to a time preceding said second point in time. In some such embodiments, said first point in time precedes the time of a live event during which the images included in said first content stream are captured.

In various embodiments, the method also includes receiving one or more additional images corresponding to said second portion of said environment, the one or more additional images corresponding to said second portion of said environment including at least a second image.

In some embodiments, the method includes receiving control information indicating which of a plurality of previously communicated images corresponding to said second portion of said environment should be displayed during a playback time, the playback time being measured with respect to playback times indicated in said first content stream.
In some embodiments, the second portion of said environment is one of a first rear view portion, a second rear view portion, a sky view portion, or a ground view portion. In some such embodiments, the method also includes receiving one or more images corresponding to a third portion of said environment.

In various embodiments, said first portion of said environment is a front view portion; said third portion is one of a sky view portion or a ground view portion; and images are received at different rates corresponding to said first, second and third portions, with more images being received for an event corresponding to said first portion than for said second portion.

In various embodiments, said content corresponding to said first portion includes real-time content captured and streamed to said playback device while an event is ongoing, and the content corresponding to said images of said second and third portions comprises non-real-time images. In some such embodiments, the method includes receiving image selection information indicating which of a plurality of images corresponding to said second portion of the environment should be used during a portion of said event; and generating one or more output images corresponding to the current field of view based on at least some received content includes selecting an image corresponding to said second portion of the environment based on the received image selection information.
In various embodiments, the exemplary method includes: determining that an image is unavailable for a portion of said current field of view; synthesizing an image to be used for said portion of said current field of view for which an image is unavailable; and combining the synthesized image with at least a portion of a received image to generate an image corresponding to the current field of view.

In various embodiments, said first image content is stereoscopic image content including left-eye images and right-eye images.
Figure 21 illustrates an exemplary content playback system 2100 implemented in accordance with the invention, which can be used to receive, decode, store, process, and display image content received from a content delivery system, such as the content delivery systems shown in Figures 1 and 7. The system 2100 can be implemented as a single playback device 2100' that includes a display 2102, or as a combination of elements, e.g., an external display such as a head mounted display 2105 coupled to a computer system 2100'.
In at least some embodiments, the content playback system 2100 includes a 3D head mounted display. The head mounted display may be implemented using an OCULUS RIFT™ VR (virtual reality) headset, which may include the head mounted display 2105. In various embodiments, the head mounted display 2105 is the same as head mounted display 805. Other head mounted displays may also be used. In some embodiments, a head-mounted helmet or other head-mounted device is used in which one or more display screens display content to the left and right eyes of a user. By displaying different images to the left and right eyes on a single screen, with the head-mounted device configured to expose different portions of the single screen to different eyes, a single display can be used to present left-eye and right-eye images which are perceived separately by the viewer's left and right eyes. In some embodiments, a cell phone screen is used as the display of the head-mounted display device. In at least some such embodiments, a cell phone is inserted into the head-mounted device and the cell phone is used to display images.
The content playback system 2100 has the ability to decode received encoded image data, e.g., left-eye and right-eye images and/or mono (single) images corresponding to different portions of an environment or scene, and to generate 3D image content for display to a customer, e.g., by rendering the different left-eye and right-eye views, which are perceived by the user as a 3D image, and displaying them. In some embodiments, the content playback system 2100 is located at a customer premises, such as a home or office, but it may also be located at an image capture site. The content playback system 2100 can perform signal reception, decoding, display, and/or other operations in accordance with the invention.
The system 2100 includes a display 2102, a display device interface 2103, an input device 2104, an input/output (I/O) interface 2106, a processor 2108, a network interface 2110, and a memory 2112. The memory 2112 includes an assembly of modules 2114, e.g., an assembly of software modules, and data/information 2116. In some embodiments, the system 2100 includes an assembly of modules 2115, e.g., an assembly of hardware modules (e.g., circuits). The various components of the system 2100 are coupled together via a bus 2109, which allows data to be communicated between the components of the system 2100, and/or by other connections or a wireless interface. While in some embodiments the display 2102 is included as an optional element, as indicated by the dashed box, in some embodiments an external display device 2105, e.g., a head mounted stereoscopic display device, can be coupled to the playback device via the display device interface 2103.
For example, in the case where a cell phone processor is used as the processor 2108 and the cell phone generates and displays images in a head-mounted device, the system may include the processor 2108, display 2102, and memory 2112 as part of the head-mounted device. The processor 2108, display 2102, and memory 2112 may all be part of the cell phone. In other embodiments of the system 2100, the processor 2108 may be part of a gaming system such as an XBOX or PS4, with the display 2105 mounted in a head-mounted device and coupled to the gaming system. Whether the processor 2108 and/or memory 2112 are located in the device worn on the head is not critical and, as can be appreciated, while in some cases it may be convenient to co-locate the processor 2108 in the headgear, from a power, heat, and weight perspective it may be desirable, in at least some cases, to have the processor 2108 and memory 2112 coupled to, rather than included in, the headgear which includes the display.
While various embodiments contemplate a head mounted display 2105 or 2102, the methods and apparatus can also be used with non-head-mounted displays which can support 3D images. Accordingly, while in many embodiments the system 2100 includes a head mounted display, it can also be implemented with a non-head-mounted display.
The memory 2112 includes various modules, e.g., routines, which when executed by the processor 2108 control the content playback system 2100 to perform operations in accordance with the invention. The memory 2112 includes the assembly of modules 2114, e.g., an assembly of software modules, and data/information 2116.
The data/information 2116 includes one or more of the following: received images corresponding to a first rear view portion 2118, received images corresponding to a second rear view portion 2120, received images corresponding to a sky view portion 2122, and received images corresponding to a ground view portion 2124. Exemplary received images corresponding to a rear view portion 2118 or 2120 include, for example, an image of standing spectators or a crowd, an image of seated spectators or a crowd, images with different visible advertisements, an image of a cheering crowd, etc. Exemplary received images corresponding to a sky view include, for example, a clear sky, different cloud patterns, different levels of darkness corresponding to different times, etc. The data/information 2116 further includes one or more or all of: received control information corresponding to the first rear view portion 2130, received control information corresponding to the second rear view portion 2132, received control information corresponding to the sky view portion 2134, and received control information corresponding to the ground view portion 2136. The data/information 2116 further includes a determined current viewer head position 2126, a determined current field of view 2128, a received first content stream 2128 (e.g., including pairs of right-eye and left-eye images corresponding to a front view portion), a determined set of view portions to be used in generating an output image 2138 (e.g., the view portions corresponding to the current field of view for which at least some received content is available to be combined), synthesized images 2140, and generated output images 2142. A generated output image may (and sometimes does) include image content from the first content stream (e.g., corresponding to the first, e.g., front view, portion) combined with a portion of a received stored image (e.g., from a rear view portion, sky portion, or ground portion) and/or a synthesized image or a portion of a synthesized image.
Figure 22 is a drawing of an assembly of modules 2200 which can be included in the exemplary content playback system 2100 of Figure 21 or the system 800 of Figure 8 in accordance with an exemplary embodiment. The modules in the assembly of modules 2200 can, and in some embodiments are, implemented fully in hardware within the processor 2108, e.g., as individual circuits. The modules in the assembly of modules 2200 can, and in some embodiments are, implemented fully in hardware within the assembly of modules 2115, e.g., as individual circuits corresponding to the different modules. In other embodiments, some of the modules are implemented, e.g., as circuits, within the processor 2108, with other modules implemented, e.g., as circuits, within the assembly of modules 2115, external to and coupled to the processor 2108. As should be appreciated, the level of integration of modules on the processor and/or with some modules being external to the processor may be a matter of design choice. Alternatively, rather than being implemented as circuits, all or some of the modules may be implemented in software and stored in the memory 2112 of the system 2100, with the modules controlling the operation of the system 2100 to implement the functions corresponding to the modules when the modules are executed by a processor, e.g., processor 2108. In some such embodiments, the assembly of modules 2200 is included in the memory 2112 as the assembly of modules 2114. In still other embodiments, various modules in the assembly of modules 2200 are implemented as a combination of hardware and software, e.g., with a circuit external to the processor providing input to the processor 2108, which then operates under software control to perform a portion of a module's function. While shown in the Figure 21 embodiment as a single processor, e.g., computer, it should be appreciated that the processor 2108 may be implemented as one or more processors, e.g., computers.
When implemented in software, the modules include code which, when executed by the processor 2108, configures the processor 2108 to implement the function corresponding to the module. In embodiments where the assembly of modules 2200 is stored in the memory 2112, the memory 2112 is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each module, for causing at least one computer, e.g., processor 2108, to implement the functions to which the modules correspond. Completely hardware-based or completely software-based modules may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit-implemented modules, may be used to implement the functions. As should be appreciated, the modules illustrated in Figure 22 control and/or configure the system 2100, or elements therein such as the processor 2108, to perform the functions of the corresponding steps illustrated and/or described in the method of flowchart 2000 of Figure 20. Thus, the assembly of modules 2200 includes various modules that perform functions corresponding to one or more steps of Figure 20.
The assembly of modules 2200 includes a viewer head position determination module 2202, a current field of view determination module 2204, a content stream selection module 2206, a content stream receive module 2208, an image receive module 2210, a received image storage module 2212, a control information receive module 2214, an output image generation module 2216, an output module 2242, a display module 2244, and control routines 2246.
The viewer head position determination module 2202 is configured to determine a head position of a viewer, the head position corresponding to a current field of view. The current field of view determination module 2204 is configured to determine the current field of view of the viewer based on the determined head position.
The content stream selection module 2206 is configured to select a content stream from among a plurality of alternative content streams, e.g., based on user (viewer) input. Different content streams may correspond to different events. In various embodiments, different content streams corresponding to the same event correspond to different cameras pointed in different directions, e.g., to provide alternative front view perspectives to the viewer. In some embodiments, at least some of the selectable image streams include stereoscopic image content comprising pairs of left-eye and right-eye images.
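The selection among alternative streams described above can be sketched as a lookup keyed on the viewer's chosen event and camera angle; the dictionary field names ("event", "camera") are illustrative assumptions, not identifiers from the patent.

```python
def select_stream(streams, event_id, camera_id):
    """Sketch of content stream selection: return the first stream matching
    the viewer-selected event and camera perspective, or None if no
    alternative stream matches the request."""
    for stream in streams:
        if stream["event"] == event_id and stream["camera"] == camera_id:
            return stream
    return None
```

A real implementation would typically also consult bandwidth and stereo-capability constraints before committing to a stream.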
The content stream receive module 2208 is configured to receive a first content stream providing content corresponding to a first portion of an environment, e.g., a front portion view. In various embodiments, the first image content received in the first content stream is stereoscopic image content including pairs of left-eye and right-eye images.
The image receive module 2210 is configured to receive images corresponding to one or more different view portions, e.g., a first rear view portion, a second rear view portion, a sky view portion, and a ground view portion. The image receive module 2210 is configured to receive a first image corresponding to a second portion of the environment. In some such embodiments, the image receive module is further configured to receive one or more additional images corresponding to the second portion of the environment, the one or more additional images including at least a second image. In some embodiments, the second portion of the environment is one of a first rear view portion, a second rear view portion, a sky view portion, or a ground view portion. In some embodiments, the second portion of the environment is one of a first rear view portion or a second rear view portion. In some embodiments, the image receive module 2210 is configured to receive one or more images corresponding to a third portion of the environment. In some embodiments, the first portion of the environment is a front view portion, and the third portion of the environment is one of a sky view portion or a ground view portion.
In some embodiments, the image receive module 2210 is configured to receive a first image corresponding to a first rear view portion of the environment, and is further configured to receive one or more additional images corresponding to the first rear view portion of the environment, the one or more additional images corresponding to the first rear view portion including at least a second image corresponding to the first rear view portion. In some embodiments, the image receive module 2210 is configured to receive a first image corresponding to a second rear view portion of the environment, and is further configured to receive one or more additional images corresponding to the second rear view portion of the environment, the one or more additional images corresponding to the second rear view portion including at least a second image corresponding to the second rear view portion. In some embodiments, the image receive module 2210 is configured to receive one or more images corresponding to a sky view portion of the environment. In some embodiments, the image receive module 2210 is configured to receive one or more images corresponding to a ground view portion of the environment.
The received image storage module 2212 is configured to store images received by the image receive module 2210. The received image storage module 2212 is configured to store the first image corresponding to the second portion of the environment. The received image storage module 2212 is configured to store the one or more additional images corresponding to the second portion of the environment. The received image storage module 2212 is configured to store the received one or more images corresponding to the third portion of the environment. In various embodiments, the received image storage module 2212 is configured to store the first image corresponding to the first rear view portion of the environment and the one or more additional images corresponding to the first rear view portion of the environment. In various embodiments, the received image storage module 2212 is configured to store the first image corresponding to the second rear view portion of the environment and the one or more additional images corresponding to the second rear view portion of the environment. In some embodiments, the received image storage module 2212 is configured to store the one or more images corresponding to the sky view portion of the environment. In some embodiments, the received image storage module 2212 is configured to store the one or more images corresponding to the ground view portion of the environment.
The control information receive module 2214 is configured to receive control information indicating which of a plurality of previously communicated images corresponding to the second portion of the environment should be displayed during a playback time, the playback time being measured with respect to a playback time indicated in the first content stream. In various embodiments, the control information receive module 2214 is further configured to receive control information indicating which of a plurality of previously communicated images corresponding to the third portion of the environment should be displayed during a playback time, the playback time being measured with respect to the playback time indicated in the first content stream. In some embodiments, the control information receive module 2214 is configured to receive image selection information indicating which of a plurality of images corresponding to the second portion of the environment should be used during a portion of an event. In some embodiments, the control information receive module 2214 is configured to receive image selection information indicating which of a plurality of images corresponding to the third portion of the environment should be used during a portion of an event.
In some embodiments, the control information receive module 2214 is configured to receive control information indicating which of a plurality of previously communicated images corresponding to the first rear view portion of the environment should be displayed during a playback time, the playback time being measured with respect to a playback time in the first content stream. In some embodiments, the control information receive module 2214 is configured to receive image selection information indicating which of a plurality of images corresponding to the first rear view portion of the environment should be used during a portion of an event. In some embodiments, the control information receive module 2214 is configured to receive control information indicating which of a plurality of previously communicated images corresponding to the second rear view portion of the environment should be displayed during a playback time, the playback time being measured with respect to a playback time in the first content stream. In some embodiments, the control information receive module 2214 is configured to receive image selection information indicating which of a plurality of images corresponding to the second rear view portion of the environment should be used during a portion of an event.
In some embodiments, the control information receive module 2214 is configured to receive control information indicating which of a plurality of previously communicated images corresponding to the sky view portion of the environment should be displayed during a playback time, the playback time being measured with respect to a playback time in the first content stream. In some embodiments, the control information receive module 2214 is configured to receive image selection information indicating which of a plurality of images corresponding to the sky view portion of the environment should be used during a portion of an event. In some embodiments, the control information receive module 2214 is configured to receive control information indicating which of a plurality of previously communicated images corresponding to the ground view portion of the environment should be displayed during a playback time, the playback time being measured with respect to a playback time in the first content stream. In some embodiments, the control information receive module 2214 is configured to receive image selection information indicating which of a plurality of images corresponding to the ground view portion of the environment should be used during a portion of an event.
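The control information described above maps playback times, measured against the first content stream's timeline, to previously communicated images for a view portion. A minimal sketch, assuming the control information arrives as (start_time, image_id) pairs sorted by start time (a representation chosen for illustration, not specified by the patent):

```python
def image_for_playback_time(control_info, playback_time):
    """Sketch: given control info as a sorted list of (start_time, image_id)
    entries for one view portion (e.g., a rear view portion), return the
    image_id that should be displayed at playback_time. Each entry applies
    from its start time until the next entry's start time."""
    selected = None
    for start, image_id in control_info:
        if start <= playback_time:
            selected = image_id
        else:
            break
    return selected
```

For example, control info [(0, "crowd_seated"), (30, "crowd_cheering")] selects the seated-crowd image for the first 30 seconds and the cheering-crowd image thereafter.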
The output image generation module 2216 is configured to generate one or more output images corresponding to the current field of view based on at least one of: received content from the first content stream corresponding to the first portion view of the environment, a stored received image corresponding to the first rear view portion of the environment, a stored received image corresponding to the second rear view portion of the environment, a stored received image corresponding to the sky view portion of the environment, a stored received image corresponding to the ground view portion of the environment, or a synthesized image corresponding to a portion of the current field of view for which no image is available. The output image generation module 2216 includes a view portion set determination module 2218, a stream-only determination module 2220, a missing portion determination module 2222, an image synthesizer module 2224, a content stream output image generation module 2226, a synthesized output image generation module 2228, a stream-based output image generation module 2230, and a non-stream-based output image generation module 2236.
The view portion set determination module 2218 is configured to determine, based on the current field of view, a set of view portions of the environment to be used in generating the one or more output images, e.g., the view portions for which at least some image content is available. Some exemplary determined sets include, for example: { }, {front view portion}, {first rear view portion}, {second rear view portion}, {sky view portion}, {ground view portion}, {front view portion, sky view portion}, {front view portion, ground view portion}, {front view portion, first rear view portion}, {front view portion, second rear view portion}, {front view portion, first rear view portion, sky view portion}, {front view portion, second rear view portion, sky view portion}, {front view portion, first rear view portion, ground view portion}, {front view portion, second rear view portion, ground view portion}, {first rear view portion, sky view portion}, {first rear view portion, ground view portion}, {first rear view portion, second rear view portion}, {first rear view portion, second rear view portion, sky view portion}, {first rear view portion, second rear view portion, ground view portion}, {second rear view portion, sky view portion}, and {second rear view portion, ground view portion}.
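One way the view portion set determination described above could work is to test which azimuthal ranges the current field of view overlaps. The following is a minimal sketch: the portion names, the non-overlapping (start, end) degree ranges, and the sampling approach are all illustrative assumptions, not details from the patent (which also involves sky and ground portions in elevation).

```python
def view_portion_set(fov_center, fov_width, portions):
    """Sketch: determine the set of azimuthal view portions overlapped by the
    current field of view. `portions` maps portion names to (start, end)
    degree ranges with 0 <= start < end <= 360; angles wrap modulo 360."""
    result = set()
    steps = 32  # sample directions across the field of view
    for i in range(steps + 1):
        angle = (fov_center - fov_width / 2 + fov_width * i / steps) % 360
        for name, (start, end) in portions.items():
            if start <= angle < end:
                result.add(name)
    return result
```

A field of view centered at 180 degrees that is 90 degrees wide, for example, straddles the boundary between a front portion and a rear portion, so both portions are needed to generate the output image.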
The stream-only determination module 2220 is configured to determine whether the one or more output images can be generated based on content in the first content stream alone, without relying on stored received images or synthesized images corresponding to other view portions (e.g., the first rear view portion, the second rear view portion, the sky view portion, or the ground view portion). The stream-only determination module 2220 is configured to check whether the determined set includes, as its single element, the first (e.g., front) view portion corresponding to the first content stream, and to check whether the field of view is within this first (e.g., front) view portion.
The missing portion determination module 2222 is configured to determine that an image is not available for a portion of the current field of view, e.g., that neither the first content stream corresponding to the front view of the environment nor a stored image received for another portion of the environment is available for that portion. The image synthesizer module 2224 is configured to synthesize an image to be used for the portion of the current field of view for which an image is not available. In various embodiments, the image synthesizer module 2224 generates a synthesized image slightly larger than needed to fill the missing portion, e.g., to allow some overlap at the borders.
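The "slightly larger than needed" behavior above can be sketched as padding the missing region before synthesis so the synthesized patch overlaps its neighbors for later boundary blending. The rectangle representation and margin value are illustrative assumptions.

```python
def expand_region(region, margin, bounds):
    """Sketch: enlarge the missing-image region by `margin` pixels on each
    side, clamped to the output image bounds, so the synthesized patch
    overlaps adjacent content for blending. region/bounds are (x, y, w, h)."""
    x, y, w, h = region
    bx, by, bw, bh = bounds
    nx = max(bx, x - margin)
    ny = max(by, y - margin)
    nx2 = min(bx + bw, x + w + margin)
    ny2 = min(by + bh, y + h + margin)
    return (nx, ny, nx2 - nx, ny2 - ny)
```

The synthesizer would then fill the expanded rectangle, and the overlap band is what the boundary-blending step operates on.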
The content stream output image generation module 2226 is configured to generate the one or more output images corresponding to the current field of view based only on received content from the first content stream, when the determined set includes only the first view portion (e.g., the front view portion) and the current field of view determined by the stream-only determination module 2220 has no portion outside the first view portion. In some embodiments, the content stream output image generation module 2226 performs a crop operation on the images obtained from the first content stream.
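The crop operation mentioned above, in the stream-only case, amounts to cutting the current field of view out of the decoded front-view frame. A minimal sketch, assuming a frame represented as a 2-D list of pixel rows:

```python
def crop_field_of_view(frame, fov_region):
    """Sketch of the stream-only crop: extract the rectangle covered by the
    current field of view from a decoded front-view frame. `frame` is a list
    of pixel rows; fov_region is (x, y, w, h) in frame coordinates."""
    x, y, w, h = fov_region
    return [row[x:x + w] for row in frame[y:y + h]]
```

No stored or synthesized content is touched on this path, which is why it is the cheapest of the generation paths.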
The stream-based output image generation module 2230 is configured to generate one or more output images corresponding to the current field of view based on at least some received content included in the first content stream (e.g., the content stream corresponding to the front view) and i) stored content corresponding to the second portion of the environment (e.g., a stored image corresponding to the first rear view, the second rear view, the sky view, or the ground view) or ii) a synthesized image simulating the second portion of the environment (e.g., a synthesized image simulating a portion of the field of view for which no image is available). The stored content corresponding to the second portion of the environment is, for example, a stored image stored by the received image storage module 2212. The synthesized image is, for example, an image generated by the image synthesizer module 2224. In some embodiments, the stream-based output image generation module 2230 is configured to select, based on the received image selection information, the image corresponding to the second portion of the environment to be used as part of generating the one or more output images corresponding to the current field of view.
The stream-based output image generation module 2230 includes a synthesized image incorporation module 2232 configured to incorporate one or more synthesized images into the generated output images. The synthesized image incorporation module 2232 is configured to combine a synthesized image with at least a portion of a received image (e.g., a received image of the first, e.g., front, view portion obtained from the received first content stream) or with a stored received image corresponding to one of the first rear view portion, the second rear view portion, the sky view portion, or the ground view portion, to generate an image corresponding to the current field of view. A stored image incorporation module 2234 is configured to incorporate a portion of one or more stored images into the generated output image. As part of generating an output image, module 2230, module 2232, and/or module 2234 perform blending in boundary regions. In various embodiments, the blending includes filtering, blurring, luminance variation, and/or color variation.
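The boundary blending mentioned above can be sketched, in its simplest form, as a linear cross-fade across the overlap band between two sources (e.g., stream content and a stored or synthesized image); real implementations would typically add the filtering, blurring, and luminance/color adjustments the text mentions. This sketch operates on 1-D strips of intensities for illustration only.

```python
def blend_seam(left_vals, right_vals):
    """Sketch of boundary-region blending: cross-fade two overlapping strips
    of pixel intensities with linearly varying weights, so the seam between
    the two image sources is not visible in the generated output image."""
    n = len(left_vals)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5
        out.append(round((1 - w) * left_vals[i] + w * right_vals[i]))
    return out
```

At the left edge of the band the output is entirely the first source, at the right edge entirely the second, with a smooth transition in between.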
The non-stream-based output image generation module 2236 is configured to generate one or more output images corresponding to the current field of view based on at least one of: i) stored content corresponding to a portion of the environment, e.g., a stored image corresponding to the first rear view, the second rear view, the sky view, or the ground view, or ii) a synthesized image simulating a portion of the environment, e.g., a synthesized image simulating a portion of the field of view for which no image is available. Module 2236 generates the one or more images when the current field of view is outside the region corresponding to the first (e.g., front view) portion corresponding to the first content stream. In some embodiments, the non-stream-based output image generation module 2236 is configured to select, based on the received image selection information, the image corresponding to the second portion of the environment to be used as part of generating the one or more output images corresponding to the current field of view.
The non-stream-based output image generation module 2236 includes a synthesized image incorporation module 2238 configured to incorporate one or more synthesized images into the generated output images. The synthesized image incorporation module 2238 is configured to combine a synthesized image with at least a portion of a received image (e.g., a received image of the first rear view portion, the second rear view portion, the sky view portion, or the ground view portion) to generate an image corresponding to the current field of view. A stored image incorporation module 2240 is configured to incorporate a portion of one or more stored images into the generated output image. As part of generating an output image, module 2236, module 2238, and/or module 2240 perform blending in boundary regions. In various embodiments, the blending includes filtering, blurring, luminance variation, and/or color variation.
The output module 2242 is configured to output the one or more generated output images, e.g., as generated by the stream-based output image generation module 2230, the content stream output image generation module 2226, and the non-stream-based output image generation module 2236, the one or more output images including a first output image. The output module is configured to output the first output image, e.g., via the network interface 2110 and/or the display device interface 2103. The display module 2244 is configured to display the one or more generated output images, e.g., as generated by the stream-based output image generation module 2230, the content stream output image generation module 2226, and the non-stream-based output image generation module 2236, the one or more output images including the first output image. The display module 2244 is configured to display the first output image, e.g., via the display 2102 and/or the display 2105.
The control routines 2246 include device control routines and communications routines to control the operation of the system 2100.
In accordance with some embodiments, an exemplary content playback system (e.g., the system 2100 of Figure 21) includes: a viewer head position determination module 2202 configured to determine a head position of a viewer, the head position corresponding to a current field of view; a content stream receive module 2208 configured to receive a first content stream providing content corresponding to a first portion of an environment; a stream-based output image generation module 2230 configured to generate one or more output images corresponding to the current field of view based on at least some received content included in the first content stream and i) stored content corresponding to a second portion of the environment or ii) a synthesized image simulating the second portion of the environment; and at least one of: an output module 2242 configured to output a first output image or a display module 2244 configured to display the first output image, the first output image being one of the one or more generated output images.
In some embodiments, the content playback system 2100 is a content playback device 2100'. In some embodiments, the content playback system 2100 is a computer system 2100' coupled to a display 2105.
In some embodiments, the system further includes: an image receive module 2210 configured to receive a first image corresponding to the second portion of the environment; and a received image storage module 2212 configured to store the first image corresponding to the second portion of the environment.
In various embodiments, the first image of the second portion of the environment corresponds to a first point in time, and the output image content stream based generation module 2230 is configured to combine content obtained from the first content stream, captured at a second point in time, with the first image corresponding to the first point in time, the first and second points in time being different. In some such embodiments, the first point in time corresponds to a time preceding the second point in time. In various embodiments, the first point in time precedes the time of a live event during which the images included in the first content stream are captured.
In some embodiments, the image receiver module 2210 is further configured to receive one or more additional images corresponding to the second portion of the environment, the one or more additional images corresponding to the second portion of the environment including at least a second image.
In various embodiments, the system also includes a control information receiver module 2214 configured to receive control information indicating which of a plurality of previously transmitted images corresponding to the second portion of the environment should be displayed during a playback time, the playback time being measured relative to a playback time indicated in the first content stream.
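This timed switching of previously transmitted images can be sketched as a simple schedule lookup. The schedule entries, image identifiers, and times below are assumptions for illustration only; the control information maps playback times (relative to the first content stream's timeline) to the background image that should be shown from that time on.

```python
import bisect

# Illustrative sketch (not the patent's implementation): each entry is a
# (playback_time_seconds, image_id) pair; the image ids are made-up names.
schedule = [(0.0, "rear_empty_stands"), (600.0, "rear_crowd"), (5400.0, "rear_night")]

def image_for_playback_time(t, schedule):
    """Return the previously transmitted image in effect at playback time t."""
    times = [entry[0] for entry in schedule]
    idx = bisect.bisect_right(times, t) - 1   # last entry at or before t
    return schedule[max(idx, 0)][1]

print(image_for_playback_time(30.0, schedule))   # rear_empty_stands
print(image_for_playback_time(700.0, schedule))  # rear_crowd
```

Measuring against the first content stream's own playback time, rather than wall-clock time, keeps the background switches aligned with the live content even when playback is paused or resumed.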
In some embodiments, the second portion of the environment is one of a first rear view portion, a second rear view portion, a sky view portion, or a ground view portion. In some such embodiments, the image receiver module 2210 is further configured to receive one or more images corresponding to a third portion of the environment.
In some embodiments, the first portion of the environment is a front view portion; the third portion is one of a sky view or a ground view portion; and images are received at different rates for the first, second, and third portions, with more images being received for the event for the first portion than for the second portion.
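The rate asymmetry between portions can be made concrete with a small sketch. The specific rates below are assumptions chosen for illustration, not values from the patent: the front portion updates at full video rate while the other portions receive only occasional still images, which is what makes the bandwidth savings possible.

```python
# Illustrative per-portion image rates in images per second (assumed numbers):
# the front view is full-rate video; rear/sky/ground are sparse stills.
rates_hz = {"front": 30.0, "rear": 0.1, "sky": 0.02, "ground": 0.02}

def images_received(portion, duration_s):
    """How many images arrive for a given portion over a duration in seconds."""
    return int(rates_hz[portion] * duration_s)

hour = 3600
print(images_received("front", hour))  # 108000
print(images_received("rear", hour))   # 360
```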
In some embodiments, the content corresponding to the first portion includes real-time content captured while an event is ongoing and streamed to the playback device, and the content corresponding to the images of the second and third portions is non-real-time image content. In some such embodiments, the control information receiver module 2214 is further configured to receive image selection information indicating which of a plurality of images corresponding to the second portion of the environment should be used during a portion of the event; and the output image content stream based generation module 2230 is configured to select, based on the received image selection information, an image corresponding to the second portion of the environment, as part of being configured to generate the one or more output images corresponding to the current field of view.
In various embodiments, the system also includes a missing portion determination module 2222 configured to determine that image content is unavailable for a portion of the field of view; an image synthesizer module 2224 configured to synthesize an image to be used for the portion of the field of view for which image content is unavailable; and a synthesized image combining module 2232 configured to combine the synthesized image with at least a portion of a received image to generate an image corresponding to the current field of view.
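The determine-synthesize-combine sequence above can be sketched as follows. The function name and the edge-repeat synthesis strategy are assumptions for illustration; the patent only requires that *some* synthesized content fill the portion of the view for which no image is available.

```python
import numpy as np

# Illustrative sketch of the missing-portion logic: where no received pixels
# cover the field of view, synthesize content (here, by repeating the nearest
# valid column) and combine it with what was received.
def fill_missing(received, valid_mask):
    """received: H x W image; valid_mask: H x W bool, True where pixels exist."""
    out = received.copy()
    H, W = received.shape
    valid_cols = np.where(valid_mask.any(axis=0))[0]
    if valid_cols.size == 0:
        return np.zeros_like(received)          # nothing received: all synthetic
    for col in range(W):
        if not valid_mask[:, col].any():
            nearest = valid_cols[np.abs(valid_cols - col).argmin()]
            out[:, col] = received[:, nearest]  # synthesized from nearest data
    return out

img = np.array([[1.0, 2.0, 0.0, 0.0]])
mask = np.array([[True, True, False, False]])
print(fill_missing(img, mask))  # [[1. 2. 2. 2.]]
```

A real synthesizer would use more sophisticated inpainting, but the contract is the same: the combined output covers the whole current field of view even when the received streams do not.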
In various embodiments, the first image content is stereoscopic image content including left-eye images and right-eye images.
In some embodiments, module assembly 2200 is included in the content playback system 800 of Figure 8 in addition to the other modules shown in Figure 8. The modules in module assembly 2200 may be included in memory 812, in processor 808, and/or as hardware modules external to processor 808 that are coupled to bus 809 of system 800.
Figure 23 is a drawing 2300 illustrating an exemplary stream selection module 1938 used in playback system 1900, in accordance with some embodiments.
Figure 24 is a drawing 2400 showing an exemplary stream prioritization module 2306, which can be implemented as part of the stream selection module of Figure 23 or as a separate module.
Some embodiments are directed to a non-transitory computer-readable medium including a set of software instructions (e.g., computer-executable instructions) for controlling a computer or other device to encode and compress stereoscopic video. Other embodiments are directed to a computer-readable medium including a set of software instructions (e.g., computer-executable instructions) for controlling a computer or other device to decode and decompress video at the player end. Although encoding and compression are mentioned as possible separate operations, it should be understood that encoding may be used to perform compression, and therefore encoding may, in some cases, include compression. Similarly, decoding may involve decompression.
The techniques of various embodiments may be implemented using software, hardware, and/or a combination of software and hardware. Various embodiments are directed to apparatus, e.g., an image data processing system. Various embodiments are also directed to methods, e.g., a method of processing image data. Various embodiments are also directed to non-transitory machine-readable media, e.g., ROM, RAM, CDs, hard discs, etc., which include machine-readable instructions for controlling a machine to implement one or more steps of a method.
Various features of the present invention are implemented using modules. Such modules may be (and in some embodiments are) implemented as software modules. In other embodiments, the modules are implemented in hardware. In still other embodiments, the modules are implemented using a combination of software and hardware. In some embodiments, the modules are implemented as individual circuits, with each module implemented as a circuit for performing the function to which the module corresponds. A wide variety of embodiments are contemplated, including some in which different modules are implemented differently, e.g., some in hardware, some in software, and some using a combination of hardware and software. It should also be noted that routines and/or subroutines, or some of the steps performed by such routines, may be implemented in dedicated hardware as opposed to software executed on a general-purpose processor. Such embodiments remain within the scope of the present invention. Many of the above-described methods or method steps can be implemented using machine-executable instructions, such as software, included in a machine-readable medium such as a memory device (e.g., RAM, floppy disk, etc.) to control a machine (e.g., a general-purpose computer with or without additional hardware) to implement all or portions of the above-described methods. Accordingly, among other things, the present invention is directed to a machine-readable medium including machine-executable instructions for causing a machine (e.g., a processor and associated hardware) to perform one or more of the steps of the above-described method(s).
Numerous additional variations on the methods and apparatus of the various embodiments described above will be apparent to those skilled in the art in view of the above description. Such variations are to be considered within the scope.
Claims (68)
1. A method of operating a playback system, comprising:
selecting, based on a user's head position, which of a plurality of content streams to receive for use in playback at a first time;
and
receiving one or more selected content streams for use in playback.
2. The method of claim 1, wherein said selecting includes:
prioritizing content streams based on the user's head position.
3. The method of claim 2, wherein said prioritizing includes:
identifying one or more streams communicating content corresponding to the user's current field of view; and
prioritizing one or more content streams providing portions of a scene corresponding to the current field of view based on the size of the view portion each stream provides.
4. The method of claim 3, wherein prioritizing content streams further includes:
prioritizing one or more additional streams communicating content corresponding to portions outside the current field of view, based on at least one of the proximity of the communicated image portion to the current field of view or a direction of head rotation.
5. The method of claim 4, wherein prioritizing content streams includes:
prioritizing one or more additional streams communicating content corresponding to portions outside the current field of view based on the proximity of the communicated image content to the current field of view, a content stream communicating image content for a region immediately adjacent to the current field of view being assigned a higher priority than a content stream outside of and further away from the current field of view.
6. The method of claim 4, wherein said prioritizing of the one or more additional content streams is based on the user's direction of head rotation, a content stream providing image content outside the current field of view but in the direction of head rotation being assigned a higher priority than other content streams providing image content outside the current field of view in a direction away from the direction of head rotation.
7. The method of claim 2, wherein said selecting includes selecting one or more content streams that have been assigned the highest priority.
8. The method of claim 2, wherein said selecting further includes:
determining an available bandwidth for receiving content streams; and
selecting from among a plurality of content streams having the same priority based on the determined amount of available bandwidth.
9. The method of claim 8, further comprising:
determining a bandwidth for at least one content stream based on bandwidth constraints communicated to the playback system; and
wherein said selecting from among the plurality of content streams having the same priority is further based on the bandwidth determined for the at least one content stream.
10. The method of claim 9, further comprising:
receiving bandwidth constraints for different view directions, each bandwidth constraint specifying a maximum bandwidth to be used for receiving one or more content streams providing content corresponding to the view direction to which that bandwidth constraint corresponds.
11. The method of claim 8, wherein said selecting from among a plurality of content streams having the same priority includes selecting one content stream from among a plurality of content streams that have been assigned the highest priority, each content stream assigned the highest priority providing content corresponding to the same view direction.
12. The method of claim 11, wherein said selecting from among a plurality of content streams having the same priority includes selecting one content stream from among a plurality of content streams that have been assigned the second highest priority, each content stream assigned the second highest priority providing content corresponding to the same view direction.
13. The method of claim 1, further comprising receiving guide information providing information regarding content streams the playback system may select to receive.
14. The method of claim 13, wherein said guide information includes, for a content stream for which guide information is provided, information that can be used to access that content stream.
15. The method of claim 14, wherein, for a first content stream, said guide information includes one of: a multicast address of a multicast group that can be joined to receive the first content stream, information that can be used to request access to a switched digital video channel used to provide the first content stream, or channel tuning information that can be used to control a tuner of the playback system to tune to a broadcast channel on which the first content stream is broadcast.
16. The method of claim 15, further comprising:
initiating delivery of the selected content stream, initiating delivery of the selected content stream including signaling to join a multicast group corresponding to the selected content stream.
17. The method of claim 15, further comprising:
initiating delivery of the selected content stream, initiating delivery of the selected content stream including sending a request to a device in a network to request delivery of the switched digital channel over which the selected content stream is communicated.
18. A playback system, comprising:
a stream selection module configured to select, based on a user's head position, which of a plurality of content streams to receive for use in playback at a first time; and
an interface including a receiver, the receiver being configured to receive one or more selected content streams for use in playback.
19. The playback system of claim 18, wherein said stream selection module includes:
a stream prioritization module configured to prioritize content streams based on the user's head position.
20. The playback system of claim 19, wherein said stream prioritization module includes:
an identification module configured to identify one or more streams communicating content corresponding to the user's current field of view; and
a priority assignment module configured to prioritize one or more content streams providing portions of a scene corresponding to the current field of view based on the size of the view portion each stream provides.
21. The playback system of claim 20, wherein said stream prioritization module further includes an additional stream priority assignment module, the additional stream priority assignment module being configured to prioritize one or more additional streams communicating content corresponding to portions outside the current field of view, based on at least one of the proximity of the communicated image portion to the current field of view or a direction of head rotation.
22. The playback system of claim 21, wherein said additional stream priority assignment module is configured to prioritize, based on the proximity of the communicated image content to the current field of view, one or more additional streams communicating content corresponding to portions outside the current field of view, a content stream communicating image content for a region immediately adjacent to the current field of view being assigned a higher priority than a content stream outside of and further away from the current field of view.
23. The playback system of claim 21, wherein said additional stream priority assignment module prioritizes one or more additional content streams based on the user's direction of head rotation, a content stream providing image content outside the current field of view but in the direction of head rotation being assigned a higher priority than other content streams providing image content outside the current field of view in a direction away from the direction of head rotation.
24. The playback system of claim 19, wherein said stream selection module is further configured to select one or more content streams that have been assigned the highest priority.
25. The playback system of claim 19, further comprising:
an available bandwidth and data rate determination module configured to determine an available bandwidth for receiving content streams; and
wherein said stream selection module includes a selection module configured to select from among a plurality of content streams having the same priority based on the determined amount of available bandwidth.
26. The playback system of claim 25,
wherein said stream selection module further includes a stream bandwidth determination module, the stream bandwidth determination module being configured to determine a bandwidth for at least one content stream based on bandwidth constraints communicated to the playback system; and
wherein said selection module is configured to select from among the plurality of content streams having the same priority further based on the bandwidth determined for the at least one content stream.
27. The playback system of claim 26, wherein said receiver is further configured to receive bandwidth constraints for different view directions, each bandwidth constraint specifying a maximum bandwidth to be used for receiving one or more content streams providing content corresponding to the view direction to which that bandwidth constraint corresponds.
28. The playback system of claim 25, wherein said selection module is configured to select one content stream from among a plurality of content streams that have been assigned the highest priority, as part of being configured to select from among a plurality of content streams having the same priority, each content stream assigned the highest priority providing content corresponding to the same view direction.
29. The playback system of claim 28, wherein said selection module is configured to select one content stream from among a plurality of content streams that have been assigned the second highest priority, as part of being configured to select from among a plurality of content streams having the same priority, each content stream assigned the second highest priority providing content corresponding to the same view direction.
30. The playback system of claim 18, wherein said receiver is further configured to receive guide information providing information regarding content streams the playback system may select to receive.
31. The playback system of claim 30, wherein said guide information includes, for a content stream for which guide information is provided, information that can be used to access that content stream.
32. The playback system of claim 31, wherein, for a first content stream, said guide information includes one of: a multicast address of a multicast group that can be joined to receive the first content stream, information that can be used to request access to a switched digital video channel used to provide the first content stream, or channel tuning information that can be used to control a tuner of the playback system to tune to a broadcast channel on which the first content stream is broadcast.
33. The playback system of claim 32, further comprising:
a content delivery initiation module configured to initiate delivery of the selected content stream, said content delivery initiation module being further configured to signal to join a multicast group corresponding to the selected content stream.
34. The playback system of claim 32, further comprising:
a content delivery initiation module configured to initiate delivery of the selected content stream, said content delivery initiation module being further configured to generate and send a request to a device in a network to request delivery of the switched digital channel over which the selected content stream is communicated.
35. A non-transitory computer-readable medium including processor-executable instructions which, when executed by a processor, control a playback system to:
select, based on a user's head position, which of a plurality of content streams to receive for use in playback at a first time;
and
receive one or more selected content streams for use in playback.
36. A method of operating a content playback system, the method comprising:
determining a head position of a viewer, the head position corresponding to a current field of view;
receiving a first content stream providing content corresponding to a first portion of an environment;
generating one or more output images corresponding to the current field of view based on at least some received content included in the first content stream and i) stored content corresponding to a second portion of the environment or ii) a synthetic image simulating the second portion of the environment; and
outputting or displaying a first output image, the first output image being one of the one or more generated output images.
37. The method of claim 36, wherein said content playback system is a content playback device.
38. The method of claim 36, wherein said content playback system is a computer system coupled to a display.
39. The method of claim 36, further comprising:
receiving a first image corresponding to the second portion of the environment; and
storing the first image corresponding to the second portion of the environment.
40. The method of claim 39,
wherein the first image of the second portion of the environment corresponds to a first point in time; and
wherein generating the one or more output images corresponding to the current field of view includes combining content obtained from the first content stream captured at a second point in time with the first image corresponding to the first point in time, the first point in time and the second point in time being different.
41. The method of claim 40, wherein the first point in time corresponds to a time preceding the second point in time.
42. The method of claim 41, wherein the first point in time precedes the time of a live event during which the images included in the first content stream are captured.
43. The method of claim 40, further comprising:
receiving one or more additional images corresponding to the second portion of the environment, the one or more additional images corresponding to the second portion of the environment including at least a second image.
44. The method of claim 43, further comprising:
receiving control information indicating which of a plurality of previously transmitted images corresponding to the second portion of the environment should be displayed during a playback time, the playback time being measured relative to a playback time indicated in the first content stream.
45. The method of claim 44, wherein the second portion of the environment is one of a first rear view portion, a second rear view portion, a sky view portion, or a ground view portion.
46. The method of claim 45, further comprising:
receiving one or more images corresponding to a third portion of the environment.
47. The method of claim 46,
wherein the first portion of the environment is a front view portion;
wherein the third portion is one of a sky view or a ground view portion; and
wherein images are received at different rates corresponding to the first, second, and third portions, with more images being received for the event for the first portion than for the second portion.
48. The method of claim 47,
wherein the content corresponding to the first portion includes real-time content captured while an event is ongoing and streamed to the playback device; and
wherein the content corresponding to the images of the second and third portions is non-real-time image content.
49. The method of claim 36, further comprising:
receiving image selection information indicating which of a plurality of images corresponding to the second portion of the environment should be used during a portion of the event; and
wherein generating the one or more output images corresponding to the current field of view based on at least some received content includes selecting, based on the received image selection information, an image corresponding to the second portion of the environment.
50. The method of claim 49, further comprising:
determining that image content is unavailable for a portion of the current field of view;
synthesizing an image to be used for the portion of the current field of view for which image content is unavailable; and
combining the synthesized image with at least a portion of a received image to generate an image corresponding to the current field of view.
51. The method of claim 36, wherein the first image content is stereoscopic image content including left-eye images and right-eye images.
52. A content playback system, comprising:
a viewer head position determining module configured to determine a head position of a viewer, the head position corresponding to a current field of view;
a content stream receiver module configured to receive a first content stream providing content corresponding to a first portion of an environment;
an output image content stream based generation module configured to generate one or more output images corresponding to the current field of view based on at least some received content included in the first content stream and i) stored content corresponding to a second portion of the environment or ii) a synthetic image simulating the second portion of the environment; and
at least one of the following: an output module configured to output a first output image or a display module configured to display the first output image, the first output image being one of the one or more generated output images.
53. The system of claim 52, wherein said content playback system is a content playback device.
54. The system of claim 52, wherein said content playback system is a computer system coupled to a display.
55. The system of claim 52, further comprising:
an image receiver module configured to receive a first image corresponding to the second portion of the environment; and
a received image storage module configured to store the first image corresponding to the second portion of the environment.
56. The system of claim 55,
wherein the first image of the second portion of the environment corresponds to a first point in time; and
wherein said output image content stream based generation module is configured to combine content obtained from the first content stream captured at a second point in time with the first image corresponding to the first point in time, the first point in time and the second point in time being different.
57. The system of claim 56, wherein the first point in time corresponds to a time preceding the second point in time.
58. The system of claim 57, wherein the first point in time precedes the time of a live event during which the images included in the first content stream are captured.
59. The system of claim 56, wherein said image receiver module is further configured to receive one or more additional images corresponding to the second portion of the environment, the one or more additional images corresponding to the second portion of the environment including at least a second image.
60. The system of claim 59, further comprising:
a control information receiver module configured to receive control information, the control information indicating which of a plurality of previously transmitted images corresponding to the second portion of the environment should be displayed during a playback time, the playback time being measured relative to a playback time indicated in the first content stream.
61. The system of claim 60, wherein the second portion of the environment is one of a first rear view portion, a second rear view portion, a sky view portion, or a ground view portion.
62. The system of claim 61, wherein said image receiver module is further configured to:
receive one or more images corresponding to a third portion of the environment.
63. The system of claim 62,
wherein the first portion of the environment is a front view portion;
wherein the third portion is one of a sky view or a ground view portion; and
wherein images are received at different rates corresponding to the first, second, and third portions, with more images being received for the event for the first portion than for the second portion.
64. The system of claim 63,
wherein the content corresponding to the first portion includes real-time content captured while an event is ongoing and streamed to the playback device; and
wherein the content corresponding to the images of the second and third portions is non-real-time image content.
65. The system of claim 64, wherein said control information receiver module is further configured to receive image selection information indicating which of a plurality of images corresponding to the second portion of the environment should be used during a portion of the event; and
wherein said output image content stream based generation module is configured to select, based on the received image selection information, an image corresponding to the second portion of the environment, as part of being configured to generate the one or more output images corresponding to the current field of view.
66. The system of claim 65, further comprising:
a missing portion determination module configured to determine that image content is unavailable for a portion of the field of view;
an image synthesizer module configured to synthesize an image to be used for the portion of the field of view for which image content is unavailable; and
a synthesized image combining module configured to combine the synthesized image with at least a portion of a received image to generate an image corresponding to the current field of view.
67. The system of claim 52, wherein the first image content is stereoscopic image content including left-eye images and right-eye images.
68. A non-transitory machine-readable medium including processor-executable instructions which, when executed by a processor of a content playback system, control the system to perform the steps of:
determining a head position of a viewer, the head position corresponding to a current field of view;
receiving a first content stream providing content corresponding to a first portion of an environment;
generating one or more output images corresponding to the current field of view based on at least some received content included in the first content stream and i) stored content corresponding to a second portion of the environment or ii) a synthetic image simulating the second portion of the environment; and
outputting or displaying a first output image, the first output image being one of the one or more generated output images.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462004547P | 2014-05-29 | 2014-05-29 | |
US62/004,547 | 2014-05-29 | ||
PCT/US2015/033420 WO2015184416A1 (en) | 2014-05-29 | 2015-05-29 | Methods and apparatus for delivering content and/or playing back content |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106416239A true CN106416239A (en) | 2017-02-15 |
CN106416239B CN106416239B (en) | 2019-04-09 |
Family
ID=54699946
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580028645.9A Active CN106416239B (en) | 2014-05-29 | 2015-05-29 | Method and apparatus for delivering content and/or playing back content |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP3149937A4 (en) |
JP (1) | JP2017527230A (en) |
KR (2) | KR102407283B1 (en) |
CN (1) | CN106416239B (en) |
CA (1) | CA2948642A1 (en) |
WO (1) | WO2015184416A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108668138A (en) * | 2017-03-28 | 2018-10-16 | 华为技术有限公司 | A kind of method for downloading video and user terminal |
WO2019223645A1 (en) * | 2018-05-22 | 2019-11-28 | 华为技术有限公司 | Vr video playback method, terminal, and server |
CN110663256A (en) * | 2017-05-31 | 2020-01-07 | 维里逊专利及许可公司 | Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene |
CN111226264A (en) * | 2017-10-20 | 2020-06-02 | 索尼公司 | Playback apparatus and method, and generation apparatus and method |
CN111602105A (en) * | 2018-01-22 | 2020-08-28 | 苹果公司 | Method and apparatus for presenting synthetic reality companion content |
CN111614974A (en) * | 2020-04-07 | 2020-09-01 | 上海推乐信息技术服务有限公司 | Video image restoration method and system |
CN114356070A (en) * | 2020-09-29 | 2022-04-15 | 国际商业机器公司 | Actively selecting virtual reality content context |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10204658B2 (en) | 2014-07-14 | 2019-02-12 | Sony Interactive Entertainment Inc. | System and method for use in playing back panorama video content |
CN106937128A (en) * | 2015-12-31 | 2017-07-07 | 幸福在线(北京)网络技术有限公司 | A kind of net cast method, server and system and associated uses |
CN105791882B (en) * | 2016-03-22 | 2018-09-18 | 腾讯科技(深圳)有限公司 | Method for video coding and device |
US9986221B2 (en) * | 2016-04-08 | 2018-05-29 | Visbit Inc. | View-aware 360 degree video streaming |
US10587934B2 (en) * | 2016-05-24 | 2020-03-10 | Qualcomm Incorporated | Virtual reality video signaling in dynamic adaptive streaming over HTTP |
US10219014B2 (en) | 2016-06-02 | 2019-02-26 | Biamp Systems, LLC | Systems and methods for bandwidth-limited video transport |
US10805592B2 (en) | 2016-06-30 | 2020-10-13 | Sony Interactive Entertainment Inc. | Apparatus and method for gaze tracking |
WO2018004934A1 (en) * | 2016-06-30 | 2018-01-04 | Sony Interactive Entertainment Inc. | Apparatus and method for capturing and displaying segmented content |
KR20180025797A (en) * | 2016-09-01 | 2018-03-09 | 삼성전자주식회사 | Method for Streaming Image and the Electronic Device supporting the same |
WO2018049221A1 (en) | 2016-09-09 | 2018-03-15 | Vid Scale, Inc. | Methods and apparatus to reduce latency for 360-degree viewport adaptive streaming |
CN109716757A (en) * | 2016-09-13 | 2019-05-03 | 交互数字Vc控股公司 | Method, apparatus and stream for immersion video format |
WO2018060334A1 (en) * | 2016-09-29 | 2018-04-05 | Koninklijke Philips N.V. | Image processing |
WO2018063957A1 (en) * | 2016-09-30 | 2018-04-05 | Silver VR Technologies, Inc. | Methods and systems for virtual reality streaming and replay of computer video games |
KR102633595B1 (en) * | 2016-11-21 | 2024-02-05 | 삼성전자주식회사 | Display apparatus and the control method thereof |
KR20180059210A (en) | 2016-11-25 | 2018-06-04 | 삼성전자주식회사 | Image processing apparatus and method for image processing thereof |
US10244215B2 (en) | 2016-11-29 | 2019-03-26 | Microsoft Technology Licensing, Llc | Re-projecting flat projections of pictures of panoramic video for rendering by application |
US10244200B2 (en) | 2016-11-29 | 2019-03-26 | Microsoft Technology Licensing, Llc | View-dependent operations during playback of panoramic video |
US10595069B2 (en) | 2016-12-05 | 2020-03-17 | Adobe Inc. | Prioritizing tile-based virtual reality video streaming using adaptive rate allocation |
FI20165925L (en) * | 2016-12-05 | 2018-06-06 | Rolls Royce Oy Ab | Optimizing data stream transmissions from marine vessel |
CN108156484B (en) * | 2016-12-05 | 2022-01-14 | 奥多比公司 | Prioritizing tile-based virtual reality video streams with adaptive rate allocation |
US10242714B2 (en) | 2016-12-19 | 2019-03-26 | Microsoft Technology Licensing, Llc | Interface for application-specified playback of panoramic video |
US10999602B2 (en) | 2016-12-23 | 2021-05-04 | Apple Inc. | Sphere projected motion estimation/compensation and mode decision |
US11259046B2 (en) | 2017-02-15 | 2022-02-22 | Apple Inc. | Processing of equirectangular object data to compensate for distortion by spherical projections |
US10924747B2 (en) | 2017-02-27 | 2021-02-16 | Apple Inc. | Video coding techniques for multi-view video |
US10979663B2 (en) * | 2017-03-30 | 2021-04-13 | Yerba Buena Vr, Inc. | Methods and apparatuses for image processing to optimize image resolution and for optimizing video streaming bandwidth for VR videos |
US11093752B2 (en) | 2017-06-02 | 2021-08-17 | Apple Inc. | Object tracking in multi-view video |
US20210204019A1 (en) * | 2017-07-18 | 2021-07-01 | Hewlett-Packard Development Company, L.P. | Virtual reality buffering |
TWI653882B (en) | 2017-11-23 | 2019-03-11 | 宏碁股份有限公司 | Video device and encoding/decoding method for 3d objects thereof |
US10990831B2 (en) | 2018-01-05 | 2021-04-27 | Pcms Holdings, Inc. | Method to create a VR event by evaluating third party information and re-providing the processed information in real-time |
JP7059662B2 (en) * | 2018-02-02 | 2022-04-26 | トヨタ自動車株式会社 | Remote control system and its communication method |
EP3750301B1 (en) * | 2018-02-06 | 2023-06-07 | Phenix Real Time Solutions, Inc. | Simulating a local experience by live streaming sharable viewpoints of a live event |
CN110198457B (en) * | 2018-02-26 | 2022-09-02 | 腾讯科技(深圳)有限公司 | Video playing method and device, system, storage medium, terminal and server thereof |
CN112219406B (en) | 2018-03-22 | 2023-05-05 | Vid拓展公司 | Latency reporting method and system for omni-directional video |
US11917127B2 (en) | 2018-05-25 | 2024-02-27 | Interdigital Madison Patent Holdings, Sas | Monitoring of video streaming events |
US10764494B2 (en) | 2018-05-25 | 2020-09-01 | Microsoft Technology Licensing, Llc | Adaptive panoramic video streaming using composite pictures |
US10666863B2 (en) | 2018-05-25 | 2020-05-26 | Microsoft Technology Licensing, Llc | Adaptive panoramic video streaming using overlapping partitioned sections |
KR20190136417A (en) * | 2018-05-30 | 2019-12-10 | 삼성전자주식회사 | Method for tramsmitting stereoscopic 360 degree video data, display apparatus thereof, and storing device of video data thereof |
KR102435519B1 (en) * | 2018-06-20 | 2022-08-24 | 삼성전자주식회사 | Method and apparatus for processing 360 degree image |
EP3588970A1 (en) * | 2018-06-22 | 2020-01-01 | Koninklijke Philips N.V. | Apparatus and method for generating an image data stream |
KR20220039113A (en) * | 2020-09-21 | 2022-03-29 | 삼성전자주식회사 | Method and apparatus for transmitting video content using edge computing service |
US11632531B1 (en) * | 2021-05-03 | 2023-04-18 | Amazon Technologies, Inc. | Synchronization and presentation of multiple 3D content streams |
CN115250363A (en) * | 2022-09-22 | 2022-10-28 | 广州市千钧网络科技有限公司 | Multi-view live broadcast system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090256904A1 (en) * | 2005-11-09 | 2009-10-15 | Krill Jerry A | System and Method for 3-Dimensional Display of Image Data |
US20110149043A1 (en) * | 2009-12-18 | 2011-06-23 | Electronics And Telecommunications Research Institute | Device and method for displaying three-dimensional images using head tracking |
CN102740154A (en) * | 2011-04-14 | 2012-10-17 | 联发科技股份有限公司 | Method for adjusting playback of multimedia content according to detection result of user status and related apparatus thereof |
US20130219012A1 (en) * | 2012-02-22 | 2013-08-22 | Citrix Systems, Inc. | Hierarchical Display |
CN103533340A (en) * | 2013-10-25 | 2014-01-22 | 深圳市汉普电子技术开发有限公司 | Naked eye 3D (three-dimensional) playing method of mobile terminal and mobile terminal |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6466254B1 (en) * | 1997-05-08 | 2002-10-15 | Be Here Corporation | Method and apparatus for electronically distributing motion panoramic images |
JPH1127577A (en) * | 1997-06-30 | 1999-01-29 | Hitachi Ltd | Image system with virtual visual point |
US20040104935A1 (en) * | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
JP4461739B2 (en) * | 2003-08-18 | 2010-05-12 | ソニー株式会社 | Imaging device |
JP2009017064A (en) * | 2007-07-03 | 2009-01-22 | Hitachi Ltd | Video receiver and multicast distribution content reception control method |
US20110181601A1 (en) * | 2010-01-22 | 2011-07-28 | Sony Computer Entertainment America Inc. | Capturing views and movements of actors performing within generated scenes |
JP2013521743A (en) * | 2010-03-05 | 2013-06-10 | トムソン ライセンシング | Bit rate adjustment in adaptive streaming systems |
FR2988964A1 (en) * | 2012-03-30 | 2013-10-04 | France Telecom | Method for receiving immersive video content by client entity i.e. smartphone, involves receiving elementary video stream, and returning video content to smartphone from elementary video stream associated with portion of plan |
2015
- 2015-05-29 CN CN201580028645.9A patent/CN106416239B/en active Active
- 2015-05-29 CA CA2948642A patent/CA2948642A1/en active Pending
- 2015-05-29 JP JP2017515038A patent/JP2017527230A/en not_active Withdrawn
- 2015-05-29 WO PCT/US2015/033420 patent/WO2015184416A1/en active Application Filing
- 2015-05-29 KR KR1020167036714A patent/KR102407283B1/en active IP Right Grant
- 2015-05-29 KR KR1020227019042A patent/KR102611448B1/en active IP Right Grant
- 2015-05-29 EP EP15798986.4A patent/EP3149937A4/en not_active Withdrawn
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108668138A (en) * | 2017-03-28 | 2018-10-16 | 华为技术有限公司 | A kind of method for downloading video and user terminal |
CN110663256A (en) * | 2017-05-31 | 2020-01-07 | 维里逊专利及许可公司 | Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene |
CN110663256B (en) * | 2017-05-31 | 2021-12-14 | 维里逊专利及许可公司 | Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene |
CN111226264A (en) * | 2017-10-20 | 2020-06-02 | 索尼公司 | Playback apparatus and method, and generation apparatus and method |
CN111602105A (en) * | 2018-01-22 | 2020-08-28 | 苹果公司 | Method and apparatus for presenting synthetic reality companion content |
CN111602105B (en) * | 2018-01-22 | 2023-09-01 | 苹果公司 | Method and apparatus for presenting synthetic reality accompanying content |
WO2019223645A1 (en) * | 2018-05-22 | 2019-11-28 | 华为技术有限公司 | Vr video playback method, terminal, and server |
US11765427B2 (en) | 2018-05-22 | 2023-09-19 | Huawei Technologies Co., Ltd. | Virtual reality video playing method, terminal, and server |
CN111614974A (en) * | 2020-04-07 | 2020-09-01 | 上海推乐信息技术服务有限公司 | Video image restoration method and system |
CN111614974B (en) * | 2020-04-07 | 2021-11-30 | 上海推乐信息技术服务有限公司 | Video image restoration method and system |
CN114356070A (en) * | 2020-09-29 | 2022-04-15 | 国际商业机器公司 | Actively selecting virtual reality content context |
Also Published As
Publication number | Publication date |
---|---|
KR102611448B1 (en) | 2023-12-07 |
KR20220081390A (en) | 2022-06-15 |
KR20170015938A (en) | 2017-02-10 |
KR102407283B1 (en) | 2022-06-10 |
CA2948642A1 (en) | 2015-12-03 |
CN106416239B (en) | 2019-04-09 |
WO2015184416A1 (en) | 2015-12-03 |
JP2017527230A (en) | 2017-09-14 |
EP3149937A1 (en) | 2017-04-05 |
EP3149937A4 (en) | 2018-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106416239B (en) | Method and apparatus for delivering content and/or playing back content | |
US11871085B2 (en) | Methods and apparatus for delivering content and/or playing back content | |
US20210409672A1 (en) | Methods and apparatus for receiving and/or playing back content | |
US10645369B2 (en) | Stereo viewing | |
US20060244831A1 (en) | System and method for supplying and receiving a custom image | |
EP3988187A1 (en) | Layered augmented entertainment experiences | |
US11461871B2 (en) | Virtual reality cinema-immersive movie watching for headmounted displays | |
US11849104B2 (en) | Multi-resolution multi-view video rendering | |
Baker et al. | Capture and display for live immersive 3D entertainment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |