US20230237976A1 - Display Layout Optimization of Multiple Media Streams - Google Patents

Display Layout Optimization of Multiple Media Streams

Info

Publication number: US20230237976A1
Authority: US (United States)
Prior art keywords: media, source, display, media source, metadata
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: US18/129,941
Inventors: Marshall D. Simmons, Jeremy P. Zullo, Alejandro Mata Sanchez, Robert J. Paksi, Jr.
Current assignee: Panduit Corp (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Panduit Corp
Application filed by Panduit Corp
Priority to US18/129,941
Publication of US20230237976A1

Classifications

    • H04N 7/147: Systems for two-way working between video terminals; communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • G09G 5/14: Display of multiple viewports
    • G06F 9/452: Remote windowing, e.g. X-Window System, desktop virtualisation
    • G06F 16/75: Information retrieval of video data; clustering; classification
    • G06F 16/783: Information retrieval of video data using metadata automatically derived from the content
    • G06F 18/24: Pattern recognition; classification techniques
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G09G 5/38: Display of a graphic pattern with means for controlling the display position
    • G06V 2201/10: Recognition assisted with metadata
    • G09G 2320/0613: Adjustment of display parameters depending on the type of the information to be displayed
    • G09G 2340/0442: Handling or displaying different aspect ratios, or changing the aspect ratio
    • G09G 2340/0464: Positioning (changes in size, position or resolution of an image)
    • G09G 2370/20: Details of the management of multiple sources of image data

Definitions

  • FIG. 3 is an exemplary flow chart 300 depicting a process for optimizing a display layout of multiple media streams, in accordance with one or more embodiments of the present disclosure.
  • The display layout may be optimized based on information gathered by the streaming media receiver 12 from a number of different variables.
  • Information and/or variables for display layout optimization may include characteristics of the sink device 18, metadata extracted from each incoming media stream 20, total number of active incoming media streams, and the like.
  • The metadata extracted from each incoming media stream 20 may allow the streaming media receiver 12 to predict the type of source device corresponding to each media stream, which may be used to further enhance optimization of the display layout of multiple streams.
  • The process described in the flow chart 300 includes obtaining characteristics of the sink device 18, as provided at step 305.
  • Characteristics of the sink device 18 may include a size, an aspect ratio, and a resolution of the sink device. Additional characteristics of the sink device 18 may include an average distance at which a user views the sink device, which may be relevant, for example, in a conference room or lecture hall application.
  • Some of the sink characteristics, such as aspect ratio and resolution, may be automatically determined by the streaming media receiver 12. Other characteristics may be obtained from user input. For instance, upon configuration of the streaming media receiver 12, various sink characteristics may be entered by a user, including the size of the sink device and the average user viewing distance from the sink device.
  • The streaming media receiver 12 receives one or more incoming or input media streams 20 from a plurality of source devices 14, as provided at step 310.
  • The input media streams 20 may be received wirelessly from the source devices 14 or via a wired connection, such as an Ethernet connection to the network or a direct cable connection to the streaming media receiver 12 (e.g., USB, HDMI, etc.).
  • Metadata is extracted from each input media stream 20 to collect information about the streams and their corresponding source devices, as provided at step 315.
  • The metadata extracted from each input media stream 20 may include information and/or characteristics about each media stream such as a MAC address, stream aspect ratio, stream resolution, streaming protocol, and the like.
  • The streaming media receiver 12 processes and analyzes the characteristics of each input from the media streams of the source devices 14, as provided at step 320. In some implementations, this may include identifying which streams include active audio and, if multiple streams have active audio, which streams have priority. According to one or more embodiments, the streaming media receiver 12 predicts the type of source device from which each media stream originated based on the analysis of various characteristics extracted from a stream's metadata, as provided at step 325. Examples of source device types that may be predicted include laptop, PC, smartphone, tablet, and the like. Various examples of methods for predicting a source device type are described in greater detail below in connection with FIGS. 5-9.
  • The streaming media receiver 12 then processes one or more of the sink characteristics, input stream characteristics, and/or predicted source device types, and generates an optimized display layout of a plurality of the incoming media streams for display at the sink device 18, as provided at step 330.
  • A display layout may be generated that optimizes the sizing and placement of the active input media streams at the sink device 18.
  • The streaming media receiver 12 may utilize the sink characteristics, input stream characteristics, and source device type, among other things, to minimize wasted screen space and increase clarity and legibility of streamed images, video, and text.
  • The streaming media receiver 12 may use the canvas engine 36 to optimize the display layout of multiple incoming media streams based on the available metadata and sink characteristics.
  • Upon generating an optimized display layout of the multiple input media streams, the streaming media receiver 12 transmits the output media signal 22 to the sink device 18, as provided at step 335.
  • The output media signal 22 may be a single, flattened stream including a composite of the multiple input media streams 20 arranged in the optimized display layout.
  • The canvas renderer 38 may be employed to generate the output media signal 22 based on instructions received from the canvas engine 36.
  • The transmission of the output media signal 22 from the streaming media receiver 12 to the sink device 18 may be dynamic and continuous. Accordingly, the optimization of the display layout of the multiple incoming media streams at the sink device 18 may also be continuous and dynamic, particularly as the number, content, and source of the input media streams change. For example, as an incoming media stream is added or removed, the components and logic modules of the streaming media receiver 12 may process all active stream characteristics to determine the most optimized sizing and placement of the current active media streams and update the output media signal 22 accordingly. As an active media stream changes any of its characteristics, such as device type or orientation, the newly available characteristics may be processed to determine the most optimized sizing and placement of all the current active media streams. Moreover, it is possible that a sink characteristic may change or be altered. As a sink characteristic changes, the newly available sink characteristics may also be processed to determine the most optimized sizing and placement of all the current active media streams.
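
To make the sequence of steps 305-335 easier to follow, the sketch below walks a set of incoming streams through metadata inspection, source-type prediction, and layout generation. It is a minimal illustration only: the data structures, function names (`SinkCharacteristics`, `predict_source_type`, `compose_layout`), and the trivial equal-width layout are assumptions of this sketch, not the patent's implementation.

```python
from dataclasses import dataclass

# Hypothetical containers for the sink characteristics (step 305) and
# per-stream state; the field names are assumptions, not from the patent.
@dataclass
class SinkCharacteristics:
    width_px: int
    height_px: int
    diagonal_in: float          # typically entered by a user at configuration
    viewing_distance_ft: float  # typically entered by a user at configuration

@dataclass
class StreamInfo:
    stream_id: str
    metadata: dict              # e.g., MAC address, protocol, aspect ratio, resolution
    source_type: str = "unknown"
    has_active_audio: bool = False

def predict_source_type(metadata: dict) -> str:
    # Placeholder for the weighted classification of FIGS. 5-9 (step 325).
    return metadata.get("hinted_type", "unknown")

def compose_layout(sink: SinkCharacteristics, streams: list) -> list:
    # Trivial equal-width row layout standing in for the canvas engine's logic (step 330).
    n = max(len(streams), 1)
    cell_w = sink.width_px // n
    return [{"stream": s.stream_id, "x": i * cell_w, "y": 0,
             "w": cell_w, "h": sink.height_px} for i, s in enumerate(streams)]

def optimize_display(sink: SinkCharacteristics, streams: list) -> list:
    """Mirror steps 315-330: inspect metadata, predict source types,
    then compute a sizing/placement rectangle for each active stream."""
    for s in streams:
        s.source_type = predict_source_type(s.metadata)
    return compose_layout(sink, streams)

# Example run (step 335 would transmit the composited result to the sink device).
sink = SinkCharacteristics(3840, 2160, 75.0, 12.0)
streams = [StreamInfo("laptop-1", {"hinted_type": "laptop"}),
           StreamInfo("phone-2", {"hinted_type": "phone"})]
print(optimize_display(sink, streams))
```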
  • FIG. 4 is an exemplary flow diagram 400 depicting the flow of signals, data, and/or corresponding information used in optimizing the display layout, in accordance with one or more embodiments of the present disclosure.
  • Four input media streams 20 are depicted, though the streaming media receiver 12 may be configured to receive, process, and optimize a display layout of any number of input media streams.
  • The metadata extracted from each input media stream 20 may be categorized as static metadata 40 or dynamic metadata 42.
  • The static metadata 40 may include characteristics indicative of the type of source device, as well as MAC address, protocol, DPI, and/or hostname.
  • The dynamic metadata 42 of each input media stream 20 may include information or characteristics such as screen orientation, aspect ratio, resolution, and/or content (e.g., via computer vision), as well as whether the stream includes active audio.
  • The stream metadata 40, 42 and sink metadata 44 may be used in a process (405) that analyzes, calculates, and generates an optimized display layout 46 of all active input streams, which may then be transmitted to the sink device 18.
  • The process (405) for generating the optimized display layout 46 shown in FIG. 4 may be illustrative of an alternative embodiment to step 330 from the method 300 described in FIG. 3.
  • The streaming media receiver 12 may predict the type of source device from which each input media stream 20 originated based on the analysis of various characteristics extracted from the input media stream's metadata.
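
As a concrete illustration of the split between static metadata 40 and dynamic metadata 42, the snippet below groups the example fields named above for a single hypothetical stream. The specific keys and values are assumptions for this sketch only.

```python
# Static metadata: characteristics that do not change while the stream is active.
static_metadata = {
    "mac_address": "00:11:22:33:44:55",   # placeholder MAC address
    "protocol": "AirPlay",
    "dpi": 218,
    "hostname": "Example-iPhone",
}

# Dynamic metadata: characteristics that can change mid-stream and may trigger
# a re-optimization of the layout.
dynamic_metadata = {
    "orientation": "portrait",            # may flip to "landscape" at any time
    "aspect_ratio": (9, 16),
    "resolution": (1170, 2532),
    "active_audio": False,
    "detected_content": "slides",         # e.g., inferred via computer vision
}
```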
  • FIG. 5 is an exemplary flow chart 500 depicting a process for predicting a source device type based on an analysis of an input media stream, in accordance with one or more embodiments of the present disclosure. Accordingly, the method described by the flow chart 500 may be an expansion of step 325 from the process described in flow chart 300 shown in FIG. 3.
  • The streaming media receiver 12 assigns weights to various characteristics of each input media stream 20, as provided at step 505.
  • A total stream weight is calculated from one or more of the individual weights. For example, one or more individual weights may be summed to provide a final total stream weight.
  • The individual weights and/or the total stream weight may provide an indication of the type of source device from which an input media stream 20 is received within a degree of certainty.
  • Various aspects of an incoming media stream are given differing values based on characteristics extracted from the stream's metadata and weighted accordingly to generate a final prediction or confidence level regarding the type of source device, as provided at step 515.
  • The final or total stream weight may then be linked to the corresponding input media stream for use in optimizing the display layout of multiple streams at the sink device 18.
  • The weight may be determined out of band and can dynamically change as characteristics of the input media stream change.
  • The streaming media receiver 12 may determine whether a given input media stream is being received from, for example, a laptop, PC, phone, or tablet.
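
The snippet below sketches the weighting idea from flow chart 500: each classifier contributes per-device-type weights, the contributions are summed into a total stream weight, and the largest total yields the prediction and a rough confidence level. The classifier structure and the numeric values are assumptions, not the patent's actual weights.

```python
from collections import defaultdict

def total_stream_weights(individual_weights: list) -> dict:
    """Sum per-classifier weights (e.g., OUI, NIC, Hostname, ...) by device type."""
    totals = defaultdict(float)
    for weights in individual_weights:
        for device_type, w in weights.items():
            totals[device_type] += w
    return dict(totals)

def predict_from_weights(individual_weights: list) -> tuple:
    """Return the best-scoring device type and a crude confidence level."""
    totals = total_stream_weights(individual_weights)
    best = max(totals, key=totals.get)
    confidence = totals[best] / sum(totals.values())
    return best, confidence

# Example: MAC-based weights strongly suggest a phone, hostname mildly agrees.
print(predict_from_weights([{"phone": 0.6, "tablet": 0.2},
                            {"phone": 0.3, "laptop": 0.1}]))  # -> ('phone', 0.75)
```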
  • FIG. 6 depicts a flow diagram 600 illustrating a first example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20.
  • The streaming media receiver 12 may receive and process the media access control (MAC) address of an input media stream. If the MAC address is coming from a Layer 3 or above switch, address resolution protocol (ARP) commands may be used to resolve the MAC address with the incoming media stream. Based on the MAC address, the streaming media receiver may attempt to determine the source device type (e.g., laptop, PC, phone, tablet) from which the input media stream is being transmitted.
  • This MAC address classification may be accomplished by maintaining a dataset of MAC addresses, including Organizationally Unique Identifier (OUI) and Network Interface Controller (NIC), as well as identifying key MAC address ranges for explicit user device manufacturer model types.
  • The method may provide a weight that helps predict, with a degree of certainty, the type of device that is sending the input media stream.
  • The identification of either the OUI or the NIC may determine the weight given based on the MAC address classification, which may be expressed as a sum of individual weights assigned to the OUI and NIC, as shown by Equation 1 below:
  • OUI_weight + NIC_weight = Total Stream Weight (Eq. 1)
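
A small sketch of the MAC address classification behind Equation 1 follows. The OUI table and NIC ranges stand in for the dataset the passage describes; every entry and weight here is an illustrative assumption.

```python
# Hypothetical lookup tables; a real deployment would maintain a much larger dataset.
OUI_DATASET = {
    "00:11:22": {"device_type": "laptop", "weight": 0.5},
    "66:77:88": {"device_type": "phone",  "weight": 0.5},
}
NIC_RANGES = [
    # (oui, nic_low, nic_high, device_type, weight) for known model ranges
    ("66:77:88", 0x000000, 0x0FFFFF, "phone", 0.3),
]

def mac_weights(mac: str) -> dict:
    """Return per-device-type weights: the OUI_weight and NIC_weight terms of Eq. 1."""
    oui = mac[:8].lower()
    nic = int(mac[9:].replace(":", ""), 16)
    weights = {}
    entry = OUI_DATASET.get(oui)
    if entry:  # OUI_weight contribution
        weights[entry["device_type"]] = entry["weight"]
    for range_oui, low, high, device_type, w in NIC_RANGES:  # NIC_weight contribution
        if oui == range_oui and low <= nic <= high:
            weights[device_type] = weights.get(device_type, 0.0) + w
    return weights

print(mac_weights("66:77:88:01:23:45"))  # -> {'phone': 0.8}
```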
  • FIG. 7 depicts a flow diagram 700 illustrating a second example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20.
  • The streaming media receiver 12 may further analyze a Hostname included in the metadata of the input media stream.
  • The streaming media receiver 12 attempts to receive the Hostname and, if successful, seeks key identification information in the Hostname that will help with identifying the source device type. This may be accomplished by maintaining a dataset of known Hostname prefixes and/or suffixes that manufacturers use by default on specific device model types.
  • The Hostname may indicate with some degree of certainty that the device type is a smartphone and, more particularly, a specific make and/or model of smartphone. Accordingly, an individual weight (Hostname_weight) may be assigned to the Hostname of an input media stream. In some implementations, the individual weight from the Hostname may be combined with the weights from the MAC address classification, as shown by Equation 2 below:
  • OUI_weight + NIC_weight + Hostname_weight = Total Stream Weight (Eq. 2)
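
The sketch below illustrates the Hostname classification feeding the Hostname_weight term: default prefixes and suffixes that manufacturers commonly apply are matched against the stream's hostname. The pattern table and weights are assumptions for illustration.

```python
# Hypothetical table of default hostname prefixes/suffixes per device type.
HOSTNAME_PATTERNS = [
    # (pattern, position, device_type, weight)
    ("iphone",   "suffix", "phone",  0.4),
    ("ipad",     "suffix", "tablet", 0.4),
    ("desktop-", "prefix", "pc",     0.3),
]

def hostname_weights(hostname: str) -> dict:
    """Return per-device-type weights contributing the Hostname_weight term of Eq. 2."""
    name = hostname.lower()
    weights = {}
    for pattern, position, device_type, w in HOSTNAME_PATTERNS:
        matched = name.startswith(pattern) if position == "prefix" else name.endswith(pattern)
        if matched:
            weights[device_type] = weights.get(device_type, 0.0) + w
    return weights

print(hostname_weights("Johns-iPhone"))  # -> {'phone': 0.4}
```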
  • FIG. 8 depicts a flow diagram 800 illustrating a third example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20.
  • The streaming media receiver 12 may further analyze a streaming protocol associated with the input media stream.
  • The analysis of the streaming protocol may also include an analysis of an aspect ratio and a resolution of the input media stream.
  • The streaming media receiver 12 sorts the incoming media streams by streaming protocol for further analysis.
  • The input media streams may be first sorted by streaming protocol because each protocol may have different methods for classification and may help narrow down source device types or at least narrow down the manufacturer of the source device 14.
  • For example, an AirPlay streaming protocol may indicate the source device is an Apple device.
  • The streaming media receiver 12 may then analyze the aspect ratio and screen resolution.
  • A dataset of common screen resolutions and aspect ratios used by specific manufacturer model types may be kept and stored in memory 28 or storage device 30 in order to help identify the source device type.
  • Individual weights may be assigned to each of the streaming protocol, aspect ratio, and resolution characteristics. In certain implementations, these individual weights may be combined with one or more of the weights from the MAC address classification and Hostname classification to provide the total stream weight used in predicting the source device type, as shown by Equation 3 below:
  • OUI_weight + NIC_weight + Hostname_weight + Protocol_weight + AspectRatio_weight + Resolution_weight = Total Stream Weight (Eq. 3)
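
Below is a sketch of the protocol-based classification behind Equation 3: streams are keyed by streaming protocol, then their aspect ratio and resolution are checked against a dataset of common manufacturer values. The tables and weights are assumptions, not values from the patent.

```python
# Hypothetical datasets; a real system would maintain these per manufacturer model type.
PROTOCOL_WEIGHTS = {"airplay": 0.1, "miracast": 0.1}   # Protocol_weight per recognized protocol
RESOLUTION_DATASET = {
    # (protocol, width, height) -> (device_type, combined aspect-ratio/resolution weight)
    ("airplay", 1170, 2532):  ("phone",  0.4),
    ("airplay", 2560, 1600):  ("laptop", 0.4),
    ("miracast", 1920, 1080): ("laptop", 0.2),
}

def protocol_weights(protocol: str, width: int, height: int) -> dict:
    """Return per-device-type weights for the Protocol/AspectRatio/Resolution terms of Eq. 3."""
    weights = {}
    key = (protocol.lower(), width, height)
    if key in RESOLUTION_DATASET:
        device_type, resolution_w = RESOLUTION_DATASET[key]
        weights[device_type] = resolution_w + PROTOCOL_WEIGHTS.get(protocol.lower(), 0.0)
    return weights

print(protocol_weights("AirPlay", 1170, 2532))  # -> {'phone': 0.5}
```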
  • FIG. 9 depicts a flow diagram 900 illustrating a fourth example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20.
  • The streaming media receiver 12 may further analyze the input media stream via Computer Vision Object Detection (CVOD) and Computer Vision Image Classification (CVIC) technology.
  • Computer Vision Object Detection may be employed initially.
  • The streaming media receiver 12 utilizes Computer Vision Object Detection to analyze the first few video frames of the input media stream 20. In doing so, the streaming media receiver 12 may scan for specific objects in the frame(s). This may be accomplished using an internal trained model dataset to compare against the frames of the input media stream.
  • The internal trained model dataset may be stored in the memory 28 or the secondary storage device 30.
  • Examples of specific objects may include task bar objects, start menu icons, home screen or background imagery, logos, or the like.
  • The internal trained model dataset may be employed to recognize or detect objects in the frames that may help indicate the source device type of the input media stream 20, as certain objects may be associated with certain device types or manufacturers. Accordingly, the streaming media receiver 12 may assign an individual weight (CVOD_weight) based on the Computer Vision Object Detection process. As shown, the Computer Vision Object Detection process may result in predicting source device type using object recognition techniques for detecting specific objects from image or video data.
  • The streaming media receiver 12 may then use Computer Vision Image Classification to further analyze the input media stream 20.
  • The streaming media receiver 12 may classify an image as coming from a particular source device type using a different internal trained model dataset stored in memory. Accordingly, the streaming media receiver 12 may assign an individual weight (CVIC_weight) based on the Computer Vision Image Classification process.
  • The individual weights CVOD_weight and CVIC_weight may be combined with one or more of the weights from the MAC address classification, Hostname classification, and Protocol classification processes to provide the total stream weight used in predicting the source device type, as shown by Equation 4 below:
  • OUI_weight + NIC_weight + Hostname_weight + Protocol_weight + AspectRatio_weight + Resolution_weight + CVOD_weight + CVIC_weight = Total Stream Weight (Eq. 4)
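
To show how the computer-vision contributions might fold into the running total of Equation 4, the sketch below stubs out CVOD and CVIC as simple functions over pre-extracted frame features and sums all classifier contributions. The stubs, feature names, and weights are assumptions; real CVOD/CVIC stages would run trained models against the first video frames.

```python
def cvod_weights(frame_features: dict) -> dict:
    # Stub: a detected task bar or start-menu icon would suggest a laptop/PC.
    return {"laptop": 0.2} if frame_features.get("has_taskbar") else {}

def cvic_weights(frame_features: dict) -> dict:
    # Stub: a frame classified as a phone home screen would suggest a phone.
    return {"phone": 0.2} if frame_features.get("looks_like_home_screen") else {}

def combine(*weight_dicts: dict) -> dict:
    """Sum all contributions (OUI, NIC, Hostname, Protocol, CVOD, CVIC) per Eq. 4."""
    totals = {}
    for weights in weight_dicts:
        for device_type, w in weights.items():
            totals[device_type] = totals.get(device_type, 0.0) + w
    return totals

first_frame = {"has_taskbar": True, "looks_like_home_screen": False}
mac = {"laptop": 0.5}        # e.g., from the MAC address classification
hostname = {"laptop": 0.3}   # e.g., from the Hostname classification
print(combine(mac, hostname, cvod_weights(first_frame), cvic_weights(first_frame)))
# -> {'laptop': 1.0}
```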
  • Various individual weights may be combined into broader classifications and summed to provide a weight total for each classification.
  • These classifications are exemplary only.
  • The individual weights corresponding to each classification are also exemplary and may be grouped or organized in a manner other than depicted in FIGS. 6-9. Any combination of one or more individual weights may be used in calculating the total stream weight and subsequently employed in predicting the source device type.
  • FIGS. 10-16 illustrate a sample progression of a display layout 50 depicting how the streaming media receiver 12 may optimize the overall display layout 50 as different source devices 14 connect and transmit media streams 20 dynamically to the streaming media receiver 12.
  • The display layout 50 is dynamically optimized using the techniques described herein to provide an enhanced viewing experience for end users.
  • FIG. 10 illustrates an exemplary display layout 50 of two input media streams.
  • A first input media stream may be received from a first source device (1), which is a laptop.
  • A second input media stream may be received from a second source device (2), which is a smartphone.
  • The second source device is oriented in a "portrait" mode or vertically, such that the aspect ratio includes a width less than a height (e.g., 9:16).
  • FIG. 11 illustrates how the display layout 50 from FIG. 10 may change when the orientation of the second source device (2) is rotated, changing its aspect ratio to that of a widescreen or "landscape" mode (e.g., from 9:16 to 16:9).
  • FIG. 12 illustrates how the display layout 50 from FIG. 10 may change when a third input media stream is received from a third source device (3), which may be another laptop, as the third source device (3) actively connects to the streaming media receiver 12.
  • FIG. 13 illustrates how the display layout 50 from FIG. 12 may change when the orientation of the second source device (2) is rotated, changing its aspect ratio to that of a widescreen or landscape mode (e.g., from 9:16 to 16:9), similar to FIG. 11.
  • FIG. 14 illustrates how the display layout 50 from FIG. 12 may change when a fourth input media stream is received from a fourth source device (4), which may be another smartphone, as the fourth source device (4) actively connects to the streaming media receiver 12 in portrait mode.
  • FIG. 15 illustrates how the display layout 50 from FIG. 14 may change when the orientation of the fourth source device (4) is rotated, changing its aspect ratio to that of a widescreen or landscape mode (e.g., from 9:16 to 16:9).
  • FIG. 16 illustrates how the display layout 50 from FIG. 15 may change further when the orientation of the second source device (2) is also rotated, changing its aspect ratio to that of a widescreen or landscape mode (e.g., from 9:16 to 16:9).
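
The toy routine below captures the spirit of the progression in FIGS. 10-16: portrait streams are given full-height columns sized to their aspect ratio, landscape streams split the remaining width, and the layout is recomputed whenever a stream connects, disconnects, or rotates. The sizing rules are assumptions for illustration, not the canvas engine's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class ActiveStream:
    stream_id: str
    aspect_w: int
    aspect_h: int   # aspect_h > aspect_w means the source is in portrait mode

def layout(streams: list, canvas_w: int = 1920, canvas_h: int = 1080) -> list:
    portrait = [s for s in streams if s.aspect_h > s.aspect_w]
    landscape = [s for s in streams if s.aspect_h <= s.aspect_w]
    placements, x = [], 0
    # Portrait streams get full-height columns sized to their aspect ratio.
    for s in portrait:
        w = int(canvas_h * s.aspect_w / s.aspect_h)
        placements.append({"stream": s.stream_id, "x": x, "y": 0, "w": w, "h": canvas_h})
        x += w
    # Landscape streams split the remaining width equally, letterboxed vertically.
    remaining = canvas_w - x
    if landscape and remaining > 0:
        cell_w = remaining // len(landscape)
        for i, s in enumerate(landscape):
            h = min(canvas_h, int(cell_w * s.aspect_h / s.aspect_w))
            placements.append({"stream": s.stream_id, "x": x + i * cell_w,
                               "y": (canvas_h - h) // 2, "w": cell_w, "h": h})
    return placements

# FIG. 10-like scenario: landscape laptop (1) plus portrait smartphone (2) ...
print(layout([ActiveStream("1", 16, 9), ActiveStream("2", 9, 16)]))
# ... and the FIG. 11-like update after the smartphone rotates to landscape.
print(layout([ActiveStream("1", 16, 9), ActiveStream("2", 16, 9)]))
```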

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A system and method for optimizing a display layout of multiple video streams at a sink device is provided. The display layout of multiple streams may be dynamically optimized based on a number of different variables, including characteristics of the sink device, total number of active incoming streams, active audio, and other characteristics of the source material or device. The source of an incoming media stream may contain useful characteristics for optimizing the display layout of multiple media streams. One such characteristic of a source device may include the device type, such as laptop, PC, phone, or tablet. Information may be extracted from each incoming stream in order to predict a source device type from which the incoming media stream originates.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S)
  • This application is a continuation of U.S. Non-Provisional patent application Ser. No. 17/217,469, filed Mar. 30, 2021, which claims benefit to U.S. Provisional Patent Application No. 63/108,485, filed Nov. 2, 2020, the entirety of all of which are hereby incorporated by reference herein.
  • TECHNICAL FIELD
  • The present disclosure is directed to optimizing a display layout of multiple incoming media streams for output to a sink device and, more particularly, to dynamically optimizing the display layout based, in part, on a prediction of a type of source device from which an incoming media stream is received.
  • BACKGROUND
  • When combining multiple video streams, such as in a teleconferencing application, conventional video stream control algorithms utilize several approaches to sizing and placement of video streams. These approaches include displaying all streams in equal size, emphasizing active talker (i.e., displaying one stream larger than others), picture-in-picture, and framing (i.e., displaying small streams squared around a big stream in the middle). The result is often poor use of screen real estate and illegible text. Moreover, native streaming protocols on various operating systems do not share metadata that specifies the type of device upon which the operating system is running (e.g., laptop, PC, phone, tablet, etc.). As a result, streaming receivers often lack context for what is being viewed on the device's output, which in turn also leads to wasted screen space and sub-optimal output rendering for the end user or viewer.
  • SUMMARY
  • According to some embodiments, a media receiver is disclosed, where the media receiver comprises a memory configured to store machine-readable instructions, and a processor circuitry in communication with the memory. The processor circuitry is configured to execute the machine-readable instructions to cause the processing circuitry to receive a first media stream corresponding to a first media source, obtain first metadata from the first media stream, receive a second media stream corresponding to a second media source, obtain second metadata from the second media stream, determine a first source type for the first media source based on the first metadata, determine a second source type for the second media source based on the second metadata, generate an optimized display including the first media stream and the second media stream based on at least the first source type and the second source type, and control transmission of the optimized display to a sink device.
  • According to some embodiments, a method for optimizing a display layout on a display screen, the method comprising receiving, by a communication interface, a first media stream from a first media source, extracting, by a processor, first metadata from the first media stream, receiving, by the communication interface, a second media stream from a second media source, extracting, by the processor, second metadata from the second media stream, determining, by the processor, a first source type for the first media source based on the first metadata, determining, by the processor, a second source type for the second media source based on the second metadata, generating, by the processor, an optimized display including the first media stream and the second media stream based on at least the first source type and the second source type, and controlling, by the processor, transmission of the optimized display to a sink device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an environmental diagram of a media presentation system including a streaming media receiver, in accordance with one or more embodiments of the present disclosure.
  • FIG. 2 shows a block diagram of the system from FIG. 1 including a more detailed view of the streaming media receiver.
  • FIG. 3 shows an exemplary flow chart depicting a process for optimizing a display layout of multiple media streams, in accordance with one or more embodiments of the present disclosure.
  • FIG. 4 shows an exemplary flow diagram depicting the flow of signals, data, and/or corresponding information used in optimizing a display layout, in accordance with one or more embodiments of the present disclosure.
  • FIG. 5 shows an exemplary flow chart depicting a process for predicting a source device type based on an analysis of an input media stream, in accordance with one or more embodiments of the present disclosure.
  • FIG. 6 shows a flow diagram illustrating a first exemplary embodiment for predicting a source device type based on one or more characteristics of an input media stream, in accordance with one or more embodiments of the present disclosure.
  • FIG. 7 shows a flow diagram illustrating a second exemplary embodiment for predicting a source device type based on one or more characteristics of an input media stream, in accordance with one or more embodiments of the present disclosure.
  • FIG. 8 shows a flow diagram illustrating a third exemplary embodiment for predicting a source device type based on one or more characteristics of an input media stream, in accordance with one or more embodiments of the present disclosure.
  • FIG. 9 shows a flow diagram illustrating a fourth exemplary embodiment for predicting a source device type based on one or more characteristics of an input media stream, in accordance with one or more embodiments of the present disclosure.
  • FIG. 10 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • FIG. 11 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • FIG. 12 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • FIG. 13 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • FIG. 14 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • FIG. 15 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • FIG. 16 shows an exemplary display layout that is optimized in accordance with one or more embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • As required, detailed embodiments are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the inventive features that may be embodied in various and alternative forms that include additional, or fewer, components and/or steps. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.
  • FIG. 1 is an environmental diagram of a media presentation system 10 including a streaming media receiver 12, in accordance with one or more embodiments of the present disclosure. The streaming media receiver 12 is configured to receive multimedia content, such as audio and/or video (A/V) streams, from one or more source devices 14. The source devices 14 may include any input media device capable of providing audio and/or video signals. For instance, the source devices 14 may include various types of computing devices or multimedia devices, such as a personal computer (PC), laptop, smartphone, tablet, or streaming media player.
  • The streaming media receiver 12 may be configured to connect to a network 16, such as a local area network (LAN) or a wide area network (WAN) such as the Internet. According to one or more embodiments, various source devices 14 communicate with the streaming media receiver 12 over the network 16. As shown, one or more source devices 14 wirelessly connect to the streaming media receiver 12 directly using a wireless communication protocol. In certain embodiments, one or more of the source devices 14 may connect to the streaming media receiver 12 directly using a wire or cable, such as USB, HDMI, DisplayPort or the like.
  • The streaming media receiver 12 is further configured to process and analyze multiple incoming media streams from a plurality of the source devices 14 and generate an output media signal, where the output media signal is transmitted to at least one sink device 18. The sink device 18 may refer to any output media device or endpoint of a given device output configured to receive audio and/or video signals, such as a display, television, projector, video conferencing system, A/V switch, or the like.
  • The output media signal is a composited stream going out to the sink device 18 that includes a plurality of the incoming media streams from the source devices 14. The streaming media receiver 12 may output the composited stream to the sink device 18 as a single, flattened stream. The composited and flattened stream of the output media signal may include metadata corresponding to attributes of the respective source devices 14, where the attribute metadata may then be used for controlling the output at the end user's interface or sink device.
  • According to one or more embodiments of the present disclosure, the streaming media receiver 12 optimizes the display layout of the multiple incoming media streams from the plurality of source devices 14 when combining the streams for output to the sink device 18. Accordingly, these attributes may relate to the sizing, position, scaling, orientation, aspect ratio, and other features of the incoming media streams to enhance the display layout of multiple streams. Optimizing the display layout for the multiple incoming video streams at the sink device 18 provides for more efficient use of screen real estate and can provide advantages such as increasing the legibility of text. For example, by optimizing the sizing and placement of the video streams, wasted screen space can be minimized and illegible text can be made more legible (e.g., by increasing the size of its display window to increase the text size) when multiple video streams are displayed.
  • The display layout of multiple streams may be optimized based on a number of different variables, including characteristics of the sink device 18, total number of active incoming streams, active audio, and other characteristics of the source material or device. The source of an incoming media stream may contain useful characteristics for optimizing the display layout of multiple media streams. One such characteristic of the source may include the device type, such as laptop, PC, phone, or tablet. As set forth above, native streaming protocols on operating systems such as Microsoft Windows, Apple OSX, Apple iPadOS, Apple iOS and Android do not directly share metadata about the specific type of device upon which the operating system is running. Accordingly, one or more embodiments of the present disclosure may provide a system, apparatus, and method for predicting the device type of a source device providing an incoming media stream to the streaming media receiver 12 based on available metadata and/or other attribute information obtained from the incoming media streams.
  • FIG. 2 is a block diagram of the system 10 from FIG. 1 including a more detailed view of the streaming media receiver 12. According to one or more embodiments, the streaming media receiver 12 is configured to detect a source device type associated with an input media stream 20 and optimize a display layout of multiple input media streams based, at least in part, on the predicted source device type of each stream. The streaming media receiver 12 may be configured to receive multiple input media streams 20 from the plurality of source devices 14 and generate an output media signal 22 to one or more sink devices 18, as described in FIG. 1 . As shown in FIG. 2 , the sink device 18 is external to the streaming media receiver 12, and accessible via direct communication with the streaming media receiver 12 or via the network 16 (i.e., Network sink device). In one or more alternate embodiments, the sink device 18 may be integrated with the streaming media receiver 12 within a single device. Additionally, the streaming media receiver 12 may be a stand-alone device or may be integrated as a component within another computing unit or device.
  • As shown in FIG. 2 , the streaming media receiver 12 includes a bus 24, a processor 26, a memory 28, a secondary storage device 30, and a communication interface 32. The bus 24 may include components that permit communication among the other components of the streaming media receiver 12. The processor 26 may be any type of processing component implemented in hardware, firmware, or a combination of hardware and software. This may include a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), or similar processing component. The processor 26 may include one or more processors capable of being programmed to perform functions or processes such as for controlling one or more components of the streaming media receiver 12. The memory 28 may store information and instructions for use by the processor 26. For example, the processor 26 may be configured to read and execute instructions stored on the memory 28 to perform functions. This may include control logic 34, such as computer software, and data. The memory 28 may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory). The secondary storage device 30 may also store information and/or software related to the operation and use of the streaming media receiver 12. For example, the secondary storage device 30 may include a hard disk drive and/or a removable storage device or drive, as well as other types of storage device and/or non-transitory computer-readable medium.
  • The streaming media receiver 12 also includes a canvas engine 36 and a canvas renderer 38. The canvas engine 36 and the canvas renderer 38 may be embodied as hardware, software, or a combination of hardware and software. Thus, while depicted as separate components, the canvas engine 36 and the canvas renderer 38 may be integrated with the processor 26 and/or with the memory 28 or secondary storage device 30 as control logic. The canvas engine 36 is configured to take a combination of one or more of the different variables, which may include characteristics of the sink device 18, total number of active incoming streams, active audio, and other characteristics of the source material or source devices 14, and instruct the canvas renderer 38 how to optimally display the multiple input media streams 20 collectively at the sink device 18. Accordingly, the canvas renderer 38 may receive this instruction and generate the output media signal 22 for the sink device 18.
  • The communication interface 32 may include one or more transceivers or transceiver-like components (e.g., a separate receiver and transmitter) that enables the streaming media receiver 12 to communicate with the source devices 14 and the sink device 18, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. The communication interface 32 permits the streaming media receiver 12 to receive information from another device, such as from the input media streams 20 from the plurality of source devices 14. The communication interface 32 may further permit the streaming media receiver 12 to provide information to another device, including the output media signal 22 to the sink device 18. For example, the communication interface 32 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like.
  • The streaming media receiver 12 may perform one or more processes described herein. The streaming media receiver 12 may perform these processes based on the processor executing software instructions stored on a non-transitory machine-readable medium (e.g., the machine may be a computer device), such as the memory 28 and/or the storage device 30. A machine-readable medium is defined herein as a non-transitory memory device, which may include memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into the memory 28 or the storage device 30 from another machine-readable medium or from another device via the communication interface 32. The software instructions stored in memory 28 and/or the storage device 30, when executed, may cause the processor 26 to perform one or more processes described in the present disclosure. Additionally, or alternatively, hardwired circuitry may be used in place of, or in combination with, software instructions to perform one or more processes described herein. Thus, the various implementations described herein may not be limited to any specific combination of hardware circuitry and software.
  • The arrangement and/or number of components shown in FIG. 2 are provided as an example. In practice, the streaming media receiver 12 or similar device may include additional components, fewer components, different components, or differently arranged components than those shown. Additionally, or alternatively, one or more components of the streaming media receiver 12 may perform one or more functions described as being performed by another set of components of the streaming media receiver.
  • FIG. 3 is an exemplary flow chart 300 depicting a process for optimizing a display layout of multiple media streams, in accordance with one or more embodiments of the present disclosure. The display layout may be optimized based on information gathered by the streaming media receiver 12 from a number of different variables. Information and/or variables for display layout optimization may include characteristics of the sink device 18, metadata extracted from each incoming media stream 20, total number of active incoming media streams, and the like. As will be described in greater detail, the metadata extracted from each incoming media stream 20 may allow the streaming media receiver 12 to predict the type of source device corresponding to each media stream, which may be used to further enhance optimization of the display layout of multiple streams.
  • The process described in the flow chart 300 includes obtaining characteristics of the sink device 18, as provided at step 305. Characteristics of the sink device 18 may include a size, an aspect ratio, and a resolution of the sink device. Additional characteristics may include the average distance at which a user views the sink device, which may be relevant, for example, in a conference room or lecture hall application. Some of the sink characteristics, such as aspect ratio and resolution, may be automatically determined by the streaming media receiver 12. Other characteristics may be obtained from user input. For instance, upon configuration of the streaming media receiver 12, various sink characteristics may be entered by a user, including the size of the sink device and the average user viewing distance from the sink device.
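  • As a concrete illustration, the sink characteristics described above could be collected into a simple record such as the following Python sketch. The field names, units, and example values are assumptions made for illustration only and are not part of the disclosed method.

```python
from dataclasses import dataclass

@dataclass
class SinkCharacteristics:
    """Sink device attributes used for layout optimization (illustrative)."""
    width_px: int               # auto-detected resolution
    height_px: int
    diagonal_in: float          # user-entered screen size
    viewing_distance_ft: float  # user-entered average viewing distance

    @property
    def aspect_ratio(self) -> float:
        # e.g. 3840 / 2160 = 1.78 for a 16:9 panel
        return self.width_px / self.height_px


# A 75-inch 4K conference-room display viewed from roughly 12 feet away.
sink = SinkCharacteristics(width_px=3840, height_px=2160,
                           diagonal_in=75.0, viewing_distance_ft=12.0)
```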
  • The streaming media receiver 12 receives one or more incoming or input media streams 20 from a plurality of source devices 14, as provided at step 310. As previously described, the input media streams 20 may be received wirelessly from the source devices 14 or via a wired connection, such as an Ethernet connection to the network or a direct cable connection to the streaming media receiver 12 (e.g., USB, HDMI, etc.).
  • Metadata is extracted from each input media stream 20 to collect information about the streams and their corresponding source devices, as provided at step 315. The metadata extracted from each input media stream 20 may include information and/or characteristics about each media stream such as a MAC address, stream aspect ratio, stream resolution, streaming protocol, and the like.
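  • For illustration, the metadata fields listed above could be carried in a per-stream record along the lines of the Python sketch below. The field names are hypothetical and only stand in for whatever attributes a given streaming protocol actually exposes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StreamMetadata:
    """Metadata extracted from one input media stream (illustrative fields)."""
    mac_address: Optional[str] = None   # e.g. "f0:18:98:aa:bb:cc"
    hostname: Optional[str] = None      # e.g. "Marys-iPhone"
    protocol: Optional[str] = None      # e.g. "airplay", "miracast"
    width_px: Optional[int] = None      # stream resolution
    height_px: Optional[int] = None
    has_active_audio: bool = False

    @property
    def aspect_ratio(self) -> Optional[float]:
        if self.width_px and self.height_px:
            return self.width_px / self.height_px
        return None
```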
  • Once extracted, the streaming media receiver 12 processes and analyzes the characteristics of each input media stream 20 received from the source devices 14, as provided at step 320. In some implementations, this may include identifying which streams include active audio and, if multiple streams have active audio, which streams have priority. According to one or more embodiments, the streaming media receiver 12 predicts the type of source device from which each media stream originated based on the analysis of various characteristics extracted from a stream's metadata, as provided at step 325. Examples of source device types that may be predicted include laptop, PC, smartphone, tablet, and the like. Various examples of methods for predicting a source device type are described in greater detail below in connection with FIGS. 5-9.
  • The streaming media receiver 12 then processes one or more of the sink characteristics, input stream characteristics, and/or predicted source device types, and generates an optimized display layout of a plurality of the incoming media streams for display at the sink device 18, as provided at step 330. For example, a display layout may be generated that optimizes the sizing and placement of the active input media streams at the sink device 18. In optimizing the display layout of multiple media streams for the sink device 18, the streaming media receiver 12 may utilize the sink characteristics, input stream characteristics, and source device type, among other things, to minimize wasted screen space and increase clarity and legibility of streamed images, video, and text. The streaming media receiver 12 may use the canvas engine 36 to optimize the display layout of multiple incoming media streams based on the available metadata and sink characteristics.
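  • One very simple way to trade window sizing against wasted screen space is sketched below: try each row/column split of a uniform grid and keep the split that minimizes the total letterboxed area, given the sink resolution and each stream's aspect ratio. This is only a hedged illustration of the idea of layout optimization; the canvas engine 36 may weigh many additional variables (predicted device type, active audio, viewing distance, and so on), and the function name and scoring are assumptions.

```python
import math
from typing import List, Tuple

def grid_layout(sink_w: int, sink_h: int,
                stream_aspects: List[float]) -> List[Tuple[int, int, int, int]]:
    """Return one (x, y, w, h) rectangle per stream, choosing the grid split
    that wastes the least screen area while preserving each aspect ratio."""
    n = len(stream_aspects)
    best_rects, best_waste = [], math.inf
    for cols in range(1, n + 1):
        rows = math.ceil(n / cols)
        cell_w, cell_h = sink_w // cols, sink_h // rows
        rects, waste = [], 0.0
        for i, ar in enumerate(stream_aspects):
            # Fit the stream inside its grid cell without distorting it.
            w = min(cell_w, int(cell_h * ar))
            h = min(cell_h, int(cell_w / ar))
            x = (i % cols) * cell_w + (cell_w - w) // 2
            y = (i // cols) * cell_h + (cell_h - h) // 2
            rects.append((x, y, w, h))
            waste += cell_w * cell_h - w * h
        if waste < best_waste:
            best_rects, best_waste = rects, waste
    return best_rects

# A 16:9 laptop and a portrait (9:16) phone on a 1920x1080 sink device.
print(grid_layout(1920, 1080, [16 / 9, 9 / 16]))
```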
  • Upon generating an optimized display layout of the multiple input media streams, the streaming media receiver 12 transmits the output media signal 22 to the sink device 18, as provided at step 335. As previously described, the output media signal 22 may be a single, flattened stream including a composite of the multiple input media streams 20 arranged in the optimized display layout. According to one or more embodiments, the canvas renderer 38 may be employed to generate the output media signal 22 based on instructions received from the canvas engine 36.
  • The output media signal 22 may be dynamic, and its transmission from the streaming media receiver 12 to the sink device 18 may be continuous. Accordingly, the optimization of the display layout of the multiple incoming media streams at the sink device 18 may also be continuous and dynamic, particularly as the number, content, and source of the input media streams change. For example, as an incoming media stream is added or removed, the components and logic modules of the streaming media receiver 12 may process all active stream characteristics to determine the most optimized sizing and placement of the current active media streams and update the output media signal 22 accordingly. As an active media stream changes any of its characteristics, such as device type or orientation, the newly available characteristics may be processed to determine the most optimized sizing and placement of all the current active media streams. Moreover, a sink characteristic may change or be altered. As a sink characteristic changes, the newly available sink characteristics may also be processed to determine the most optimized sizing and placement of all the current active media streams.
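  • The event-driven behavior described above can be sketched as a small wrapper that re-runs a layout function whenever a stream joins, leaves, or changes a characteristic. The class and method names are illustrative assumptions, and the sketch assumes a layout function such as the hypothetical grid_layout() shown earlier.

```python
class DynamicCanvas:
    """Illustrative sketch: recompute the layout on every stream event."""

    def __init__(self, sink_w: int, sink_h: int, layout_fn):
        self.sink_w, self.sink_h = sink_w, sink_h
        self.layout_fn = layout_fn
        self.streams = {}   # stream_id -> current aspect ratio

    def _relayout(self):
        ids = list(self.streams)
        rects = self.layout_fn(self.sink_w, self.sink_h,
                               [self.streams[i] for i in ids])
        # The resulting rectangles would be handed to the canvas renderer.
        return dict(zip(ids, rects))

    def on_stream_added(self, stream_id, aspect_ratio):
        self.streams[stream_id] = aspect_ratio
        return self._relayout()

    def on_stream_removed(self, stream_id):
        self.streams.pop(stream_id, None)
        return self._relayout()

    def on_stream_changed(self, stream_id, aspect_ratio):
        # e.g. a phone rotating from portrait (9:16) to landscape (16:9)
        self.streams[stream_id] = aspect_ratio
        return self._relayout()
```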
  • FIG. 4 is an exemplary flow diagram 400 depicting the flow of signals, data, and/or corresponding information used in optimizing the display layout, in accordance with one or more embodiments of the present disclosure. Four input media streams 20 are depicted, though the streaming media receiver 12 may be configured to receive, process, and optimize a display layout of any number of input media streams. As shown, the metadata extracted from each input media stream 20 may be categorized as static metadata 40 or dynamic metadata 42. The static metadata 40 may include characteristics indicative of the type of source device, as well as MAC address, protocol, DPI, and/or hostname. The dynamic metadata 42 of each input media stream 20 may include information or characteristics such as screen orientation, aspect ratio, resolution, and/or content (e.g., via computer vision), as well as whether the stream includes active audio. The stream metadata 40, 42 and sink metadata 44 may be used in a process (405) that analyzes, calculates, and generates an optimized display layout 46 of all active input streams, which may then be transmitted to the sink device 18. The process (405) for generating the optimized display layout 46 shown in FIG. 4 may be illustrative of an alternative embodiment to step 330 from the method 300 described in FIG. 3.
  • In order to optimize the display layout of multiple streams displayed simultaneously, it is helpful to understand the type of device from which each input media stream originates. Examples of source device types that may transmit media streams to the streaming media receiver 12 include laptops, PCs, smartphones, tablets, and the like. Because native streaming protocols on operating systems do not share metadata about the type of device upon which the operating system is running, the source device type may be deduced from other available information. As previously mentioned, the streaming media receiver 12 may predict the type of source device from which each input media stream 20 originated based on the analysis of various characteristics extracted from the input media stream's metadata.
  • FIG. 5 is an exemplary flow chart 500 depicting a process for predicting a source device type based on an analysis of an input media stream, in accordance with one or more embodiments of the present disclosure. Accordingly, the method described by the flow chart 500 may be an expansion of step 325 from the process described in flow chart 300 shown in FIG. 3 .
  • To predict a source device type, the streaming media receiver 12 assigns weights to various characteristics of each input media stream 20, as provided at step 505. At step 510, a total stream weight is calculated from one or more of the individual weights. For example, one or more individual weights may be summed to provide a final total stream weight. The individual weights and/or the total stream weight may provide an indication of the type of source device from which an input media stream 20 is received within a degree of certainty. To this end, various aspects of an incoming media stream are given differing values based on characteristics extracted from the stream's metadata and weighted accordingly to generate a final prediction or confidence level regarding the type of source device, as provided at step 515.
  • The final or total stream weight may then be linked to the corresponding input media stream for use in optimizing the display layout of multiple streams at the sink device 18. Although the final weight calculated for each input media stream 20 is linked thereto, the weight may be out of band and can dynamically change as characteristics of the input media stream change. Using the weight information, the streaming media receiver 12 may determine whether a given input media stream is being received from, for example, a laptop, PC, phone, or tablet.
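  • A minimal sketch of this weighting scheme is shown below: each classifier contributes an individual weight toward a candidate device type, the weights are summed into a total stream weight, and the highest-scoring candidate becomes the prediction, with its total weight serving as the confidence level. The per-type accumulation and the classifier interface are assumptions made for illustration.

```python
from collections import defaultdict
from typing import Callable, Iterable, Optional, Tuple

# A classifier inspects stream metadata and, if it recognizes something,
# returns (candidate_device_type, individual_weight); otherwise None.
Classifier = Callable[[object], Optional[Tuple[str, float]]]

def predict_device_type(metadata: object,
                        classifiers: Iterable[Classifier]) -> Tuple[Optional[str], float]:
    """Sum individual weights into a total stream weight per candidate device
    type and return the best candidate with its total weight (confidence)."""
    totals = defaultdict(float)
    for classify in classifiers:
        result = classify(metadata)
        if result is not None:
            device_type, weight = result
            totals[device_type] += weight
    if not totals:
        return None, 0.0
    best = max(totals, key=totals.get)
    return best, totals[best]
```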
  • One or several characteristics of each input media stream 20, once extracted from the metadata, may be analyzed and weighted. FIG. 6 depicts a flow diagram 600 illustrating a first example embodiment for predicting a source device type based on these one or more characteristics. According to the first embodiment, the streaming media receiver 12 may receive and process the media access control (MAC) address of an input media stream. If the MAC address is coming from a Layer 3 or above switch, Address Resolution Protocol (ARP) commands may be used to resolve the MAC address associated with the incoming media stream. Based on the MAC address, the streaming media receiver may attempt to determine the source device type (e.g., laptop, PC, phone, tablet) from which the input media stream is being transmitted. This MAC address classification may be accomplished by maintaining a dataset of MAC addresses, including Organizationally Unique Identifier (OUI) and Network Interface Controller (NIC) portions, as well as identifying key MAC address ranges for specific manufacturer model types. The method may provide a weight that helps predict, with a degree of certainty, the type of device that is sending the input media stream. Identification of the OUI and/or the NIC determines the weight given to the MAC address classification, which may be expressed as a sum of the individual weights assigned to the OUI and NIC, as shown by Equation 1 below:

  • OUI_weight + NIC_weight = Total Stream Weight   (Eq. 1)
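  • A hedged sketch of the MAC address classification is shown below; the OUI table entries and weight values are hypothetical placeholders, and a real deployment would maintain a much larger dataset that also covers NIC ranges for specific model families.

```python
from typing import Optional, Tuple

# Hypothetical OUI (first three octets) -> (vendor, device type, OUI_weight).
OUI_TABLE = {
    "f0:18:98": ("VendorA", "laptop", 0.6),
    "3c:5a:b4": ("VendorB", "smartphone", 0.5),
}

def mac_classifier(mac_address: Optional[str]) -> Optional[Tuple[str, float]]:
    """Contribute OUI_weight + NIC_weight from the MAC address (Eq. 1)."""
    if not mac_address:
        return None
    oui = mac_address.lower()[:8]          # "aa:bb:cc" prefix
    entry = OUI_TABLE.get(oui)
    if entry is None:
        return None
    _vendor, device_type, oui_weight = entry
    nic_weight = 0.0                       # placeholder: NIC-range match omitted
    return device_type, oui_weight + nic_weight
```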
  • FIG. 7 depicts a flow diagram 700 illustrating a second example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20. According to the second embodiment, the streaming media receiver 12 may further analyze a Hostname included in the metadata of the input media stream. The streaming media receiver 12 attempts to receive the Hostname and, if successful, seeks key identification information in the Hostname that helps identify the source device type. This may be accomplished by maintaining a dataset of known Hostname prefixes and/or suffixes that manufacturers use by default on specific device model types. For example, if the Hostname is identified as “Mary's iPhone,” it may indicate with some degree of certainty that the device type is a smartphone and, more particularly, a specific make and/or model of smartphone. Accordingly, an individual weight may be assigned to the Hostname (Hostname_weight) of an input media stream. In some implementations, the individual weight from the Hostname may be combined with the weight from the MAC address classification, as shown by Equation 2 below:

  • (OUI_weight + NIC_weight) + Hostname_weight = Total Stream Weight   (Eq. 2)
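  • The Hostname classification might look like the sketch below, where a small table of default naming patterns (hypothetical entries and weights) maps a Hostname substring to a likely device type.

```python
from typing import Optional, Tuple

# Hypothetical default-hostname patterns -> (device type, Hostname_weight).
HOSTNAME_HINTS = [
    ("iphone", "smartphone", 0.7),
    ("ipad", "tablet", 0.7),
    ("macbook", "laptop", 0.6),
    ("desktop-", "PC", 0.5),
]

def hostname_classifier(hostname: Optional[str]) -> Optional[Tuple[str, float]]:
    """Contribute Hostname_weight (Eq. 2) from known prefixes/suffixes,
    e.g. "Mary's iPhone" -> ("smartphone", 0.7)."""
    if not hostname:
        return None
    name = hostname.lower()
    for pattern, device_type, weight in HOSTNAME_HINTS:
        if pattern in name:
            return device_type, weight
    return None
```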
  • FIG. 8 depicts a flow diagram 800 illustrating a third example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20. According to the third embodiment, the streaming media receiver 12 may further analyze a streaming protocol associated with the input media stream, along with an aspect ratio and a resolution of the input media stream. The streaming media receiver 12 may first sort the incoming media streams by streaming protocol, because each protocol may have different methods for classification and may help narrow down the source device type, or at least the manufacturer of the source device 14. For example, an AirPlay streaming protocol may indicate the source device is an Apple device.
  • Once sorted by streaming protocol, the streaming media receiver 12 may then analyze the aspect ratio and screen resolution. A dataset of common screen resolutions and aspect ratios used by specific manufacturer model types may be kept and stored in memory 28 or storage device 30 in order to help identify the source device type. Individual weights may be assigned to each of the streaming protocol, aspect ratio, and resolution characteristics. In certain implementations, these individual weights may be combined with one or more of the weights from the MAC address classification and Hostname classification to provide the total stream weight used in predicting the source device type, as shown by Equation 3 below:

  • (OUI_weight + NIC_weight) + Hostname_weight + (Protocol_weight + Aspect_weight + Resolution_weight) = Total Stream Weight   (Eq. 3)
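  • A sketch of the protocol, aspect ratio, and resolution classification follows; the protocol weights, the native-resolution table, and the portrait-orientation heuristic are all hypothetical values chosen purely for illustration.

```python
from typing import Optional, Tuple

PROTOCOL_HINTS = {"airplay": 0.3, "googlecast": 0.2, "miracast": 0.2}

# Hypothetical native panel resolutions -> (device type, Resolution_weight).
RESOLUTION_HINTS = {
    (1170, 2532): ("smartphone", 0.5),
    (2560, 1600): ("laptop", 0.4),
    (3840, 2160): ("PC", 0.3),
}

def protocol_classifier(protocol: Optional[str], width: Optional[int],
                        height: Optional[int]) -> Optional[Tuple[str, float]]:
    """Contribute Protocol_weight + Aspect_weight + Resolution_weight (Eq. 3)."""
    if not (width and height):
        return None
    protocol_weight = PROTOCOL_HINTS.get((protocol or "").lower(), 0.0)
    device_type, resolution_weight = RESOLUTION_HINTS.get((width, height), (None, 0.0))
    aspect_weight = 0.2 if width < height else 0.0   # portrait hints at a handheld
    if device_type is None:
        device_type = "smartphone" if width < height else "laptop"
    total = protocol_weight + aspect_weight + resolution_weight
    return (device_type, total) if total > 0 else None
```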
  • FIG. 9 depicts a flow diagram 900 illustrating a fourth example embodiment for predicting a source device type based on one or more characteristics of an input media stream 20. According to the fourth embodiment, the streaming media receiver 12 may further analyze the input media stream via Computer Vision Object Detection (CVOD) and Computer Vision Image Classification (CVIC) technology. In some implementations, Computer Vision Object Detection may be employed initially. The streaming media receiver 12 utilizes Computer Vision Object Detection to analyze the first few video frames of the input media stream 20. In doing so, the streaming media receiver 12 may scan for specific objects in the frame(s). This may be accomplished using an internal trained model dataset to compare against the frames of the input media stream. The internal trained model dataset may be stored in the memory 28 or the secondary storage device 30. Examples of specific objects may include task bar objects, start menu icons, home screen or background imagery, logos, or the like. The internal trained model dataset may be employed to recognize or detect objects in the frames that may help indicate the source device type of the input media stream 20, as certain objects may be associated with certain device types or manufacturers. Accordingly, the streaming media receiver 12 may assign an individual weight based on the Computer Vision Object Detection process (CVOD_weight). As shown, the Computer Vision Object Detection process may result in a source device type prediction by using object recognition techniques to detect specific objects in image or video data.
  • If the CVOD_weight corresponds to a relatively low certainty level, the streaming media receiver 12 may then use Computer Vision Image Classification to further analyze the input media stream 20. Using the same video frames from the Computer Vision Object Detection process, the streaming media receiver 12 may classify an image as coming from a particular source device type using a different internal trained model dataset stored in memory. Accordingly, the streaming media receiver 12 may assign an individual weight based on the Computer Vision Image Classification process (CVIC_weight). In certain implementations, the individual weights, CVOD_weight and CVIC_weight, may be combined with one or more of the weights from the MAC address classification, Hostname classification, and Protocol classification processes to provide the total stream weight used in predicting the source device type, as shown by Equation 4 below:

  • (OUI_weight + NIC_weight) + Hostname_weight + (Protocol_weight + Aspect_weight + Resolution_weight) + (CVOD_weight + CVIC_weight) = Total Stream Weight   (Eq. 4)
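  • The computer vision stage could be wired up roughly as sketched below. The detection and classification models are represented only by stand-in callables (detect_objects, classify_image), since the disclosure does not specify a particular model or framework; the object labels, hint table, and threshold are likewise assumptions.

```python
from typing import Callable, List, Optional, Tuple

# Stand-ins for the internal trained models stored on the receiver:
#   detect_objects(frame) -> list of detected object labels
#   classify_image(frame) -> (device type, confidence)
DetectFn = Callable[[bytes], List[str]]
ClassifyFn = Callable[[bytes], Tuple[Optional[str], float]]

OBJECT_HINTS = {
    "windows_taskbar": ("PC", 0.5),
    "macos_dock": ("laptop", 0.5),
    "ios_home_grid": ("smartphone", 0.6),
}

def vision_classifier(frames: List[bytes], detect_objects: DetectFn,
                      classify_image: ClassifyFn,
                      min_confidence: float = 0.4) -> Optional[Tuple[str, float]]:
    """Contribute CVOD_weight + CVIC_weight (Eq. 4): object detection first,
    image classification as a fallback when detection confidence is low."""
    cvod_type, cvod_weight = None, 0.0
    for frame in frames[:3]:                 # only the first few video frames
        for label in detect_objects(frame):
            if label in OBJECT_HINTS:
                cvod_type, cvod_weight = OBJECT_HINTS[label]
                break
        if cvod_type:
            break
    if cvod_type and cvod_weight >= min_confidence:
        return cvod_type, cvod_weight
    if frames:
        cvic_type, cvic_weight = classify_image(frames[0])
        if cvic_type:
            return cvic_type, cvod_weight + cvic_weight
    return (cvod_type, cvod_weight) if cvod_type else None
```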
  • As shown, various individual weights may be combined into broader classifications and summed to provide a weight total for each classification. It should be noted that the classifications are exemplary only. Moreover, the individual weights corresponding to each classification are also exemplary and may be grouped or organized in a manner other than depicted in FIGS. 6-9 . Any combination of one or more individual weights may be used in calculating the total stream weight and subsequently employed in predicting the source device type.
  • A dynamic, optimized display layout of multiple incoming media streams based on the various characteristics described herein, including the source device type of each incoming media stream, may reduce wasted screen space, increase legibility, and ultimately enhance the viewing experience. By way of example, FIGS. 10-16 illustrate a sample progression of a display layout 50 depicting how the streaming media receiver 12 may optimize the overall display layout 50 as different source devices 14 connect and transmit media streams 20 dynamically to the streaming media receiver 12. As shown in FIGS. 10-16, rather than giving equal screen space to each input media stream 20, the display layout 50 is dynamically optimized using the techniques described herein to provide an enhanced viewing experience for end users.
  • FIG. 10 illustrates an exemplary display layout 50 of two input media streams. In this example, a first input media stream may be received from a first source device (1), which is a laptop. A second input media stream may be received from a second source device (2), which is a smartphone. As shown, the second source device, a smartphone, is oriented vertically in a “portrait” mode, such that its aspect ratio has a width less than its height (e.g., 9:16). FIG. 11 illustrates how the display layout 50 from FIG. 10 may change when the orientation of the second source device (2) is rotated, changing its aspect ratio to a widescreen or “landscape” mode (e.g., from 9:16 to 16:9).
  • FIG. 12 illustrates how the display layout 50 from FIG. 10 may change when a third input media stream is received from a third source device (3), which may be another laptop, when the third source device (3) actively connects to the streaming media receiver 12. FIG. 13 illustrates how the display layout 50 from FIG. 12 may change when the orientation of the second source device (2) is rotated, changing its aspect ratio to a widescreen or landscape mode (e.g., from 9:16 to 16:9), similar to FIG. 11.
  • FIG. 14 illustrates how the display layout 50 from FIG. 12 may change when a fourth input media stream is received from a fourth source device (4), which may be another smartphone, when the fourth source device (4) actively connects to the streaming media receiver 12 in portrait mode. FIG. 15 illustrates how the display layout 50 from FIG. 14 may change when the orientation of the fourth source device (4) is rotated, changing its aspect ratio to a widescreen or landscape mode (e.g., from 9:16 to 16:9). FIG. 16 illustrates how the display layout 50 from FIG. 15 may change further when the orientation of the second source device (2) is also rotated, changing its aspect ratio to a widescreen or landscape mode (e.g., from 9:16 to 16:9).
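  • The progression in FIGS. 10-16 can be mimicked by re-running a layout function on each change in the set of active aspect ratios, as in the short sketch below, which assumes the hypothetical grid_layout() function from the earlier example is in scope.

```python
# Each step mirrors one figure: a connection or rotation changes the set of
# active aspect ratios, and the layout is recomputed for the 1920x1080 sink.
progression = [
    [16/9, 9/16],                # FIG. 10: laptop + phone in portrait
    [16/9, 16/9],                # FIG. 11: phone rotates to landscape
    [16/9, 9/16, 16/9],          # FIG. 12: second laptop joins
    [16/9, 16/9, 16/9],          # FIG. 13: phone rotates to landscape
    [16/9, 9/16, 16/9, 9/16],    # FIG. 14: second phone joins in portrait
    [16/9, 9/16, 16/9, 16/9],    # FIG. 15: second phone rotates
    [16/9, 16/9, 16/9, 16/9],    # FIG. 16: first phone rotates as well
]
for aspects in progression:
    rects = grid_layout(1920, 1080, aspects)   # recompute on every change
    print(len(aspects), rects)
```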
  • While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the described features. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the disclosure.

Claims (20)

What is claimed is:
1. A media receiver comprising:
a memory configured to store machine-readable instructions; and
processor circuitry in communication with the memory, wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to:
receive a first media stream corresponding to a first media source;
obtain first metadata from the first media stream;
receive a second media stream corresponding to a second media source;
obtain second metadata from the second media stream;
determine a first source device type for the first media source based on a reference to the first metadata;
determine a second source device type for the second media source based on a reference to the second metadata; and
control a display layout of the first media stream and the second media stream for display on a sink device based on at least the first source device type and the second source device type.
2. The media receiver of claim 1, wherein the processor circuitry is configured to execute the machine-readable instructions to further cause the processor circuitry to:
receive attribute data corresponding to the sink device; and
further control the display layout including the first media stream and the second media stream based additionally on the attribute data corresponding to the sink device.
3. The media receiver of claim 2, wherein the attribute data corresponding to the sink device includes a predetermined user viewing distance of the sink device.
4. The media receiver of claim 1, wherein the first metadata includes MAC address classification corresponding to the first media source and the second metadata further includes MAC address classification corresponding to the second media source; and
wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to:
determine the first source device type for the first media source based on at least the MAC address classification corresponding to the first media source; and
determine the second source device type for the second media source based on at least the MAC address classification corresponding to the second media source.
5. The media receiver of claim 1, wherein the first metadata includes Hostname classification corresponding to the first media source and the second metadata further includes Hostname classification corresponding to the second media source; and
wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to:
determine the first source device type for the first media source based on at least the Hostname classification corresponding to the first media source; and
determine the second source device type for the second media source based on at least the Hostname classification corresponding to the second media source.
6. The media receiver of claim 1, wherein the first metadata includes protocol, aspect ratio, and resolution information of the first media source and the second metadata further includes protocol, aspect ratio, and resolution information of the second media source; and
wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to:
determine the first source device type for the first media source based on at least one of the protocol, aspect ratio, or resolution information of the first media source; and
determine the second source device type for the second media source based on at least one of the protocol, aspect ratio, or resolution information of the second media source.
7. The media receiver of claim 1, wherein the first metadata includes computer vision object detection and computer vision image classification information of the first media source and the second metadata further includes computer vision object detection and computer vision image classification information of the second media source; and
wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to:
determine the first source device type for the first media source based on at least one of the computer vision object detection or computer vision image classification information of the first media source; and
determine the second source device type for the second media source based on at least one of the computer vision object detection or computer vision image classification information of the second media source.
8. The media receiver of claim 1, wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to control the display layout to include:
a first display window including a display of the first media stream; and
a second display window including a display of the second media stream, wherein the first display window is sized differently from the second display window.
9. The media receiver of claim 1, wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to control the display layout to include:
a first display window including a display of the first media stream; and
a second display window including a display of the second media stream, wherein the first display window has different dimensions from the second display window.
10. The media receiver of claim 1, wherein the processor circuitry is configured to execute the machine-readable instructions to cause the processor circuitry to control the display layout to include:
a first display window including a display of the first media stream, the first display window positioned at a first location selected based on the first source device type; and
a second display window including a display of the second media stream, the second display window positioned at a second location selected based on the second source device type.
11. A method for optimizing a display layout on a display screen, the method comprising:
receiving, by a communication interface, a first media stream from a first media source;
extracting, by a processor, first metadata from the first media stream;
receiving, by the communication interface, a second media stream from a second media source;
extracting, by the processor, second metadata from the second media stream;
determining, by the processor, a first source device type for the first media source based on a reference to the first metadata;
determining, by the processor, a second source device type for the second media source based on a reference to the second metadata; and
controlling, by the processor, a display layout of the first media stream and the second media stream for display on a sink device based on at least the first source device type and the second source device type.
12. The method of claim 11, the method further comprising:
receiving, by the processor, attribute data corresponding to the sink device; and
further controlling, by the processor, the display layout including the first media stream and the second media stream based additionally on the attribute data corresponding to the sink device.
13. The method of claim 12, wherein the attribute data corresponding to the sink device includes a predetermined user viewing distance of the sink device.
14. The method of claim 11, wherein the first metadata includes MAC address classification corresponding to the first media source and the second metadata further includes MAC address classification corresponding to the second media source;
wherein determining the first source device type for the first media source is based on at least the MAC address classification corresponding to the first media source; and
wherein determining the second source device type for the second media source is based on at least the MAC address classification corresponding to the second media source.
15. The method of claim 11, wherein the first metadata includes Hostname classification corresponding to the first media source and the second metadata further includes Hostname classification corresponding to the second media source;
wherein determining the first source device type for the first media source is based on at least the Hostname classification corresponding to the first media source; and
wherein determining the second source device type for the second media source is based on at least the Hostname classification corresponding to the second media source.
16. The method of claim 11, wherein the first metadata includes protocol, aspect ratio, and resolution information of the first media source and the second metadata further includes protocol, aspect ratio, and resolution information of the second media source;
wherein determining the first source device type for the first media source is based on at least one of the protocol, aspect ratio, or resolution information of the first media source; and
wherein determining the second source device type for the second media source is based on at least one of the protocol, aspect ratio, or resolution information of the second media source.
17. The method of claim 11, wherein the first metadata includes computer vision object detection and computer vision image classification information of the first media source and the second metadata further includes computer vision object detection and computer vision image classification information of the second media source;
wherein determining the first source device type for the first media source is based on at least one of the computer vision object detection or computer vision image classification information of the first media source; and
wherein determining the second source device type for the second media source is based on at least one of the computer vision object detection or computer vision image classification information of the second media source.
18. The method of claim 11, wherein controlling the display layout includes:
generating a first display window including a display of the first media stream; and
generating a second display window including a display of the second media stream, wherein the first display window is sized differently from the second display window.
19. The method of claim 11, wherein controlling the display layout includes:
generating a first display window including a display of the first media stream; and
generating a second display window including a display of the second media stream, wherein the first display window has different dimensions from the second display window.
20. The method of claim 11, wherein controlling the display layout includes:
generating a first display window including a display of the first media stream, the first display window positioned at a first location selected based on the first source device type; and
generating a second display window including a display of the second media stream, the second display window positioned at a second location selected based on the second source device type.