GB2538797B - Managing display data - Google Patents

Managing display data

Info

Publication number
GB2538797B
GB2538797B
Authority
GB
United Kingdom
Prior art keywords
display
display data
data
cursor
display device
Prior art date
Legal status
Active
Application number
GB1509290.1A
Other versions
GB201509290D0 (en)
GB2538797A (en)
Inventor
Skinner Colin
Current Assignee
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date
Filing date
Publication date
Application filed by DisplayLink UK Ltd filed Critical DisplayLink UK Ltd
Priority to GB1509290.1A priority Critical patent/GB2538797B/en
Publication of GB201509290D0 publication Critical patent/GB201509290D0/en
Publication of GB2538797A publication Critical patent/GB2538797A/en
Application granted granted Critical
Publication of GB2538797B publication Critical patent/GB2538797B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/003 Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G 5/006 Details of the interface to the display terminal
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/08 Cursor circuits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/02 Handling of images in compressed format, e.g. JPEG, MPEG
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0407 Resolution change, inclusive of the use of different resolutions for different screen areas
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0464 Positioning
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/12 Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2350/00 Solving problems of bandwidth in display systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2352/00 Parallel handling of streams of display data
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2354/00 Aspects of interface with display user
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/18 Use of a frame buffer in a display terminal, inclusive of the display panel
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2370/00 Aspects of data communication
    • G09G 2370/10 Use of a protocol of communication by packets in interfaces along the display data pipeline
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2370/00 Aspects of data communication
    • G09G 2370/20 Details of the management of multiple sources of image data
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/39 Control of the bit-mapped memory
    • G09G 5/399 Control of the bit-mapped memory using two or more bit-mapped memories, the operations of which are switched in time, e.g. ping-pong buffers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/156 Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/174 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Description

Managing Display Data
Background
In desktop computing, it is now common to use more than one display device such as a monitor, television screen or even a projector. Traditionally, a user would have a computer with a single display device attached, but now it is possible to have more than one display device attached to the computer, which increases the usable area for the user. For example, International Patent Application Publication WO 2007/020408 discloses a display system which comprises a plurality of display devices, each displaying respectively an image, a data processing device connected to each display device and controlling the image displayed by each display device, and a user interface device connected to the data processing device. Connecting multiple display devices to a computer is a proven method for improving productivity.
The connection of an additional display device to a computer presents a number of problems. In general, a computer will be provided with only one video output such as a VGA-out connection. One method by which a display device can be added to a computer is by adding an additional graphics card to the internal components of the computer. The additional graphics card will provide an additional video output which will allow the display device to be connected to the computer and driven by that computer.
However, this solution is relatively expensive and is not suitable for many non-technical users of computers.
An alternative method of connecting a display device is to connect the display device to a USB socket on the computer, as all modern computers are provided with multiple USB sockets. This provides a simple connection topology, but requires additional hardware and software to be present since it is necessary to compress display data due to the relatively low bandwidth of a USB connection. However, compression and the associated processing add a delay to the transmission of display data to the display device. This is especially problematic in the case of the cursor, which in a conventional desktop arrangement is likely to be the user’s main point of interaction with the computer. When the user moves a mouse, he or she expects to see an immediate reaction from the cursor, and low latency is therefore especially important in this case. In some cases, the data may even be compressed a second time as part of conversion to the DisplayPort standard, adding further latency to that mentioned above.
The DisplayPort standard further provides a method, known as Multiple Stream Transport (MST), which allows display data to be transmitted in multiple time-multiplexed streams down a single physical connection to a chain of display devices, such that one stream contains display data for one connected display device. In this way, only one stream is displayed on each display device.
Overview
It is an object of the present invention, therefore, to provide a method of managing display data that overcomes or at least reduces the above-mentioned problems.
Accordingly, in a first aspect, the invention provides a method of managing display data, the method comprising:
receiving one or more input streams of display data forming an image having a cursor thereon from at least one source;
identifying an area of the image at a location having the cursor, being an area of the display with which a user of a display device on which the image is to be displayed is interacting, by: capturing a direction of the user’s gaze using cameras connected to the display device; calculating the location on the display device on which the user’s gaze is focussed; and using the location to identify the area having the cursor;
isolating display data from the input stream at the area to produce cursor display data, the remaining display data of the image forming image display data, wherein the cursor display data has a lower latency than the image display data;
identifying a first notional display at the area where the cursor display data is to be displayed on the display device;
identifying a second notional display on the display device at which the image display data is to be displayed, the first and second notional displays overlapping each other;
compressing the image display data;
creating at least a first output stream comprising the cursor display data;
producing a second output stream comprising the compressed image display data, the at least first and second output streams being associated with a display device for display on the display device;
providing instructions indicating how the cursor and image display data in the first and second output streams is to be combined for display on the display device, wherein the instructions indicate that the cursor display data is to be displayed on the first notional display on the display device, and the image display data is to be displayed on the second notional display on the display device;
outputting the at least first and second output streams of display data onto a multi stream link for transmittal via the multi stream link to a controller for the display device, such that the first output stream is sent to the first notional display and the second output stream is sent to the second notional display as if they were separate displays but addressed to the display device of which the notional displays are part; and
outputting the instructions for transmittal to the controller for the display device;
at a display device:
receiving at least the first output stream comprising the cursor display data and the second output stream comprising the compressed image display data over the multi stream link from the display control device;
receiving the instructions indicating how the cursor and image display data is to be combined for display on the display device;
decompressing the compressed image display data;
combining the cursor and decompressed image display data from the at least first and second streams according to the instructions into combined display data; and
forwarding the combined display data for display on the display device.
The cursor display data may also be compressed prior to being output in the second output stream, but the image display data may be compressed more than the cursor display data.
The above method is preferably performed in a display system comprising a display data manager, which may, for example, be a docking station, and a display device having a display screen for displaying the combined display data.
This method is beneficial because it will allow the accelerated cursor display data to be treated separately from the main display data, for example by being sent to the display device as a raw stream. This is the most beneficial use of the invention as it will further mean that the accelerated display data will not be compressed and will therefore be of better quality than if it had been compressed. It also means that the accelerated display data can be updated entirely independently from the rest of the display data, which will improve latency when the other display data must be processed in some way or if it is not changing; it would not be necessary to update all the display data in order to move the cursor, for example. A cursor is likely to be moving even when the rest of the image displayed on the display device is static, for example when browsing a web page. Therefore, by having the cursor as the accelerated display data, re-processing and compressing the entire image will not be necessary. A notional display is a construct comprising an area of a physical display device and when transmitted the display data is still addressed to the physical display device in question. However, in all other ways the display control device treats the display data as if it were being sent to a separate display device. This includes timing controls and the methods and timing used for transmission, such that the notional display will be correctly positioned and supplied with data at the correct rate. This is beneficial because it takes advantage of existing technology such as MST to reduce latency and processing of areas of interest on a single display device.
Preferably, it is possible for the notional display to be duplicated such that it can be part of multiple physical display devices. This is especially beneficial in a cloning situation, as it allows the same accelerated display data to be sent to multiple physical display devices to be displayed in the same location relative to the main display data associated with each physical display device.
Accelerating the cursor display data is beneficial as the cursor is a user-interface device and therefore must have low latency for a satisfactory user experience. This is also more beneficial than many other possible embodiments as a cursor is likely to have relatively small dimensions and comprise a relatively small amount of data, maximising the efficiency of the individual stream.
Advantageously, the method may be extended to allow more than one accelerated area to be received. This could be beneficial in, for example, an embodiment where two users were able to interact with a single computer and they each had individual cursors. Alternatively, other accelerated areas could be used for other areas of the display data that required low latency, alongside a cursor.
The accelerated cursor display data may be isolated from the main display data either by cutting it out or copying it out. Cutting it out would mean copying it into separate storage and filling the space it had occupied in the frame with blank pixels or pixels in a single colour which will be easy to compress and blend into upon receipt of the resulting stream. It is most likely that this colour will be black, but it may not be, depending on the details of the implementation. Copying comprises simply copying the accelerated display data into separate storage and making no changes to the main display data. Copying is preferable as it requires less processing due to the fact that the gap does not need to be filled, and also because it will result in fewer visible artefacts after the accelerated display data has been blended back into the main display data.
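As a rough illustration of the two isolation strategies, the following Python sketch treats a frame as a NumPy pixel array; the function names, region coordinates and black fill colour are assumptions for illustration, not details prescribed by the patent.

```python
import numpy as np

def copy_out(frame: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Copy the accelerated region out; the main display data is untouched."""
    return frame[y:y + h, x:x + w].copy()

def cut_out(frame: np.ndarray, x: int, y: int, w: int, h: int,
            fill=(0, 0, 0)) -> np.ndarray:
    """Copy the accelerated region out, then fill the gap with a single
    easy-to-compress colour (black here, but implementation-dependent)."""
    region = frame[y:y + h, x:x + w].copy()
    frame[y:y + h, x:x + w] = fill
    return region

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
cursor = copy_out(frame, x=600, y=400, w=64, h=64)  # preferred: no gap to fill
```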
According to a still further possible use of the invention, the display data may be received from multiple computing devices, commonly known as hosts, where the method may further comprise:
1. dividing one or more display devices into a number of notional displays greater than the number of physical display devices;
2. receiving streams of display data from more than one host;
3. associating each of the multiple streams of display data with the appropriate notional display;
4. optionally, receiving instructions as to how a stream of display data should be manipulated prior to display on its associated notional display and carrying out these instructions; and
5. transmitting the streams of display data to the appropriate displays.
This is beneficial because it will allow multiple hosts to share a single display using the same technique for different parts of the same display data. The use of different streams serving individual notional displays is beneficial because it means that the data does not need to be composited into a single frame before being sent to the display device; it can just be directed onto notional displays as appropriate.
The instructions for how a stream of display data should be displayed on its associated notional display may include:
• Cropping
• Rotating
• Scaling
• Colour correction
• Dithering
• Format conversion
• Colour conversion
or any other appropriate required function. Any number of these functions may be carried out in any combination.
The outgoing compression engine could further be arranged to compress different streams of display data to different degrees such that, for example, the accelerated display data is compressed less than the main display data.
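A minimal sketch of such per-stream compression degrees, using zlib levels from the Python standard library purely as a stand-in for whatever codec an implementation uses; the patent does not name a compression scheme, and the buffer sizes are illustrative.

```python
import zlib

def compress_stream(pixels: bytes, level: int) -> bytes:
    # zlib level 1 = fastest/lightest, 9 = smallest/heaviest
    return zlib.compress(pixels, level)

main_pixels = bytes(1920 * 1080 * 3)   # placeholder main display data
cursor_pixels = bytes(64 * 64 * 3)     # placeholder accelerated data

main_stream = compress_stream(main_pixels, level=9)      # compressed heavily
cursor_stream = compress_stream(cursor_pixels, level=1)  # barely compressed
```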
Brief Description of the Drawings
Embodiments of the invention will now be more fully described, by way of example, with reference to the drawings, of which:
Figure 1a is a basic schematic of a conventional system;
Figure 1b is a frame of display data comprising an image and a cursor that may be displayed by the system of Figure 1a;
Figure 2 shows a basic schematic of a system according to a first embodiment of the invention;
Figure 3 is a detailed schematic of a display device used in the system of Figure 2;
Figure 4 is a detailed schematic of part of the system of Figure 2 in the case where the accelerated display data is provided as a separate stream;
Figure 5 is a detailed schematic of part of the system of Figure 2 in the case where the accelerated display data must be copied from the main display data; and
Figure 6 is a detailed schematic of an example embodiment of the system with multiple hosts.
Detailed Description of the Drawings
Figure 1a shows a conventional system comprising a host [8], a data manager [9] and a display device [10]. The host produces display data for display on the display device in a frame [11] such as that shown in Figure 1b, which will include some ‘background’ image data [12] (shown as a “star”) - the main display data - and some foreground image data, such as a cursor [13]. Conventionally, the frame [11] is then rasterised by a data manager [9], which may be part of a host device such as the computer, and is then sent to the display device [10] in this form.
Figure 2 shows a schematic of a system according to one embodiment of the present invention. In this system, there may be several hosts [14a, 14b] (in this case two are shown), each of which provides display data to a data manager [15]. In this embodiment, one host produces the ‘background’ image data [12] (shown as a “star”), for which latency is less important and which is therefore of a lower priority, and another host produces foreground image data, such as the cursor [13], which is of higher priority and is likely to move more rapidly. Thus, in general, different parts of the complete frame may be produced by different hosts, with the cursor being part of display data produced by one host or being produced independently by one host. In this example, the cursor [13] is likely to be accelerated display data, although, as will be described further below, there may be other types of accelerated display data. The data manager [15] forms two or more streams of display data from the display data provided by the hosts and interleaves them into a single interleaved stream that is sent to a display device [16]. The data manager [15] also sends instructions to the display device on how the multiple interleaved streams should be combined for display. The system may also include a sensor [17], for example a camera, to sense which part of the display a user is looking at, that information being fed back via a sensing device [18] to the data manager [15], which may use that information to make sure that the area of the display that is being looked at has lower latency than the rest of the displayed image, as will be more fully described below.
Figure 3 shows an embodiment of the display device [16] which is able to receive multiple interleaved streams of display data [21] and blend them into a single frame for display. The display device [16] includes an input engine [22] which is arranged to receive an incoming interleaved stream of display data [21] and separate it into individual streams [23] as required. It could do this by checking header data in packets of display data and arranging the packets into internal buffers prior to releasing the packets from the buffers as required by a blending engine [24].
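A minimal sketch of that demultiplexing step, assuming packets are (stream_id, payload) pairs queued per stream until the blending engine drains them; the packet format is an assumption, as the patent does not specify one.

```python
from collections import defaultdict, deque

def demultiplex(interleaved, buffers):
    """Route each (stream_id, payload) packet into its stream's buffer."""
    for stream_id, payload in interleaved:
        buffers[stream_id].append(payload)

buffers = defaultdict(deque)
demultiplex([(0, b"main-slice-0"), (1, b"cursor-slice-0"),
             (0, b"main-slice-1")], buffers)
assert buffers[0].popleft() == b"main-slice-0"  # blending engine side
```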
The blending engine [24] takes the multiple streams [23] of display data separately and combines them according to position data provided in instructions from the data manager. The instructions may be provided with the display data [21], preferably as part of the interleaving, or may be provided separately. The combined display data forms a single frame of pixel data which is suitable for display on a display panel [27]. The finished pixel data is stored in a frame buffer [25]. This may be large enough to hold a complete frame or may be a small flow-control buffer only able to hold a limited amount of data at a time. There may also be more than one buffer [25] so that one buffer can be updated while another is read and its contents displayed. As such, two buffers [25] are shown here. The pixel data can then be read by a raster engine [26] and displayed on the display panel [27] in a conventional way.
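The blending and double-buffering described above might look like the following sketch, in which the cursor stream is overlaid opaquely at the instructed position and completed frames alternate between two buffers so one can be scanned out while the other is written; the array shapes and the opaque overlay are illustrative assumptions.

```python
import numpy as np

def blend(main: np.ndarray, overlay: np.ndarray, x: int, y: int) -> np.ndarray:
    frame = main.copy()
    h, w = overlay.shape[:2]
    frame[y:y + h, x:x + w] = overlay  # simple opaque overlay
    return frame

frame_buffers = [np.zeros((1080, 1920, 3), np.uint8) for _ in range(2)]
write_index = 0

main = np.zeros((1080, 1920, 3), np.uint8)
cursor = np.full((64, 64, 3), 255, np.uint8)

frame_buffers[write_index][:] = blend(main, cursor, x=600, y=400)
write_index ^= 1  # raster engine reads the other buffer meanwhile
```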
Figure 4 shows a more detailed view of a data manager [15] which contains an input engine [31], an input buffer [32], a processing engine [33], a cursor buffer [34], an output buffer [35], and an output engine [36]. It may, of course, also comprise further components, but these are not shown for clarity. The data manager [15] is connected to a single physical display device [16] such as that shown in Figure 3, in this embodiment via a single wired connection [37] which carries a signal [39] comprising the interleaved streams of display data [310, 311].
The main display data - in this example, the “star” [12] similar to that shown in Figure 1b - is produced by a host [14] and transmitted to the data manager [15] along with, in this embodiment, metadata comprising the location of the cursor [13]. The display data comprising the cursor icon itself will also be provided by the same host [14], but it is likely to be updated much less frequently and is here shown as a separate input. It is stored in a dedicated cursor buffer [34]. The main display data and metadata are received by the input engine [31], which copies the main display data into the input buffer [32] and transfers the metadata to the output engine [36] (shown by the dashed line).
The processing engine [33] carries out any processing required, such as decompression, and copies the processed display data to the output buffer [35]. No blending of the cursor [13] is necessary at this stage as the cursor [13] will be treated by the data manager [15] as a separate frame displayed on a different display device [38]. It will be blended at the display device [16]. This removes the need for some processing and will therefore improve latency and reduce power consumption by the processing engine [33].
As a further result of the removal of the need for blending, if the main display data [12] does not change from frame to frame, for example because it comprises a website or word-processing document being browsed by the user, no update is required to the display data in order to move the cursor [13]. No further display data need be sent to the input engine [31]; it will only receive metadata comprising the new location of the cursor [13]. No buffer updates are needed, removing slow memory interactions, and the processing engine [33] need not be used, leading to a further reduction in latency and power use. The output engine [36] can simply read the original display data from the output buffer [35] and produce a stream of display data [39] as described below.
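The fast path this enables can be sketched as follows: a cursor-only update re-emits the cached main display data with new cursor metadata, skipping buffer writes and the processing engine entirely. The class and method names below are invented for illustration.

```python
class DataManager:
    def __init__(self):
        self.output_buffer = None   # cached, processed main display data
        self.cursor_buffer = b"cursor-pixels"

    def process(self, data):
        return data  # placeholder for decompression/other processing

    def emit_streams(self, main, cursor, pos):
        return [("main", main), ("cursor", cursor, pos)]

    def on_update(self, main_data=None, cursor_pos=None):
        if main_data is not None:
            self.output_buffer = self.process(main_data)  # slow path
        # Cursor-only update: no buffer writes, no processing engine work.
        return self.emit_streams(self.output_buffer,
                                 self.cursor_buffer, cursor_pos)

dm = DataManager()
dm.on_update(main_data=b"frame-0", cursor_pos=(600, 400))  # full update
dm.on_update(cursor_pos=(610, 402))                        # cheap update
```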
The output engine [36] creates a notional display [38] at the location sent to it by the input engine [31] but with the same address as the physical display device [16]. This means that, although there is only one physical display device [16], the output engine [36] behaves in all ways as if it were sending display data to two display devices, although the single physical display device [16] will receive both streams of display data [310, 311]. The output engine [36] then fetches pixel data from the buffers [35, 34] to produce an interleaved stream [39] directed to both displays [16, 38]. In this embodiment, it fetches pixel data from the output buffer [35] and compresses it, then fetches raw data from the cursor buffer [34] as appropriate such that the resulting interleaved stream [39] is written from left to right and top to bottom across all displays [16, 38].
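One way to picture the addressing is the sketch below: every packet carries the physical device's address but a distinct notional-display identifier, echoing MST's one-stream-per-display model. The packet tuple layout and sort order are assumptions.

```python
def make_packets(device_addr, streams):
    """streams maps a notional-display id to its list of payload slices."""
    packets = []
    for display_id, slices in streams.items():
        for seq, payload in enumerate(slices):
            packets.append((device_addr, display_id, seq, payload))
    # Interleave by slice sequence, i.e. top to bottom across all displays.
    packets.sort(key=lambda p: (p[2], p[1]))
    return packets

pkts = make_packets(device_addr=1,
                    streams={0: [b"main-0", b"main-1"], 1: [b"cursor-0"]})
# -> [(1, 0, 0, b'main-0'), (1, 1, 0, b'cursor-0'), (1, 0, 1, b'main-1')]
```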
The output engine [36] may also compress one of the streams of display data [310, 311], likely to be the main display data [311] as the greatest benefit will be seen from compressing the larger area of display data. Latency of the accelerated display data [310], in this case the cursor [13], is reduced by the fact that it does not need to be compressed.
Decompression and blending are finally performed at the display device [16] as hereinbefore described. The completed frame is then displayed.
Figure 5 shows a similar embodiment of the data manager [15] in a case where the accelerated display data is part of the main display data and must be identified and copied.
As in the embodiment shown in Figure 4, this data manager includes an input engine [41], input buffer [42], processing engine [43], output buffer [45], accelerated buffer [44], and output engine [46]. The connection [37] to the display device [16] operates in the same way and the details of the interleaved stream [39] are not here shown. Likewise, the display device [16] is once again similar to that shown in Figure 3.
The host produces a frame of main display data such as that shown in Figure 1b, but without the cursor [13]. It also detects the location [47] on the display device [16] on which a user is focussed. This could be done by, for example, multiple webcams, such as the camera [17] shown in Figure 2, attached to the display device, which detect the user’s eyes and the direction in which they are looking. The cameras send this information to the sensing device [18], where it may be combined, possibly with other data, to determine the area on the display device where the user is focussed. This information is then sent to the display control device [15] as location metadata. This and the main display data are then received by the input engine [41], which puts the main display data in the input buffer [42] and forwards the location metadata on to the processing engine [43].
The processing engine [43] receives the location metadata and accesses the main display data in the input buffer [42]. It is able to locate the display data at the location [47] and copies this and, in this embodiment, the display data around it to form a rectangle of a preprogrammed size, which will also be the size of the notional display [48]. It then places this in the accelerated buffer [44] to form the accelerated display data. It then copies the main display data into the output buffer. In this embodiment, no change is made to the main display data. The processing engine [43] then forwards the location metadata to the output engine [46].
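A minimal sketch of forming the accelerated region, assuming a fixed-size rectangle centred on the gaze location and clamped to the frame edges; the 256-pixel size and NumPy representation are illustrative.

```python
import numpy as np

def accelerated_region(frame: np.ndarray, focus_x: int, focus_y: int,
                       w: int = 256, h: int = 256):
    """Copy a w-by-h rectangle around the focus point, clamped to the frame."""
    fh, fw = frame.shape[:2]
    x = max(0, min(focus_x - w // 2, fw - w))
    y = max(0, min(focus_y - h // 2, fh - h))
    return frame[y:y + h, x:x + w].copy(), (x, y)

frame = np.zeros((1080, 1920, 3), np.uint8)
region, origin = accelerated_region(frame, focus_x=960, focus_y=540)
```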
The output engine [46] proceeds in a similar way to that described with respect to Figure 4. It need not be aware that the accelerated display data has been copied from the main display data rather than provided as a separate stream. It creates a notional display [48] at the location [47] indicated by the received metadata and fetches pixel data from the buffers [45, 44] to produce an interleaved stream. In this embodiment, it would be extremely beneficial for the output engine [46] to compress the main display data, as this would reduce the overall bandwidth required while the area at which the user is looking can remain at raw quality, resulting in a better user experience.
The interleaved streams can then be received by the display device [16] for decompression, blending and display as hereinbefore described.
Figure 6 shows a schematic of an embodiment of the invention that allows a user to connect multiple hosts. It operates in a similar way to the system shown in Figure 4 and comprises, in this example, four hosts [14], a data manager [15] and a single physical display device [16] which will once again be of the type shown in Figure 3. The data manager [15] includes an input buffer [52] and an output buffer [54], each of which is divided into a number of virtual buffers equal to the number of connected hosts [14], as is shown by the patterns of the hosts [14], input buffers [52], output buffers [54] and notional displays [56] in Figure 6. This division would be triggered by metadata received from the input engine [51] upon connection of the hosts [14], through connections that are not here shown.
The four hosts [14] are all connected to the data manager [15]. This connection may be wireless or wired, through connections to multiple input ports or through an adapter that allows all four to connect to a single input port. In any case, they transmit display data to the input engine [51]. The input engine [51] is aware of which host has supplied a given packet of display data and places it in the appropriate virtual input buffer [52]. For example, display data supplied by Host A [14A] (shown marked with dots) is placed in the first virtual input buffer [52A]. The input engine [51] also sends metadata to the processing engine [53] to notify it that the data is ready and of its location.
The processing engine [53] takes display data from the input buffer [52] as it becomes available and processes it, for example by decompressing it. It then places the resulting pixel data in the appropriate virtual output buffer [54] according to the host [14] that produced it and therefore the notional display [56] to which it will be sent.
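The per-host routing of Figure 6 might be sketched as below: the input engine tags each packet with its source host and queues it in that host's virtual input buffer, and the processing engine drains each buffer into the matching virtual output buffer. The data structures are assumptions.

```python
from collections import defaultdict, deque

input_buffers = defaultdict(deque)    # host id -> raw packets
output_buffers = defaultdict(deque)   # host id -> processed pixel data

def decompress(packet):
    return packet  # placeholder for the real codec

def on_packet(host_id, packet):
    input_buffers[host_id].append(packet)

def process_ready():
    for host_id, buf in input_buffers.items():
        while buf:
            output_buffers[host_id].append(decompress(buf.popleft()))

on_packet("A", b"hostA-slice")   # e.g. Host A's display data
process_ready()                  # lands in output_buffers["A"]
```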
The hosts [14] may send further metadata to the data manager [15] regarding how display data should be cropped, resized, rotated etc. in order to fit on its associated notional display [56], since these may not be of regular and equal size as shown in Figure 6. This metadata is received by the input engine [51], which then passes it on to the processing engine [53], which performs the necessary operations prior to storing the data in the output buffers [54].
When the input engine [51] receives the display data from each host [14], it also receives a notification of the location and size of the notional display [56] to be associated with that host [14]. The locations and sizes of the notional displays [56] may be determined by the hosts [14] in a variety of ways, for example (a sketch of the third option follows this list):
• Matching software behaviour on the hosts; for example, in a videoconferencing setting where all the hosts are running the same videoconferencing software, the software may be configured such that if the associated user is speaking the host will require a large notional display at the top of the screen and otherwise it will require a small notional display at the bottom of the screen;
• Negotiation between the hosts such that they are all aware of the size and resolution of the display device and divide this space up between themselves according to heuristics, for example such that they each get an equal portion of space, arranged in the order in which they were connected;
• Set availability on the data manager, which may, for example, be a docking station; for example, a maximum of four hosts can be connected and the docking station stores notional display configurations for each possible number of connected hosts. When a host is connected, the docking station informs it of the size and location of its notional display during initial connection handshaking and updates previously-connected hosts accordingly.
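A sketch of the third option, in which a docking station keeps a stored notional-display layout for each possible number of connected hosts and hands each host its rectangle on connection; the layout table and resolution below are invented for illustration.

```python
# (x, y, width, height) rectangles for a 1920x1080 display device.
LAYOUTS = {
    1: [(0, 0, 1920, 1080)],
    2: [(0, 0, 960, 1080), (960, 0, 960, 1080)],
    3: [(0, 0, 960, 540), (960, 0, 960, 540), (0, 540, 1920, 540)],
    4: [(0, 0, 960, 540), (960, 0, 960, 540),
        (0, 540, 960, 540), (960, 540, 960, 540)],
}

def assign_displays(num_hosts):
    """Give host i the i-th rectangle of the stored configuration."""
    return {f"host{i}": rect for i, rect in enumerate(LAYOUTS[num_hosts])}

print(assign_displays(2))  # each connected host learns its notional display
```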
It should be understood that these heuristics are examples only and do not define or limit the scope of the claims.
Upon receiving the locations and sizes of the required notional displays, the input engine [51] sends further metadata to the output engine [55] to notify it of these attributes and the output engine [55] creates the appropriate number of notional displays [56] as hereinbefore described.
If the locations and sizes given for the notional displays [56] result in overlaps, such that two notional displays [56] are attempting to occupy the same area on the physical display device [16], the output engine will apply heuristics to determine which notional display [56] will be positioned ‘behind’ the other. Example heuristics include:
• The smaller notional display [56] is positioned in front of the larger.
• The host [14] connected first has priority and the notional display [56] associated with that host [14] will appear in front.
• Both notional displays [56] are reduced in size until they no longer overlap.
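The first example heuristic above might be realised as in this sketch, where the smaller notional display is ordered in front when two rectangles intersect; the rectangle representation and tie-breaking are assumptions.

```python
def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def z_order(displays):
    """Return display ids back-to-front: larger areas are drawn behind."""
    return sorted(displays, key=lambda d: displays[d][2] * displays[d][3],
                  reverse=True)

displays = {"main": (0, 0, 1920, 1080), "gaze": (600, 400, 256, 256)}
if overlaps(displays["main"], displays["gaze"]):
    order = z_order(displays)  # ['main', 'gaze'] -> smaller one in front
```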
Other heuristics may occur to the reader and the above examples may be combined in any way appropriate to the specific embodiment.
The output engine [55] constantly fetches pixel data from the virtual output buffers [54] and creates an interleaved stream comprising a stream of display data for each notional display [56]. This is then sent to the display device [16], where the streams are blended as hereinbefore described. In the same way as the movement of a cursor [13] in Figure 4, if only the display data being produced by one host [14] has changed, there is no need for the data manager [15] to interfere with or re-process the display data associated with any of the other hosts [14] or notional displays [56]. The notional displays [56] can also be moved and reconfigured as appropriate by sending new location metadata to the input engine [51], which will signal the output engine [55] appropriately.
Although only a few particular embodiments have been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims. For example, hardware aspects may be implemented as software where appropriate and vice versa. Furthermore, instructions to implement the method may be provided on a computer-readable medium. For example, although the input engine [22], the blending engine [24], the buffers [25] and the raster engine [26], which form a display data controller, are described as being within the display device [16], the display data controller could be a separate device located between the data manager and a conventional display device, conveniently co-located with the conventional display device.

Claims (4)

Claims
1. A method of managing display data, the method comprising:
at a data manager:
receiving one or more input streams of display data forming an image having a cursor thereon from at least one source;
identifying an area of the image at a location having the cursor, being an area of the display with which a user of a display device on which the image is to be displayed is interacting, by: capturing a direction of the user’s gaze using cameras connected to the display device; calculating the location on the display device on which the user’s gaze is focussed; and using the location to identify the area having the cursor;
isolating display data from the input stream at the area to produce cursor display data, the remaining display data of the image forming image display data, wherein the cursor display data has a lower latency than the image display data;
identifying a first notional display at the area where the cursor display data is to be displayed on the display device;
identifying a second notional display on the display device at which the image display data is to be displayed, the first and second notional displays overlapping each other;
compressing the image display data;
creating a first output stream comprising the cursor display data;
producing a second output stream comprising the compressed image display data, the at least first and second output streams being associated with a display device for display on the display device;
providing instructions indicating how the cursor and image display data in the first and second output streams is to be combined for display on the display device, wherein the instructions indicate that the cursor display data is to be displayed on the first notional display on the display device, and the image display data is to be displayed on the second notional display on the display device;
outputting the at least first and second output streams of display data onto a multi stream link for transmittal via the multi stream link to a controller for the display device, such that the first output stream is sent to the first notional display and the second output stream is sent to the second notional display as if they were separate displays but addressed to the display device of which the notional displays are part; and
outputting the instructions for transmittal to the controller for the display device;
at a display device:
receiving at least the first output stream comprising the cursor display data and the second output stream comprising the compressed image display data over the multi stream link from the display control device;
receiving the instructions indicating how the cursor and image display data is to be combined for display on the display device;
decompressing the compressed image display data;
combining the cursor and decompressed image display data from the at least first and second streams according to the instructions into combined display data; and
forwarding the combined display data for display on the display device.
2. A method of managing display data according to claim 1, wherein the cursor display data is also compressed but the image display data is compressed more than the cursor display data prior to being output in the respective output stream.
3. A display system configured to perform a method according to either claim 1 or claim 2.
4. A display system according to claim 3, comprising a data manager and a display device having a display screen for displaying the combined display data.
GB1509290.1A 2015-05-29 2015-05-29 Managing display data Active GB2538797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1509290.1A GB2538797B (en) 2015-05-29 2015-05-29 Managing display data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1509290.1A GB2538797B (en) 2015-05-29 2015-05-29 Managing display data

Publications (3)

Publication Number Publication Date
GB201509290D0 GB201509290D0 (en) 2015-07-15
GB2538797A GB2538797A (en) 2016-11-30
GB2538797B (en) 2019-09-11

Family

ID: 53677440

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1509290.1A Active GB2538797B (en) 2015-05-29 2015-05-29 Managing display data

Country Status (1)

Country Link
GB (1) GB2538797B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11151749B2 (en) 2016-06-17 2021-10-19 Immersive Robotics Pty Ltd. Image compression method and apparatus
US11153604B2 (en) 2017-11-21 2021-10-19 Immersive Robotics Pty Ltd Image compression for digital reality
US11553187B2 (en) 2017-11-21 2023-01-10 Immersive Robotics Pty Ltd Frequency component selection for image compression

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2018218182B2 (en) 2017-02-08 2022-12-15 Immersive Robotics Pty Ltd Antenna control for mobile device communication
CN112965573B (en) * 2021-03-31 2022-05-24 重庆电子工程职业学院 Computer interface conversion device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090121849A1 (en) * 2007-11-13 2009-05-14 John Whittaker Vehicular Computer System With Independent Multiplexed Video Capture Subsystem
US20100045791A1 (en) * 2008-08-20 2010-02-25 Honeywell International Inc. Infinite recursion of monitors in surveillance applications
US20110145879A1 (en) * 2009-12-14 2011-06-16 Qualcomm Incorporated Decomposed multi-stream (dms) techniques for video display systems
US20140355664A1 (en) * 2013-05-31 2014-12-04 Cambridge Silicon Radio Limited Optimizing video transfer

Also Published As

Publication number Publication date
GB201509290D0 (en) 2015-07-15
GB2538797A (en) 2016-11-30
