EP1763974A1 - Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences - Google Patents

Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences

Info

Publication number
EP1763974A1
Authority
EP
European Patent Office
Prior art keywords
color
chromaticity
luminance
pixel
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05756729A
Other languages
German (de)
French (fr)
Inventor
Srinivas Gutta
Nevenka Dimitrova
Mark J. Elting
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Publication of EP1763974A1 publication Critical patent/EP1763974A1/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Colour balance circuits, e.g. white balance circuits or colour temperature control
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/165Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • This invention relates to production and setting of ambient lighting effects using multiple light sources, and typically based on, or associated with, video content, such as from a video display or display signal. More particularly, it relates to a method to account for user preferences in extracting dominant color information, in conjunction with perceptual rules, from sampled or subsampled video content in real time, and to perform color mapping transformations from the color space of the video content to that which best allows driving a plurality of ambient light sources.
  • Engineers have long sought to broaden the sensory experience obtained when consuming video content, such as by enlarging viewing screens and projection areas, modulating sound for realistic 3-dimensional effects, and enhancing video images, including broader video color gamuts, resolution, and picture aspect ratios, such as with high definition (HD) digital television and video systems.
  • Film, TV, and video producers also try to influence the experience of the viewer using visual and auditory means, such as by clever use of color, scene cuts, viewing angles, peripheral scenery, and computer-assisted graphical representations. This would include theatrical stage lighting as well. Lighting effects, for example, are usually scripted - synchronized with video or play scenes - and reproduced with the aid of a machine or computer programmed with the appropriate scene scripts encoded with the desired schemes.
  • Philips Netherlands and other companies have disclosed means for changing ambient or peripheral lighting to enhance video content for typical home or business applications, using separate light sources far from the video display, and for many applications, some sort of advance scripting or encoding of the desired lighting effects.
  • Ambient lighting added to a video display or television has been shown to reduce viewer fatigue and improve realism and depth of experience.
  • Color video is founded upon the principles of human vision, and well known trichromatic and opponent channel theories of human vision have been incorporated into our understanding of how to influence the eye to see desired colors and effects which have high fidelity to an original or intended image. In most color models and spaces, three dimensions or coordinates are used to describe human visual experience.
  • Color video relies absolutely on metamerism, which allows production of color perception using a small number of reference stimuli, rather than actual light of the desired color and character. In this way, a whole gamut of colors is reproduced in the human mind using a limited number of reference stimuli, such as well known RGB (red, green, blue) tristimulus systems used in video reproduction worldwide. It is well known, for example, that nearly all video displays show yellow scene light by producing approximately equal amounts of red and green light in each pixel or picture element. The pixels are small in relation to the solid angle they subtend, and the eye is fooled into perceiving yellow; it does not perceive the green or red that is actually being broadcast. There exist many color models and ways of specifying colors, including well known CIE systems.
  • the human visual system is endowed with qualities of compensation and discernment whose understanding is necessary to design any video system.
  • Color in humans can occur in several modes of appearance, among them, object mode and illuminant mode.
  • In object mode, the light stimulus is perceived as light reflected from an object illuminated by a light source.
  • In illuminant mode, the light stimulus is seen as a source of light.
  • Illuminant mode includes stimuli in a complex field that are much brighter than other stimuli. It does not include stimuli known to be light sources, such as video displays, whose brightness or luminance is at or below the overall brightness of the scene or field of view so that the stimuli appear to be in object mode.
  • Video reproduction can take many forms. Spectral color reproduction allows exact reproduction of the spectral power distributions of the original stimuli, but this is not realizable in any video reproduction that uses three primaries. Exact color reproduction can replicate human visual tristimulus values, creating a metameric match to the original, but overall viewing conditions for the picture and the original scene must be similar to obtain a similar appearance. Overall conditions for the picture and original scene include the angular subtense of the picture, the luminance and chromaticity of the surround, and glare. One reason that exact color reproduction often cannot be achieved is because of limitations on the maximum luminance that can be produced on a color monitor.
  • Colorimetric color reproduction provides a useful alternative where tristimulus values are proportional to those in the original scene. Chromaticity coordinates are reproduced exactly, but with proportionally reduced luminances. Colorimetric color reproduction is a good reference standard for video systems, assuming that the original and the reproduced reference whites have the same chromaticity, the viewing conditions are the same, and the system has an overall gamma of unity. Equivalent color reproduction, where chromaticity and luminances match the original scene, cannot be achieved because of the limited luminance generated in video displays.
  • whites and neutral grays are typically reproduced with the chromaticity of CIE standard daylight illuminant D65.
  • the system is mimicking the human visual system, which inherently adapts perceptions so that white surfaces always appear the same, whatever the chromaticity of the illuminant, so that a white piece of paper will appear white whether it is found on a bright sunlit day at the beach or in an incandescent-lit indoor scene.
  • white balance adjustment usually is made by gain controls on the R, G, and B channels.
  • the light output of a typical color receiver is typically not linear, but rather follows a power-law relationship to applied video voltages.
  • the light output is proportional to the video-driving voltage raised to the power gamma, where gamma is typically 2.5 for a color CRT (cathode ray tube), and 1.8 for other types of light sources. Compensation for this factor is made via three primary gamma correctors in camera video processing amplifiers, so that the primary video signals that are encoded, transmitted and decoded are in fact not R, G, and B, but R^(1/γ), G^(1/γ), and B^(1/γ).
  • Colorimetric color reproduction requires that the overall gamma for video reproduction - including camera, display, and any gamma-adjusting electronics - be unity, but when corresponding color reproduction is attempted, the luminance of the surround takes precedence. For example, a dim surround requires a gamma of about 1.2, and a dark surround requires a gamma of about 1.5 for optimum color reproduction. Gamma is an important implementation issue for RGB color spaces. Most color reproduction encoding uses standard RGB color spaces, such as sRGB, ROMM RGB, Adobe RGB 98, Apple RGB, and video RGB spaces such as that used in the NTSC standard.
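  • By way of illustration, the following is a minimal sketch (in Python, assuming NumPy) of the power-law behavior and surround-dependent system gamma described above; the function names are hypothetical, and only the gamma values (2.5 for a CRT, roughly 1.2 for a dim and 1.5 for a dark surround) come from the text.

```python
import numpy as np

def gamma_encode(linear_rgb, gamma=2.5):
    """Camera-side correction: raise linear light (0..1) to 1/gamma so a
    display with output ~ signal**gamma cancels to unity overall."""
    return np.clip(linear_rgb, 0.0, 1.0) ** (1.0 / gamma)

def display_response(video_signal, gamma=2.5):
    """Model a CRT-like display: light output ~ signal ** gamma."""
    return np.clip(video_signal, 0.0, 1.0) ** gamma

def surround_gamma(linear_rgb, overall_gamma=1.2):
    """Apply a net system gamma other than unity, e.g. ~1.2 for a dim
    surround or ~1.5 for a dark surround."""
    return np.clip(linear_rgb, 0.0, 1.0) ** overall_gamma

# Round trip: encoding with 1/2.5 then displaying with 2.5 is ~identity.
scene = np.array([0.1, 0.5, 0.9])
assert np.allclose(display_response(gamma_encode(scene)), scene)
```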
  • an image is captured into a sensor or source device space, which is device and image specific. It may be transformed into an unrendered image space, which is a standard color space describing the original's colorimetry (see Definitions section).
  • RGB color spaces are rendered image spaces.
  • source and output spaces created by cameras and scanners are not CIE-based color spaces, but spectral spaces defined by spectral sensitivities and other characteristics of the camera or scanner.
  • Rendered image spaces are device-specific color spaces based on the colorimetry of real or virtual device characteristics. Images can be transformed into rendered spaces from either rendered or unrendered image spaces. The complexity of these transforms varies, and can include complicated image dependent algorithms. The transforms can be non-reversible, with some information of the original scene encoding discarded or compressed to fit the dynamic range and gamut of a specific device.
  • RGB - Red, Green, Blue.
  • ISO RGB - defined in ISO Standard 17321.
  • images are converted into a rendered color space for either archiving or data transfer, including video signals. Converting from one rendered image or color space to another can cause severe image artifacts. The more mismatched the gamuts and white points are between two devices, the stronger the negative effects.
  • One shortcoming in prior art ambient light display systems is that extraction from video content of representative colors for ambient broadcast can be problematic. For example, color-averaging of pixel chromaticities often results in grays, browns, or other color casts that are not perceptually representative of a video scene or image. Colors derived from simple averaging of chromaticities often look smudged and wrongly chosen, particularly when contrasted to an image feature such as a bright fish, or a dominant background such as a blue sky.
  • the ambient lighting system acting upon that indication can produce by default another color (e.g., a nearest color) in its light space that it is capable of producing (e.g., purple).
  • this color chosen for production may not be preferred, as it may not be perceptually correct or pleasing.
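  • To illustrate the fallback just described, here is a minimal sketch that picks the nearest producible chromaticity when a requested dominant color lies outside the ambient light source's gamut; the candidate palette, the Euclidean distance in CIE x-y, and the function name are all assumptions, and a perceptually uniform space (e.g., CIE LAB) would arguably serve a perceptual rule better.

```python
import numpy as np

# Hypothetical palette of CIE (x, y) chromaticities the ambient light
# source can actually produce; a real system would use its measured gamut.
PRODUCIBLE = np.array([
    [0.640, 0.330],   # red-ish primary
    [0.300, 0.600],   # green-ish primary
    [0.150, 0.060],   # blue-ish primary
    [0.313, 0.329],   # near-white (D65-like)
])

def nearest_producible(target_xy):
    """Return the producible chromaticity closest to the requested
    dominant color, e.g. a purple request falls back to the nearest
    mixable color as described above."""
    d = np.linalg.norm(PRODUCIBLE - np.asarray(target_xy), axis=1)
    return PRODUCIBLE[np.argmin(d)]

print(nearest_producible([0.25, 0.10]))   # a purple-ish request
```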
  • ambient light triggering during dark scenes is also often garish, too bright, and not possessed of a chromaticity which seems to match that of the scene content.
  • Ambient light triggering during light scenes can result in production of an ambient color that appears weak and has insufficient color saturation.
  • some aspects of a scene, e.g., a blue sky, might be preferable to use for dominant color extraction to inform an ambient lighting system, while others, e.g., cloud cover, might be less preferable.
  • Another problem in the prior art is that newly appearing video scene features are often not represented or are under-represented in dominant color extraction and selection.
  • ambient lighting is often set without taking into account user preferences as to the brightness, color, time-development, and general character of the ambient light produced. For example, some users may prefer a soft, slow moving exposition of ambient lighting effects, with subdued colors and slow changes, while others might prefer fast moving, bright ambient broadcasts which might reflect quite graphically every change in video content (e.g., a newly appearing feature like a fish). This is not easy to achieve, and there does not exist in the prior art a method for imposing perceptual rules to alleviate these problems.
  • Methods given for various embodiments of the invention include using pixel level statistics or the functional equivalent to determine or extract one or more dominant colors in a way which presents as little computational load as possible, but at the same time provides for pleasing and appropriate chromaticities selected to be dominant colors in accordance with perceptual rules.
  • the invention relates to a method for dominant color extraction from video content encoded in a rendered color space to produce, using perceptual rules, a dominant color for emulation by an ambient light source.
  • Possible method steps include: [1] Performing dominant color extraction from pixel chromaticities from the video content in the rendered color space to produce a dominant color by extracting any of: [a] a mode of the pixel chromaticities; [b] a median of the pixel chromaticities; [c] a weighted average by chromaticity of the pixel chromaticities; [d] a weighted average of the pixel chromaticities using a pixel weighting function that is a function of any of pixel position, chromaticity, and luminance; [2] Further deriving the chromaticity of the dominant color in accordance with a perceptual rule, the perceptual rule chosen from any of: [a] a simple chromaticity transform; [b] a weighted average using the pixel weighting function so further formulated as to exhibit an influence from scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in the video content (a sketch of extraction options [1a]-[1d] is given below).
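  • A minimal sketch of extraction options [1a]-[1d], assuming pixel chromaticities arrive as an N x 2 array of CIE (x, y) coordinates; the coarse rounding used to obtain a mode and the function names are illustrative assumptions, not the specification's algorithms.

```python
import numpy as np

def dominant_mode(chroma, decimals=1):
    """[1a] Mode: the most frequent chromaticity after coarse rounding
    (a crude stand-in for the quantization discussed below)."""
    q = np.round(chroma, decimals)
    values, counts = np.unique(q, axis=0, return_counts=True)
    return values[np.argmax(counts)]

def dominant_median(chroma):
    """[1b] Median of the pixel chromaticities, per coordinate."""
    return np.median(chroma, axis=0)

def dominant_weighted(chroma, weights=None):
    """[1c]/[1d] Weighted average; `weights` can come from a pixel
    weighting function W(position, chromaticity, luminance)."""
    w = np.ones(len(chroma)) if weights is None else np.asarray(weights, float)
    return (chroma * w[:, None]).sum(axis=0) / w.sum()

# Illustrative data: 1000 pixel chromaticities.
chroma = np.random.default_rng(0).uniform(0.2, 0.5, size=(1000, 2))
print(dominant_mode(chroma), dominant_median(chroma), dominant_weighted(chroma))
```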
  • the pixel chromaticities can be quantized and this can be done by a number of methods (see Definitions section), where the goal is to ease the computational burden by seeking a reduction in possible color states, such as resulting from assignment of a larger number of chromaticities (e.g., pixel chromaticities) to a smaller number of assigned chromaticities or colors; or a reduction in pixel numbers by a selection process that picks out selected pixels; or binning to produce representative pixels or superpixels (a quantization sketch follows after the next two points).
  • the superpixel thus produced can be of a size, orientation, shape, or location formed in conformity with an image feature.
  • Assigned colors used in the quantization process can be selected to be a regional color vector that is not necessarily in the rendered color space, such as in the second rendered color space.
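  • The sketch below illustrates the quantization ideas above: assigning pixel chromaticities to a small set of assigned colors (playing the role of regional color vectors), and binning a rectangular block of pixels into one representative superpixel. The nearest-color assignment rule and the simple mean are assumptions.

```python
import numpy as np

def quantize_to_assigned(chroma, assigned):
    """Assign each pixel chromaticity (N x 2) to the nearest assigned
    color (K x 2), returning per-pixel indices and the resulting
    distribution; `assigned` need not lie in the rendered color space."""
    d = np.linalg.norm(chroma[:, None, :] - assigned[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    return idx, np.bincount(idx, minlength=len(assigned))

def bin_superpixel(frame_chroma, r0, r1, c0, c1):
    """Bin an H x W x 2 block of chromaticities into one superpixel;
    the block bounds could instead follow an image feature."""
    return frame_chroma[r0:r1, c0:c1].reshape(-1, 2).mean(axis=0)
```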
  • Other embodiments of the method include one in which the simple chromaticity transform chooses a chromaticity found in the second rendered color space used for ambient light production.
  • the extended dominant color extraction can be repeated individually for different scene features in the video content, forming a plurality of dominant colors and step [1] can be repeated where each of the plurality of dominant colors is designated as a pixel chromaticity. Then, if desired, the above step [1] (dominant color extraction) can be repeated separately for pixel chromaticities in a newly appearing scene feature.
  • Quantizing of at least some pixel chromaticities from the video content in the rendered color space can be undertaken to form a distribution of assigned colors, and during step [1], at least some of the pixel chromaticities can be obtained from the distribution of assigned colors.
  • the quantizing can comprise binning the pixel chromaticities into at least one superpixel.
  • At least one of the assigned colors can be a regional color vector that is not necessarily in the rendered color space, such as a regional color vector lying in the second rendered color space used to drive the ambient light source.
  • the method can also additionally comprise establishing at least one color of interest in the distribution of assigned colors and then extracting pixel chromaticities assigned thereto to derive a true dominant color to be designated ultimately as the dominant color.
  • the dominant color can comprise, in reality, a palette of dominant colors, each derived from applying the method.
  • the method can also be performed after quantizing the rendered color space, namely, quantizing at least some pixel chromaticities from the video content in the rendered color space to form a distribution of assigned colors, so that the dominant color extraction of step [1] draws upon the distribution of assigned colors (e.g., [a] a mode of the distribution of assigned colors, etc.).
  • the pixel weighting function can be so formulated as to provide darkness support by: [4] assessing the video content to establish that a scene brightness in the scene content is low; and [5] performing any of: [a] using the pixel weighting function so further formulated to reduce weighting of assigned colors attributable to bright pixels; and [b] broadcasting a dominant color obtained using reduced luminance relative to that which would otherwise be produced.
  • the pixel weighting function can be so formulated as to provide color support by: [6] assessing the video content to establish that a scene brightness in the scene content is high; and [7] performing any of: [a] using the pixel weighting function so further formulated to reduce weighting of assigned colors attributable to bright pixels; and [b] performing step [2][c] (a sketch of both supports follows below).
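  • A minimal sketch of darkness support (steps [4]-[5]) and color support (steps [6]-[7]); the scene-brightness thresholds and weighting factors below are illustrative assumptions, not values from the specification.

```python
import numpy as np

DARK, BRIGHT = 0.2, 0.8     # illustrative scene-brightness thresholds

def support_weights(luma, scene_brightness):
    """Pixel weighting function W providing darkness/color support.

    [4]/[5a] Dark scene: down-weight bright pixels so a few bright
    features cannot trigger garish ambient light.
    [6]/[7a] Bright scene: likewise de-emphasize bright (often
    desaturated) pixels so the extracted color is not weak and washed out.
    """
    w = np.ones_like(luma, dtype=float)
    if scene_brightness < DARK or scene_brightness > BRIGHT:
        w = np.where(luma > 0.7, 0.1, w)    # illustrative down-weighting
    return w

def ambient_luminance(dominant_luma, scene_brightness):
    """[5b] For dark scenes, broadcast at reduced luminance."""
    return dominant_luma * (0.3 if scene_brightness < DARK else 1.0)
```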
  • the other steps can be altered accordingly to use assigned colors.
  • the method can also optionally comprise [0] Decoding the video content in the rendered color space into a plurality of frames, and quantizing at least some pixel chromaticities from the video content in the rendered color space to form a distribution of assigned colors.
  • This can be assisted by [3c] matrix transformations of primaries of the rendered color space and second rendered color space to the unrendered color space using first and second tristimulus primary matrices; and deriving a transformation of the color information into the second rendered color space by matrix multiplication of the primaries of the rendered color space, the first tristimulus matrix, and the inverse of the second tristimulus matrix.
  • Once a dominant color is chosen from the distribution of assigned colors, one can then go backwards, so to speak, to obtain actual pixel chromaticities to refine the dominant color. For example, as mentioned, one can establish at least one color of interest in the distribution of assigned colors and extract pixel chromaticities assigned thereto to derive a true dominant color to be designated as the dominant color.
  • the assigned colors can be a crude approximation of video content
  • the true dominant color can provide the correct chromaticity for ambient distribution, while still saving on computation that would otherwise be required.
  • the pixel chromaticities of step [1] can be obtained from an extraction region of any shape, size, or position, and one can broadcast ambient light of the dominant color from the ambient light source adjacent the extraction region.
  • the unrendered color space that can be used for transformation to the ambient second rendered color space can be one of CIE XYZ; ISO RGB defined in ISO Standard 17321; Photo YCC; CIE LAB; or any other unrendered space.
  • the steps taken to perform dominant color extraction and to impose perceptual rules can be substantially synchronous with the video signal, with ambient light broadcast from or around the video display using the color information in the second rendered color space.
  • the instant teachings take into account user preferences, including disclosure of a method for dominant color extraction from video content encoded in a rendered color space to produce, using perceptual rules in accordance with a user preference, a dominant color for emulation by an ambient light source, comprising:
  • [III] a temporal delivery perceptual rule chosen from at least one of: [a] a decrease in the rate of change in at least one of luminance and chromaticity of the dominant color; [b] an increase in the rate of change in at least one of luminance and chromaticity of the dominant color; [IV] a spatial extraction perceptual rule chosen from at least one of: [a] giving greater weight in the pixel weighting function to scene content containing newly appearing features; [b] giving lesser weight in the pixel weighting function to scene content containing newly appearing features; [c] giving greater weight in the pixel weighting function to scene content from a selected extraction region; and [d] giving lesser weight in the pixel weighting function to scene content from a selected extraction region; and then transforming the luminance and chromaticity of the preferred ambient broadcast from the rendered color space to a second rendered color space so formed as to allow driving the ambient light source (a sketch of a temporal delivery rule follows below).
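  • A minimal sketch of a temporal delivery rule ([III][a]/[b]): the per-frame change of a broadcast luminance or chromaticity coordinate is slew-limited, with the allowed rate serving as the user preference knob (small for a soft, slow exposition; large for aggressive tracking of scene changes). The filter form is an assumption.

```python
def temporal_delivery(prev, target, rate):
    """Limit the per-frame change of a luminance or chromaticity value."""
    step = max(-rate, min(rate, target - prev))
    return prev + step

# Example: luminance eased toward a new dominant-color luminance.
lum = 0.2
for _ in range(5):
    lum = temporal_delivery(lum, target=0.9, rate=0.1)
print(round(lum, 2))    # 0.7 after five frames of slew-limited change
```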
  • Explicit indicated user preferences can be indicated by any of: [1] repeated up and down varying of a value selected by a user-operated control; [2] an extreme value selected by a user-operated control; [3] a high rate of change in a value selected by a user-operated control; [4] light received by a light sensor in an ambient space; [5] sound received by a sound sensor in an ambient space; [6] vibration received by a vibration sensor in an ambient space; [7] a choice made in a graphical user interface; [8] a choice made on a user-operated control; [9] a sustained actuation call on a user-operated control; [10] repeated actuation calls on a user-operated control; [11] pressure sensing by a pressure sensor inside a user-operated control device; [12] motion sensing by a motion sensor inside a user-operated control device; and [13] any of meta-data, auxiliary data, or sub-code data.
  • the degree to which darkness support, color support, and extended extraction steps given above are executed can be modulated in response to explicit indicated user preferences.
  • FIG. 1 shows a simple front surface view of a video display showing color information extraction regions and associated broadcasting of ambient light from six ambient light sources according to the invention
  • FIG. 2 shows a downward view - part schematic and part cross-sectional - of a room in which ambient light from multiple ambient light sources is produced using the invention
  • FIG. 3 shows a system according to the invention to extract color information and effect color space transformations to allow driving an ambient light source;
  • FIG. 4 shows an equation for calculating average color information from a video extraction region
  • FIG. 5 shows a prior art matrix equation to transform rendered primaries RGB into unrendered color space XYZ
  • FIGS. 6 and 7 show matrix equations for mapping video and ambient lighting rendered color spaces, respectively, into unrendered color space
  • FIG. 8 shows a solution using known matrix inversion to derive ambient light tristimulus values R'G'B' from unrendered color space XYZ;
  • FIGS. 9-11 show prior art derivation of tristimulus primary matrix M using a white point method;
  • FIG. 12 shows a system similar to that shown in FIG. 3, additionally comprising a gamma correction step for ambient broadcast;
  • FIG. 13 shows a schematic for a general transformational process used in the invention
  • FIG. 14 shows process steps for acquiring transformation matrix coefficients for an ambient light source used by the invention
  • FIG. 15 shows process steps for estimated video extraction and ambient light reproduction using the invention
  • FIG. 16 shows a schematic of video frame extraction according to the invention
  • FIG. 17 shows process steps for abbreviated chrominance assessment according to the invention
  • FIG. 18 shows an extraction step as shown in FIGS. 3 and 12, employing a frame decoder, setting a frame extraction rate and performing an output calculation for driving an ambient light source;
  • FIGS. 19 and 20 show process steps for color information extraction and processing for the invention
  • FIG. 21 shows a schematic for a general process according to the invention, including dominant color extraction and transformation to an ambient lighting color space
  • FIG. 22 shows schematically one possible method for quantizing pixel chromaticities from video content by assigning the pixel chromaticities to an assigned color
  • FIG. 23 shows schematically one example of quantizing by binning pixel chromaticities into a superpixel
  • FIG. 24 shows schematically a binning process similar to that of FIG. 23, but where size, orientation, shape, or location of the superpixel can be formed in conformity with an image feature
  • FIG. 25 shows regional color vectors and their colors or chromaticity coordinates on a standard cartesian CIE color map, where one color vector lies outside the gamut of colors obtainable by PAL/SECAM, NTSC, and Adobe RGB color production standards;
  • FIG. 26 shows a close-up of a portion of the CIE plot of FIG. 25, and additionally showing pixel chromaticities and their assignment to regional color vectors;
  • FIG. 27 shows a histogram that demonstrates a mode of an assigned color distribution according to one possible method of the invention
  • FIG. 28 shows schematically a median of an assigned color distribution according to one possible method of the invention.
  • FIG. 29 shows a mathematical summation for a weighted average by chromaticity of assigned colors according to one possible method of the invention
  • FIG. 30 shows a mathematical summation for a weighted average by chromaticity of assigned colors using a pixel weighting function according to one possible method of the invention
  • FIG. 31 gives a schematic representation to show establishing a color of interest in a distribution of assigned colors and then extracting pixel chromaticities assigned thereto to derive a true dominant color to be designated as a dominant color;
  • FIG. 32 shows schematically that dominant color extraction according to the invention can be performed numerous times or separately in parallel to provide a palette of dominant colors;
  • FIG. 33 shows the simple front surface view of a video display as shown in FIG. 1, showing an example of unequal weighting given to a preferred spatial region for the methods demonstrated in FIGS. 29 and 30;
  • FIG. 34 gives a simple front surface view of a video display as shown in FIG. 33, showing schematically an image feature extracted for the purpose of dominant color extraction according to the invention
  • FIG. 35 gives a schematic representation of another embodiment of the invention whereby video content decoded into a set of frames allows that a dominant color of one frame is obtained at least in part by relying upon a dominant color from a previous frame;
  • FIG. 36 shows process steps for an abridged procedure for choosing a dominant color according to the invention.
  • FIG. 37 shows a simple front surface view of a video display portraying scene content with a newly appearing feature to illustrate dominant color extraction with darkness support
  • FIG. 38 shows a simple front surface view of a video display portraying scene content to illustrate dominant color extraction with color support
  • FIG. 39 shows schematically three illustrative categories into which perceptual rules according to the instant invention can be classified.
  • FIG. 40 shows schematically a simple chromaticity transform as a functional operator
  • FIG. 41 shows schematically a series of possible steps for dominant color extraction employing an average calculated using a pixel weighting function according to the invention to execute two illustrative possible perceptual rules
  • FIG. 42 shows schematically a series of possible steps for dominant color extraction employing an average calculated using a pixel weighting function for extended dominant color extraction according to the invention to execute two illustrative possible perceptual rules;
  • FIG. 43 shows possible functional forms for a pixel weighting function used according to the invention.
  • FIG. 44 shows schematically possible functional groups to perform dominant color extraction using perceptual rules in accordance with user preferences according to the invention so as to produce a preferred ambient broadcast
  • FIG. 45 shows symbolically some possible components, methods, and signal sources to communicate user preferences
  • FIGS. 46 and 47 show cartesian plots of a number of waveforms representing luminance as a function of time, for various luminance perceptual rules following different user preferences;
  • FIG. 48 shows schematically a number of simple chromaticity transforms effecting a number of possible chromaticity perceptual rules according to user preferences
  • FIG. 49 shows schematically how the quality or degree of execution of two perceptual rules as shown in FIG. 41 can be altered by user preferences
  • FIG. 50 shows schematically the extraction of video meta data from an audio-video signal to affect perceptual rules according to the invention
  • FIG. 51 shows cartesian plots of a number of waveforms representing luminance or chromaticity as a function of time, for various temporal delivery rules following different user preferences;
  • FIG. 52 gives a simple front surface view of a video display as shown in FIG. 34, showing schematically an image feature extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences;
  • FIG. 53 gives a simple front surface view of a video display as shown in FIG. 52, but showing schematically a center region extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences.
  • Ambient light source shall, in the appended claims, include any lighting production circuits or drivers needed to effect light production.
  • Ambient space shall connote any and all material bodies or air or space external to a video display unit.
  • Assigned color distribution - shall denote a set of colors chosen to represent (e.g., for computational purposes) the full ranges of pixel chromaticities found in a video image or in video content.
  • - Bright - when referring to pixel luminance shall denote either or both of: [1] a relative characteristic, that is, brighter than other pixels, or [2] an absolute characteristic, such as a high brightness level. This might include bright red in an otherwise dark red scene, or inherently bright chromaticities such as whites and greys.
  • - Chromaticity transform - shall refer to a substitution of one chromaticity for another, as a result of applying a perceptual rule, as described herein.
  • - Chromaticity / Chrominance - shall, in the context of driving an ambient light source, denote a mechanical, numerical, or physical way of specifying the color character of light produced, such as CIE chromaticity, and shall not imply a particular methodology, such as that used in NTSC or PAL television broadcasting.
  • - Colored - when referring to pixel chrominance shall denote either or both of: [1] a relative characteristic, that is, exhibiting higher color saturation than other pixels, or [2] an absolute characteristic, such as a color saturation level.
  • Color information - shall include either or both of chrominance and luminance, or functionally equivalent quantities.
  • Computer - shall include not only all processors, such as CPU's (Central Processing Units) that employ known architectures, but also any intelligent device that can allow coding, decoding, reading, processing, execution of setting codes or change codes, such as digital optical devices, or analog electrical circuits that perform the same functions.
  • - Dark - when referring to pixel luminance shall denote either or both of: [1] a relative characteristic, that is, darker than other pixels, or [2] an absolute characteristic, such as a low brightness level.
  • Dominant color - shall denote any chromaticity chosen to represent video content for the purpose of ambient broadcast, including any colors chosen using illustrative methods disclosed herein.
  • - Explicit indicated user preferences - shall include any and all inputs that communicate a user preference to be used to influence the character and effect of perceptual rules that affect or effect a preferred ambient broadcast, including: [1] meta-data, auxiliary data, or sub-code data associated with video content or an audio-video signal; [2] data obtained through a graphical user interface, whether associated with video content or displayed on a separate display; [3] data obtained from a control panel, remote control pad or other peripheral device, including any from existing control function, e.g., a volume control on a video display; or [4] data obtained from any transducer in ambient space (AO) about the video display, such as a voice activated, sound-measuring, or other device.
  • the user preference does not have to stipulate specifically how to influence the character and effect of the perceptual rules that affect or effect a preferred ambient broadcast.
  • Extended (dominant color) extraction - shall refer to any process for dominant color extraction undertaken after a prior process has eliminated or reduced the influence of majority pixels or other pixels in a video scene or video content, such as when colors of interest are themselves used for further dominant color extraction.
  • Extraction region - shall include any subset of an entire video image or frame, or more generally any or all of a video region or frame that is sampled for the purpose of dominant color extraction.
  • Frame - shall include time-sequential presentations of image information in video content, consistent with the use of the term frame in industry, but shall also include any partial (e.g., interlaced) or complete image data used to convey video content at any moment or at regular intervals.
  • Goniochromatic - shall refer to the quality of giving different color or chromaticity as a function of viewing angle or angle of observation, such as produced by iridescence.
  • Goniophotometric - shall refer to the quality of giving different light intensity, transmission and/or color as a function of viewing angle or angle of observation, such as found in pearlescent, sparkling or retroreflective phenomena.
  • - Interpolate - shall include linear or mathematical interpolation between two sets of values, as well as functional prescriptions for setting values between two known sets of values.
  • - Light character - shall mean, in the broad sense, any specification of the nature of light such as produced by an ambient light source, including all descriptors other than luminance and chrominance, such as the degree of light transmission or reflection; or any specification of goniophotometric qualities, including the degree to which colors, sparkles, or other known phenomena are produced as a function of viewing angles when observing an ambient light source; a light output direction, including directionality as afforded by specifying a Poynting or other propagation vector; or specification of angular distribution of light, such as solid angles or solid angle distribution functions.
  • Luminance - shall denote any parameter or measure of brightness, intensity, or equivalent measure, and shall not imply a particular method of light generation or measurement, or psycho-biological interpretation.
  • Majority pixels - shall refer to pixels conveying similar color information, such as saturation, luminance, or chromaticity in a video scene. Examples include pixels which are set to appear dark (darkness in a scene) while a smaller number, or a different number, of other pixels are brightly illuminated; pixels which are predominantly set to appear white or grey (e.g., cloud cover in a scene); and pixels which share similar chromaticity, such as leafy green colors in a forest scene which also separately portrays a red fox.
  • the criterion used to establish what is deemed similar can vary, and a numerical majority is not required, though often applied.
  • Pixel - shall refer to actual or virtual video picture elements, or equivalent information which allows derivation of pixel information.
  • a pixel can be any sub-portion of the video output which allows itself to be analyzed or characterized.
  • Pixel chromaticity - shall include actual values for pixel chromaticities, as well as any other color values which are assigned as a result of any quantization or consolidation process, such as when a process has acted to quantize color space. It is therefore anticipated in the appended claims that a pixel chromaticity can include values from an assigned color distribution.
  • - Quantize Color Space - in the specification and in the context of the appended claims, shall refer to a reduction in possible color states, such as resulting from assignment of a larger number of chromaticities (e.g., pixel chromaticities) to a smaller number of assigned chromaticities or colors; or a reduction in pixel numbers by a selection process that picks out selected pixels; or binning to produce representative pixels or superpixels.
  • RGB color spaces are rendered image spaces, including the video spaces used to drive video display D.
  • both the color spaces specific to the video display and the ambient light source 88 are rendered color spaces.
  • - Scene brightness - shall refer to any measure of luminance in scene content according to any desired criterion.
  • Simple chromaticity transform - shall refer to a change or derivation of a dominant color or chromaticity according to a perceptual rule, not chosen or derived as a function of scene content, and where the change or derivation results in a chromaticity which is different from that which might otherwise be chosen.
  • Transforming color information to an unrendered color space - in the appended claims shall comprise either direct transformation to the unrendered color space, or use or benefit derived from using inversion of a tristimulus primary matrix obtained by transforming to the unrendered color space (e.g., (M2)^(-1) as shown in FIG. 8), or any calculational equivalent.
  • Unrendered color space - shall denote a standard or non-device-specific color space, such as those describing original image colorimetry using standard CIE XYZ; ISO RGB, such as defined in ISO 17321 standards; Photo YCC; and the CIE LAB color space.
  • User preference - shall not be limited to indications of desires of users, but shall also include any choice made among a plurality of choices, even if that choice was not made by a user, such as when sub-code or meta-data for video content is delivered using particular intended character and effect of perceptual rules that affect or effect a preferred ambient broadcast.
  • Video - shall denote any visual or light producing device, whether an active device requiring energy for light production, or any transmissive medium which conveys image information, such as a window in an office building, or an optical guide where image information is derived remotely.
  • Video signal - shall denote the signal or information delivered for controlling a video display unit, including any audio portion thereof. It is therefore contemplated that video content analysis includes possible audio content analysis for the audio portion.
  • a video signal can comprise any type of signal, such as radio frequency signals using any number of known modulation techniques; electrical signals, including analog and quantized analog waveforms; digital (electrical) signals, such as those using pulse-width modulation, pulse-number modulation, pulse-position modulation, PCM (pulse code modulation) and pulse amplitude modulation; or other signals such as acoustic signals, audio signals, and optical signals, all of which can use digital techniques. Data that is merely sequentially placed among or with other information, such as packetized information in computer-based applications, can be used as well.
  • Weighted - shall refer to any equivalent method to those given here for giving preferential status or higher mathematical weights to certain chromaticities, luminances, or spatial positions, possibly as a function of scene content. However, nothing shall preclude the use of unity as a weight for the purpose of providing a simple mean or average.
  • the pixel weighting function as described herein does not have to take on the functional appearance given (e.g., a summation of W over a plurality of pixels), but shall include all algorithms, operators or other calculus that operates with the same effect.
  • Ambient light derived from video content according to the invention is formed to allow, if desired, a high degree of fidelity to the chromaticity of original video scene light, while maintaining a high degree of specificity of degrees of freedom for ambient lighting with a low required computational burden. This allows ambient light sources with small color gamuts and reduced luminance spaces to emulate video scene light from more advanced light sources with relatively large color gamuts and luminance response curves.
  • Possible light sources for ambient lighting can include any number of known lighting devices, including LEDs (Light Emitting Diodes) and related semiconductor radiators; electroluminescent devices including non-semiconductor types; incandescent lamps, including modified types using halogens or advanced chemistries; ion discharge lamps, including fluorescent and neon lamps; lasers; light sources that are modulated, such as by use of LCDs (liquid crystal displays) or other light modulators; photoluminescent emitters, or any number of known controllable light sources, including arrays that functionally resemble displays.
  • the description given here shall relate in part at first to color information extraction from video content, and later, to extraction methods that are subject to perceptual rules to derive dominant or true colors for ambient broadcast that can represent video images or scenes.
  • Display D can comprise any of a number of known devices which decode video content from a rendered color space, such as an RGB space such as Adobe RGB or NTSC RGB. Display D can comprise optional color information extraction regions R1, R2, R3, R4, R5, and R6.
  • the color information extraction regions are arbitrarily pre-defined and are to be characterized for the purpose of producing characteristic ambient light A8, such as via back-mounted controllable ambient lighting units (not shown) which produce and broadcast ambient light L1, L2, L3, L4, L5, and L6 as shown, such as by partial light spillage to a wall (not shown) on which display D is mounted.
  • a display frame Df as shown can itself also comprise ambient lighting units which display light in a similar manner, including outward toward a viewer (not shown).
  • each color information extraction region R1 - R6 can influence ambient light adjacent itself.
  • color information extraction region R4 can influence ambient light L4 as shown.
  • In FIG. 2, a downward view - part schematic and part cross-sectional - is shown of a room or ambient space AO in which ambient light from multiple ambient light sources is produced using the invention.
  • In ambient space AO are arranged seating and tables 7 as shown, which are arrayed to allow viewing of video display D.
  • In ambient space AO are also arrayed a plurality of ambient light units which are optionally controlled using the instant invention, including light speakers 1 - 4 as shown, a sublight SL under a sofa or seat as shown, as well as a set of special emulative ambient light units arrayed about display D, namely center lights that produce ambient light Lx like that shown in FIG. 1.
  • Each of these ambient light units can emit ambient light A8, shown as shading in the figure.
  • scotopic or night vision relying on rods tends to be more sensitive to blues and greens.
  • Photopic vision using cones is better suited to detect longer wavelength light such as reds and yellows.
  • changes in relative luminosity of different colors as a function of light level can be counteracted somewhat by modulating or changing color delivered to the video user in ambient space. This can be done by subtracting light from ambient light units such as light speakers 1 - 4 using a light modulator (not shown) or by use of an added component in the light speakers, namely a photoluminescent emitter to further modify light before ambient release.
  • the photoluminescent emitter performs a color transformation by absorbing or undergoing excitation from incoming light from the light source and then re-emitting that light in longer, desired wavelengths.
  • This excitation and re-emission by a photoluminescent emitter can allow rendering of new colors not originally present in the original video image or light source, and perhaps also not in the range of colors or color gamut inherent to the operation of the display D. This can be helpful when the desired luminance of ambient light Lx is low, such as during very dark scenes, and the desired level of perception is higher than that normally achieved without light modification.
  • the production of new colors can provide new and interesting visual effects.
  • An illustrative example can be the production of orange light, such as what is termed hunter's orange, for which available fluorescent pigments are well known (see ref[2]).
  • the example given involves a fluorescent color, as opposed to the general phenomenon of fluorescence and related phenomena.
  • Using a fluorescent orange or other fluorescent dye species can be particularly useful for low light conditions, where a boost in reds and oranges can counteract the decreased sensitivity of scotopic vision for long wavelengths.
  • Fluorescent dyes that can be used in ambient light units can include known dyes in dye classes such as Perylenes, Naphthalimides, Coumarins, Thioxanthenes, Anthraquinones, Thioindigoids, and proprietary dye classes such as those manufactured by the Day-Glo Color Corporation, Cleveland, Ohio, USA. Colors available include Apache Yellow, Tigris Yellow, Worcester Yellow, Pocono Yellow, Mohawk Yellow, Potomac Yellow, Marigold Orange, Ottawa Red, Volga Red, Salmon Pink, and Columbia Blue. These dye classes can be incorporated into resins, such as PS, PET, and ABS using known processes.
  • Fluorescent dyes and materials have enhanced visual effects because they can be engineered to be considerably brighter than nonfluorescent materials of the same chromaticity. So-called durability problems of traditional organic pigments used to generate fluorescent colors have largely been solved in the last two decades, as technological advances have resulted in the development of durable fluorescent pigments that maintain their vivid coloration for 7-10 years under exposure to the sun. These pigments are therefore almost indestructible in a home theatre environment where UV ray entry is minimal.
  • fluorescent photopigments can be used, and they work simply by absorbing short wavelength light, and re-emitting this light as a longer wavelength such as red or orange.
  • ambient light units 1 - 4 and SL and Lx can use known goniophotometric elements (not shown), alone, or in combination, such as metallic and pearlescent transmissive colorants; iridescent materials using well-known diffractive or thin-film interference effects, e.g., using fish scale essence; thin flakes of guanine; or 2-aminohypoxanthine with preservative.
  • Diffusers using finely ground mica or other substances can be used, such as pearlescent materials made from oxide layers, bornite or peacock ore; metal flakes, glass flakes, or plastic flakes; particulate matter; oil; ground glass, and ground plastics.
  • color information is extracted from a video signal AVS using known techniques.
  • Video signal AVS can comprise known digital data frames or packets like those used for MPEG encoding, audio PCM encoding, etc.
  • One can use known encoding schemes for data packets such as program streams with variable length data packets, or transport streams which divide data packets evenly, or other schemes such as single program transport streams.
  • the functional steps or blocks given in this disclosure can be emulated using computer code and other communications standards, including asynchronous protocols.
  • the video signal AVS as shown can undergo video content analysis CA as shown, possibly using known methods to record and transfer selected content to and from a hard disk HD as shown, and possibly using a library of content types or other information stored in a memory MEM as also shown.
  • This can allow independent, parallel, direct, delayed, continuous, periodic, or aperiodic transfer of selected video content.
  • Feature extraction FE can then derive color information (e.g., a dominant color) generally, or from an image feature.
  • This color information is still encoded in a rendered color space, and is then transformed to an unrendered color space, such as CIE XYZ using a RUR Mapping Transformation Circuit 10 as shown.
  • RUR herein stands for the desired transformation type, namely, rendered-unrendered-rendered, and thus RUR Mapping Transformation Circuit 10 also further transforms the color information to a second rendered color space so formed as to allow driving said ambient light source or sources 88 as shown.
  • the RUR transformation is preferred, but other mappings can be used, so long as the ambient lighting production circuit or the equivalent receives information in a second rendered color space that it can use.
  • RUR Mapping Transformation Circuit 10 can be functionally contained in a computer system which uses software to perform the same functions, but in the case of decoding packetized information sent by a data transmission protocol, there could be memory (not shown) in the circuit 10 which contains, or is updated to contain, information that correlates to or provides video rendered color space coefficients and the like. This newly created second rendered color space is appropriate and desired to drive ambient light source 88 (such as shown in FIGS. 1 and 2), and is fed using known encoding to ambient lighting production circuit 18 as shown.
  • Ambient lighting production circuit 18 takes the second rendered color space information from RUR Mapping Transformation Circuit 10 and then accounts for any input from any user interface and any resultant preferences memory (shown together as U2) to develop actual ambient light output control parameters (such as applied voltages) after possibly consulting an ambient lighting (second rendered) color space lookup table LUT as shown.
  • the ambient light output control parameters generated by ambient lighting production circuit 18 are fed as shown to lamp interface drivers D88 to directly control or feed ambient light source 88 as shown, which can comprise individual ambient light units 1 - N, such as previously cited ambient light speakers 1 - 4 or ambient center lights Lx as shown in FIGS. 1 and 2.
  • the color information removed from video signal AVS can be abbreviated or limited.
  • In FIG. 4, an equation for calculating average color information from a video extraction region is shown for discussion. It is contemplated, as mentioned below (see FIG. 18), that the video content in video signal AVS will comprise a series of time sequenced video frames, but this is not required.
  • Each extraction region (e.g., R4) can be set to have a certain size, such as 100 by 376 pixels.
  • Assuming a frame rate of 25 frames/sec, the resultant gross data for extraction regions R1 - R6 before extracting an average would be 6 x 100 x 376 x 25, or 5.64 million bytes/sec for each video RGB tristimulus primary.
  • This data stream is very large and would be difficult to handle at RUR Mapping Transformation Circuit 10, so extraction of an average color for each extraction region R1 - R6 can be effected during Feature Extraction FE.
  • Each pixel in an extraction region contributes an RGB color channel value (e.g., R_i); the average R_avg for each RGB primary is computed by summing that channel over the pixels of the region and dividing by the number of pixels, and the resulting average color is the triplet R_AVG = |R_avg, G_avg, B_avg|.
  • the same procedure is repeated for all extraction regions R1 - R6 and for each RGB color channel. The number and size of extractive regions can depart from that shown, and be as desired.
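  • A sketch of the FIG. 4 averaging, assuming a frame arrives as an H x W x 3 RGB array and a region is a rectangular slice; the 100 x 376 region size follows the example above.

```python
import numpy as np

def region_average(frame_rgb, rows, cols):
    """Average color |R_avg, G_avg, B_avg| of one extraction region:
    sum each RGB channel over the region and divide by the pixel count."""
    region = frame_rgb[rows, cols].reshape(-1, 3).astype(float)
    return region.mean(axis=0)

# Six regions R1 - R6 reduce the multi-Mbyte/sec raw stream to six RGB
# triplets per extracted frame. Illustrative 480 x 720 frame, region R4.
frame = np.random.default_rng(1).integers(0, 256, size=(480, 720, 3))
r4_avg = region_average(frame, slice(380, 480), slice(172, 548))  # 100 x 376
print(r4_avg)
```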
  • The operation of RUR Mapping Transformation Circuit 10 can be illustratively shown and expressed using known tristimulus primary matrices, such as shown in FIG. 5, where a rendered tristimulus color space with vectors R, G, and B is transformed using the tristimulus primary matrix M with elements such as X_r,max, Y_r,max, Z_r,max, where X_r,max is the tristimulus value of the R primary at maximum output.
  • the transformation from a rendered color space to unrendered, device-independent space can be image and/or device specific - known linearization, pixel reconstruction (if necessary), and white point selection steps can be effected, followed by a matrix conversion.
  • Unrendered images need to go through additional transforms to make them viewable or printable, and the RUR transformation thus involves a transform to a second rendered color space.
  • FIGS. 6 and 7 show matrix equations for mapping the video rendered color space, expressed by primaries R, G, and B, and the ambient lighting rendered color space, expressed by primaries R', G', and B', respectively, into unrendered color space X, Y, and Z as shown, where tristimulus primary matrix M1 transforms video RGB into unrendered XYZ, and tristimulus primary matrix M2 transforms ambient light source R'G'B' into unrendered XYZ color space as shown. Equating both rendered color spaces RGB and R'G'B' as shown in FIG. 8 then allows the ambient light tristimulus values R'G'B' to be derived by known matrix inversion.
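  • A sketch of the FIGS. 5-8 mapping: with tristimulus primary matrices M1 (video RGB to XYZ) and M2 (ambient R'G'B' to XYZ), equating the two rendered spaces in XYZ gives R'G'B' = inv(M2) M1 RGB. The matrix values below are placeholders, not measured coefficients.

```python
import numpy as np

# Placeholder tristimulus primary matrices; real coefficients are
# acquired per device, e.g. with a color spectrometer (see below).
M1 = np.array([[0.4124, 0.3576, 0.1805],    # video RGB -> XYZ (sRGB-like)
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]])
M2 = np.array([[0.49, 0.31, 0.20],          # ambient R'G'B' -> XYZ
               [0.17, 0.81, 0.01],
               [0.00, 0.01, 0.99]])

def rur_map(rgb):
    """RUR mapping: rendered RGB -> unrendered XYZ -> rendered R'G'B'.
    Equating M1 @ RGB = M2 @ R'G'B' yields the FIG. 8 solution."""
    return np.linalg.inv(M2) @ (M1 @ np.asarray(rgb, dtype=float))

print(rur_map([1.0, 0.0, 0.0]))   # ambient drive values for pure red
```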
  • In FIGS. 9-11, a prior art derivation of a generalized tristimulus primary matrix M using a white point method is shown.
  • quantities like SrXr represent the tristimulus value of each (ambient light source) primary at maximum output, with Sr representing a white point amplitude, and Xr representing the chromaticities of primary light produced by the (ambient) light source.
  • the matrix equation equating the white point amplitudes Sr, Sg, and Sb with a vector of the white point reference values, using a known inverse of a light source chromaticity matrix, is shown.
  • tristimulus value X is set equal to chromaticity x
  • tristimulus value Y is set equal to chromaticity y
  • tristimulus value Z is defined to be set equal to 1 - (x + y).
  • the color primaries and reference white color components for the second rendered ambient light source color space can be acquired using known techniques, such as by using a color spectrometer. Similar quantities for the first rendered video color space can be found. For example, it is known that contemporary studio monitors have slightly different standards in North America, Europe, and Japan.
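A minimal sketch of the white point method of FIGS. 9-11, assuming measured primary and white point chromaticities are in hand (the function name and the sRGB-like example values are illustrative assumptions, not patent data):

```python
import numpy as np

def primary_matrix(xy_r, xy_g, xy_b, xy_w):
    """Derive a tristimulus primary matrix M from primary chromaticities
    and a white point, per the convention X = x, Y = y, Z = 1 - (x + y)."""
    def col(xy):
        x, y = xy
        return np.array([x, y, 1.0 - (x + y)])
    C = np.column_stack([col(xy_r), col(xy_g), col(xy_b)])
    xw, yw = xy_w
    W = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])  # white point, Y = 1
    S = np.linalg.solve(C, W)   # white point amplitudes Sr, Sg, Sb
    return C * S                # each column of C scaled by its amplitude

# Illustrative sRGB-like primaries with a D65 white point:
M = primary_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
```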
  • In FIG. 12, a system similar to that shown in FIG. 3 is shown, additionally comprising a gamma correction step 55 after feature extraction step FE, as shown, for ambient broadcast.
  • alternatively, gamma correction step 55 can be performed between the steps performed by RUR Mapping Transformation Circuit 10 and Ambient Lighting Production Circuit 18.
  • Optimum gamma values for LED ambient light sources have been found to be 1.8, so a negative gamma correction to counteract a typical video color space gamma of 2.5 can be effected, with the exact gamma value found using known mathematics.
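A sketch of this gamma correction, with the 2.5 and 1.8 exponents taken from the text and everything else (names, normalization) assumed:

```python
def gamma_correct(signal, video_gamma=2.5, led_gamma=1.8):
    """Map a normalized [0, 1] video drive level to an LED drive level:
    decode the video transfer curve, then encode for the LED response."""
    linear = signal ** video_gamma        # undo the video encoding
    return linear ** (1.0 / led_gamma)    # pre-compensate the LED response
```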
  • RUR Mapping Transformation Circuit 10, which can be a functional block effected via any suitable known software platform, performs a general RUR transformation as shown in FIG. 13, where a schematic as shown takes video signal AVS comprising a Rendered Color Space such as Video RGB, and transforms it to an unrendered color space such as CIE XYZ; then to a Second Rendered Color Space (Ambient Light Source RGB). After this RUR transformation, ambient light sources 88 can be driven, aside from signal processing, as shown.
  • FIG. 14 shows process steps for acquiring transformation matrix coefficients for an ambient light source used by the invention, where the steps include, as shown, Driving the ambient light unit(s); and Checking Output Linearity as known in the art.
  • if the ambient light source primaries are stable (shown on left fork, Stable Primaries), one can Acquire Transformation Matrix Coefficients Using a Color Spectrometer; whereas if the ambient light source primaries are not stable (shown on right fork, Unstable Primaries), one can resort to the previously given white point method to acquire the transformation matrix coefficients.
  • FIG. 15 shows process steps for estimated video extraction and ambient light reproduction using the invention, where steps include [1] Prepare Colorimetric Estimate of Video Reproduction (From Rendered Color Space, e.g., Video RGB); [2] Transform to Unrendered Color Space; and [3] Transform Colorimetric Estimate for Ambient Reproduction (Second Rendered Color Space, e.g., LED RGB).
  • In FIG. 16, a schematic of video frame extraction according to the invention is shown.
  • a series of individual successive video frames F, namely frames F1, F2, F3 and so on - such as individual interlaced or non-interlaced video frames specified by the NTSC, PAL, or SECAM standards - is shown.
  • N = 10 gives good results, namely, subsampling 1 frame out of 10 successive frames can work. This provides a refresh period P between frame extractions with low processing overhead, during which an interframe interpolation process can provide adequate approximation of the time development of chrominance changes in display D.
  • Selected frames F1 and FN are extracted as shown (EXTRACT), and intermediate interpolated values for chrominance parameters, shown as G2, G3, G4, provide the necessary color information to inform the previously cited driving process for ambient light source 88.
  • the interpolated values can be linearly determined, such as where the total chrominance difference between extracted frames F1 and FN is spread over the interpolated frames G.
  • a function can spread the chrominance difference between extracted frames F1 and FN in any other manner, such as to suit higher order approximation of the time development of the color information extracted.
  • the results of interpolation can be used by accessing in advance a frame F to influence interpolated frames (such as in a DVD player) or, alternatively, interpolation can be used to influence future interpolated frames without advance access to a frame F (such as in broadcast decoding applications).
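A minimal sketch of the linear interpolation between extracted frames F1 and FN (function and variable names are assumptions):

```python
def interpolate_chromaticity(c_start, c_end, n):
    """Linearly spread the chrominance difference between two extracted
    frames over the n - 1 intermediate frames (G2, G3, ... of FIG. 16)."""
    step = (c_end - c_start) / n
    return [c_start + k * step for k in range(1, n)]

# e.g. N = 10 subsampling: 9 interpolated chromaticity values per channel
# mids = interpolate_chromaticity(0.30, 0.35, 10)
```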
  • FIG. 17 shows process steps for abbreviated chrominance assessment according to the invention.
  • Higher order analysis of frame extractions can enable larger refresh periods P and larger N than would otherwise be possible.
  • interpolation proceeds (Interpolate), with a delayed next frame extraction resulting in frozen, or incremented, chrominance values being used. This can provide even more economical operation in terms of bitstream or bandwidth overhead.
  • FIG. 18 shows the top of FIGS. 3 and 12, where an alternative extraction step is shown whereby a frame decoder FD is used, allowing regional information from extraction regions (e.g., R1) to be extracted at step 33 as shown.
  • a further process or component step 35 includes assessing a chrominance difference, and using that information to set a video frame extraction rate, as indicated.
  • a next process step of performing output calculations OO, such as the averaging of FIG. 4, or the dominant color extraction discussed below, is performed as shown, prior to data transfer to Ambient Lighting Production Circuit 18 previously shown.
  • general process steps for color information extraction and processing for the invention include acquiring a video signal AVS; extracting regional (color) information from selected video frames (such as previously cited F1 and FN); interpolating between the selected video frames; an RUR Mapping Transformation; optional gamma correction; and using this information to drive an ambient light source (88).
  • two additional process steps can be inserted after the regional extraction of information from selected frames: one can perform an assessment of the chrominance difference between selected frames F1 and FN, and depending on a preset criterion, one can set a new frame extraction rate as indicated.
  • if a chrominance difference between successive frames F1 and FN is large, or increasing rapidly (e.g., a large first derivative), or satisfies some other criterion, such as one based on chrominance difference history, one can then increase the frame extraction rate, thus decreasing refresh period P.
  • if a chrominance difference between successive frames F1 and FN is small, and is stable or is not increasing rapidly (e.g., a low or zero absolute first derivative), or satisfies some other criterion, such as one based on chrominance difference history, one can then save on the required data bitstream and decrease the frame extraction rate, thus increasing refresh period P. A sketch of such rate adaptation follows.
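One way to sketch this rate adaptation; the thresholds and bounds here are purely illustrative assumptions:

```python
def update_refresh_period(delta_c, period, lo=0.02, hi=0.15,
                          p_min=2, p_max=50):
    """Adapt refresh period P (in frames) to the chrominance difference
    delta_c between the last two extracted frames."""
    if delta_c > hi:                    # fast color change: extract more often
        return max(p_min, period // 2)
    if delta_c < lo:                    # stable color: save bitstream overhead
        return min(p_max, period * 2)
    return period
```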
  • In FIG. 21, a schematic is shown for a general process according to one aspect of the invention.
  • as shown, [1] the rendered color space corresponding to the video content is quantized (QCS, Quantize Color Space), such as by using methods given below; then [2] a dominant color (or a palette of dominant colors) is chosen (DCE, Dominant Color Extraction); and [3] a color mapping transformation, such as the RUR Mapping Transformation (10), is performed (MT, Mapping Transformation to R'G'B') to improve the fidelity, range, and appropriateness of the ambient light produced.
  • the optional quantizing of the color space can be likened to reducing the number of possible color states and/or pixels to be surveyed, and can be effected using various methods.
  • FIG. 22 shows schematically one possible method for quantizing pixel chromaticities from video content.
  • an assigned color AC is substituted for a range of pixel chromaticities, resulting in a reduction by a factor of 16, for the red primary alone, in the number of colors needed in characterizing a video image. A sketch of such bit truncation follows.
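A sketch of this quantization by bit truncation, assuming 8-bit primaries (the function name is an assumption):

```python
def quantize_channel(value_8bit):
    """Keep the top 4 of 8 bits of one primary, collapsing 256 levels into
    16 assigned colors -- the factor-of-16 reduction described above."""
    return value_8bit & 0xF0   # 0..15 -> 0, 16..31 -> 16, ..., 240..255 -> 240
```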
  • FIG. 23 shows schematically another example of quantizing the rendered color space by binning pixel chromaticities from a plurality of pixels Pi (e.g., 16 as shown) into a superpixel XP as shown.
  • Binning is by itself a method whereby adjacent pixels are added together mathematically (or computationally) to form a superpixel which itself is used for further computation or representation.
  • the number of superpixels chosen to represent the video content can reduce the number of pixels for computation to 0.05 million or any other desired smaller number.
  • the number, size, orientation, shape, or location of such superpixels XP can change as a function of video content. Where, for example, it is advantageous during feature extraction FE to ensure that superpixels XP are drawn only from the image feature, and not from a border area or background, the superpixel(s) XP can be formed accordingly. A sketch of simple uniform binning follows.
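For the simple uniform case (feature-conformal superpixels would need a mask instead), binning might be sketched as follows; all names are assumptions:

```python
import numpy as np

def bin_superpixels(frame, block=4):
    """Bin each block x block group of pixels Pi into one superpixel XP
    by averaging, shrinking the pixel count by block**2 (16 for 4 x 4)."""
    h, w, c = frame.shape
    h, w = h - h % block, w - w % block           # crop to whole blocks
    view = frame[:h, :w].reshape(h // block, block, w // block, block, c)
    return view.mean(axis=(1, 3))                 # one RGB value per XP
```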
  • FIG. 24 shows schematically a binning process similar to that of FIG. 23, but where the size, orientation, shape, or location of the superpixel can be formed in conformity with an image feature 38 as shown. Image feature 38 as shown is jagged or irregular in not having straight horizontal or vertical borders. As shown, superpixel XP is selected accordingly to mimic or emulate the image feature shape.
  • the location, size, and orientation of such superpixels can be influenced by image feature 38 using known pixel level computational techniques.
  • Quantization can take pixel chromaticities and substitute assigned colors (e.g., assigned color AC) for them. Those assigned colors can be assigned at will, including using preferred color vectors. So, rather than using an arbitrary or uniform set of assigned colors, at least some video image pixel chromaticities can be assigned to preferred color vectors.
  • FIG. 25 shows regional color vectors and their colors or chromaticity coordinates on a standard cartesian CIE x-y chromaticity diagram or color map.
  • the map shows all known colors or perceivable colors at maximum luminosity as a function of chromaticity coordinates x and y, with nanometer light wavelengths and CIE standard illuminant white points shown for reference.
  • Three regional color vectors V are shown on this map, where it can be seen that one color vector V lies outside the gamut of colors obtainable by PAL/SECAM, NTSC, and Adobe RGB color production standards (gamuts shown).
  • FIG. 26 shows a close-up of a portion of the CIE plot of FIG. 25, additionally showing pixel chromaticities Cp and their assignment to regional color vectors V.
  • the criteria for assignment to a regional color vector can vary, and can include calculation of a Euclidean or other distance from a particular color vector V, using known calculational techniques, as sketched below.
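A minimal sketch of nearest-vector assignment on the CIE x-y plane; names and the example vectors are assumptions:

```python
import numpy as np

def assign_to_color_vector(cp, vectors):
    """Assign pixel chromaticity Cp = (x, y) to the nearest regional color
    vector V by Euclidean distance."""
    d = np.linalg.norm(np.asarray(vectors, dtype=float) - np.asarray(cp), axis=1)
    return int(np.argmin(d))     # index of the winning color vector

# vectors = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # illustrative V's
```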
  • the color vector V which is labeled lies outside the rendered color space or color gamut of the display systems; this can allow that a preferred chromaticity easily produced by the ambient lighting system or light source 88 can become one of the assigned colors used in quantizing the rendered (video) color space.
  • the next step is to perform a dominant color extraction from the distribution of assigned colors by extracting any of: [a] a mode of the assigned colors; [b] a median of the assigned colors; [c] a weighted average by chromaticity of the assigned colors; or [d] a weighted average using a pixel weighting function.
  • FIG. 27 shows a histogram that gives the assigned pixel color or colors (Assigned Colors) occurring the most often (see ordinate, Pixel Percent), namely, the mode of the assigned color distribution. This mode, or most often used assigned color, can be selected as a dominant color DC (shown) for use or emulation by the ambient lighting system.
  • the median of the assigned color distribution can be selected to be, or help influence the selection of, the dominant color DC.
  • FIG. 28 shows schematically a median of an assigned color distribution, where the median or middle value (interpolated for an even number of assigned colors) is shown selected as dominant color DC.
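Sketches of the mode and median selections, assuming assigned colors are hashable, orderable palette indices (all names are assumptions):

```python
from collections import Counter

def dominant_by_mode(assigned):
    """Mode of the assigned-color distribution: the histogram peak of
    FIG. 27 becomes the dominant color DC."""
    return Counter(assigned).most_common(1)[0][0]

def dominant_by_median(assigned):
    """Median assigned color (FIG. 28), interpolated for an even count;
    assumes assigned colors are orderable scalars (e.g. palette indices)."""
    s = sorted(assigned)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
```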
  • FIG. 29 shows a mathematical summation for a weighted average by chromaticity of the assigned colors.
  • a single variable R is shown, but any number of dimensions or coordinates (e.g., CIE coordinates x and y) can be used.
  • Chromaticity variable R is summed as shown over pixel coordinates (or superpixel coordinates, if needed) i and j, running in this example between 1 and n and m, respectively.
  • Chromaticity variable R is multiplied throughout the summation by a pixel weighting function W with indices i and j as shown; the result is divided by the number of pixels n x m to obtain the weighted average.
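Reconstructed from this description (the figure contents themselves are not reproduced here), the FIG. 29 summation is, with the FIG. 30 variant making W position-dependent:

```latex
R_{\mathrm{avg}} \;=\; \frac{1}{n\,m}\,\sum_{i=1}^{n}\sum_{j=1}^{m} W_{ij}\,R_{ij}
\qquad\text{(FIG. 30: } W_{ij} = W(i, j, R_{ij})\text{)}
```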
  • FIG. 30 is similar to FIG. 29, except that W as shown is now a function also of pixel locations i and j as shown, thus allowing a spatial dominance function.
  • the center or any other portion of display D can be emphasized during selection or extraction of dominant color DC, as discussed below.
  • the weighted summations can be performed by the Extract Regional Information step 33 as given above, and W can be chosen and stored in any known manner.
  • Pixel weighting function W can be any function or operator, and thus, for example, can be unity for inclusion, and zero for exclusion, for particular pixel locations.
  • Image features can be recognized using known techniques, and W can be altered accordingly to serve a larger purpose, as shown in FIG. 34 below.
  • FIG. 31 gives an illustrative schematic representation showing the establishment of a color of interest in a distribution of assigned colors, and the subsequent extraction of pixel chromaticities assigned thereto, to derive a true dominant color for designation as the dominant color.
  • pixel chromaticities Cp are assigned to two assigned colors AC; the assigned color AC shown at the bottom of the figure is not selected to be dominant, while the top assigned color is deemed dominant (DC) and is selected to be a color of interest COI as shown.
  • FIG. 33 shows the simple front surface view of a video display as shown in FIG. 1, and showing an example of unequal weighting given to pixels Pi in a preferred spatial region.
  • the central region C of the display can be weighted using a numerically large weight function W, while an extraction region (or any region, such as a scene background) can be weighted using a numerically small weight function w, as shown.
  • This weighting or emphasis can be applied to image feature J8 as shown in FIG. 34, which gives a simple front surface view of a video display as shown in FIG. 33, and where an image feature J8 (a fish) is selected using known techniques by feature extractor step FE (see FIGS. 3 and 12). This image feature J8 can be the only video content used, or just part of the video content used, during dominant color extraction DCE as shown and described above.
  • This abridged procedure is shown in FIG. 36, where, to reduce computational burden, a provisional dominant color extraction DC4* uses a colorimetric estimate, and is then in the next step aided by Dominant Colors Extracted from Previous Frames (or a single previous frame), helping prepare a choice for DC4 (Prepare DC4 Using Abridged Procedure). This procedure can be applied to good effect to the description below.
  • In FIG. 37, a simple front surface view is shown of a video display portraying scene content, including a possible newly appearing feature, to illustrate the need for dominant color extraction with darkness support and other perceptual prerogatives according to the invention.
  • dominant color extraction often produces results at odds with desired perceptual output.
  • FIG. 37 gives a schematic portrayal of a dark or night scene featuring a particular scene feature V111 (e.g., a green fir tree).
  • a large number of, or a majority of, pixels form the bulk of, or a large part of, the frame image, and these majority pixels MP possess, on average, little or no luminance.
  • dark effects for ambient broadcast can be preferable, and the chromaticities preferred by designers for ambient broadcast are often those of a separate scene entity, such as the tree in scene feature V111, rather than a chromaticity derived in large part from majority pixels MP, which in this illustrative example express darkness by having low average luminance, and nominal chromaticities which, if expressed in ambient lighting, might seem contrived.
  • Methods for accomplishing this include imposing a perceptual rule effected by providing darkness support as discussed below, where a dark scene is detected, and such majority pixels MP are identified, and either eliminated from consideration in dominant color extraction, or given reduced weighting in relation to other pixels forming scene features such as scene feature V111.
  • This requires recognition of a scene element using scene content analysis CA (see FIG. 12), and then effecting special treatment for various other scene elements, such as a dark background or a scene feature.
  • Imposing perceptual rules can also include removing scene portions that are undesirable for dominant color extraction, such as scene speckle or scene artifacts, and/or can include image feature recognition, such as for scene feature V111, by feature recognition (e.g., feature extraction FE, e.g., FIGS. 3 and 12, or a functional equivalent) and as discussed for FIG. 34.
  • a new scene feature, such as V999, a lightning bolt or flash of light, can newly appear in the scene of FIG. 37; its treatment is discussed below under the dynamic support perceptual rule.
  • In FIG. 38, a simple front surface view is shown of a video display portraying scene content, to illustrate dominant color extraction with color support.
  • FIG. 38 gives a scene that portrays a relatively bright, somewhat self-similar region as scene feature V333, which might depict cloud cover, or white water splashing from a waterfall.
  • This scene feature V333 might be predominantly grey or white, and therefore can be deemed to be comprised of majority pixels MP as shown, while another scene feature, V888, e.g., a blue sky, is not composed of majority pixels, and can be preferred over majority pixels MP for dominant color extraction - i.e., an ambient lighting effects designer might prefer that blue be broadcast in this instance, rather than a white or grey color, particularly if scene feature V888 is newly appearing, or contains a preferred chromaticity (e.g., sky blue) for ambient broadcast.
  • dominant color extraction can sometimes result in color being underestimated, and dominated by bright or highly saturated whites, greys, or other undersaturated colors.
  • a perceptual rule or set of perceptual rules can be imposed to provide color support, such as to assess scene brightness and reduce or eliminate the influence or weighting of white/grey majority pixels MP, while boosting the influence of other scene features such as blue sky V888.
  • Perceptual Rules for Dominant Color Selection can comprise any or all of: Simple Chromaticity Transforms SCT, Pixel Weighting as a Function of Scene Content PF8, and Extended Extraction / Search EE8. These categories are meant to be merely illustrative, and those of ordinary skill will be able to use the teachings given here to develop alternate similar schemes.
  • In FIGS. 40-43, examples of specific methodologies relating to imposition of these perceptual rule groups are given.
  • the first, simple chromaticity transforms SCT, can represent many methodologies, all of which seek to substitute or transform initially intended dominant colors with other, distinct chromaticities. Specifically, a particular chosen chromaticity (x, y) produced by dominant color extraction can be replaced in any desired instance with transformed chromaticity (x', y'), as shown in FIG. 40, which shows schematically a simple chromaticity transform SCT as a functional operator.
  • if, for example, a particular dominant color (e.g., a brown) is extracted, and the nearest match for that dominant color in the light space of the ambient light source 88 is a chromaticity (x, y), such as a color that has a purplish cast - and that nearest match chromaticity is not preferred from a perceptual standpoint - then a transformation or substitution can be made to a chromaticity (x', y'), such as a color made from orange and green ambient light production, and developed by ambient lighting production circuit 18 or the equivalent as previously cited.
  • such transformations can take the form of chromaticity-by-chromaticity mapping, perhaps contained in a lookup table (LUT), or can be embodied in machine code, software, a data file, an algorithm, or a functional operator. Because this type of perceptual rule does not need to involve explicit content analysis, it is termed a simple chromaticity transform. A sketch of such a lookup follows.
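A sketch of an LUT-based simple chromaticity transform using integer chromaticity bins; the bin size and the single mapping entry are illustrative assumptions:

```python
# One illustrative rule: a bin near a purplish (x, y) maps to an
# orange-green (x', y') preferred for ambient broadcast.
SCT_LUT = {(7, 6): (0.40, 0.45)}   # bin (7, 6) covers x ~ 0.35, y ~ 0.30

def simple_chromaticity_transform(x, y, step=0.05):
    """Replace a chosen chromaticity (x, y) with a preferred (x', y')
    via chromaticity-by-chromaticity lookup (FIG. 40)."""
    key = (round(x / step), round(y / step))
    return SCT_LUT.get(key, (x, y))   # pass through when no rule applies
```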
  • Simple chromaticity transforms SCT can exercise perceptual rules that give greater broadcast time to preferred chromaticities than would otherwise be given. If, for example, a particular blue is preferred or is deemed desirable, it can be the subject or result of a simple chromaticity transform SCT which favors it by mapping a large number of similar blue chromaticities to that particular blue. Also, the invention can be practiced where a simple chromaticity transform is used to preferentially choose a chromaticity found in the second rendered color space of ambient light source 88. Also according to the invention, scene content analysis CA can be used to add functionality to pixel weighting function W in a manner to allow imposition of perceptual rules. FIG. 43 shows possible functional forms for such a pixel weighting function.
  • Pixel weighting function W can be a function of multiple variables, including any or all of: video display pixel spatial position, as indexed, for example, by indices i and j; chromaticity, such as a phosphor luminance level or primary value R (where R can be a vector representing R, G, and B) or chromaticity variables x and y; and luminance itself, L (or an equivalent) as shown.
  • since pixel weighting function W can be a functional operator, it can be set to reduce - or eliminate, if necessary - any influence from selected pixels, such as those representing screen speckle, screen artifacts, or those deemed to be majority pixels MP by content analysis, such as when cloud cover, water, darkness, or other scene content is given less weighting or zero weighting to comply with a perceptual rule.
  • In FIG. 41, a series of possible steps is shown schematically for dominant color extraction employing an average calculated using a pixel weighting function according to the invention, to execute two illustrative possible perceptual rules.
  • the general step, termed Pixel Weighting as a Function of Scene Content PF8 can comprise many more possible functions than the illustrative two shown using arrows.
  • scene content analysis is performed.
  • One possible first step is to Assess Scene Brightness, such as by calculating, for any or all pixels, or for a distribution of assigned colors, the overall or average luminance or brightness per pixel.
  • In this example, the overall scene brightness is deemed low (this detection step is omitted for clarity), and a possible resultant step is to Lower Ambient Lighting Luminance as shown, to make the production of ambient light match the scene darkness more than it would otherwise.
  • the chosen threshold luminance level to decide what constitutes a bright or colored pixel can vary, and be established as a fixed threshold, or can be a function of scene content, scene history, and user preferences. As an example, all bright or colored pixels can have their W values reduced by a factor of 3 in order to reduce ambient lighting luminance for whatever dominant color is chosen from them.
  • the step of lowering the ambient lighting luminance can also operate for this goal, such as to lower equally all pixel luminances by further reducing pixel weighting function W accordingly.
  • the pixel weighting function W can be reduced by a separate function that is itself a function of the luminance of a particular pixel, such as a factor 1/L^2, where L is a luminance.
  • Another possible step for darkness support is Possible Selection of COIs from Bright/Colored Pixels, namely the above-cited process whereby a color of interest is established from the subset of pixels in video content which are bright and perhaps have high saturation (colored), e.g., from feature V111 of FIG. 37.
  • certain chromaticities can be chosen for further analysis in a manner similar to that discussed and shown in FIG. 31 above, whether it is to discern the true color for an assigned color that has been chosen, or whether the color of interest is from a pixel chromaticity and will itself become part of an assigned color distribution for further analysis, such as repeating dominant color extraction for such colors of interest (e.g., finding a representative green for the fir tree V111).
  • This can lead to another possible step shown, Possible Extended Extraction, as discussed further below, and Select Dominant Color as shown, which could be the result of doing extended dominant color extraction on a distribution of colors of interest gleaned from a prior dominant color extraction process.
  • For the Color Support Perceptual Rule, scene content analysis is again performed.
  • One possible first step is to Assess Scene Brightness, such as by calculating, for any or all pixels, or for a distribution of assigned colors, the overall or average luminance or brightness per pixel, as done before. In this example, a high overall scene brightness is found.
  • Another possible step is to eliminate or reduce the weighting given by the pixel weighting function W for high luminance, white, grey, or bright pixels, shown as Truncate / Reduce Weighting for Bright/Colored Pixels.
  • the pixels representing cloud cover V333 of FIG. 38 can be eliminated from the pixel weighting function W by setting contributions therefrom to a negligible value or to zero.
  • Possible Extended Extraction as shown can also be performed to help perform the step of Select Dominant Color as shown, and discussed below.
  • the step of Extended Extraction / Search EE8, as mentioned above and as shown in FIG. 42, can be any process undertaken after an initial dominant color extraction process, such as a process of using perceptual rules to narrow down a set of candidate dominant colors.
  • FIG. 42 shows schematically a series of possible steps for dominant color extraction employing a chromaticity / luminance average calculated using a pixel weighting function for extended dominant color extraction according to the invention, to execute two illustrative possible perceptual rules. Two such examples of extended extraction shown are Static Support Perceptual Rule and Dynamic Support Perceptual Rule as shown.
  • On the left side as shown, one possible static support perceptual rule process can include a step of Identify, then Truncate / Reduce Weighting for Majority Pixels; followed by Possible Selection of COI from Remaining Chromaticities (e.g., Histogram Method), namely an extended dominant color extraction on pixels that are not majority pixels MP, such as the earlier cited dominant color extraction from the pixel chromaticities or distribution of assigned colors by extracting any of: [a] a mode (e.g., histogram method); [b] a median; [c] a weighted average by chromaticity; or [d] a weighted average using a pixel weighting function of the pixel chromaticities or assigned colors. It can be similar to a functional repeat of dominant color extraction after applying a perceptual rule, such as reducing the weight given to majority pixels, as sketched below. From this dominant color extraction process, the last step, Select Dominant Color for Ambient Broadcast, can be executed.
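A sketch of this static-support extended extraction, assuming a boolean mask marking majority pixels MP is already available (all names are assumptions):

```python
import numpy as np

def extended_extraction(chromaticities, majority_mask, w=None):
    """Zero the weighting for majority pixels MP, then repeat a
    weighted-average dominant color extraction on what remains."""
    chroma = np.asarray(chromaticities, dtype=float)     # N x 2 (x, y) pairs
    w = np.ones(len(chroma)) if w is None else np.asarray(w, dtype=float).copy()
    w[majority_mask] = 0.0                 # truncate majority pixels
    if w.sum() == 0.0:
        return None                        # nothing left to extract from
    return (w[:, None] * chroma).sum(axis=0) / w.sum()   # dominant (x, y)
```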
  • Another possible perceptual rule is the Dynamic Support Perceptual Rule as shown on the right side. The first two steps shown are identical to those for Static Support on the left side. A third possible step is identifying a newly appearing scene feature (such as lightning bolt V999) and performing Dominant Color Extraction from Newly Appearing Scene Feature as shown. A fourth possible step is to Select Chromaticities from Either or Both of Previous Steps for Ambient Broadcast as indicated, namely that this perceptual rule can involve taking either or both of the result of performing dominant color extraction on the newly appearing scene feature, or of performing dominant color extraction on the remaining chromaticities obtained after reducing or eliminating the effect of majority pixels MP.
  • both the newly appearing lightning strike V999 and the tree V111 can contribute to the derivation of one or more dominant colors DC for ambient broadcast, rather than taking a straight dominant color extraction without a perceptual rule.
  • In exercising a perceptual rule in this way, nothing precludes quantizing the color space beforehand, as given above. Also, these methods can be repeated for chosen scene features, or to search further for preferred chromaticities for ambient broadcast.
  • In one example, a video scene with background features (sand, sky, sun) shows a newly appearing feature, a white boat; the newly appearing feature causes, using another perceptual rule that emphasizes new content, a white output based on dominant color extraction for the boat, so that the ambient broadcast turns white until the boat recedes in the video scene.
  • another perceptual rule, one that deems newly appearing content to be no longer controlling when the number of pixels it represents drops below a certain percentage - or below a share outside the features already in play (sand, sky, sun) - allows the three background features again to be set for ambient broadcast through their respective dominant colors.
  • when sand tone pixels are again greater in number, their effect is again suppressed by allowing pixel weighting function W to be zeroed for them.
  • Without such perceptual rules, the dominant color extracted might be time-varying shades of a light blueish white throughout, not representative of scene content, and having less entertainment or information value for the viewer.
  • the imposition of perceptual rules as thus given allows specificity in the form of parameters, and yet, once effected, has the effect of appearing to be intelligently choreographed. Results of applying perceptual rules in dominant color extraction can be used as previously given, so that such color information is made available to ambient light source 88 in a second rendered color space.
  • ambient light produced at L3 to emulate extraction region R3 as shown in FIG. 1 can have a chromaticity that provides a perceptual extension of a phenomenon in that region, such as the moving fish as shown. This can multiply the visual experience and provide hues which are appropriate and not garish or unduly mismatched.
  • In FIG. 44, there is shown schematically a number of possible functional groups to perform dominant color extraction using more general perceptual rules in accordance with user preferences according to the invention, so as to produce a preferred ambient broadcast. As can be seen from FIG. 44, the perceptual rules previously discussed can be expanded, especially if added user preferences are to be taken into account.
  • Chromaticity rules can be applied as previously described, with Simple Chromaticity Transforms SCT, Pixel Weighting as a Function of Scene Content PF8, and Extended Extraction / Search EE8 as shown. Chromaticity rules can be augmented by adding explicit Luminance Perceptual Rules, as shown.
  • Temporal Delivery Perceptual Rules TDPR as shown can allow faster or slower time delivery or altered time development of ambient broadcasts. This can include slowing down or speeding up changes in luminance and/or chromaticities, but also more complex functions or operators which selectively speed up or slow down ambient lighting effects in response to Scene Content as read from functional step PF8 as shown, or other factors.
  • Spatial Extraction Perceptual Rules SEPR can allow, as previously discussed, weighted averages of pixel chromaticities using pixel weighting function W which take into account pixel position (i, j) - but now these spatial and other general perceptual rules are also a function of Possible Explicit Indicated User Preferences as shown.
  • this set of general perceptual rules, shown at the figure upper right as General Perceptual Rules in Accordance with User Preferences, is developed in conjunction with - and can be altered as a function of - Possible Explicit Indicated User Preferences as shown in the upper left, and the result is a Preferred Ambient Broadcast PAB as shown.
  • Each of the arrows from the explicit indicated user preferences into the general perceptual rules signifies symbolically and illustratively the effect of a particular user preference, and includes any and all inputs that communicate a user preference to be used to influence the character and effect of perceptual rules that affect or effect a preferred ambient broadcast - see Definitions section.
  • user preferences can include steps which affect the general perceptual rules and therefore the nature and character of the ambient light produced, e.g., lively, responsive, bright, etc. - versus subdued, slow-moving, dim, or subtle.
  • FIG. 45 shows symbolically some possible components, methods, and signal sources to communicate user preferences, including some that can use an existing component system that may not have been designed for communication of explicit indicated user preferences. While it is contemplated that a remote control device or similar user-operated control can allow entry of explicit indicated user preferences directly, other inputs of user preferences can include the detection of particular selections or selection behavior on a user-operated control. One can work with a default set of user preferences that influence the general perceptual rules, and then, for example, one can allow for more lively preferences to coincide with extreme value selection from the user-operated control.
  • This can allow, for example, for toggling between a set of user preferences (e.g., lively versus subdued).
  • the up/down control can be a bona fide up/down function, or can be any up/down change of a value, such as a volume change or channel change.
  • a user preference can be communicated by selection of an extreme value on the user-operated control, such as the value K selected to 33/40 ... 970/980/990/999 as illustratively shown.
  • a room vibration sensor VS can sense dancing or loud voices, while a sound sensor SS can perform similar functions.
  • a light sensor LS can allow for brighter, more lively ambient broadcasts during daylight hours, for example, while darkness can allow for lower luminances and perhaps a lesser degree of Darkness Support and/or Color Support as discussed for FIG. 41.
  • a known graphical user interface GUI, such as choices displayed on video display D or any other display (such as a display on remote control or user-operated control RC), can be used to input user preferences.
  • the user preferences can be displayed as choices with pre-set characteristics as a package, or the user can be asked to select specific parameter-based general perceptual rules, such as the degree to which the Brightness or Darkness Support of FIG. 41 is effected.
  • the degree of Darkness Support can, for example, be selected on a scale of 1 to 10, or can be more specific, even including having the user specify actions to be taken with regard to specific phenomena, such as the displaying of certain chromaticities, such as whether or not one wants to view bright, fully saturated colors, or partially saturated colors; or whether one wants to limit total or maximum luminance for the ambient broadcast from ambient light source 88.
  • the data so encoded does not have to be absolute, but can include scripts that use any of the other methods given here to further specify user preferences used for a viewing session.
  • a choice selector 155 can allow a choice, including a choice presented by receiving video meta data VMD.
  • any selector or button on a user-operated control can communicate a user preference or toggle between established user preferences by a sustained or repeated actuation call, such as by pressing choice selector 155 continuously or pressing repeated times, even though it is not strictly necessary for the functionality it otherwise represents. For example, sustained or repeated pressing of an ON button or channel change button, or an actuating call for same, can set a user preference.
  • Sustained pressing of the front of remote control RC can communicate action and brightness, including emphasizing dominant color extraction selected from the display center and being very responsive to new image features, while sustained pressing of the back of the remote control can signal the opposite in a preferred ambient broadcast.
  • One can also use a known motion sensor MS, as shown, inside the user-operated control to establish a user preference.
  • a motion sensor can be, for example, a simple accelerometer using capacitive or magnetic effects to provide sensing of motion.
  • the front of the remote can be pitched back and forth as shown using the heavy arrow at the figure lower right, so as to communicate a preference, while the back can be pitched for another indication.
  • the motion sensor can also sense back and forth pitching in 3 dimensions, allowing, for example, for six degrees of freedom in selecting an explicit indicated user preference.
  • the nature and character of the preferred ambient broadcast produced can be a function of choices or options obtained through user preferences.
  • the luminance of an ambient broadcast is a very important parameter to be set according to user preferences.
  • FIGS. 46 and 47 show cartesian plots of a number of waveforms representing luminance as a function of time, for various luminance perceptual rules following different illustrative user preferences UP1, UP2, UP3, UP4, UP5, and UP6.
  • the first illustrative waveform, resulting from a user preference choice UP1, represents a normal luminance profile or delivery that comes from the chromaticity perceptual rules and dominant color extraction previously described.
  • the second waveform results from applying user preference choice UP2 as shown, and is a halved luminance profile for lower broadcast brightness, which might result from a desire for a subdued preferred ambient broadcast and can be effected easily using known methods.
  • the third illustrative waveform shows a luminance profile which is the result of applying a user preference choice UP3, and offers an ambient broadcast only when the nominal luminance called for using dominant color extraction exceeds a luminance suppressive threshold LT as shown, so that the dotted luminance lines represent luminance not expressed (a dark ambient broadcast), while the solid lines represent the luminance of ambient light produced.
  • the fourth user preference choice UP4 shows a luminance ceiling cap or limit on the maximum brightness or luminance, so that the nominal luminance developed by dominant color extraction cannot exceed a value, such as luminance ceiling L9 as shown.
  • a luminance floor L1, as shown in the next waveform using a user preference UP5 as shown, can allow for a minimum luminance regardless of what is being developed by the dominant color extraction methods specified here, as can be seen.
  • a luminance transform LX, as shown, associated with user preference choice UP6, can allow for a complex functional change - not just ceilings, floors, thresholds, or multipliers - in the expressed luminance for preferred ambient broadcast.
  • Luminance transform LX can take any functional form, including the use of operators, and as a function of any variable available in this teaching, to alter the expressed luminance, increasing or decreasing the luminance from what it otherwise would be without using user preferences to alter general perceptual rules. A sketch of these luminance rules follows.
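A sketch of the UP1-UP6 style luminance rules; the threshold LT, ceiling L9, and floor L1 values are assumptions, and the transform LX is shown as an arbitrary callable:

```python
def apply_luminance_rule(L, pref, LT=0.2, L9=0.8, L1=0.1, LX=None):
    """Apply one illustrative user-preference luminance rule (FIGS. 46-47)
    to a normalized [0, 1] luminance L."""
    if pref == 'UP2':
        return 0.5 * L                  # halved luminance profile
    if pref == 'UP3':
        return L if L > LT else 0.0     # suppressive threshold: dark below LT
    if pref == 'UP4':
        return min(L, L9)               # luminance ceiling cap
    if pref == 'UP5':
        return max(L, L1)               # luminance floor
    if pref == 'UP6' and LX is not None:
        return LX(L)                    # arbitrary luminance transform LX
    return L                            # UP1: normal profile
```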
  • FIG. 48 shows schematically a number of Simple Chromaticity Transforms SCT effecting a number of possible chromaticity perceptual rules according to Explicit Indicated User Preferences (shown).
  • a Lockout of Selected Chromaticities as shown can effect an elimination or lockout of certain chromaticities, such as blood red, or other colors preselected among colors deemed to be lively and only for use when a lively preferred ambient broadcast is wanted.
  • This and other such general perceptual rules can be effected by software design, and/or by the User Interface and Preferences Memory U2 (e.g., FIGS. 3 and 12).
  • a less drastic step is to perform a Change in Weight Given to a Chromaticity (shown), such as by giving a lesser weight to a selected chromaticity in a pixel weighting function W, so that this color is less influential in the process of dominant color extraction DCE.
  • simple chromaticity transform SCT does not have to involve a naked set of simple substitutions of one chromaticity for another in the sense that the character of the dominant color DC selected can be changed in a systematic fashion to satisfy general objectives.
  • explicit indicated user preferences can be used to offer a variety of different degrees of color saturation.
  • a Change in Saturation (shown) can be a very effective tool to provide for differing appearances and characters in ambient broadcasts.
  • FIG. 49 shows schematically how the quality or degree of execution of two perceptual rules as shown in FIG. 41 can be altered by user preferences.
  • the figure shows symbolically the Darkness Support Perceptual Rule and the Color Support Perceptual Rule as being fully enabled by user preference choices UP2 and UP4, respectively (dark heavy arrows as shown), and partially (or fully) disabled by user preference choices UP1 and UP3, respectively (light dotted arrows as shown).
  • the extent to which one truncates or reduces the weight assigned to bright/colored pixels, or grey/white pixels can be altered as a function of user preferences.
  • FIG. 50 shows schematically the extraction of video meta data from an audio-video signal to affect perceptual rules according to the invention, as previously shown in FIG. 45, but with a known buffer B which can - but does not have to - store video meta data VMD, auxiliary data, or sub-code data associated with video content or an audio-video signal AVS.
  • buffer B can extract or derive parameters that allow specifying general perceptual rules for use at a time not synchronous with audio-video signal AVS or playback of video content.
  • the buffer can be a memory device, or simply a registry or lookup table or other software function that allows call-up of meta data, auxiliary data, or subcode - or derivatives thereof - for use in providing a preferred ambient broadcast, especially using temporal delivery perceptual rules.
  • FIG. 51 shows cartesian plots of a number of waveforms representing luminance - or chromaticity (shown Luminance / x / y) - as a function of time, for various temporal delivery rules following or resulting from different illustrative user preferences UP1, UP2, and UP3.
  • the first illustrative waveform, resulting from a user preference choice UP1, represents a normal temporal delivery profile that comes from the chromaticity perceptual rules and dominant color extraction previously described.
  • the second waveform results from applying user preference choice UP2 as shown, and is a slowed down temporal delivery profile for lower speeds of change in broadcast parameters.
  • imposing this rule might leave open the possibility of truncating or ignoring subsequent changes in luminance or chromaticity, because the time development of the expressed luminance or chromaticity parameter lags behind the corresponding real time parameter ordinarily developed by dominant color extraction.
  • the third illustrative waveform shows a luminance profile which is the result of applying a user preference choice UP3, and offers as shown an ambient broadcast with a sped up temporal delivery. This might require the use of buffer B from FIG. 50.
  • FIG. 52 gives a simple front surface view of a video display as shown in FIG. 34, showing schematically and illustratively an image feature J8 in a center region C extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences - a partially enabled extraction (light arrow, user preference choice UP1), and a fully enabled extraction (heavy arrow, user preference choice UP2).
  • In FIG. 53, the degree to which extraction occurs throughout all of a center region C can be varied in a similar manner, as shown.
  • FIG. 53 gives a simple front surface view of a video display as shown in FIG. 52, but showing schematically a center region extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences.
  • Either of these spatial extraction perceptual rules can be effected, for example, by altering pixel weighting function W to allow either a great weight to be given to newly arriving features (J8) or a region (e.g., center region C), or to allow a lesser weight so as to register relatively little influence from same.
  • Center region C is chosen for illustrative purposes, and any display area can be singled out for altered treatment in accordance with user preferences operating to affect the general perceptual rules.
  • the Darkness Support and Color Support perceptual rules of FIG. 41 can be altered so that the degree to which one reduces weighting for bright pixels, and/or the degree to which one performs extended dominant color extraction EE8, and/or the degree to which one reduces or increases luminance, is a function of explicit indicated user preferences which the software designer has formulated to achieve a particular visual effect.
  • the extent to which one executes extended dominant color extraction generally can be modulated.
  • ambient light source 88 can embody various diffuser effects to produce light mixing, as well as translucence or other phenomena, such as by use of lamp structures having a frosted or glazed surface; ribbed glass or plastic; or apertured structures, such as by using metal structures surrounding an individual light source.
  • any number of known diffusing or scattering materials or phenomena can be used, including those obtained by exploiting scattering from small suspended particles; clouded plastics or resins; preparations using colloids, emulsions, or globules 1-5 µm or less, such as less than 1 µm, including long-life organic mixtures; gels; and sols, the production and fabrication of which is known by those skilled in the art.
  • Scattering phenomena can be engineered to include Rayleigh scattering for visible wavelengths, such as for blue production for blue enhancement of ambient light.
  • the colors produced can be defined regionally, such as an overall bluish tint in certain areas or regional tints, such as a blue light-producing top section (ambient light L1 or L2).
  • Ambient lamps can also be fitted with a goniophotometric element, such as a cylindrical prism or lens which can be formed within, integral to, or inserted within a lamp structure. This can allow special effects where the character of the light produced changes as a function of the position of the viewer.
  • Other optical shapes and forms can be used, including rectangular, triangular, or irregularly-shaped prisms or shapes, and they can be placed upon or integral to an ambient light unit or units.
  • The result is that rather than yielding an isotropic output, the effect gained can be infinitely varied, e.g., bands of interesting light cast on surrounding walls, objects, and surfaces placed about an ambient light source, making a sort of light show in a darkened room as the scene elements, color, and intensity change on a video display unit.
  • the effect can be a theatrical ambient lighting element which changes light character very sensitively as a function of viewer position - such as viewing bluish sparkles, then red light - when one is getting up from a chair or shifting viewing position when watching a home theatre.
  • the number and type of goniophotometric elements that can be used is nearly unlimited, including pieces of plastic, glass, and the optical effects produced from scoring and mildly destructive fabrication techniques.
  • Ambient lamps can be made to be unique, and even interchangeable, for different theatrical effects. And these effects can be modulatable, such as by changing the amount of light allowed to pass through a goniophotometric element, or by illuminating different portions (e.g., using sublamps or groups of LEDs) of an ambient light unit.
  • Video signal AVS can of course be a digital datastream and contain synchronization bits and concatenation bits; parity bits; error codes; interleaving; special modulation; burst headers; and desired metadata such as a description of the ambient lighting effect (e.g., "lightning storm"; "sunrise"; etc.), and those skilled in the art will realize that functional steps given here are merely illustrative and do not include, for clarity, conventional steps or data. Using these teachings to allow user preferences to alter general perceptual rules, the user interface and preferences memory U2 of FIGS. 3 and 12 can be used to change the ambient lighting system behavior, such as changing the degree of color fidelity to the video content of video display D desired; changing flamboyance, including the extent to which any fluorescent colors or out-of-gamut colors are broadcast into ambient space, or how quickly or greatly responsive the ambient light is to changes in video content, such as by exaggerating the luminance or other quality of changes in the preferred ambient broadcast.
  • This can include advanced content analysis which can make subdued tones for movies or content of certain character.
  • Video content containing many dark scenes can influence behavior of the ambient light source 88, causing a dimming of broadcast ambient light, while flamboyant or bright tones can be used for certain other content, like lots of flesh tone or bright scenes (a sunny beach, a tiger on savannah, etc.).
  • the description is given here to enable those of ordinary skill in the art to practice the invention. Many configurations are possible using the instant teachings, and the configurations and arrangements given here are only illustrative. Not all objectives sought here need be practiced - for example, specific transformations to a second rendered color space can be eliminated from the teachings given here without departing from the invention, particularly if both rendered color spaces RGB and R'G'B' are similar or identical. In practice, the methods taught and claimed might appear as part of a larger system, such as an entertainment center or home theatre center.


Abstract

Extracting video content encoded in a rendered color space for broadcast by an ambient light source, using perceptual rules in concert with user preferences for intelligent dominant color selection. Steps include quantizing the video color space; performing dominant color extraction by using a mode, median, mean, or weighted average of pixel chromaticities; applying perceptual rules to further derive dominant chromaticities via [1] chromaticity transforms; [2] a weighted average using a pixel weighting function influenced by scene content; [3] extended dominant color extraction where pixel weighting is reduced for majority pixels; [4] spatial extraction, temporal delivery, and luminance perceptual rules; and [5] transforming the dominant color chosen to the ambient light color space using tristimulus matrices. All perceptual rules are modulated in response to explicit indicated user preferences obtained via remote controls, sensors, video meta data, or a graphical user interface.

Description

Ambient Lighting Derived from Video Content and with Broadcast Influenced by Perceptual Rules and User Preferences
This invention relates to production and setting of ambient lighting effects using multiple light sources, and typically based on, or associated with, video content, such as from a video display or display signal. More particularly, it relates to a method to account for user preferences in extracting dominant color information, in conjunction with perceptual rules, from sampled or subsampled video content in real time, and to perform color mapping transformations from the color space of the video content to that which best allows driving a plurality of ambient light sources.
Engineers have long sought to broaden the sensory experience obtained consuming video content, such as by enlarging viewing screens and projection areas, modulating sound for realistic 3-dimensional effects, and enhancing video images, including broader video color gamuts, resolution, and picture aspect ratios, such as with high definition (HD) digital television and video systems. Moreover, film, TV, and video producers also try to influence the experience of the viewer using visual and auditory means, such as by clever use of color, scene cuts, viewing angles, peripheral scenery, and computer-assisted graphical representations. This would include theatrical stage lighting as well. Lighting effects, for example, are usually scripted - synchronized with video or play scenes - and reproduced with the aid of a machine or computer programmed with the appropriate scene scripts encoded with the desired schemes.
In the prior art digital domain, automatic adaptation of lighting to fast changes in a scene, including unplanned or unscripted scenes, has not been easy to orchestrate in large part because of the overhead of large high bandwidth bit streams required using present systems.
Philips (Netherlands) and other companies have disclosed means for changing ambient or peripheral lighting to enhance video content for typical home or business applications, using separate light sources far from the video display, and for many applications, some sort of advance scripting or encoding of the desired lighting effects. Ambient lighting added to a video display or television has been shown to reduce viewer fatigue and improve realism and depth of experience.
Sensory experiences are naturally a function of aspects of human vision, which uses an enormously complex sensory and neural apparatus to produce sensations of color and light effects. Humans can distinguish perhaps 10 million distinct colors. In the human eye, for color-receiving or photopic vision, there are three sets of approximately 2 million sensory bodies called cones which have absorption distributions which peak at 445, 535, and 565 nm light wavelengths, with a great deal of overlap. These three cone types form what is called a tristimulus system and are called B (blue), G (green), and R (red) for historical reasons; the peaks do not necessarily correspond with those of any primary colors used in a display, e.g., commonly used RGB phosphors. There is also interaction for scotopic, or so-called night vision bodies called rods. The human eye typically has 120 million rods, which influence video experiences, especially for low light conditions such as found in a home theatre.
Color video is founded upon the principles of human vision, and well known trichromatic and opponent channel theories of human vision have been incorporated into our understanding of how to influence the eye to see desired colors and effects which have high fidelity to an original or intended image. In most color models and spaces, three dimensions or coordinates are used to describe human visual experience.
Color video relies absolutely on metamerism, which allows production of color perception using a small number of reference stimuli, rather than actual light of the desired color and character. In this way, a whole gamut of colors is reproduced in the human mind using a limited number of reference stimuli, such as well known RGB (red, green, blue) tristimulus systems used in video reproduction worldwide. It is well known, for example, that nearly all video displays show yellow scene light by producing approximately equal amounts of red and green light in each pixel or picture element. The pixels are small in relation to the solid angle they subtend, and the eye is fooled into perceiving yellow; it does not perceive the green or red that is actually being broadcast. There exist many color models and ways of specifying colors, including well known CIE
(Commission Internationale de l'Eclairage) color coordinate systems in use to describe and specify color for video reproduction. Any number of color models can be employed using the instant invention, including application to unrendered opponent color spaces, such as the CIE L*U*V* (CIELUV) or CIE L*a*b* (CIELAB) systems. The CIE established in 1931 a foundation for all color management and reproduction, and the result is a chromaticity diagram which uses three coordinates, x, y, and z. A plot of this three dimensional system at maximum luminosity is universally used to describe color in terms of x and y, and this plot, called the 1931 x,y chromaticity diagram, is believed to be able to describe all perceived color in humans. This is in contrast to color reproduction, where metamerism is used to fool the eye and brain. Many color models or spaces are in use today for reproducing color by using three primary colors or phosphors, among them Adobe RGB, NTSC RGB, etc. It is important to note, however, that the range of all possible colors exhibited by video systems using these tristimulus systems is limited. The NTSC (National Television Standards Committee) RGB system has a relatively wide range of colors available, but this system can only reproduce half of all colors perceivable by humans. Many blues and violets, blue-greens, and oranges/reds are not rendered adequately using the available scope of traditional video systems.
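For reference, the x and y coordinates of the 1931 chromaticity diagram are obtained by normalizing the CIE tristimulus values X, Y, and Z, so that chromaticity is described independently of luminance:

```latex
x = \frac{X}{X+Y+Z}, \qquad
y = \frac{Y}{X+Y+Z}, \qquad
z = \frac{Z}{X+Y+Z} = 1 - x - y
```

Since x + y + z = 1, the pair (x, y) suffices, which is why the chromaticity diagram is two-dimensional.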
Furthermore, the human visual system is endowed with qualities of compensation and discernment whose understanding is necessary to design any video system. Color in humans can occur in several modes of appearance, among them, object mode and illuminant mode.
In object mode, the light stimulus is perceived as light reflected from an object illuminated by a light source. In illuminant mode, the light stimulus is seen as a source of light. Illuminant mode includes stimuli in a complex field that are much brighter than other stimuli. It does not include stimuli known to be light sources, such as video displays, whose brightness or luminance is at or below the overall brightness of the scene or field of view so that the stimuli appear to be in object mode.
Remarkably, there are many colors which appear only in object mode, among them brown, olive, maroon, grey, and beige flesh tone. There is no such thing, for example, as a brown illuminant source of light, such as a brown-colored traffic light. For this reason, ambient lighting supplements to video systems which attempt to add object colors cannot do so using direct sources of bright light. No combination of bright red and green sources of light at close range can reproduce brown or maroon, and this limits choices considerably. Only spectral colors of the rainbow, in varying intensities and saturation, can be reproduced by direct observation of bright sources of light. This underscores the need for fine control over ambient lighting systems, such as to provide low intensity luminance output from light sources with particular attention to hue management. This fine control is not presently addressed in a way that permits fast-changing and subtle ambient lighting under present data architectures.
Video reproduction can take many forms. Spectral color reproduction allows exact reproduction of the spectral power distributions of the original stimuli, but this is not realizable in any video reproduction that uses three primaries. Exact color reproduction can replicate human visual tristimulus values, creating a metameric match to the original, but overall viewing conditions for the picture and the original scene must be similar to obtain a similar appearance. Overall conditions for the picture and original scene include the angular subtense of the picture, the luminance and chromaticity of the surround, and glare. One reason that exact color reproduction often cannot be achieved is because of limitations on the maximum luminance that can be produced on a color monitor. Colorimetric color reproduction provides a useful alternative where tristimulus values are proportional to those in the original scene. Chromaticity coordinates are reproduced exactly, but with proportionally reduced luminances. Colorimetric color reproduction is a good reference standard for video systems, assuming that the original and the reproduced reference whites have the same chromaticity, the viewing conditions are the same, and the system has an overall gamma of unity. Equivalent color reproduction, where chromaticity and luminances match the original scene, cannot be achieved because of the limited luminance generated in video displays.
Most video reproduction in practice attempts to achieve corresponding color reproduction, where colors reproduced have the same appearance that colors in the original would have had if they had been illuminated to produce the same average luminance level and the same reference white chromaticity as that of the reproduction. Many, however, argue that the ultimate aim for display systems is in practice preferred color reproduction, where preferences of the viewer influence color fidelity. For example, suntanned skin color is preferred to average real skin color, and sky is preferred bluer and foliage greener than they really are. Even if corresponding color reproduction is accepted as a design standard, some colors are more important than others, such as flesh tones, the subject of special treatment in many reproduction systems such as the NTSC video standard. In reproducing scene light, chromatic adaptation to achieve white balance is important.
With properly adjusted cameras and displays, whites and neutral grays are typically reproduced with the chromaticity of CIE standard daylight illuminant D65. By always reproducing a white surface with the same chromaticity, the system is mimicking the human visual system, which inherently adapts perceptions so that white surfaces always appear the same, whatever the chromaticity of the illuminant, so that a white piece of paper will appear white, whether it is found on a bright sunlit day at the beach or in an incandescent-lit indoor scene. In color reproduction, white balance adjustment usually is made by gain controls on the R, G, and B channels.
The light output of a typical color receiver is typically not linear, but rather follows a power-law relationship to applied video voltages. The light output is proportional to the video-driving voltage raised to the power gamma, where gamma is typically 2.5 for a color CRT (cathode ray tube), and 1.8 for other types of light sources. Compensation for this factor is made via three primary gamma correctors in camera video processing amplifiers, so that the primary video signals that are encoded, transmitted, and decoded are in fact not R, G, and B, but R^(1/gamma), G^(1/gamma), and B^(1/gamma). Colorimetric color reproduction requires that the overall gamma for video reproduction - including camera, display, and any gamma-adjusting electronics - be unity, but when corresponding color reproduction is attempted, the luminance of the surround takes precedence. For example, a dim surround requires a gamma of about 1.2, and a dark surround requires a gamma of about 1.5 for optimum color reproduction. Gamma is an important implementation issue for RGB color spaces.
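To make the power-law relationship concrete, a minimal sketch follows; the function names and the surround-compensation example are illustrative assumptions, not part of any broadcast standard:

```python
def gamma_encode(linear, gamma=2.5):
    """Camera-side correction: encode linear light as V = L**(1/gamma)."""
    return linear ** (1.0 / gamma)

def gamma_decode(signal, gamma=2.5):
    """Display-side response: light output follows V**gamma."""
    return signal ** gamma

# A mid-grey linear intensity survives the round trip unchanged,
# giving the overall unity gamma required for colorimetric reproduction.
encoded = gamma_encode(0.5)                        # ~0.758 for gamma = 2.5
assert abs(gamma_decode(encoded) - 0.5) < 1e-9

# A dim viewing surround calls for an overall gamma of about 1.2 instead:
dim_surround = gamma_decode(encoded, gamma=2.5 * 1.2)   # equals 0.5 ** 1.2
```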
Most color reproduction encoding uses standard RGB color spaces, such as sRGB, ROMM RGB, Adobe RGB 98, Apple RGB, and video RGB spaces such as that used in the NTSC standard. Typically, an image is captured into a sensor or source device space, which is device- and image-specific. It may be transformed into an unrendered image space, which is a standard color space describing the original's colorimetry (see Definitions section).
However, video images are nearly always directly transformed from a source device space into a rendered image space (see Definitions section), which describes the color space of some real or virtual output device such as a video display. Most existing standard RGB color spaces are rendered image spaces. For example, source and output spaces created by cameras and scanners are not CIE-based color spaces, but spectral spaces defined by spectral sensitivities and other characteristics of the camera or scanner.
Rendered image spaces are device-specific color spaces based on the colorimetry of real or virtual device characteristics. Images can be transformed into rendered spaces from either rendered or unrendered image spaces. The complexity of these transforms varies, and can include complicated image dependent algorithms. The transforms can be non-reversible, with some information of the original scene encoding discarded or compressed to fit the dynamic range and gamut of a specific device.
There is currently only one unrendered RGB color space that is in the process of becoming a standard, ISO RGB defined in ISO 17321, most often used for color characterization of digital still cameras. In most applications today, images are converted into a rendered color space for archiving and data transfer, including video signals. Converting from one rendered image or color space to another can cause severe image artifacts. The more mismatched the gamuts and white points are between two devices, the stronger the negative effects. One shortcoming in prior art ambient light display systems is that extraction from video content of representative colors for ambient broadcast can be problematic. For example, color-averaging of pixel chromaticities often results in grays, browns, or other color casts that are not perceptually representative of a video scene or image. Colors derived from simple averaging of chromaticities often look smudged and wrongly chosen, particularly when contrasted to an image feature such as a bright fish, or a dominant background such as a blue sky.
Another problem in prior art ambient light display systems is that no specific method is given to provide for synchronous real time operation to transform rendered tristimulus values from video to those of ambient light sources to give proper colorimetry and appearance. For example, output from LED ambient light sources is often garish, with limited or skewed color gamuts - and generally, hue and chroma are difficult to assess and reproduce. For example, US Patent 6,611,297 to Akashi et al. deals with realism in ambient lighting, but no specific method is given to ensure correct and pleasing chromaticity, and the teaching of Akashi '297 does not allow for analyzing video in real time, but rather needs a script or the equivalent.
In addition, setting of ambient light sources using gamma corrected color spaces from video content often results in garish, bright colors. Another serious problem in the prior art is the large amount of transmitted information that is needed to drive ambient light sources as a function of real time video content, and to suit a desired fast-changing ambient light environment where highly intelligent color selection is desired to fit a number of user preferences for ambient lighting. In particular, average or other chromaticities extracted for use in ambient lighting effects often are not producible (e.g., browns) or are not preferred for perceptual reasons. For example, if a dominant color (e.g., a brown) is indicated, the ambient lighting system acting upon that indication can produce by default another color (e.g., a nearest color) in its light space that it is capable of producing (e.g., purple). However, this color chosen for production may not be preferred, as it may not be perceptually correct or pleasing.
Ambient light triggering during dark scenes is also often garish, too bright, and not possessed of a chromaticity which seems to match that of the scene content. Ambient light triggering during light scenes can result in production of an ambient color that appears weak and has insufficient color saturation. Furthermore, some aspects of a scene, e.g., a blue sky, might be preferable to use for dominant color extraction to inform an ambient lighting system, while others, e.g., cloud cover, might be less preferable. There is also no mechanism in the prior art for continued exploration of scene elements shorn of the distraction of a majority, or large number, of pixels whose chromaticity is not preferred according to perceptual preferences. Another problem in the prior art is that newly appearing video scene features are often not represented or are under-represented in dominant color extraction and selection.
In addition, ambient lighting is often set without taking into account user preferences as to the brightness, color, time-development, and general character of the ambient light produced. For example, some users may prefer a soft, slow moving exposition of ambient lighting effects, with subdued colors and slow changes, while others might prefer fast moving, bright ambient broadcasts which might reflect quite graphically every change in video content (e.g., a newly appearing feature like a fish). This is not easy to achieve, and there does not exist in the prior art a method for imposing perceptual rules to alleviate these problems.
It is therefore advantageous to expand the possible gamut of colors produced by ambient lighting in conjunction with a typical tristimulus video display system, while exploiting characteristics of the human eye, such as changes in relative visual luminosity of different colors as a function of light levels. This can be done by modulating or changing the color and light character delivered to the video user using an ambient lighting system that uses to good advantage the compensating effects, sensitivities, and other peculiarities of human vision, and that provides ambient output that not only appears properly derived from video content, but also makes clever use of the many potential dominant colors that lie in a scene.
It is also advantageous to create a quality ambient atmosphere free from the effects of gamma-induced distortion. It is further desired to be able to provide a method for providing emulative ambient lighting through dominant color extracts drawn from selected video regions using an economical data stream that encodes average or characterized color values. It is yet further desired to reduce the required size of such a datastream further, to allow imposition of perceptual rules to improve viewability and fidelity, and to allow exercise of perceptual prerogatives in choosing chromaticities and luminances selected for ambient broadcast. It is yet further desired that the character and effect of these perceptual prerogatives or rules be influenced by explicit indicated user preferences to allow the nature of the ambient broadcast to differ when desired.
Information about video and television engineering, compression technologies, data transfer and encoding, human vision, color science and perception, color spaces, colorimetry and image rendering, including video reproduction, can be found in the following references which are hereby incorporated herein in their entirety: ref[1] Color Perception, Alan R. Robertson, Physics Today, December 1992, Vol 45, No 12, pp. 24-29; ref[2] The Physics and Chemistry of Color, 2ed, Kurt Nassau, John Wiley & Sons, Inc., New York © 2001; ref[3] Principles of Color Technology, 3ed, Roy S. Berns, John Wiley & Sons, Inc., New York © 2000; ref[4] Standard Handbook of Video and Television Engineering, 4ed, Jerry Whitaker and K. Blair Benson, McGraw-Hill, New York © 2003. Methods given for various embodiments of the invention include using pixel level statistics or the functional equivalent to determine or extract one or more dominant colors in a way which presents as little computational load as possible, but at the same time provides for pleasing and appropriate chromaticities selected to be dominant colors in accordance with perceptual rules. The invention relates to a method for dominant color extraction from video content encoded in a rendered color space to produce, using perceptual rules, a dominant color for emulation by an ambient light source. Possible method steps include: [1] Performing dominant color extraction from pixel chromaticities from the video content in the rendered color space to produce a dominant color by extracting any of: [a] a mode of the pixel chromaticities; [b] a median of the pixel chromaticities; [c] a weighted average by chromaticity of the pixel chromaticities; [d] a weighted average of the pixel chromaticities using a pixel weighting function that is a function of any of pixel position, chromaticity, and luminance; [2] Further deriving the chromaticity of the dominant color in accordance with a perceptual rule, the perceptual rule chosen from any of: [a] a simple chromaticity transform; [b] a weighted average using the pixel weighting function so further formulated as to exhibit an influence from scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in the video content; [c] an extended dominant color extraction using a weighted average where the pixel weighting function is formulated as a function of scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in the video content, with the pixel weighting function further formulated such that weighting is at least reduced for majority pixels; and [3] Transforming the dominant color from the rendered color space to a second rendered color space so formed as to allow driving the ambient light source.
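As an illustration of the extraction options in step [1], here is a minimal Python sketch, assuming pixel chromaticities are available as (x, y) pairs; the function names and the weighting callback signature are hypothetical:

```python
from collections import Counter
from statistics import median

def dominant_by_mode(chromaticities):
    """Step [1][a]: the most frequently occurring pixel chromaticity."""
    return Counter(chromaticities).most_common(1)[0][0]

def dominant_by_median(chromaticities):
    """Step [1][b]: per-coordinate median of the pixel chromaticities."""
    xs, ys = zip(*chromaticities)
    return (median(xs), median(ys))

def dominant_by_weighted_average(pixels, weight):
    """Steps [1][c]-[d]: weighted average, where weight(i, j, x, y, lum)
    may depend on pixel position, chromaticity, and luminance."""
    total = wx = wy = 0.0
    for (i, j), (x, y), lum in pixels:
        w = weight(i, j, x, y, lum)
        total += w
        wx += w * x
        wy += w * y
    return (wx / total, wy / total)

# Example: the mode of a small chromaticity list.
pts = [(0.30, 0.60), (0.30, 0.60), (0.64, 0.33)]
print(dominant_by_mode(pts))            # -> (0.30, 0.60)
```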
If desired, the pixel chromaticities (or the rendered color space) can be quantized, and this can be done by a number of methods (see Definitions section), where the goal is to ease the computational burden by seeking a reduction in possible color states, such as results from assignment of a larger number of chromaticities (e.g., pixel chromaticities) to a smaller number of assigned chromaticities or colors; or a reduction in pixel numbers by a selection process that picks out selected pixels; or binning to produce representative pixels or superpixels. If this quantizing of the rendered color space is performed in part by binning the pixel chromaticities into at least one superpixel, the superpixel thus produced can be of a size, orientation, shape, or location formed in conformity with an image feature. Assigned colors used in the quantization process can be selected to be a regional color vector that is not necessarily in the rendered color space, such as in the second rendered color space. Other embodiments of the method include one in which the simple chromaticity transform chooses a chromaticity found in the second rendered color space used for ambient light production.
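A sketch of such quantization follows, assuming a small illustrative set of assigned colors (regional color vectors); the palette values are placeholders, not a device characterization:

```python
# Hypothetical assigned colors (regional color vectors) in CIE (x, y).
ASSIGNED_COLORS = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.313, 0.329)]

def assign_color(chromaticity):
    """Quantize: map a pixel chromaticity to the nearest assigned color,
    reducing the number of possible color states."""
    x, y = chromaticity
    return min(ASSIGNED_COLORS,
               key=lambda c: (c[0] - x) ** 2 + (c[1] - y) ** 2)

def bin_superpixel(block):
    """Bin a block of pixel chromaticities into one superpixel by
    averaging; the block's shape can follow an image feature."""
    xs, ys = zip(*block)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```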
One can also formulate the pixel weighting function so as to provide darkness support by: [4] assessing the video content to establish that a scene brightness in the scene content is low; and then [5] performing any of: [a] using the pixel weighting function so further formulated to reduce weighting of bright pixels; and [b] broadcasting a dominant color obtained using reduced luminance relative to that which would otherwise be produced.
Alternatively, one can also formulate the pixel weighting function so as to provide color support by [6] assessing the video content to establish that a scene brightness in the scene content is high; and then [7] performing any of: [a] using the pixel weighting function so further formulated to reduce weighting of bright pixels; and [b] performing step [2][c].
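A sketch of how darkness support and color support might shape the pixel weighting function is given below; the brightness thresholds and weight values are illustrative assumptions:

```python
def pixel_weight(lum, scene_brightness, dark_scene=0.2, bright_scene=0.7):
    """Reduce the weight of bright pixels when the whole scene is dark
    (darkness support) or bright (color support), so the ambient
    broadcast is not dominated by glints or washed-out whites."""
    if scene_brightness < dark_scene and lum > 0.5:
        return 0.1   # darkness support: suppress bright outliers
    if scene_brightness > bright_scene and lum > 0.9:
        return 0.2   # color support: de-emphasize near-whites
    return 1.0

def broadcast_luminance(dominant_lum, scene_brightness, floor=0.05):
    """Darkness support may also dim the broadcast itself, keeping a
    small floor so the ambient light does not simply switch off."""
    return max(floor, dominant_lum * min(1.0, scene_brightness / 0.2))
```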
The extended dominant color extraction can be repeated individually for different scene features in the video content, forming a plurality of dominant colors, and step [1] can be repeated where each of the plurality of dominant colors is designated as a pixel chromaticity. Then, if desired, the above step [1] (dominant color extraction) can be repeated separately for pixel chromaticities in a newly appearing scene feature.
Quantizing of at least some pixel chromaticities from the video content in the rendered color space can be undertaken to form a distribution of assigned colors, and during step [1], at least some of the pixel chromaticities can be obtained from the distribution of assigned colors. Alternatively, the quantizing can comprise binning the pixel chromaticities into at least one superpixel.
If an assigned color distribution is made, at least one of the assigned colors can be a regional color vector that is not necessarily in the rendered color space, such as a regional color vector lying in the second rendered color space used to drive the ambient light source.
The method can also additionally comprise establishing at least one color of interest in the distribution of assigned colors and then extracting pixel chromaticities assigned thereto to derive a true dominant color to be designated ultimately as the dominant color.
The dominant color can comprise, in reality, a palette of dominant colors, each derived from applying the method.
The method can also be performed after quantizing the rendered color space, namely, quantizing at least some pixel chromaticities from the video content in the rendered color space to form a distribution of assigned colors, so that the dominant color extraction of step [1] draws upon the distribution of assigned colors (e.g., [a] a mode of the distribution of assigned colors, etc.). Then, in a similar manner, the pixel weighting function can be so formulated as to provide darkness support by: [4] assessing the video content to establish that a scene brightness in the scene content is low; and [5] performing any of: [a] using the pixel weighting function so further formulated to reduce weighting of assigned colors attributable to bright pixels; and [b] broadcasting a dominant color obtained using reduced luminance relative to that which would otherwise be produced. Likewise, the pixel weighting function can be so formulated as to provide color support by [6] assessing the video content to establish that a scene brightness in the scene content is high; and [7] performing any of: [a] using the pixel weighting function so further formulated to reduce weighting of assigned colors attributable to bright pixels; and [b] performing step [2][c]. The other steps can be altered accordingly to use assigned colors. The method can also optionally comprise [0] Decoding the video content in the rendered color space into a plurality of frames, and quantizing at least some pixel chromaticities from the video content in the rendered color space to form a distribution of assigned colors. In addition, one can optionally [3a] Transform the dominant color from the rendered color space to an unrendered color space; then [3b] Transform the dominant color from the unrendered color space to the second rendered color space. This can be assisted by [3c] matrix transformations of primaries of the rendered color space and second rendered color space to the unrendered color space using first and second tristimulus primary matrices; and deriving a transformation of the color information into the second rendered color space by matrix multiplication of the primaries of the rendered color space, the first tristimulus matrix, and the inverse of the second tristimulus matrix. Once a dominant color is chosen from the distribution of assigned colors, one can then go backwards, so to speak, to obtain actual pixel chromaticities to refine the dominant color. For example, as mentioned, one can establish at least one color of interest in the distribution of assigned colors and extract pixel chromaticities assigned thereto to derive a true dominant color to be designated as the dominant color. Thus, while the assigned colors can be a crude approximation of video content, the true dominant color can provide the correct chromaticity for ambient distribution, while still saving on computation that would otherwise be required.
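The matrix machinery of steps [3a]-[3c] might be sketched as follows, assuming known 3x3 tristimulus primary matrices M1 (video space to XYZ) and M2 (ambient space to XYZ); the coefficients shown are placeholders, not real device characterizations:

```python
import numpy as np

M1 = np.array([[0.41, 0.36, 0.18],    # video rendered space -> XYZ
               [0.21, 0.72, 0.07],
               [0.02, 0.12, 0.95]])
M2 = np.array([[0.49, 0.31, 0.20],    # ambient rendered space -> XYZ
               [0.18, 0.81, 0.01],
               [0.00, 0.01, 0.99]])

def rur_transform(rgb):
    """Steps [3a]-[3c]: rendered RGB -> unrendered XYZ -> second
    rendered R'G'B', via the inverse of the ambient primary matrix."""
    xyz = M1 @ rgb                    # step [3a]
    return np.linalg.inv(M2) @ xyz    # steps [3b]/[3c]

ambient_rgb = rur_transform(np.array([0.8, 0.4, 0.1]))
```

In practice the inverse of M2 would be computed once and cached, since the device primaries do not change frame to frame.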
The pixel chromaticities of step [1] can be obtained from an extraction region of any shape, size, or position, and one can broadcast ambient light of the dominant color from an ambient light source adjacent to the extraction region.
These steps can be combined in many ways to express various simultaneously applied perceptual rules, such as by establishing a plurality of criteria that must co-exist and compete for priority in dominant color extraction and selection. The unrendered color space that can be used for transformation to the ambient second rendered color space can be one of CIE XYZ; ISO RGB defined in ISO Standard 17321; Photo YCC; CIE LAB; or any other unrendered space. The steps taken to perform dominant color extraction and to impose perceptual rules can be substantially synchronous with the video signal, with ambient light broadcast from or around the video display using the color information in the second rendered color space.
The instant teachings take into account user preferences, including disclosure of a method for dominant color extraction from video content encoded in a rendered color space to produce, using perceptual rules in accordance with a user preference, a dominant color for emulation by an ambient light source, comprising:
[1] Performing dominant color extraction from pixel chromaticities from the video content in the rendered color space to produce a dominant color by extracting any of: [a] a mode of the pixel chromaticities; [b] a median of the pixel chromaticities; [c] a weighted average by chromaticity of the pixel chromaticities; [d] a weighted average of the pixel chromaticities using a pixel weighting function that is a function of any of pixel position, chromaticity, and luminance;
[2] Further deriving at least one of the luminance, the chromaticity, a temporal delivery, and a spatial extraction of the dominant color in accordance with respective perceptual rules to produce a preferred ambient broadcast, where the respective perceptual rules are varied in character and effect by at least one of a plurality of possible explicit indicated user preferences; and where the respective perceptual rules comprise at least one of:
[I] a luminance perceptual rule chosen from any of: [a] a luminance increase; [b] a luminance decrease; [c] a luminance floor; [d] a luminance ceiling; [e] a suppressive luminance threshold; and [f] a luminance transform;
[II] a chromaticity perceptual rule chosen from at least one of: [a] a simple chromaticity transform; [b] a weighted average using the pixel weighting function so further formulated as to exhibit an influence from scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in the video content; [c] an extended dominant color extraction using a weighted average where the pixel weighting function is formulated as a function of scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in the video content, with the pixel weighting function further formulated such that weighting is at least reduced for majority pixels;
[III] a temporal delivery perceptual rule chosen from at least one of: [a] a decrease in the rate of change in at least one of luminance and chromaticity of the dominant color; [b] an increase in the rate of change in at least one of luminance and chromaticity of the dominant color; [IV] a spatial extraction perceptual rule chosen from at least one of: [a] giving greater weight in the pixel weighting function to scene content containing newly appearing features; [b] giving lesser weight in the pixel weighting function to scene content containing newly appearing features; [c] giving greater weight in the pixel weighting function to scene content from a selected extraction region; and [d] giving lesser weight in the pixel weighting function to scene content from a selected extraction region; and then transforming the luminance and chromaticity of the preferred ambient broadcast from the rendered color space to a second rendered color space so formed as to allow driving the ambient light source.
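As an illustration of the temporal delivery rules in category [III], a minimal sketch follows; the per-frame step limit and the (x, y, luminance) state layout are assumptions for illustration:

```python
def temporal_delivery(previous, target, max_step=0.02):
    """Rule [III][a]: move the broadcast value toward the newly
    extracted dominant value by at most max_step per frame, so a
    'slow, subdued' preference yields gentle ambient transitions."""
    return tuple(p + max(-max_step, min(max_step, t - p))
                 for p, t in zip(previous, target))

# A 'fast, graphic' preference (rule [III][b]) simply raises max_step:
state = (0.31, 0.33, 0.50)                           # (x, y, luminance)
state = temporal_delivery(state, (0.64, 0.33, 0.90), max_step=0.2)
```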
Explicit indicated user preferences can be indicated by any of: [1] repeated up and down varying of a value selected by a user-operated control; [2] an extreme value selected by a user-operated control; [3] a high rate of change in a value selected by a user-operated control; [4] light received by a light sensor in an ambient space; [5] sound received by a sound sensor in an ambient space; [6] vibration received by a vibration sensor in an ambient space; [7] a choice made in a graphical user interface; [8] a choice made on a user-operated control; [9] a sustained actuation call on a user-operated control; [10] repeated actuation calls on a user-operated control; [11] pressure sensing by a pressure sensor inside a user-operated control device; [12] motion sensing by a motion sensor inside a user-operated control device; and [13] any of meta-data, auxiliary data, or sub-code data associated with an audio-video signal associated with said video content.
The degree to which the darkness support, color support, and extended extraction steps given above are executed can be modulated in response to explicit indicated user preferences.
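One way such modulation might be realized is sketched below; the 0-to-1 preference scalar and the linear blending are illustrative assumptions, not the claimed mechanism:

```python
def modulated(base_value, rule_value, preference):
    """Blend between ignoring a perceptual rule (preference = 0.0) and
    applying it fully (preference = 1.0); the scalar could come from a
    GUI slider, remote control, sensor, or meta-data field."""
    return (1.0 - preference) * base_value + preference * rule_value

# E.g., scale how strongly darkness support dims the broadcast:
dimmed = modulated(base_value=0.8, rule_value=0.3, preference=0.5)  # 0.55
```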
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a simple front surface view of a video display showing color information extraction regions and associated broadcasting of ambient light from six ambient light sources according to the invention;
FIG. 2 shows a downward view - part schematic and part cross-sectional - of a room in which ambient light from multiple ambient light sources is produced using the invention;
FIG. 3 shows a system according to the invention to extract color information and effect color space transformations to allow driving an ambient light source;
FIG. 4 shows an equation for calculating average color information from a video extraction region;
FIG. 5 shows a prior art matrix equation to transform rendered primaries RGB into unrendered color space XYZ;
FIGS. 6 and 7 show matrix equations for mapping video and ambient lighting rendered color spaces, respectively, into unrendered color space;
FIG. 8 shows a solution using known matrix inversion to derive ambient light tristimulus values R'G'B' from unrendered color space XYZ;
FIGS. 9-11 show prior art derivation of tristimulus primary matrix M using a white point method;
FIG. 12 shows a system similar to that shown in FIG. 3, additionally comprising a gamma correction step for ambient broadcast;
FIG. 13 shows a schematic for a general transformational process used in the invention;
FIG. 14 shows process steps for acquiring transformation matrix coefficients for an ambient light source used by the invention;
FIG. 15 shows process steps for estimated video extraction and ambient light reproduction using the invention;
FIG. 16 shows a schematic of video frame extraction according to the invention;
FIG. 17 shows process steps for abbreviated chrominance assessment according to the invention;
FIG. 18 shows an extraction step as shown in FIGS. 3 and 12, employing a frame decoder, setting a frame extraction rate and performing an output calculation for driving an ambient light source;
FIGS. 19 and 20 show process steps for color information extraction and processing for the invention;
FIG. 21 shows a schematic for a general process according to the invention, including dominant color extraction and transformation to an ambient lighting color space;
FIG. 22 shows schematically one possible method for quantizing pixel chromaticities from video content by assigning the pixel chromaticities to an assigned color;
FIG. 23 shows schematically one example of quantizing by binning pixel chromaticities into a superpixel;
FIG. 24 shows schematically a binning process similar to that of FIG. 23, but where the size, orientation, shape, or location of the superpixel can be formed in conformity with an image feature;
FIG. 25 shows regional color vectors and their colors or chromaticity coordinates on a standard cartesian CIE color map, where one color vector lies outside the gamut of colors obtainable by PAL/SECAM, NTSC, and Adobe RGB color production standards;
FIG. 26 shows a close-up of a portion of the CIE plot of FIG. 25, additionally showing pixel chromaticities and their assignment to regional color vectors;
FIG. 27 shows a histogram that demonstrates a mode of an assigned color distribution according to one possible method of the invention;
FIG. 28 shows schematically a median of an assigned color distribution according to one possible method of the invention;
FIG. 29 shows a mathematical summation for a weighted average by chromaticity of assigned colors according to one possible method of the invention;
FIG. 30 shows a mathematical summation for a weighted average by chromaticity of assigned colors using a pixel weighting function according to one possible method of the invention;
FIG. 31 gives a schematic representation to show establishing a color of interest in a distribution of assigned colors and then extracting pixel chromaticities assigned thereto to derive a true dominant color to be designated as a dominant color;
FIG. 32 shows schematically that dominant color extraction according to the invention can be performed numerous times or separately in parallel to provide a palette of dominant colors;
FIG. 33 shows the simple front surface view of a video display as shown in FIG. 1, showing an example of unequal weighting given to a preferred spatial region for the methods demonstrated in FIGS. 29 and 30;
FIG. 34 gives a simple front surface view of a video display as shown in FIG. 33, showing schematically an image feature extracted for the purpose of dominant color extraction according to the invention;
FIG. 35 gives a schematic representation of another embodiment of the invention whereby video content decoded into a set of frames allows that a dominant color of one frame is obtained at least in part by relying upon a dominant color from a previous frame;
FIG. 36 shows process steps for an abridged procedure for choosing a dominant color according to the invention;
FIG. 37 shows a simple front surface view of a video display portraying scene content with a newly appearing feature to illustrate dominant color extraction with darkness support;
FIG. 38 shows a simple front surface view of a video display portraying scene content to illustrate dominant color extraction with color support;
FIG. 39 shows schematically three illustrative categories into which perceptual rules according to the instant invention can be classified;
FIG. 40 shows schematically a simple chromaticity transform as a functional operator;
FIG. 41 shows schematically a series of possible steps for dominant color extraction employing an average calculated using a pixel weighting function according to the invention to execute two illustrative possible perceptual rules;
FIG. 42 shows schematically a series of possible steps for dominant color extraction employing an average calculated using a pixel weighting function for extended dominant color extraction according to the invention to execute two illustrative possible perceptual rules;
FIG. 43 shows possible functional forms for a pixel weighting function used according to the invention;
FIG. 44 shows schematically possible functional groups to perform dominant color extraction using perceptual rules in accordance with user preferences according to the invention so as to produce a preferred ambient broadcast;
FIG. 45 shows symbolically some possible components, methods, and signal sources to communicate user preferences;
FIGS. 46 and 47 show cartesian plots of a number of waveforms representing luminance as a function of time, for various luminance perceptual rules following different user preferences;
FIG. 48 shows schematically a number of simple chromaticity transforms effecting a number of possible chromaticity perceptual rules according to user preferences;
FIG. 49 shows schematically how the quality or degree of execution of two perceptual rules as shown in FIG. 41 can be altered by user preferences;
FIG. 50 shows schematically the extraction of video meta data from an audio-video signal to affect perceptual rules according to the invention;
FIG. 51 shows cartesian plots of a number of waveforms representing luminance or chromaticity as a function of time, for various temporal delivery rules following different user preferences;
FIG. 52 gives a simple front surface view of a video display as shown in FIG. 34, showing schematically an image feature extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences;
FIG. 53 gives a simple front surface view of a video display as shown in FIG. 52, but showing schematically a center region extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences.
DEFINITIONS
The following definitions shall be used throughout:
- Ambient light source - shall, in the appended claims, include any lighting production circuits or drivers needed to effect light production.
- Ambient space - shall connote any and all material bodies or air or space external to a video display unit.
- Assigned color distribution - shall denote a set of colors chosen to represent (e.g., for computational purposes) the full ranges of pixel chromaticities found in a video image or in video content.
- Bright - when referring to pixel luminance, shall denote either or both of: [1] a relative characteristic, that is, brighter than other pixels, or [2] an absolute characteristic, such as a high brightness level. This might include bright red in an otherwise dark red scene, or inherently bright chromaticities such as whites and greys.
- Chromaticity transform - shall refer to a substitution of one chromaticity for another, as a result of applying a perceptual rule, as described herein.
- Chromaticity / Chrominance - shall, in the context of driving an ambient light source, denote a mechanical, numerical, or physical way of specifying the color character of light produced, such as CIE chromaticity, and shall not imply a particular methodology, such as that used in NTSC or PAL television broadcasting.
- Colored - when referring to pixel chrominance, shall denote either or both of: [1] a relative characteristic, that is, exhibiting higher color saturation than other pixels, or [2] an absolute characteristic, such as a color saturation level.
- Color information - shall include either or both of chrominance and luminance, or functionally equivalent quantities.
- Computer - shall include not only all processors, such as CPU's (Central Processing Units) that employ known architectures, but also any intelligent device that can allow coding, decoding, reading, processing, execution of setting codes or change codes, such as digital optical devices, or analog electrical circuits that perform the same functions.
- Dark - when referring to pixel luminance, shall denote either or both of: [1] a relative characteristic, that is, darker than other pixels, or [2] an absolute characteristic, such as a low brightness level.
- Dominant color - shall denote any chromaticity chosen to represent video content for the purpose of ambient broadcast, including any colors chosen using illustrative methods disclosed herein.
- Explicit indicated user preferences - shall include any and all inputs that communicate a user preference to be used to influence the character and effect of perceptual rules that affect or effect a preferred ambient broadcast, including: [1] meta-data, auxiliary data, or sub-code data associated with video content or an audio-video signal; [2] data obtained through a graphical user interface, whether associated with video content or displayed on a separate display; [3] data obtained from a control panel, remote control pad or other peripheral device, including any from an existing control function, e.g., a volume control on a video display; or [4] data obtained from any transducer in ambient space (AO) about the video display, such as a voice activated, sound-measuring, or other device. The user preference does not have to stipulate specifically how to influence the character and effect of the perceptual rules, but rather merely has to impose a choice among a plurality of choices of explicit indicated user preferences for such influence and effect.
- Extended (dominant color) extraction - shall refer to any process for dominant color extraction undertaken after a prior process has eliminated or reduced the influence of majority pixels or other pixels in a video scene or video content, such as when colors of interest are themselves used for further dominant color extraction.
- Extraction region - shall include any subset of an entire video image or frame, or more generally any or all of a video region or frame that is sampled for the purpose of dominant color extraction.
- Frame - shall include time-sequential presentations of image information in video content, consistent with the use of the term frame in industry, but shall also include any partial (e.g., interlaced) or complete image data used to convey video content at any moment or at regular intervals.
- Goniochromatic - shall refer to the quality of giving different color or chromaticity as a function of viewing angle or angle of observation, such as produced by iridescence.
- Goniophotometric - shall refer to the quality of giving different light intensity, transmission and/or color as a function of viewing angle or angle of observation, such as found in pearlescent, sparkling or retroreflective phenomena.
- Interpolate - shall include linear or mathematical interpolation between two sets of values, as well as functional prescriptions for setting values between two known sets of values.
- Light character - shall mean, in the broad sense, any specification of the nature of light such as produced by an ambient light source, including all descriptors other than luminance and chrominance, such as the degree of light transmission or reflection; or any specification of goniophotometric qualities, including the degree to which colors, sparkles, or other known phenomena are produced as a function of viewing angles when observing an ambient light source; a light output direction, including directionality as afforded by specifying a Poynting or other propagation vector; or specification of angular distribution of light, such as solid angles or solid angle distribution functions. It can also include a coordinate or coordinates to specify locations on an ambient light source, such as element pixels or lamp locations.
- Luminance - shall denote any parameter or measure of brightness, intensity, or equivalent measure, and shall not imply a particular method of light generation or measurement, or psycho-biological interpretation.
- Majority pixels - shall refer to pixels conveying similar color information, such as saturation, luminance, or chromaticity, in a video scene. Examples include pixels which are set to appear dark (darkness in a scene) while a smaller number, or a different number, of other pixels are brightly illuminated; pixels which are predominantly set to appear white or grey (e.g., cloud cover in a scene); and pixels which share similar chromaticity, such as leafy green colors in a forest scene which also separately portrays a red fox. The criterion used to establish what is deemed similar can vary, and a numerical majority is not required, though often applied.
- Pixel - shall refer to actual or virtual video picture elements, or equivalent information which allows derivation of pixel information. For vector-based video display systems, a pixel can be any sub-portion of the video output which allows itself to be analyzed or characterized.
- Pixel chromaticity - shall include actual values for pixel chromaticities, as well as any other color values which are assigned as a result of any quantization or consolidation process, such as when a process has acted to quantize color space. It is therefore anticipated in the appended claims that a pixel chromaticity can include values from an assigned color distribution.
- Quantize Color Space - in the specification and in the context of the appended claims, shall refer to a reduction in possible color states, such as resulting from assignment of a larger number of chromaticities (e.g., pixel chromaticities) to a smaller number of assigned chromaticities or colors; or a reduction in pixel numbers by a selection process that picks out selected pixels; or binning to produce representative pixels or superpixels.
- Rendered color space - shall denote an image or color space captured from a sensor, or specific to a source or display device, which is device- and image-specific. Most RGB color spaces are rendered image spaces, including the video spaces used to drive video display D. In the appended claims, both the color spaces specific to the video display and the ambient light source 88 are rendered color spaces.
- Scene brightness - shall refer to any measure of luminance in scene content according to any desired criterion.
- Scene content - shall refer to that characteristic of video information capable of forming a viewable image that can be used to influence a desired choice of dominant color. Examples include white clouds, or darkness throughout much of a video image, which might cause certain pixels making up such an image to be deemed majority pixels, or might result in non-isotropic treatment of pixels in a pixel weighting function (W in FIG. 30); or might cause an image feature (e.g., J8 of FIG. 34) to be detected and subject to special or extended dominant color extraction.
- Simple chromaticity transform - shall refer to a change or derivation of a dominant color or chromaticity according to a perceptual rule, not chosen or derived as a function of scene content, and where the change or derivation results in a chromaticity which is different from that which might otherwise be chosen. Example: a transform of a first dominant color (x, y) chosen via dominant color extraction (e.g., purple) to a second color (x', y') in order to satisfy a perceptual rule.
- Transforming color information to an unrendered color space - in the appended claims shall comprise either direct transformation to the unrendered color space, or use or benefit derived from using inversion of a tristimulus primary matrix obtained by transforming to the unrendered color space (e.g., (M2)^-1 as shown in FIG. 8), or any calculational equivalent.
- Unrendered color space - shall denote a standard or non-device-specific color space, such as those describing original image colorimetry using standard CIE XYZ; ISO RGB, such as defined in ISO 17321 standards; Photo YCC; and the CIE LAB color space.
- User preference - shall not be limited to indications of desires of users, but shall also include any choice made among a plurality of choices, even if that choice was not made by a user, such as when sub-code or meta data for video content is delivered using particular intended character and effect of perceptual rules that affect or effect a preferred ambient broadcast.
- Video - shall denote any visual or light producing device, whether an active device requiring energy for light production, or any transmissive medium which conveys image information, such as a window in an office building, or an optical guide where image information is derived remotely.
- Video signal - shall denote the signal or information delivered for controlling a video display unit, including any audio portion thereof. It is therefore contemplated that video content analysis includes possible audio content analysis for the audio portion. Generally, a video signal can comprise any type of signal, such as radio frequency signals using any number of known modulation techniques; electrical signals, including analog and quantized analog waveforms; digital (electrical) signals, such as those using pulse-width modulation, pulse-number modulation, pulse-position modulation, PCM (pulse code modulation), and pulse amplitude modulation; or other signals such as acoustic signals, audio signals, and optical signals, all of which can use digital techniques. Data that is merely sequentially placed among or with other information, such as packetized information in computer-based applications, can be used as well.
- Weighted - shall refer to any equivalent method to those given here for giving preferential status or higher mathematical weights to certain chromaticities, luminances, or spatial positions, possibly as a function of scene content. However, nothing shall preclude the use of unity as a weight for the purpose of providing a simple mean or average. The pixel weighting function as described herein does not have to take on the functional appearance given (e.g., a summation of W over a plurality of pixels), but shall include all algorithms, operators, or other calculus that operates with the same effect.
DETAILED DESCRIPTION
Ambient light derived from video content according to the invention is formed to allow, if desired, a high degree of fidelity to the chromaticity of original video scene light, while maintaining a high degree of specificity of degrees of freedom for ambient lighting with a low required computational burden. This allows ambient light sources with small color gamuts and reduced luminance spaces to emulate video scene light from more advanced light sources with relatively large color gamuts and luminance response curves. Possible light sources for ambient lighting can include any number of known lighting devices, including LEDs (Light Emitting Diodes) and related semiconductor radiators; electroluminescent devices including non-semiconductor types; incandescent lamps, including modified types using halogens or advanced chemistries; ion discharge lamps, including fluorescent and neon lamps; lasers; light sources that are modulated, such as by use of LCDs (liquid crystal displays) or other light modulators; photoluminescent emitters, or any number of known controllable light sources, including arrays that functionally resemble displays. The description given here shall relate in part at first to color information extraction from video content, and later, to extraction methods that are subject to perceptual rules to derive dominant or true colors for ambient broadcast that can represent video images or scenes.
Now referring to FIG. 1, a simple front surface view of a video display D according to the invention is shown for illustrative purposes only. Display D can comprise any of a number of known devices which decode video content from a rendered color space, such as an NTSC, PAL or SECAM broadcast standard, or a rendered RGB space, such as Adobe RGB. Display D can comprise optional color information extraction regions R1, R2, R3, R4, R5, and R6 whose borders can depart from those illustrated. The color information extraction regions are arbitrarily pre-defined and are to be characterized for the purpose of producing characteristic ambient light A8, such as via back-mounted controllable ambient lighting units (not shown) which produce and broadcast ambient light L1, L2, L3, L4, L5, and L6 as shown, such as by partial light spillage to a wall (not shown) on which display D is mounted. Alternatively, a display frame Df as shown can itself also comprise ambient lighting units which display light in a similar manner, including outward toward a viewer (not shown). If desired, each color information extraction region R1 - R6 can influence ambient light adjacent itself. For example, color information extraction region R4 can influence ambient light L4 as shown.
Now referring to FIG. 2, a downward view - part schematic and part cross-sectional - is shown of a room or ambient space AO in which ambient light from multiple ambient light sources is produced using the invention. In ambient space AO is arranged seating and tables 7 as shown which are arrayed to allow viewing of video display D. In ambient space AO are also arrayed a plurality of ambient light units which are optionally controlled using the instant invention, including light speakers 1 - 4 as shown, a sublight SL under a sofa or seat as shown, as well as a set of special emulative ambient light units arrayed about display D, namely center lights that produce ambient light Lx like that shown in FIG. 1. Each of these ambient light units can emit ambient light A8, shown as shading in the figure.
In cooperation with the instant invention, one can optionally produce ambient light from these ambient light units with colors or chromaticities derived from, but not actually broadcast by, video display D. This allows exploiting characteristics of the human eye and visual system. It should be noted that the luminosity function of the human visual system, which gives detection sensitivity for various visible wavelengths, changes as a function of light levels.
For example, scotopic or night vision relying on rods tends to be more sensitive to blues and greens. Photopic vision using cones is better suited to detect longer wavelength light such as reds and yellows. In a darkened home theatre environment, such changes in relative luminosity of different colors as a function of light level can be counteracted somewhat by modulating or changing color delivered to the video user in ambient space. This can be done by subtracting light from ambient light units such as light speakers 1 - 4 using a light modulator (not shown) or by use of an added component in the light speakers, namely a photoluminescent emitter to further modify light before ambient release. The photoluminescent emitter performs a color transformation by absorbing or undergoing excitation from incoming light from the light source and then re-emitting that light in higher desired wavelengths. This excitation and re-emission by a photoluminescent emitter, such as a fluorescent pigment, can allow rendering of new colors not originally present in the original video image or light source, and perhaps also not in the range of colors or color gamut inherent to the operation of the display D. This can be helpful when the desired luminance of ambient light Lx is low, such as during very dark scenes, and the desired level of perception is higher than that normally achieved without light modification. The production of new colors can provide new and interesting visual effects. An illustrative example can be the production of orange light, such as what is termed hunter's orange, for which available fluorescent pigments are well known (see ref[2]). The example given involves a fluorescent color, as opposed to the general phenomenon of fluorescence and related phenomena. Using a fluorescent orange or other fluorescent dye species can be particularly useful for low light conditions, where a boost in reds and oranges can counteract the decreased sensitivity of scotopic vision for long wavelengths.
Fluorescent dyes that can be used in ambient light units can include known dyes in dye classes such as Perylenes, Naphthalimides, Coumarins, Thioxanthenes, Anthraquinones, Thioindigoids, and proprietary dye classes such as those manufactured by the Day-Glo Color Corporation, Cleveland, Ohio, USA. Colors available include Apache Yellow, Tigris Yellow, Savannah Yellow, Pocono Yellow, Mohawk Yellow, Potomac Yellow, Marigold Orange, Ottawa Red, Volga Red, Salmon Pink, and Columbia Blue. These dye classes can be incorporated into resins, such as PS, PET, and ABS using known processes.
Fluorescent dyes and materials have enhanced visual effects because they can be engineered to be considerably brighter than nonfluorescent materials of the same chromaticity. So-called durability problems of traditional organic pigments used to generate fluorescent colors have largely been solved in the last two decades, as technological advances have resulted in the development of durable fluorescent pigments that maintain their vivid coloration for 7-10 years under exposure to the sun. These pigments are therefore almost indestructible in a home theatre environment where UV ray entry is minimal. Alternatively, fluorescent photopigments can be used, and they work simply by absorbing short wavelength light and re-emitting this light at a longer wavelength such as red or orange. Technologically advanced inorganic pigments are now readily available that undergo excitation using visible light, such as blues and violets, e.g., 400-440 nm light. Goniophotometric and goniochromatic effects can similarly be deployed to produce different light colors, intensity, and character as a function of viewing angles. To realize this effect, ambient light units 1 - 4 and SL and Lx can use known goniophotometric elements (not shown), alone, or in combination, such as metallic and pearlescent transmissive colorants; iridescent materials using well-known diffractive or thin-film interference effects, e.g., using fish scale essence; thin flakes of guanine; or 2-aminohypoxanthine with preservative. Diffusers using finely ground mica or other substances can be used, such as pearlescent materials made from oxide layers, bornite or peacock ore; metal flakes, glass flakes, or plastic flakes; particulate matter; oil; ground glass, and ground plastics. Now referring to FIG. 3, a system according to the invention to extract color information
(such as a dominant color or true color) and effect color space transformations to allow driving an ambient light source is shown. As a first step, color information is extracted from a video signal AVS using known techniques.
Video signal AVS can comprise known digital data frames or packets like those used for MPEG encoding, audio PCM encoding, etc. One can use known encoding schemes for data packets, such as program streams with variable length data packets, or transport streams which divide data packets evenly, or other schemes such as single program transport streams. Alternately, the functional steps or blocks given in this disclosure can be emulated using computer code and other communications standards, including asynchronous protocols.
As a general example, the video signal AVS as shown can undergo video content analysis CA as shown, possibly using known methods to record and transfer selected content to and from a hard disk HD as shown, and possibly using a library of content types or other information stored in a memory MEM as also shown. This can allow independent, parallel, direct, delayed, continuous, periodic, or aperiodic transfer of selected video content. From this video content one can perform feature extraction FE as shown, such as deriving color information (e.g., dominant color) generally, or from an image feature. This color information is still encoded in a rendered color space, and is then transformed to an unrendered color space, such as CIE XYZ, using a RUR Mapping Transformation Circuit 10 as shown. RUR herein stands for the desired transformation type, namely, rendered-unrendered-rendered, and thus RUR Mapping Transformation Circuit 10 also further transforms the color information to a second rendered color space so formed as to allow driving said ambient light source or sources 88 as shown. The RUR transformation is preferred, but other mappings can be used, so long as the ambient lighting production circuit or the equivalent receives information in a second rendered color space that it can use. RUR Mapping Transformation Circuit 10 can be functionally contained in a computer system which uses software to perform the same functions, but in the case of decoding packetized information sent by a data transmission protocol, there could be memory (not shown) in the circuit 10 which contains, or is updated to contain, information that correlates to or provides video rendered color space coefficients and the like. This newly created second rendered color space is appropriate and desired to drive ambient light source 88 (such as shown in FIGS. 1 and 2), and is fed using known encoding to ambient lighting production circuit 18 as shown. Ambient lighting production circuit 18 takes the second rendered color space information from RUR Mapping Transformation Circuit 10 and then accounts for any input from any user interface and any resultant preferences memory (shown together as U2) to develop actual ambient light output control parameters (such as applied voltages), after possibly consulting an ambient lighting (second rendered) color space lookup table LUT as shown. The ambient light output control parameters generated by ambient lighting production circuit 18 are fed as shown to lamp interface drivers D88 to directly control or feed ambient light source 88 as shown, which can comprise individual ambient light units 1 - N, such as previously cited ambient light speakers 1 - 4 or ambient center lights Lx as shown in FIGS. 1 and 2.
To reduce any real time computational burden, the color information extracted from video signal AVS can be abbreviated or limited. Now referring to FIG. 4, an equation for calculating average color information from a video extraction region is shown for discussion. It is contemplated, as mentioned below (see FIG. 18), that the video content in video signal AVS will comprise a series of time sequenced video frames, but this is not required. For each video frame or equivalent temporal block, one can extract average or other color information from each extraction region (e.g., R4). Each extraction region can be set to have a certain size, such as 100 by 376 pixels. Assuming, for example, a frame rate of 25 frames/sec, the resultant gross data for extraction regions R1 - R6 before extracting an average (assuming only one byte is needed to specify 8 bit color) would be 6 x 100 x 376 x 25, or 5.64 million bytes/sec, for each video RGB tristimulus primary. This data stream is very large and would be difficult to handle at RUR Mapping Transformation Circuit 10, so extraction of an average color for each extraction region R1 - R6 can be effected during Feature Extraction FE. Specifically, as shown, one can sum the RGB color channel value (e.g., Rij) for each pixel in each extraction region of m x n pixels, and divide by the number of pixels m x n to arrive at an average for each RGB primary, e.g., Ravg for red, as shown. Repeating this summation for each RGB color channel, the average for each extraction region is a triplet RAVG = [Ravg, Gavg, Bavg]. The same procedure is repeated for all extraction regions R1 - R6 and for each RGB color channel. The number and size of extraction regions can depart from that shown, and be as desired.
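By way of illustration, the per-region averaging just described can be sketched in a few lines of Python; the frame size, region positions, and region count below are assumptions for demonstration, not values prescribed by the figures.

```python
# Minimal sketch of the FIG. 4 averaging over extraction regions.
import numpy as np

def region_average(frame: np.ndarray, top: int, left: int,
                   height: int = 100, width: int = 376) -> np.ndarray:
    """Return [Ravg, Gavg, Bavg] for one m x n extraction region.

    frame: H x W x 3 array of 8-bit RGB values.
    """
    region = frame[top:top + height, left:left + width, :].astype(np.float64)
    # Sum each channel over the m x n pixels, then divide by m x n.
    return region.reshape(-1, 3).mean(axis=0)

# Example: six regions R1-R6 along the frame border (positions assumed).
frame = np.random.randint(0, 256, size=(480, 720, 3), dtype=np.uint8)
offsets = [(0, 0), (0, 344), (190, 0), (190, 344), (380, 0), (380, 344)]
averages = [region_average(frame, t, l) for t, l in offsets]  # six RGB triplets
```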
The next step of performing color mapping transformations by RUR Mapping Transformation Circuit 10 can be illustratively shown and expressed using known tristimulus primary matrices, such as shown in FIG. 5, where a rendered tristimulus color space with vectors R, G, and B is transformed using the tristimulus primary matrix M with elements such as Xr,max, Yr,max, Zr,max, where Xr,max is the tristimulus value of the R primary at maximum output.
The transformation from a rendered color space to an unrendered, device-independent space can be image and/or device specific - known linearization, pixel reconstruction (if necessary), and white point selection steps can be effected, followed by a matrix conversion. In this case, we simply elect to adopt the rendered video output space as a starting point for transformation to an unrendered color space colorimetry. Unrendered images need to go through additional transforms to make them viewable or printable, and the RUR transformation thus involves a transform to a second rendered color space.
As a first possible step, FIGS. 6 and 7 show matrix equations for mapping the video rendered color space, expressed by primaries R, G, and B, and the ambient lighting rendered color space, expressed by primaries R', G', and B', respectively, into unrendered color space X, Y, and Z as shown, where tristimulus primary matrix M1 transforms video RGB into unrendered XYZ, and tristimulus primary matrix M2 transforms ambient light source R'G'B' into the unrendered XYZ color space as shown. Equating both rendered color spaces RGB and R'G'B' as shown in FIG. 8 allows matrix transformation of primaries RGB and R'G'B' of the rendered (video) color space and second rendered (ambient) color space to said unrendered color space (the RUR Mapping Transformation) using the first and second tristimulus primary matrices (M1, M2), and derivation of a transformation of color information into the second rendered color space (R'G'B') by matrix multiplication of the RGB primaries of the rendered video color space, the first tristimulus matrix M1, and the inverse of the second tristimulus matrix, (M2)^-1. While the tristimulus primary matrix for known display devices is readily available, that for the ambient light source can be determined using a known white point method by those of ordinary skill in the art.
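A minimal sketch of this matrix chain follows, assuming both tristimulus primary matrices are already known; the numerical values for M2 are invented purely for illustration, and M1 below reuses the BT.709 chromaticities given further on merely as a stand-in for a fully luminance-scaled matrix.

```python
# Sketch of the RUR mapping of FIGS. 6-8: rendered RGB -> XYZ -> R'G'B'.
import numpy as np

M1 = np.array([[0.640, 0.300, 0.150],     # video RGB -> XYZ (BT.709 chromaticities
               [0.330, 0.600, 0.060],     # used as a placeholder)
               [0.030, 0.100, 0.790]])
M2 = np.array([[0.60, 0.25, 0.20],        # ambient R'G'B' -> XYZ (assumed values)
               [0.35, 0.65, 0.10],
               [0.05, 0.10, 0.70]])

def rur_transform(rgb: np.ndarray) -> np.ndarray:
    """Rendered video RGB -> unrendered XYZ -> second rendered R'G'B'."""
    xyz = M1 @ rgb                  # rendered -> unrendered
    return np.linalg.inv(M2) @ xyz  # unrendered -> second rendered

ambient_rgb = rur_transform(np.array([0.8, 0.4, 0.1]))
```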
Now referring to FIGS. 9-11, prior art derivation of a generalized tristimulus primary matrix M using a white point method is shown. In FIG. 9, quantities like SrXr represent the tristimulus value of each (ambient light source) primary at maximum output, with Sr representing a white point amplitude, and Xr representing the chromaticities of primary light produced by the (ambient) light source. Using the white point method, the matrix equation equating the amplitudes Sr, Sg, and Sb with a vector of the white point reference values, using a known inverse of a light source chromaticity matrix, is shown. FIG. 11 is an algebraic manipulation to remind that the white point reference values such as Xw are a product of the white point amplitudes or luminances and the light source chromaticities. Throughout, the tristimulus value X is set equal to chromaticity x; tristimulus value Y is set equal to chromaticity y; and tristimulus value Z is defined to be set equal to 1 - (x + y). The color primaries and reference white color components for the second rendered ambient light source color space can be acquired using known techniques, such as by using a color spectrometer. Similar quantities for the first rendered video color space can be found. For example, it is known that contemporary studio monitors have slightly different standards in North America, Europe, and Japan. However, as an example, international agreement has been obtained on primaries for high-definition television (HDTV), and these primaries are closely representative of contemporary monitors in studio video, computing, and computer graphics. The standard is formally denoted ITU-R Recommendation BT.709, which contains the required parameters, where the relevant tristimulus primary matrix (M) for
RGB is:

    Matrix M for ITU-R BT.709:

        0.640   0.300   0.150
        0.330   0.600   0.060
        0.030   0.100   0.790

and the white point values are known as well.
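The white point computation of FIGS. 9-11 can be sketched as follows; the choice of a D65 white point is an assumption for illustration only.

```python
# Sketch of the white point method: solve for the amplitudes Sr, Sg, Sb from
# the chromaticity matrix C and the white point, then form M = C * diag(S).
import numpy as np

C = np.array([[0.640, 0.300, 0.150],   # x_r  x_g  x_b
              [0.330, 0.600, 0.060],   # y_r  y_g  y_b
              [0.030, 0.100, 0.790]])  # z = 1 - (x + y)

# D65 white point (assumed), scaled so that Yw = 1.
white = np.array([0.3127, 0.3290, 0.3583]) / 0.3290

S = np.linalg.solve(C, white)  # FIG. 10: S = C^-1 * white point vector
M = C * S                      # column scaling: M = [SrXr SgXg SbXb; ...]
```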
Now referring to FIG. 12, a system similar to that shown in FIG. 3 is shown, additionally comprising a gamma correction step 55 after feature extraction step FE as shown for ambient broadcast. Alternatively, gamma correction step 55 can be performed between the steps performed by RUR Mapping Transformation Circuit 10 and Ambient
Lighting Production Circuit 18. The optimum gamma value for LED ambient light sources has been found to be 1.8, so a negative gamma correction to counteract a typical video color space gamma of 2.5 can be effected, with the exact gamma value found using known mathematics.
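A sketch of such a correction follows, assuming the stated gammas of 2.5 (video) and 1.8 (LED); the linearize-then-re-encode form is one common way to realize it.

```python
# Sketch of gamma correction step 55 for a normalized [0, 1] channel value.
def gamma_correct(value: float, video_gamma: float = 2.5,
                  led_gamma: float = 1.8) -> float:
    linear = value ** video_gamma        # undo the video encoding gamma
    return linear ** (1.0 / led_gamma)   # re-encode for the LED ambient source

corrected = gamma_correct(0.5)  # approximately 0.382
```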
Generally, RUR Mapping Transformation Circuit 10, which can be a functional block effected via any suitable known software platform, performs a general RUR transformation as shown in FIG. 13, where a schematic as shown takes video signal AVS comprising a Rendered Color Space such as Video RGB, and transforms it to an unrendered color space such as CIE XYZ, then to a Second Rendered Color Space (Ambient Light Source RGB). After this RUR transformation, ambient light sources 88 can be driven, aside from signal processing, as shown. FIG. 14 shows process steps for acquiring transformation matrix coefficients for an ambient light source used by the invention, where the steps include, as shown, Driving the ambient light unit(s) and Checking Output Linearity as known in the art. If the ambient light source primaries are stable (shown on the left fork, Stable Primaries), one can Acquire Transformation Matrix Coefficients Using a Color Spectrometer; whereas if the ambient light source primaries are not stable (shown on the right fork, Unstable
Primaries), one can reset the previously given gamma correction (shown, Reset Gamma Curve).
In general, it is desirable, but not necessary, to extract color information from every pixel in extraction regions such as R4; instead, if desired, polling of selected pixels can allow a faster estimation of average color, or a faster creation of an extraction region color characterization, to take place. FIG. 15 shows process steps for estimated video extraction and ambient light reproduction using the invention, where the steps include [1] Prepare Colorimetric Estimate of Video Reproduction (From Rendered Color Space, e.g., Video RGB); [2] Transform to Unrendered Color Space; and [3] Transform Colorimetric Estimate for Ambient Reproduction (Second Rendered Color Space, e.g., LED RGB).
It has been discovered that the data bitstream required to support extraction and processing of video content (such as dominant color) from video frames (see FIG. 18 below) can be reduced according to the invention by judicious subsampling of video frames. Now referring to FIG. 16, a schematic of video frame extraction according to the invention is shown. A series of individual successive video frames F, namely frames F1, F2, F3 and so on - such as individual interlaced or non-interlaced video frames specified by the NTSC, PAL, or SECAM standards - is shown. By doing content analysis and/or feature extraction - such as extracting dominant color information - from selected successive frames, such as frames F1 and FN, one can reduce data load or overhead while maintaining acceptable ambient light source responsiveness, realism, and fidelity. It has been found that N = 10 gives good results, namely, subsampling 1 frame out of 10 successive frames can work. This provides a refresh period P between frame extractions of low processing overhead, during which an interframe interpolation process can provide adequate approximation of the time development of chrominance changes in display D. Selected frames F1 and FN are extracted as shown (EXTRACT), and intermediate interpolated values for chrominance parameters, shown as G2, G3, and G4, provide the necessary color information to inform the previously cited driving process for ambient light source 88. This obviates the need to simply freeze or maintain the same color information throughout frames 2 through N-1. The interpolated values can be linearly determined, such as where the total chrominance difference between extracted frames F1 and FN is spread over the interpolated frames G. Alternatively, a function can spread the chrominance difference between extracted frames F1 and FN in any other manner, such as to suit a higher order approximation of the time development of the color information extracted. The results of interpolation can be used by accessing in advance a frame F to influence interpolated frames (such as in a DVD player) or, alternatively, interpolation can be used to influence future interpolated frames without advance access to a frame F (such as in broadcast decoding applications).
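The linear case can be sketched as follows, with N = 10 as stated in the text; the chrominance pairs used are illustrative values only.

```python
# Sketch of the FIG. 16 interpolation: spread the chrominance difference
# between extracted frames F1 and FN linearly over frames G2..G(N-1).
import numpy as np

def interpolate_chrominance(c_first: np.ndarray, c_last: np.ndarray,
                            n: int = 10) -> list:
    """Return interpolated chrominance values for frames 2..N-1."""
    steps = np.linspace(0.0, 1.0, n)[1:-1]        # fractions for G2..G(N-1)
    return [c_first + t * (c_last - c_first) for t in steps]

g_frames = interpolate_chrominance(np.array([0.30, 0.32]),
                                   np.array([0.42, 0.36]))
```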
FIG. 17 shows process steps for abbreviated chrominance assessment according to the invention. Higher order analysis of frame extractions can allow larger refresh periods P and larger N than would otherwise be possible. During frame extraction, or during a provisional polling of selected pixels in extraction regions Rx, one can conduct an abbreviated chrominance assessment as shown that will either result in a delay of the next frame extraction, as shown on the left, or initiate a full frame extraction, as shown on the right. In either case, interpolation proceeds (Interpolate), with a delayed next frame extraction resulting in frozen, or incremented, chrominance values being used. This can provide even more economical operation in terms of bitstream or bandwidth overhead.
FIG. 18 shows the top of FIGS. 3 and 12, where an alternative extraction step is shown whereby a frame decoder FD is used, and regional information from extraction regions (e.g., R1) is extracted at step 33 as shown. A further process or component step 35 includes assessing a chrominance difference, and using that information to set a video frame extraction rate, as indicated. A next process step of performing output calculations OO, such as the averaging of FIG. 4 or the dominant color extraction discussed below, is performed as shown, prior to data transfer to Ambient Lighting Production Circuit 18 previously shown.
As shown in FIG. 19, general process steps for color information extraction and processing for the invention include acquiring a video signal AVS; extracting regional (color) information from selected video frames (such as previously cited F1 and FN); interpolating between the selected video frames; an RUR Mapping Transformation; optional gamma correction; and using this information to drive an ambient light source (88). As shown in FIG. 20, two additional process steps can be inserted after the regional extraction of information from selected frames: one can perform an assessment of the chrominance difference between selected frames F1 and FN, and, depending on a preset criterion, set a new frame extraction rate as indicated. Thus, if a chrominance difference between successive frames F1 and FN is large, or increasing rapidly (e.g., a large first derivative), or satisfies some other criterion, such as one based on chrominance difference history, one can increase the frame extraction rate, thus decreasing refresh period P. If, however, a chrominance difference between successive frames F1 and FN is small and stable or not increasing rapidly (e.g., a low or zero absolute first derivative), or satisfies some other criterion, such as one based on chrominance difference history, one can reduce the required data bitstream and decrease the frame extraction rate, thus increasing refresh period P.
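One hedged sketch of such a rate controller follows; the thresholds, bounds, and the simple first-derivative test are invented for illustration.

```python
# Sketch of the FIG. 20 rate control: shorten refresh period P when the
# chrominance difference is large or growing, lengthen it when small and stable.
import numpy as np

def next_extraction_interval(c_first, c_last, prev_diff, n,
                             hi=0.10, lo=0.02, n_min=2, n_max=30):
    diff = float(np.linalg.norm(np.asarray(c_last) - np.asarray(c_first)))
    derivative = diff - prev_diff          # crude first derivative of the difference
    if diff > hi or derivative > hi / 2:
        n = max(n_min, n // 2)             # extract more often: shorter period P
    elif diff < lo and abs(derivative) < lo:
        n = min(n_max, n * 2)              # save bitstream: longer period P
    return n, diff

n, d = next_extraction_interval([0.30, 0.32], [0.42, 0.36], prev_diff=0.05, n=10)
```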
Now referring to FIG. 21, a schematic is shown for a general process according to one aspect of the invention. As shown, and as an optional step, possibly to ease the computational burden, [1] the rendered color space corresponding to the video content is quantized (QCS Quantize Color Space), such as by using methods given below; then, [2] a dominant color (or a palette of dominant colors) is chosen (DCE Dominant Color Extraction); and [3] a color mapping transformation, such as the RUR Mapping Transformation (10), is performed (MT Mapping Transformation to R'G'B') to improve the fidelity, range, and appropriateness of the ambient light produced.
The optional quantizing of the color space can be likened to reducing the number of possible color states and/or pixels to be surveyed, and can be effected using various methods. As an example, FIG. 22 shows schematically one possible method for quantizing pixel chromaticities from video content. Here, as shown, illustrative video primary values R ranging from values = 1 to 16 are shown, and an arbitrary assignment is made from any of these primary values R to an assigned color AC as shown. Thus, for example, whenever any red pixel chromaticities or values = 1 to 16 are encountered in video content, assigned color AC is substituted therefor, resulting in a reduction by a factor of 16, for the red primary alone, in the number of colors needed to characterize a video image. For all three primaries, such a reduction in possible color states can result, in this example, in a reduction by a factor of 16 x 16 x 16, or 4096, in the number of colors used for computation. This can be especially useful to reduce computational load in determining dominant color in many video systems, such as those having 8 bit color, which presents 256 x 256 x 256 or 16.78 million possible color states.
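The reduction can be sketched as an integer binning of each 8-bit primary; mapping each run of 16 values to a single representative value is one assumed choice of assigned color AC.

```python
# Sketch of FIG. 22 quantization: collapse each 8-bit primary into one of 16
# assigned values, reducing 16.78 million color states to at most 4096.
import numpy as np

def quantize(frame: np.ndarray) -> np.ndarray:
    """Map each 8-bit channel value to the representative value of its bin of 16."""
    return (frame // 16) * 16 + 8   # assigned color AC for each run of 16 values

frame = np.random.randint(0, 256, size=(480, 720, 3), dtype=np.uint8)
assigned = quantize(frame)          # at most 16 x 16 x 16 = 4096 distinct colors
```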
Another method for quantizing the video color space is given in FIG. 23, which shows schematically another example of quantizing the rendered color space by binning pixel chromaticities from a plurality of pixels Pi (e.g., 16 as shown) into a superpixel XP as shown. Binning is itself a method whereby adjacent pixels are added together mathematically (or computationally) to form a superpixel which is then used for further computation or representation. Thus, in a video format which normally has, for example, 0.75 million pixels, the number of superpixels chosen to represent the video content can reduce the number of pixels for computation to 0.05 million or any other desired smaller number.
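A sketch of such binning follows, assuming square 4 x 4 blocks; the block size is arbitrary and need not be square in practice.

```python
# Sketch of FIG. 23 binning: average blocks of pixels Pi into superpixels XP,
# cutting the pixel count by a factor of block * block.
import numpy as np

def bin_superpixels(frame: np.ndarray, block: int = 4) -> np.ndarray:
    h, w, c = frame.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of block
    view = frame[:h, :w].reshape(h // block, block, w // block, block, c)
    return view.mean(axis=(1, 3))                # one superpixel per block

frame = np.random.randint(0, 256, size=(480, 720, 3), dtype=np.uint8)
superpixels = bin_superpixels(frame)             # 120 x 180 x 3
```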
The number, size, orientation, shape, or location of such superpixels XP can change as a function of video content. Where, for example, it is advantageous during feature extraction FE to ensure that superpixels XP are drawn only from the image feature, and not from a border area or background, the superpixel(s) XP can be formed accordingly. FIG. 24 shows schematically a binning process similar to that of FIG. 23, but where the size, orientation, shape, or location of the superpixel is formed in conformity with an image feature 38 as shown. Image feature 38 as shown is jagged or irregular in not having straight horizontal or vertical borders. As shown, superpixel XP is selected accordingly to mimic or emulate the image feature shape. In addition to having a customized shape, the location, size, and orientation of such superpixels can be influenced by image feature 38 using known pixel level computational techniques. Quantization can take pixel chromaticities and substitute assigned colors (e.g., assigned color AC) for them. Those assigned colors can be assigned at will, including using preferred color vectors. So, rather than using an arbitrary or uniform set of assigned colors, at least some video image pixel chromaticities can be assigned to preferred color vectors.
FIG. 25 shows regional color vectors and their colors or chromaticity coordinates on a standard cartesian CIE x-y chromaticity diagram or color map. The map shows all known colors or perceivable colors at maximum luminosity as a function of chromaticity coordinates x and y, with nanometer light wavelengths and CIE standard illuminant white points shown for reference. Three regional color vectors V are shown on this map, where it can be seen that one color vector V lies outside the gamut of colors obtainable by the PAL/SECAM, NTSC, and Adobe RGB color production standards (gamuts shown).
For clarity, FIG. 26 shows a close-up of a portion of the CIE plot of FIG. 25, additionally showing pixel chromaticities Cp and their assignment to regional color vectors V. The criteria for assignment to a regional color vector can vary, and can include calculation of a Euclidean or other distance from a particular color vector V, using known calculational techniques. The color vector V which is labeled lies outside the rendered color space or color gamut of the display system; this allows a preferred chromaticity easily produced by the ambient lighting system or light source 88 to become one of the assigned colors used in quantizing the rendered (video) color space.
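Assignment by Euclidean distance can be sketched as follows; the three vector coordinates are invented (x, y) pairs for illustration only.

```python
# Sketch of the FIG. 26 assignment: each pixel chromaticity Cp is mapped to
# the nearest regional color vector V in CIE x-y space.
import numpy as np

V = np.array([[0.20, 0.65],    # greenish vector (assumed)
              [0.45, 0.41],    # orange vector (assumed)
              [0.16, 0.08]])   # blue vector, possibly outside the display gamut

def assign_to_vector(cp: np.ndarray) -> int:
    """Return the index of the closest regional color vector for one (x, y)."""
    return int(np.argmin(np.linalg.norm(V - cp, axis=1)))

index = assign_to_vector(np.array([0.25, 0.55]))  # -> 0, the greenish vector
```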
Once a distribution of assigned colors is made using one or more of the methods given above, the next step is to perform a dominant color extraction from the distribution of assigned colors by extracting any of: [a] a mode of the assigned colors; [b] a median of the assigned colors; [c] a weighted average by chromaticity of the assigned colors; or [d] a weighted average using a pixel weighting function.
For example, one can use a histogram method to select the assigned color which occurs with the highest frequency. FIG. 27 shows a histogram that gives the assigned pixel color or colors (Assigned Colors) occurring the most often (see ordinate, Pixel Percent), namely, the mode of the assigned color distribution. This mode, or most often used assigned color, can be selected as a dominant color DC (shown) for use or emulation by the ambient lighting system.
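A minimal sketch of the histogram method, using an illustrative two-color distribution:

```python
# Sketch of FIG. 27: the mode of the assigned color distribution is DC.
from collections import Counter

def dominant_by_mode(assigned_colors: list) -> tuple:
    """Return the assigned color occurring with the highest pixel frequency."""
    return Counter(assigned_colors).most_common(1)[0][0]

pixels = [(248, 8, 8)] * 70 + [(8, 8, 248)] * 30   # illustrative distribution
dc = dominant_by_mode(pixels)                      # -> (248, 8, 8)
```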
Similarly, the median of the assigned color distribution can be selected to be, or help influence the selection of, the dominant color DC. FIG. 28 shows schematically a median of an assigned color distribution, where the median or middle value (interpolated for an even number of assigned colors) is shown selected as dominant color DC.
Alternatively, one can perform a summation over the assigned colors using a weighted average, so as to influence the dominant color(s) chosen, perhaps to better suit the strengths in the color gamut of the ambient lighting system. FIG. 29 shows a mathematical summation for a weighted average by chromaticity of the assigned colors. For clarity, a single variable R is shown, but any number of dimensions or coordinates (e.g., CIE coordinates x and y) can be used. Chromaticity variable R is summed as shown over pixel coordinates (or superpixel coordinates, if needed) i and j, running in this example between 1 and n and m, respectively. Chromaticity variable R is multiplied throughout the summation by a pixel weighting function W with indices i and j as shown; the result is divided by the number of pixels n x m to obtain the weighted average.
A similar weighted average using a pixel weighting function is given in FIG. 30, which is similar to FIG. 29, except that W as shown is now a function also of pixel locations i and j as shown, thus allowing a spatial dominance function. By weighting also for pixel position, the center or any other portion of display D can be emphasized during selection or extraction of dominant color DC, as discussed below.
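The weighted averages of FIGS. 29 and 30 can be sketched together as follows; the Gaussian falloff used for W is an assumption (any operator can serve), and the sum is normalized here by the total weight rather than the raw pixel count n x m, a common variant.

```python
# Sketch of FIGS. 29-30: a spatially weighted average of one chromaticity
# channel, with W(i, j) emphasizing the display center.
import numpy as np

def weighted_average(channel: np.ndarray) -> float:
    n, m = channel.shape
    ii, jj = np.mgrid[0:n, 0:m]
    # Spatial dominance function: weight falls off away from the center.
    w = np.exp(-(((ii - n / 2) / n) ** 2 + ((jj - m / 2) / m) ** 2) / 0.05)
    return float((w * channel).sum() / w.sum())

red = np.random.randint(0, 256, size=(480, 720)).astype(np.float64)
r_weighted = weighted_average(red)
```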
The weighted summations can be performed as given in the Extract Regional Information step 33 above, and W can be chosen and stored in any known manner. Pixel weighting function W can be any function or operator, and thus, for example, can be unity for inclusion, and zero for exclusion, of particular pixel locations. Image features can be recognized using known techniques, and W can be altered accordingly to serve a larger purpose, as shown in FIG. 34 below. Once an assigned color is chosen to be dominant using the above methods or any equivalent method, a better assessment of the chromaticity appropriate for expression by the ambient lighting system can be performed, especially since the computational steps required are much fewer than they would otherwise be if all chromaticities and/or all video pixels had to be considered. FIG. 31 gives an illustrative schematic representation to show establishing a color of interest in a distribution of assigned colors and then extracting the pixel chromaticities assigned thereto to derive a true dominant color. As can be seen, pixel chromaticities Cp are assigned to two assigned colors AC; the assigned color AC shown at the bottom of the figure is not selected to be dominant, while the top assigned color is deemed dominant (DC) and is selected to be a color of interest COI as shown. One can then examine further the pixels that were assigned (or at least a portion thereof) to the assigned color AC deemed to be a color of interest COI, and by reading directly their chromaticity (such as using an average, as given in FIG. 4, or by performing the dominant color extraction steps already given on a small scale for this particular purpose), one can obtain a better rendition of the dominant color, shown here as true dominant color TDC. Any processing steps needed for this can be carried out using the steps and/or components given above, or by using a separate True Color Selector, which may be a known software program or subroutine or a task circuit or the equivalent. The imposition of perceptual rules is discussed below, but generally, and as schematically shown in FIG. 32, dominant color extraction according to the invention can be performed numerous times or separately in parallel to provide a palette of dominant colors, where dominant color DC can comprise dominant colors DC1 + DC2 + DC3 as shown. This palette can be the result of applying the methods taught here to produce, using perceptual rules, a superior set of dominant colors.
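A sketch of this two-stage refinement follows, reusing the bin-midpoint quantization assumed earlier; deriving the color of interest from the mode is one assumed choice.

```python
# Sketch of FIG. 31: pixels assigned to the dominant assigned color (the
# color of interest COI) have their actual chromaticities read back and
# averaged to obtain true dominant color TDC.
import numpy as np
from collections import Counter

def true_dominant_color(pixels, assigned, coi):
    """Average the actual values of the pixels assigned to the COI."""
    mask = np.all(assigned == np.asarray(coi), axis=1)
    return pixels[mask].mean(axis=0)

pixels = np.random.randint(0, 256, size=(1000, 3))
assigned = (pixels // 16) * 16 + 8                         # quantization as in FIG. 22
coi = Counter(map(tuple, assigned)).most_common(1)[0][0]   # dominant assigned color
tdc = true_dominant_color(pixels, assigned, coi)           # true dominant color TDC
```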
As mentioned under FIG. 30, a pixel weighting function or the equivalent can provide weighting by pixel position to allow special consideration or emphasis for certain display regions. FIG. 33 shows the simple front surface view of a video display as shown in FIG. 1, with an example of unequal weighting given to pixels Pi in a preferred spatial region. For example, the central region C of the display can be weighted using a numerically large weight function W, while an extraction region (or any region, such as a scene background) can be weighted using a numerically small weight function w, as shown.
This weighting or emphasis can be applied to image features as shown in FIG. 34, which gives a simple front surface view of a video display as shown in FIG. 33, and where an image feature J8 (a fish) is selected using known techniques by feature extraction step FE (see FIGS. 3 and 12). This image feature J8 can be the only video content used, or just part of the video content used, during dominant color extraction DCE as shown and described above.
Referring now to FIG. 35, it can be seen that it is also possible, using the teachings given here, to allow a dominant color selected for a video frame to be obtained at least in part by relying upon at least one dominant color from a previous frame. Frames F1, F2, F3, and F4 are shown schematically undergoing dominant color extraction DCE as shown, whose aim is to extract dominant colors DC1, DC2, DC3, and DC4, respectively, as shown, and where, by calculation, one can establish the dominant color chosen for a frame, shown here as DC4, as a function of dominant colors DC1, DC2, and DC3 as shown (DC4 = F(DC1, DC2, DC3)). This can allow either an abridged procedure for choosing dominant color DC4 for frame F4, or a better informed one where the dominant colors chosen for previous frames F1, F2, and F3 help influence the choice of dominant color DC4. This abridged procedure is shown in FIG. 36, where, to reduce computational burden, a provisional dominant color extraction DC4* uses a colorimetric estimate, and then in the next step is aided by Dominant Colors Extracted from Previous Frames (or a single previous frame), helping prepare a choice for DC4 (Prepare DC4 Using Abridged Procedure). This procedure can be applied to good effect to the description below.
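One hedged sketch of such a function F follows: an exponentially decaying blend of the provisional estimate DC4* with the dominant colors of the previous frames. The blend and its decay factor are assumptions; the text leaves F open.

```python
# Sketch of FIG. 35/36: DC4 = F(DC1, DC2, DC3), realized here as a decaying
# blend of the provisional estimate DC4* with prior dominant colors.
import numpy as np

def dc_from_history(provisional_dc: np.ndarray, history: list,
                    decay: float = 0.5) -> np.ndarray:
    """Blend a provisional estimate DC4* with DC1..DC3 from earlier frames."""
    weights = [decay ** k for k in range(1, len(history) + 1)]
    blended = sum(w * h for w, h in zip(weights, reversed(history)))
    total = 1.0 + sum(weights)
    return (provisional_dc + blended) / total

dc4 = dc_from_history(np.array([200., 40., 30.]),        # DC4* (provisional)
                      [np.array([180., 60., 40.]),       # DC1 (oldest)
                       np.array([190., 50., 35.]),       # DC2
                       np.array([195., 45., 32.])])      # DC3 (most recent)
```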
Referring now to FIG. 37, a simple front surface view is shown of a video display portraying scene content, including a possible newly appearing feature, to illustrate the need for dominant color extraction with darkness support and other perceptual prerogatives according to the invention. For the reasons stated above, dominant color extraction often produces results at odds with the desired perceptual output. FIG. 37 gives a schematic portrayal of a dark or night scene featuring a particular scene feature V111 (e.g., a green fir tree). Using dominant color extraction without exercising perceptual rules, a problem often arises: color in scene content or in a particular frame often has too great an effect from a perceptual standpoint, with ambient broadcast colors appearing too bright and not subtle or appropriate for the dark scene content. In the example afforded by FIG. 37, a large number of, or a majority of, pixels, such as majority pixels MP as shown, form the bulk of - or a large part of - the frame image, and these majority pixels MP possess, on average, little or no luminance. In this instance, dark effects for ambient broadcast can be preferable, and the chromaticities preferred by designers for ambient broadcast are often those of a separate scene entity, such as the tree in scene feature V111, rather than a chromaticity derived in large part from majority pixels MP, which in this illustrative example express darkness by having low average luminance and nominal chromaticities which, if expressed in ambient lighting, might seem contrived. Methods for accomplishing this include imposing a perceptual rule effected by providing darkness support as discussed below, where a dark scene is detected, and such majority pixels MP are identified and either eliminated from consideration in dominant color extraction, or given reduced weighting in relation to other pixels forming scene features such as scene feature V111. This requires recognition of a scene element using scene content analysis CA (see FIG. 12), and then effecting special treatment for various other scene elements, such as a dark background or a scene feature. Imposing perceptual rules can also include removing scene portions that are undesirable for dominant color extraction, such as scene speckle or scene artifacts, and/or can include image feature recognition, such as for scene feature V111, by feature recognition (e.g., feature extraction FE, e.g., FIGS. 3 and 12, or a functional equivalent) as discussed for FIG. 34.
In addition, a new scene feature, such as V999, a lightning bolt or flash of light, can take precedence over - or be co-existent with - the chromaticity afforded by extracting a general chromaticity from scene feature V111 that is obtained using the methods given above.
Similarly, light, bright, white, greyish, or uniformly high luminance scenes can benefit from the imposition of perceptual rules. Now referring to FIG. 38, a simple front surface view is shown of a video display portraying scene content to illustrate dominant color extraction with color support. FIG. 38 gives a scene that portrays a relatively bright, somewhat self-similar region as scene feature V333, which might depict cloud cover, or white water splashing from a waterfall. This scene feature V333 might be predominantly grey or white, and therefore can be deemed to be comprised of majority pixels MP as shown, while another scene feature, V888, e.g., a blue sky, is not composed of majority pixels, and can be preferred over majority pixels MP for dominant color extraction - i.e., an ambient lighting effects designer might prefer that blue be broadcast in this instance, rather than a white or grey color, particularly if scene feature V888 is newly appearing, or contains a preferred chromaticity (e.g., sky blue) for ambient broadcast. One problem with the prior art is that dominant color extraction can sometimes result in color being underestimated, and dominated by bright or highly saturated whites, greys, or other undersaturated colors. To remedy this, a perceptual rule or set of perceptual rules can be imposed to provide color support, such as to assess scene brightness and reduce or eliminate the influence or weighting of white/grey majority pixels MP, while boosting the influence of other scene features such as blue sky V888.
Now referring to FIG. 39, there are shown schematically three illustrative categories into which perceptual rules according to the instant invention can be classified. As shown, Perceptual Rules for Dominant Color Selection can comprise any or all of: Simple Chromaticity Transforms SCT, Pixel Weighting as a Function of Scene Content PF8, and Extended Extraction / Search EE8. These categories are meant to be merely illustrative, and those of ordinary skill will be able to use the teachings given here to develop alternate similar schemes.
Now referring to FIGS. 40 - 43, examples of specific methodologies relating to imposition of these perceptual rule groups are given.
The first, simple chromaticity transforms SCT, can represent many methodologies, all of which seek to substitute or transform initially intended dominant colors with other, distinct chromaticities. Specifically, a particular chosen chromaticity (x, y) produced by dominant color extraction can be replaced in any desired instance with a transformed chromaticity (x', y'), as shown in FIG. 40, which shows schematically a simple chromaticity transform SCT as a functional operator.
If, for example, feature extraction FE obtains a particular dominant color (e.g., a brown) for ambient broadcast, and the nearest match for that dominant color in the light space of ambient light source 88 is a chromaticity (x, y), such as a color that has a purplish cast - and that nearest match chromaticity is not preferred from a perceptual standpoint - a transformation or substitution can be made to a chromaticity (x', y'), such as a color made from orange and green ambient light production, and developed by ambient lighting production circuit 18 or the equivalent as previously cited. These transformations can take the form of chromaticity-by-chromaticity mapping, perhaps contained in a lookup table (LUT), or can be embodied in machine code, software, a data file, an algorithm, or a functional operator. Because this type of perceptual rule need not involve explicit content analysis, it is termed a simple chromaticity transform.
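A minimal sketch of such a lookup-based transform follows; the two entries are invented chromaticity pairs chosen only to echo the examples in the text.

```python
# Sketch of a simple chromaticity transform SCT as a lookup: a chosen
# dominant chromaticity (x, y) is replaced by a preferred (x', y').
SCT = {
    (0.35, 0.30): (0.42, 0.44),   # purplish nearest match -> orange/green blend
    (0.31, 0.33): (0.38, 0.40),   # bright white -> warmer yellowish-white
}

def transform(xy: tuple) -> tuple:
    return SCT.get(xy, xy)        # chromaticities without an entry pass through

preferred = transform((0.35, 0.30))   # -> (0.42, 0.44)
```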
Simple chromaticity transforms SCT can exercise perceptual rules that give greater broadcast time to preferred chromaticities than would otherwise be given. If, for example, a particular blue is preferred or is deemed desirable, it can be the subject or result of a simple chromaticity transform SCT which favors it by mapping a large number of similar blue chromaticities to that particular blue. Also, the invention can be practiced where a simple chromaticity transform is used to preferentially choose a chromaticity found in the second rendered color space of ambient light source 88. Also according to the invention, scene content analysis CA can be used to add functionality to pixel weighting function W in a manner to allow imposition of perceptual rules. FIG. 43 shows possible functional forms for such a pixel weighting function. Pixel weighting function W can be a function of multiple variables, including any or all of: video display pixel spatial position, as indexed, for example, by indices i and j; chromaticity, such as a phosphor luminance level or primary value R (where R can be a vector representing R, G, and B) or chromaticity variables x and y; and luminance itself, L (or an equivalent) as shown. By performing feature extraction FE and content analysis CA, the values of pixel weighting function W can be set to execute perceptual rules. Because pixel weighting function W can be a functional operator, it can be set to reduce - or eliminate, if necessary - any influence from selected pixels, such as those representing screen speckle or screen artifacts, or those deemed to be majority pixels MP by content analysis, such as when cloud cover, water, darkness, or other scene content is given less weighting or zero weighting to comply with a perceptual rule.
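A sketch of such a multivariable weighting operator follows; the thresholds and the particular rules (zeroing near-black pixels, de-weighting the border) are illustrative choices, not values prescribed by FIG. 43.

```python
# Sketch of the multivariable pixel weighting function W of FIG. 43:
# a function of pixel position (i, j), chromaticity (here the RGB triplet),
# and luminance L.
def W(i: int, j: int, rgb: tuple, n: int, m: int) -> float:
    r, g, b = rgb
    L = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luminance estimate
    if L < 10:
        return 0.0                             # drop near-black majority pixels
    border = min(i, j, n - 1 - i, m - 1 - j)
    if border < n // 10:
        return 0.25                            # reduced weight near the frame border
    return 1.0

weight = W(240, 360, (200, 180, 40), n=480, m=720)   # central, bright pixel -> 1.0
```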
Now referring to FIG. 41, a series of possible steps is shown schematically for dominant color extraction employing an average calculated using a pixel weighting function according to the invention to execute two illustrative possible perceptual rules. The general step, termed Pixel Weighting as a Function of Scene Content PF8, can comprise many more possible functions than the illustrative two shown using arrows.
As indicated on the left side of FIG. 41, and to provide for the Darkness Support Perceptual Rule indicated, or darkness support (as discussed for FIG. 37), scene content analysis is performed. One possible step, optionally a first step, is to Assess Scene Brightness, such as by calculating, for any or all pixels, or for a distribution of assigned colors, the overall or average luminance or brightness per pixel. In this particular example, the overall scene brightness is deemed low (this step omitted for clarity), and a possible resultant step is to Lower Ambient Lighting Luminance as shown, to make the production of ambient light match the scene darkness more than it would otherwise. Another possible step is to eliminate or reduce the weighting given by the pixel weighting function W for high luminance pixels, shown as Truncate / Reduce Weighting for Bright/Colored Pixels. The chosen threshold luminance level to decide what constitutes a bright or colored pixel can vary, and can be established as a fixed threshold, or can be a function of scene content, scene history, and user preferences. As an example, all bright or colored pixels can have their W values reduced by a factor of 3 in order to reduce ambient lighting luminance for whatever dominant color is chosen from them. The step of lowering the ambient lighting luminance can also operate toward this goal, such as to lower all pixel luminances equally by further reducing pixel weighting function W accordingly. Alternately, the pixel weighting function W can be reduced by a separate function that is itself a function of the luminance of a particular pixel, such as a factor 1/L^2, where L is a luminance.
Another possible step for darkness support is Possible Selection of COIs from Bright/Colored Pixels, namely the above-cited process whereby a color of interest is established from the subset of pixels in video content which are bright and perhaps have high saturation (colored), e.g., from feature V111 of FIG. 37. Specifically, certain chromaticities can be chosen for further analysis in a manner similar to that discussed and shown in FIG. 31 above, whether it is to discern the true color for an assigned color that has been chosen, or whether the color of interest is from a pixel chromaticity and will itself become part of an assigned color distribution for further analysis, such as repeating dominant color extraction for such colors of interest (e.g., finding a representative green for the fir tree V111). This can lead to another possible step shown, Possible Extended Extraction, as discussed further below, and Select Dominant Color as shown, which could be the result of doing extended dominant color extraction on a distribution of colors of interest gleaned from a prior dominant color extraction process.
As shown on the right side of FIG. 41, to provide for the Color Support Perceptual Rule indicated (color support as discussed for FIG. 38), scene content analysis is again performed. One possible step, optionally a first step, is to Assess Scene Brightness, such as by calculating, for any or all pixels, or for a distribution of assigned colors, the overall or average luminance or brightness per pixel, as done before. In this example, a high overall scene brightness is found. Another possible step is to eliminate or reduce the weighting given by the pixel weighting function W for high luminance, white, grey, or bright pixels, shown as Truncate / Reduce Weighting for Bright/Colored Pixels. This can prevent the dominant color chosen from being a bland or overly bright chromaticity which might be oversaturated or too white or too grey. For example, the pixels representing cloud cover V333 of FIG. 38 can be eliminated from the pixel weighting function W by setting contributions therefrom to a negligible value or to zero. One can then select a dominant color or a color of interest, such as Select COI from Remaining Chromaticities as shown. Possible Extended Extraction as shown can also be performed to help perform the step of Select Dominant Color as shown, as discussed below.
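The two branches of FIG. 41 can be sketched together; all thresholds and scale factors below are assumptions for illustration.

```python
# Sketch of darkness support and color support: assess scene brightness,
# then adjust the per-pixel weights and the ambient luminance accordingly.
import numpy as np

def support_weights(pixels: np.ndarray, dark_thresh=40., bright_thresh=180.):
    lum = pixels @ np.array([0.2126, 0.7152, 0.0722])   # per-pixel luminance
    w = np.ones(len(pixels))
    if lum.mean() < dark_thresh:                 # dark scene: darkness support
        w[lum > bright_thresh] /= 3.0            # reduce bright/colored pixel weight
        ambient_scale = 0.5                      # lower ambient lighting luminance
    else:                                        # bright scene: color support
        sat = pixels.max(axis=1) - pixels.min(axis=1)
        w[(lum > bright_thresh) & (sat < 25)] = 0.0   # drop white/grey majority pixels
        ambient_scale = 1.0
    return w, ambient_scale

pixels = np.random.randint(0, 256, size=(1000, 3)).astype(np.float64)
weights, scale = support_weights(pixels)
```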
The step of Extended Extraction / Search EE8, as mentioned above and as shown in FIG. 42, can be any process undertaken after an initial dominant color extraction process, such as a process of using perceptual rules to narrow down a set of candidate dominant colors. FIG. 42 shows schematically a series of possible steps for dominant color extraction employing a chromaticity / luminance average calculated using a pixel weighting function for extended dominant color extraction according to the invention to execute two illustrative possible perceptual rules. Two such examples of extended extraction shown are the Static Support Perceptual Rule and the Dynamic Support Perceptual Rule as shown. On the left side as shown, one possible static support perceptual rule process can include a step of Identify, then Truncate / Reduce Weighting for Majority Pixels. This can involve using scene content analysis to identify majority pixels MP as shown in FIGS. 37 and 38, using edge analysis, form recognition, or other content analysis techniques on the video content. One then reduces or sets to zero the pixel weighting function W for pixels deemed to be majority pixels MP as discussed earlier.
Then, in the next possible step, Possible Selection of COI from Remaining Chromaticities (e.g., Histogram Method), one performs an extended dominant color extraction on pixels that are not majority pixels MP, such as the earlier cited dominant color extraction from the pixel chromaticities or distribution of assigned colors by extracting any of: [a] a mode (e.g., histogram method); [b] a median; [c] a weighted average by chromaticity; or [d] a weighted average using a pixel weighting function of the pixel chromaticities or assigned colors. This can be similar to a functional repeat of dominant color extraction after applying a perceptual rule, such as reducing the weight given to majority pixels. From this dominant color extraction process, the last step, Select Dominant Color for Ambient Broadcast, can be executed.
Another possible perceptual rule is the Dynamic Support Perceptual Rule, as shown on the right side. The first two steps shown are identical to those for Static Support on the left side. A third possible step is identifying a newly appearing scene feature (such as lightning bolt V999) and performing Dominant Color Extraction from Newly Appearing Scene Feature as shown. A fourth possible step is to Select Chromaticities from Either or Both of Previous Steps for Ambient Broadcast as indicated, namely that this perceptual rule can involve taking either or both of the result of performing dominant color extraction on the newly appearing scene feature or of performing dominant color extraction on the remaining chromaticities obtained after reducing or eliminating the effect of majority pixels MP. In this way, for example, both the newly appearing lightning strike V999 and the tree V111 can contribute to the derivation of one or more dominant colors DC for ambient broadcast, rather than taking a straight dominant color extraction without a perceptual rule. In exercising a perceptual rule in this way, nothing precludes quantizing the color space beforehand, as given above. Also, these methods can be repeated for chosen scene features, or to search further for preferred chromaticities for ambient broadcast.
As a further example, consider a particular illustrative scenario for video content comprising three background scene features and one newly appearing feature. A background appears, comprising sand, sky, and sun. Using content analysis, the scene is assessed. Sand tones are then found to make up 47% of image pixels. A perceptual rule is utilized such that these sand-colored pixels are designated majority pixels, and given, via pixel weighting function W, zero influence as long as other large scene elements are present. The sky is selected for extended extraction, and the resultant blue, extracted using the methods given above, is set as a color of interest COI. The true dominant color extraction process (see FIG. 31) is then started to derive a true color representative of actual pixel chromaticities in the sky feature. This process is updated on a frame-by-frame basis (see FIGS. 16 and 17). The sun, in turn, is recognized by feature extraction FE, and, using a simple chromaticity transform SCT, a more pleasing yellowish-white chromaticity is chosen instead of a brighter white inherent to the video color information. When the sand tone pixels drop below a certain numerical threshold, another perceptual rule allows that all three features are then set as dominant colors, and can be set for ambient broadcast, either individually, depending on pixel positions (e.g., extraction regions such as R1, R2, etc.), or together. Then a newly appearing feature, a white boat, causes, via another perceptual rule that emphasizes new content, a white output based on dominant color extraction for the boat, so that the ambient broadcast turns white. When the boat recedes in the scene, another perceptual rule - one that deems newly appearing content to be no longer controlling when the number of pixels it represents drops below a certain percentage, or below a share outside the features already in play (sand, sky, sun) - allows that the three background features again are set for ambient broadcast through their respective dominant colors. When sand tone pixels again are greater in number, their effect is again suppressed by allowing pixel weighting function W to be zeroed for them. However, another perceptual rule allows that when the other two background features (sky and sun) are no longer present, pixel weighting function W for the sand-tone pixels is then restored, subject to reduction again in the presence of a newly appearing scene feature. A red snake appears, and content analysis attributes 11% of pixels to that feature. Sand-tone pixels are again eliminated from effect in dominant color extraction, and feature extraction from the snake yields a color of interest COI, from which extended extraction and/or an optional true color selector process refines the dominant color extracted to represent the snake color for ambient broadcast. It can be readily seen from the foregoing that, without the mechanism for altering the dominant color extraction to follow perceptual rules, the dominant color extracted might be time-varying shades of a light bluish white throughout, not representative of scene content, and having less entertainment or information value for the viewer. The imposition of perceptual rules as thus given allows specificity in the form of parameters, and yet, once effected, has the effect of appearing to be intelligently choreographed.
Results of applying perceptual rules in dominant color extraction can be used as previously given, so that such color information is made available to ambient light source 88 in a second rendered color space.
In this way, ambient light produced at L3 to emulate extraction region R3 as shown in FIG. 1 can have a chromaticity that provides a perceptual extension of a phenomenon in that region, such as the moving fish as shown. This can multiply the visual experience and provide hues which are appropriate and not garish or unduly mismatched. Now referring to FIG. 44, there is shown schematically a number of possible functional groups to perform dominant color extraction using more general perceptual rules in accordance with user preferences according to the invention so as to produce a preferred ambient broadcast. As can be seen from FIG. 44, the perceptual rules previously discussed can be expanded, especially if added user preferences are to be taken into account. Chromaticity rules can be applied as previously described, with Simple Chromaticity Transforms SCT, Pixel Weighting as a Function of Scene Content PF8, and Extended Extraction / Search EE8 as shown. Chromaticity rules can be augmented by adding explicit Luminance Perceptual Rules
LPR, which function to further modify the luminance information inherent in dominant color extraction using just the Chromaticity Perceptual Rules as shown.
Temporal Delivery Perceptual Rules TDPR as shown can allow faster or slower time delivery or altered time development of ambient broadcasts. This can include slowing down or speeding up changes in luminance and/or chromaticities, but also more complex functions or operators which selectively speed up or slow down ambient lighting effects in response to Scene Content as read from functional step PF8 as shown, or other factors.
Spatial Extraction Perceptual Rules SEPR can allow, as previously discussed, weighted averages of pixel chromaticities using pixel weighting function W which take into account pixel position (i, j) - but now these spatial and other general perceptual rules are also a function of Possible Explicit Indicated User Preferences as shown.
Specifically, this set of general perceptual rules, shown at the figure's upper right as General Perceptual Rules in Accordance with User Preferences, is developed in conjunction with - and can be altered as a function of - Possible Explicit Indicated User Preferences as shown in the upper left, and the result is a Preferred Ambient Broadcast PAB as shown. Each of the arrows from the explicit indicated user preferences into the general perceptual rules signifies symbolically and illustratively the effect of a particular user preference, and includes any and all inputs that communicate a user preference to be used to influence the character and effect of perceptual rules that affect or effect a preferred ambient broadcast - see the Definitions section. As mentioned before, user preferences can include steps which affect the general perceptual rules and therefore the nature and character of the ambient light produced, e.g., lively, responsive, bright, etc., versus subdued, slow-moving, dim, or subtle.
FIG. 45 shows symbolically some possible components, methods, and signal sources to communicate user preferences, including some that can use an existing component system that may not have been designed for communication of explicit indicated user preferences. While it is contemplated that a remote control device or similar user-operated control can allow entry of explicit indicated user preferences directly, other inputs of user preferences can include the detection of particular selections or selection behavior on a user-operated control. One can work with a default set of user preferences that influence the general perceptual rules, and then, for example, one can allow for more lively preferences to coincide with extreme value selection from the user-operated control. For example, one can have the explicit indicated user preference indicated by repeated up and down varying of a value selected by a user-operated control, such as shown in FIG. 45 on remote control RC, where up control 90 and down control 100 are repeatedly and alternatingly actuated as typographically shown. This can allow, for example, for toggling between a set of user preferences (e.g., lively versus subdued). The up/down control can be a bona fide up/down function, or can be any up/down change of a value, such as a volume change or channel change. One can formulate the control feel such that higher requests for particular parameters do not, in fact, result in parameter change, but are merely present to signal a user preference, such as lively or bright preferred ambient broadcasts. Alternatively, a user preference can be communicated by selection of an extreme value on the user-operated control, such as the value K selected to 33/40 ... 970/980/990/999 as illustratively shown. Or a user preference can be inputted with a high rate of change in a value K selected by a user-operated control (such as K=33 to 511 in one step as shown). While these methods for obtaining user preferences are limited in scope, they do allow inputs using existing hardware and intuitive methods.
Other methods for input of information that allow specifying user preferences include sensing of conditions using known components and methods in the ambient space AO as symbolically shown - for example, a room vibration sensor VS can sense dancing or loud voices, while a sound sensor SS can perform similar functions. A light sensor LS can allow for brighter, more lively ambient broadcasts during daylight hours, for example, while darkness can allow for lower luminances and perhaps a lesser degree of Darkness Support and/or Color Support as discussed for FIG. 41.
In addition, nothing precludes use of a known graphical user interface GUI as shown, such as choices displayed on video display D or any other display, such as a display on remote control or user-operated control RC, to input user preferences. The user preferences can be displayed as choices with pre-set characteristics as a package, or the user can be asked to select specific parameter-based general perceptual rules, such as the degree to which the Brightness or Darkness Support of FIG. 41 is effected. Using parametrized or sampled change vectors or other functions, it is possible using known techniques to alter the degree of effect of such perceptual rules. The degree of Darkness Support can, for example, be selected on a scale of 1 to 10, or can be more specific, even including having the user specify actions to be taken with regard to specific phenomena, such as the displaying of certain chromaticities - such as whether or not one wants to view bright, fully saturated colors or partially saturated colors - or whether one wants to limit total or maximum luminance for the ambient broadcast from ambient light source 88. Alternatively, one can, using known methods, use any of video meta-data (shown VMD), auxiliary data, or sub-code data associated with an audio-video signal AVS associated with the video content as shown. This can operate as an explicit indicated user preference, even though the user may not have explicitly consented or re-consented to it. The data so encoded does not have to be absolute, but can include scripts that use any of the other methods given here to further specify user preferences used for a viewing session.
For example, one can use another method in conjunction with that just described to specify a user preference, such as by making a choice on a user-operated control such as remote control RC as shown. A choice selector 155 can allow a choice, including a choice presented by receiving video meta data VMD. Also, any selector or button on a user-operated control can communicate a user preference or toggle between established user preferences by a sustained or repeated actuation call, such as by pressing choice selector 155 continuously or repeatedly, even though this is not strictly necessary for the functionality the selector otherwise represents. For example, sustained or repeated pressing of an ON button or channel change button, or an actuation call for same, can set a user preference. This action would not change power status or change the channel, and therefore could be accommodated on existing remote control or other video control hardware. Methods for newly interpreting remote control commands in this way are known in the electronic and software arts and can be incorporated into existing components and methods. Alternatively, one can perform pressure sensing by a pressure sensor (also indicated as 155 for clarity) inside the remote control RC or user-operated control device. This is perhaps the most intuitive, and can include inputting of complex behaviors to be interpreted as communicating user preferences. For example, a tightly squeezed remote control can signal the desire for action and bright, fast-moving preferred ambient broadcasts, while gentle pressure can signal the opposite. Known electrical pressure-sensitive films can be used, including the use of strategically placed sensors that communicate based on their location. Sustained pressing of the front of remote control RC, for example, can communicate action and brightness, including emphasizing dominant color extraction selected from the display center and being very responsive to new image features, while sustained pressing of the back of the remote control can signal the opposite in a preferred ambient broadcast.
Finally, one can use a known motion sensor MS as shown inside the user-operated control to establish a user preference. Such a motion sensor can be, for example, a simple accelerometer using capacitive or magnetic effects to provide sensing of motion. The front of the remote can be pitched back and forth as shown using the heavy arrow at the figure lower right so as to communicate a preference, while the back can be pitched for another indication. The motion sensor can also sense back and forth pitching in three dimensions, allowing, for example, for six degrees of freedom in selecting an explicit indicated user preference.
These methods for input of a user preference can be further combined. For example, one can toggle back and forth between acceptance of a user preference induced by an ambient condition in ambient space AO, and non-acceptance of same, by shaking the remote, or by repeated pressing of a choice selector 155. Many permutations of this type of control are possible using this teaching.
Regarding general perceptual rules, the nature and character of the preferred ambient broadcast produced can be a function of choices or options obtained through user preferences. The luminance of an ambient broadcast is a very important parameter to be set according to user preferences.
FIGS. 46 and 47 show Cartesian plots of a number of waveforms representing luminance as a function of time, for various luminance perceptual rules following different illustrative user preferences UP1, UP2, UP3, UP4, UP5, and UP6. The first illustrative waveform, resulting from a user preference choice UP1 (perhaps a default choice), represents a normal luminance profile or delivery that comes from the chromaticity perceptual rules and dominant color extraction previously described. The second waveform results from applying user preference choice UP2 as shown, and is a halved luminance profile for lower broadcast brightness, which might result from a desire for a subdued preferred ambient broadcast and can be effected easily using known methods. Alternatively, the third illustrative waveform shows a luminance profile which is the result of applying a user preference choice UP3, and offers an ambient broadcast only when the nominal luminance called for using dominant color extraction exceeds a luminance suppressive threshold LT as shown, so that the dotted luminance lines represent luminance not expressed (a dark ambient broadcast), while the solid lines represent the luminance of ambient light produced. The fourth user preference choice UP4 shows a luminance ceiling cap or limit on the maximum brightness or luminance, so that nominal luminance developed by dominant color extraction cannot exceed a value, such as luminance ceiling L9 as shown. Alternatively, a luminance floor L1 as shown in the next waveform using a user preference UP5 as shown can allow for a minimum luminance regardless of what is being developed by the dominant color extraction methods specified here, as can be seen. Finally, a luminance transform LX as shown associated with user preference choice UP6 can allow for a complex functional change - not just ceilings, floors, thresholds, or multipliers - in the expressed luminance for the preferred ambient broadcast. Luminance transform LX can take any functional form, including the use of operators, and as a function of any variable available in this teaching, to alter the expressed luminance, increasing or decreasing the luminance from what it otherwise would be without using user preferences to alter general perceptual rules.
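As a minimal sketch, assuming a nominal luminance normalized to [0, 1], the six illustrative rules of FIGS. 46 and 47 might be applied as below; the function and its default parameter values are assumptions introduced for illustration, not part of the patent text.

def apply_luminance_rule(nominal, rule, lt=0.2, floor=0.1, ceiling=0.8,
                         transform=None):
    """Return the expressed luminance for one illustrative rule.

    nominal   -- luminance in [0, 1] developed by dominant color extraction
    rule      -- one of 'UP1'..'UP6' as in FIGS. 46 and 47
    lt        -- suppressive luminance threshold LT (rule UP3)
    floor     -- luminance floor L1 (rule UP5)
    ceiling   -- luminance ceiling L9 (rule UP4)
    transform -- arbitrary callable standing in for luminance transform LX
    """
    if rule == "UP2":                        # halved profile, subdued
        return 0.5 * nominal
    if rule == "UP3":                        # dark unless LT is exceeded
        return nominal if nominal > lt else 0.0
    if rule == "UP4":                        # cap at luminance ceiling L9
        return min(nominal, ceiling)
    if rule == "UP5":                        # never below luminance floor L1
        return max(nominal, floor)
    if rule == "UP6" and transform is not None:
        return transform(nominal)            # complex functional change LX
    return nominal                           # UP1: normal profile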
As a simple illustration of part of the function made possible by the arrangement shown in FIG. 44, FIG. 48 shows schematically a number of Simple Chromaticity Transforms SCT effecting a number of possible chromaticity perceptual rules according to Explicit Indicated User Preferences (shown).
For example, a Lockout of Selected Chromaticities as shown can effect an elimination or lockout of certain chromaticities, such as blood red, or other colors preselected among colors deemed to be lively and only for use when a lively preferred ambient broadcast is wanted. This and other such general perceptual rules can be effected by software design, and/or by the User Interface and Preferences Memory U2 (e.g., FIGS. 3 and 12).
Alternatively, a less drastic step is to perform a Change in Weight Given to a Chromaticity (shown), such as by giving a lesser weight to a selected chromaticity in a pixel weighting function W, so that this color is less influential in the process of dominant color extraction DCE.
Generally, a simple chromaticity transform SCT does not have to involve a bare set of one-for-one substitutions of chromaticities; rather, the character of the dominant color DC selected can be changed in a systematic fashion to satisfy general objectives. For example, explicit indicated user preferences can be used to offer a variety of different degrees of color saturation. In this way, a Change in Saturation (shown) can be a very effective tool to provide for differing appearances and characters in ambient broadcasts.
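The three transforms of FIG. 48 might be sketched as follows, treating chromaticity by hue in HSV space; the hue tolerance, the reduced weight of 0.25, and the use of HSV rather than CIE (x, y) coordinates are assumptions made only to keep the sketch short.

import colorsys  # rgb_to_hsv / hsv_to_rgb operate on values in [0, 1]

def pixel_weight(r, g, b, locked_hues=(), deweighted_hues=(), tol=0.05):
    """Contribution of one pixel to pixel weighting function W.
    Locked-out chromaticities get zero weight (Lockout of Selected
    Chromaticities); de-emphasized ones get a reduced weight (Change in
    Weight Given to a Chromaticity). Hue wrap near 0/1 is ignored for
    brevity."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    if any(abs(h - hue) < tol for hue in locked_hues):
        return 0.0
    if any(abs(h - hue) < tol for hue in deweighted_hues):
        return 0.25
    return 1.0

def change_saturation(r, g, b, scale):
    """Change in Saturation: rescale S of an extracted dominant color."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, min(1.0, s * scale), v)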
FIG. 49 shows schematically how the quality or degree of execution of two perceptual rules as shown in FIG. 41 can be altered by user preferences. The figure shows symbolically the Darkness Support Perceptual Rule and the Color Support Perceptual Rule as being fully enabled by user preference choices UP2 and UP4, respectively (dark heavy arrows as shown), and partially (or fully) disabled by user preference choices UP1 and UP3, respectively (light dotted arrows as shown). For example, the extent to which one truncates or reduces the weight assigned to bright/colored pixels, or grey/white pixels (one step shown in FIG. 41), can be altered as a function of user preferences.
FIG. 50 shows schematically the extraction of video meta data from an audio-video signal to affect perceptual rules according to the invention as previously shown in FIG. 45, but with a known buffer B which can - but does not have to - store video meta data VMD, auxiliary data, or sub-code data associated with video content or an audio-video signal AVS. For example, buffer B can extract or derive parameters that allow specifying general perceptual rules for use at a time not synchronous with audio-video signal AVS or playback of video content. The buffer can be a memory device, or simply a registry or lookup table or other software function that allows call-up of meta data, auxiliary data, or subcode - or derivatives thereof - for use in providing a preferred ambient broadcast, especially using temporal delivery perceptual rules.

FIG. 51 shows Cartesian plots of a number of waveforms representing luminance - or chromaticity (shown Luminance / x / y) - as a function of time, for various temporal delivery rules following or resulting from different illustrative user preferences UP1, UP2, and UP3. The first illustrative waveform, resulting from a user preference choice UP1 (perhaps a default choice), represents a normal temporal delivery profile that comes from the chromaticity perceptual rules and dominant color extraction previously described.
The second waveform results from applying user preference choice UP2 as shown, and is a slowed-down temporal delivery profile for lower speeds of change in broadcast parameters. Obviously, imposing this rule might leave the possibility of truncation or ignoring of subsequent changes in luminance or chromaticity, because the time development of the expressed luminance or chromaticity parameter lags behind the corresponding real-time parameter ordinarily developed by dominant color extraction. Alternatively, the third illustrative waveform shows a luminance profile which is the result of applying a user preference choice UP3, and offers as shown an ambient broadcast with a sped-up temporal delivery. This might require the use of a buffer B from FIG. 50.
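Both temporal delivery rules reduce to simple per-frame operations, sketched below under assumed names and an assumed step size; the slew limiting (UP2) lags and may truncate fast changes exactly as noted above, while the sped-up delivery (UP3) reads ahead in a buffer B of values derived in advance.

def slewed(previous, target, max_step=0.02):
    """UP2: move the expressed luminance or chromaticity coordinate at
    most max_step per frame toward the value developed by dominant color
    extraction; rapid changes are lagged and may be truncated."""
    delta = target - previous
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return previous + delta

def sped_up(buffer_b, frame_index, lead=5):
    """UP3: express the value extracted 'lead' frames ahead, using a
    buffer B (FIG. 50) of values derived in advance from meta data or
    pre-analysis of the audio-video signal AVS."""
    return buffer_b[min(frame_index + lead, len(buffer_b) - 1)]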
Spatial extraction perceptual rules can also be altered by explicit indicated user preferences. FIG. 52 gives a simple front surface view of a video display as shown in FIG. 34, showing schematically and illustratively an image feature J8 in a center region C extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences - a partially enabled extraction (light arrow, user preference choice UP1), and a fully enabled extraction (heavy arrow, user preference choice UP2). Similarly, as shown in FIG. 53, the degree to which extraction occurs throughout all of a center region C can be varied in a similar manner.
FIG. 53 gives a simple front surface view of a video display as shown in FIG. 52, but showing schematically a center region extracted in varying degrees using different spatial extraction perceptual rules according to different user preferences.
Either of these spatial extraction perceptual rules can be effected, for example, by altering pixel weighting function W to allow either a greater weight to be given to newly arriving features (J8) or to a region (e.g., center region C), or a lesser weight so as to register relatively little influence from same. Center region C is chosen for illustrative purposes, and any display area can be singled out for altered treatment in accordance with user preferences operating to affect the general perceptual rules.
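A sketch of the spatial factor of pixel weighting function W follows; the region bounds, the gain of 4, and the blending parameter are assumptions for illustration. The 'enable' parameter blends between the partially enabled (UP1) and fully enabled (UP2) extraction of FIGS. 52 and 53, and a gain below 1 would instead de-emphasize the region.

def spatial_weight(i, j, width, height, center_gain=4.0, enable=1.0):
    """Spatial factor of pixel weighting function W for pixel (i, j).
    'enable' in [0, 1] sets how fully the rule is applied: 0.0 leaves
    all pixels at weight 1 (rule disabled), 1.0 applies the full
    center_gain to center region C (here assumed to be the middle half
    of the display in each dimension)."""
    in_center = (width / 4 <= i < 3 * width / 4 and
                 height / 4 <= j < 3 * height / 4)
    gain = center_gain if in_center else 1.0
    return 1.0 + enable * (gain - 1.0)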
Generally, there are a number of possible ways, some already mentioned, of varying the character and effect of the respective perceptual rules using user preferences. One is to effect changes in pixel weighting function W so as to emphasize or de-emphasize certain display areas (i, j), chromaticities, and luminances, as a function of time, scene content, and explicit indicated user preferences. Another is to parametrize those processes and take a desired action, such as de-rating or reducing luminances, shifting chromaticities, or changing the degree of inclusion for majority pixels MP. By varying one or more parameters that one of ordinary skill would know affect the preferred ambient broadcast, an economical way of effecting explicit indicated user preferences is obtained. Yet another is to change the luminance and chrominance variables directly, such as found in the functional block User Interface & Preferences Memory U2 as discussed above (FIGS. 3 and 12). The names of the explicit indicated user preferences can be chosen by a software designer, and, using the instant teaching, the methods here can be used to alter the dominant color extraction perceptual rules to reflect user preferences.
For example, the Darkness Support and Color Support perceptual rules of FIG. 41 can be altered so that the degree to which one reduces weighting for bright pixels, and/or the degree to which one performs extended dominant color extraction EE8, and/or the degree to which one reduces or increases luminance, is a function of explicit indicated user preferences which the software designer has formulated to achieve a particular visual effect. Similarly, the extent to which one executes extended dominant color extraction generally can be modulated.
Generally, ambient light source 88 can embody various diffuser effects to produce light mixing, as well as translucence or other phenomena, such as by use of lamp structures having a frosted or glazed surface; ribbed glass or plastic; or apertured structures, such as metal structures surrounding an individual light source. To provide interesting effects, any number of known diffusing or scattering materials or phenomena can be used, including those obtained by exploiting scattering from small suspended particles; clouded plastics or resins; preparations using colloids, emulsions, or globules 1-5 μm or less, such as less than 1 μm, including long-life organic mixtures; gels; and sols, the production and fabrication of which is known by those skilled in the art. Scattering phenomena can be engineered to include Rayleigh scattering for visible wavelengths, such as for blue production for blue enhancement of ambient light. The colors produced can be defined regionally, such as an overall bluish tint in certain areas or regional tints, such as a blue light-producing top section (ambient light L1 or L2).
Ambient lamps can also be fitted with a goniophotometric element, such as a cylindrical prism or lens which can be formed within, integral to, or inserted within a lamp structure. This can allow special effects where the character of the light produced changes as a function of the position of the viewer. Other optical shapes and forms can be used, including rectangular, triangular, or irregularly-shaped prisms or shapes, and they can be placed upon or integral to an ambient light unit or units. The result is that rather than yielding an isotropic output, the effect gained can be infinitely varied, e.g., bands of interesting light cast on surrounding walls, objects, and surfaces placed about an ambient light source, making a sort of light show in a darkened room as the scene elements, color, and intensity change on a video display unit. The effect can be a theatrical ambient lighting element which changes light character very sensitively as a function of viewer position - such as viewing bluish sparkles, then red light - when one is getting up from a chair or shifting viewing position when watching a home theatre. The number and type of goniophotometric elements that can be used is nearly unlimited, including pieces of plastic, glass, and the optical effects produced from scoring and mildly destructive fabrication techniques. Ambient lamps can be made to be unique, and even interchangeable, for different theatrical effects. And these effects can be modulatable, such as by changing the amount of light allowed to pass through a goniophotometric element, or by illuminating different portions (e.g., using sublamps or groups of LEDs) of an ambient light unit.
Video signal AVS can of course be a digital datastream and contain synchronization bits and concatenation bits; parity bits; error codes; interleaving; special modulation; burst headers; and desired metadata such as a description of the ambient lighting effect (e.g., "lightning storm"; "sunrise"; etc.), and those skilled in the art will realize that functional steps given here are merely illustrative and do not include, for clarity, conventional steps or data. Using these teachings to allow user preferences to alter general perceptual rules, the User Interface & Preferences Memory as shown in FIGS. 3 and 12 (or any functional equivalent, such as by executing software instructions) can be used to change the ambient lighting system behavior, such as changing the degree of color fidelity to the video content of video display D desired; changing flamboyance, including the extent to which any fluorescent colors or out-of-gamut colors are broadcast into ambient space; or changing how quickly or greatly responsive to changes in video content the ambient light is, such as by exaggerating the luminance or other quality of changes in the preferred ambient broadcast. This can include advanced content analysis which can make subdued tones for movies or content of certain character. Video content containing many dark scenes can influence behavior of the ambient light source 88, causing a dimming of broadcast ambient light, while flamboyant or bright tones can be used for certain other content, such as content with many flesh tones or bright scenes (a sunny beach, a tiger on savannah, etc.). The description is given here to enable those of ordinary skill in the art to practice the invention. Many configurations are possible using the instant teachings, and the configurations and arrangements given here are only illustrative. Not all objectives sought here need be practiced - for example, specific transformations to a second rendered color space can be eliminated from the teachings given here without departing from the invention, particularly if both rendered color spaces RGB and R'G'B' are similar or identical. In practice, the methods taught and claimed might appear as part of a larger system, such as an entertainment center or home theatre center.
It is well known that the functions and calculations illustratively taught here can be functionally reproduced or emulated using software or machine code, and those of ordinary skill in the art will be able to use these teachings regardless of the way that the encoding and decoding taught here is managed. This is particularly true when one considers that it is not strictly necessary to decode video information into frames in order to perform pixel level statistics as given here. Those with ordinary skill in the art will, based on these teachings, be able to modify the apparatus and methods taught and claimed here and thus, for example, re-arrange steps or data structures to suit specific applications, and create systems that may bear little resemblance to those chosen for illustrative purposes here.
The invention as disclosed using the above examples may be practiced using only some of the features mentioned above. Also, nothing as taught and claimed here shall preclude addition of other structures or functional elements.
Obviously, many modifications and variations of the present invention are possible in light of the above teaching. It is therefore to be understood that, within the scope of the appended claims, the invention may be practiced otherwise than as specifically described or suggested here.

Claims

CLAIMS:
1. A method for dominant color extraction from video content encoded in a rendered color space (RGB) to produce, using perceptual rules in accordance with a user preference, a dominant color (DC) for emulation by an ambient light source (88), comprising:
[1] Performing dominant color extraction from pixel chromaticities (Cp) from said video content in said rendered color space to produce a dominant color by extracting any of: [a] a mode of said pixel chromaticities; [b] a median of said pixel chromaticities; [c] a weighted average by chromaticity of said pixel chromaticities; [d] a weighted average of said pixel chromaticities using a pixel weighting function (W) that is a function of any of pixel position (i, j), chromaticity (x, y, R), and luminance (L);
[2] Further deriving at least one of the luminance, the chromaticity, a temporal delivery, and a spatial extraction of said dominant color in accordance with respective perceptual rules to produce a preferred ambient broadcast, and where said respective perceptual rules are varied in character and effect by at least one of a plurality of possible explicit indicated user preferences; and where said respective perceptual rules comprise at least one of:
[I] a luminance perceptual rule (LPR) chosen from any of: [a] a luminance increase; [b] a luminance decrease; [c] a luminance floor; [d] a luminance ceiling; [e] a suppressive luminance threshold; and [f] a luminance transform;
[II] a chromaticity perceptual rule chosen from at least one of: [a] a simple chromaticity transform (SCT); [b] a weighted average using said pixel weighting function (PF8) so further formulated as to exhibit an influence from scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in said video content; [c] an extended dominant color extraction (EE8) using a weighted average where said pixel weighting function is formulated as a function of scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in said video content, with said pixel weighting function further formulated such that weighting is at least reduced for majority pixels (MP); [III] a temporal delivery perceptual rule (TDPR) chosen from at least one of: [a] a decrease in the rate of change in at least one of luminance and chromaticity of said dominant color; [b] an increase in the rate of change in at least one of luminance and chromaticity of said dominant color; [IV] a spatial extraction perceptual rule (SEPR) chosen from at least one of: [a] giving greater weight in said pixel weighting function to scene content containing newly appearing features; [b] giving lesser weight in said pixel weighting function to scene content containing newly appearing features; [c] giving greater weight in said pixel weighting function to scene content from a selected extraction region; and [d] giving lesser weight in said pixel weighting function to scene content from a selected extraction region; and [3] Transforming the luminance and chromaticity of said preferred ambient broadcast from said rendered color space to a second rendered color space (R'G'B') so formed as to allow driving said ambient light source.
2. The method of claim 1, wherein said chromaticity perceptual rule locks out a selected chromaticity in response to an explicit indicated user preference.
3. The method of claim 1, wherein said chromaticity perceptual rule changes the weight given in said pixel weighting function to a selected chromaticity in response to an explicit indicated user preference.
4. The method of claim 1, wherein said chromaticity perceptual rule comprises using said simple chromaticity transform to change the saturation of said chromaticity in response to an explicit indicated user preference.
5. The method of claim 1, wherein said pixel weighting function is so formulated to provide darkness support by: [4] assessing said video content to establish that a scene brightness in said scene content is low; and [5] performing any of: [a] using said pixel weighting function so further formulated to reduce weighting of bright pixels; and [b] broadcasting a dominant color obtained using reduced luminance relative to that which would otherwise be produced; and wherein the extent to which step [5] is executed is changeable in response to an explicit indicated user preference.
6. The method of claim 1, wherein said pixel weighting function is so formulated to provide color support by [6] assessing said video content to establish that a scene brightness in said scene content is high; and [7] performing any of: [a] using said pixel weighting function so further formulated to reduce weighting of bright pixels; and [b] performing claim 1 step [II] [c]; and wherein the extent to which step [7] is executed is changeable in response to an explicit indicated user preference.
7. The method of claim 1, wherein said temporal delivery perceptual rule comprises storing video meta data (VMD) from a video signal (AVS) used to provide, at least in part, said video content.
8. The method of claim 1, wherein said spatial extraction perceptual rule comprises assigning said selected extraction region to be one of a center region (C) and a border region.
9. The method of claim 1, wherein said extended dominant color extraction is repeated individually for different scene features (J8, V111, V999) in said video content, forming a plurality of dominant colors (DC1, DC2, DC3), and: [8] claim 1 step [1] is repeated where each of said plurality of dominant colors is designated as a pixel chromaticity; and wherein the extent to which step [8] is executed is changeable in response to an explicit indicated user preference.
10. The method of claim 1, wherein said method comprises, prior to step [1], quantizing at least some pixel chromaticities (Cp) from said video content in said rendered color space to form a distribution of assigned colors (AC), and during step [1], obtaining at least some of said pixel chromaticities from said distribution of assigned colors.
11. The method of claim 10, wherein said quantizing comprises binning said pixel chromaticities into at least one superpixel (XP).
12. The method of claim 10, wherein at least one of said assigned colors is a regional color vector (V) that is not necessarily in said rendered color space.
13. The method of claim 10, additionally comprising establishing at least one color of interest (COI) in said distribution of assigned colors and extracting pixel chromaticities assigned thereto to derive a true dominant color (TDC) to be designated as said dominant color.
14. The method of claim 1, wherein said explicit indicated user preference is indicated by any of: [1] repeated up and down varying of a value selected by a user-operated control (RC); [2] an extreme value selected by a user-operated control; [3] a high rate of change in a value selected by a user-operated control; [4] light received by a light sensor (LS) in an ambient space; [5] sound received by a sound sensor (SS) in an ambient space; [6] vibration received by a vibration sensor (VS) in an ambient space; [7] a choice made in a graphical user interface (GUI); [8] a choice made on a user-operated control; [9] a sustained actuation call on a user-operated control; [10] repeated actuation calls on a user-operated control; [11] pressure sensing by a pressure sensor (155) inside a user-operated control device; [12] motion sensing by a motion sensor (MS) inside a user-operated control device; and [13] any of meta-data, auxiliary data, or sub-code data associated with an audio-video signal (AVS) associated with said video content.
15. A method for dominant color extraction from video content encoded in a rendered color space (RGB) to produce, using perceptual rules in accordance with a user preference, a dominant color (DC) for emulation by an ambient light source (88), comprising:
[0] Quantizing at least some pixel chromaticities (Cp) from said video content in said rendered color space to form a distribution of assigned colors (AC);
[1] Performing dominant color extraction from said distribution of assigned colors to produce a dominant color by extracting any of: [a] a mode of said distribution of assigned colors; [b] a median of said distribution of assigned colors; [c] a weighted average by chromaticity of said distribution of assigned colors; [d] a weighted average of said distribution of assigned colors using a pixel weighting function (W) that is a function of any of pixel position (i, j), chromaticity (x, y, R), and luminance (L); [2] Further deriving at least one of the luminance, the chromaticity, the temporal delivery, and the spatial extraction of said dominant color in accordance with respective perceptual rules to produce a preferred ambient broadcast, and where said respective perceptual rules are varied in character and effect by at least one of a plurality of possible explicit indicated user preferences; and where said respective perceptual rules comprise at least one of: [I] a luminance perceptual rule (LPR) chosen from any of: [a] a luminance increase; [b] a luminance decrease; [c] a luminance floor; and [d] a luminance ceiling;
[II] a chromaticity perceptual rule chosen from at least one of: [a] a simple chromaticity transform (SCT); [b] a weighted average using said pixel weighting function (PF8) so further formulated as to exhibit an influence from scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in said video content; [c] an extended dominant color extraction (EE8) using a weighted average where said pixel weighting function is formulated as a function of scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in said video content, with said pixel weighting function further formulated such that weighting is at least reduced for majority pixels (MP);
[III] a temporal delivery perceptual rule (TDPR) chosen from at least one of: [a] a decrease in the rate of change in at least one of luminance and chromaticity of said dominant color; [b] an increase in the rate of change in at least one of luminance and chromaticity of said dominant color;
[IV] a spatial extraction perceptual rule (SEPR) chosen from at least one of: [a] giving greater weight in said pixel weighting function to scene content containing newly appearing features; [b] giving lesser weight in said pixel weighting function to scene content containing newly appearing features; [c] giving greater weight in said pixel weighting function to scene content from a selected extraction region; and [d] giving lesser weight in said pixel weighting function to scene content from a selected extraction region;
[3] Transforming the luminance and chromaticity of said preferred ambient broadcast from said rendered color space to a second rendered color space (R'G'B') so formed as to allow driving said ambient light source.
16. The method of claim 15, wherein said chromaticity perceptual rule changes the weight given in said pixel weighting function to a selected chromaticity in response to an explicit indicated user preference.
17. The method of claim 15, wherein said pixel weighting function is so formulated to provide darkness support by: [4] assessing said video content to establish that a scene brightness in said scene content is low; and [5] performing any of: [a] using said pixel weighting function so further formulated to reduce weighting of bright pixels; and [b] broadcasting a dominant color obtained using reduced luminance relative to that which would otherwise be produced; and wherein the extent to which step [5] is executed is changeable in response to an explicit indicated user preference.
18. The method of claim 15, wherein said pixel weighting function is so formulated to provide color support by [6] assessing said video content to establish that a scene brightness in said scene content is high; and [7] performing any of: [a] using said pixel weighting function so further formulated to reduce weighting of bright pixels; and [b] performing claim 1 step [II] [c]; and wherein the extent to which step [7] is executed is changeable in response to an explicit indicated user preference.
19. The method of claim 15, wherein said extended dominant color extraction is repeated individually for different scene features (J8, V111, V999) in said video content, forming a plurality of dominant colors (DC1, DC2, DC3), and: [8] claim 1 step [1] is repeated where each of said plurality of dominant colors is designated as a pixel chromaticity; and wherein the extent to which step [8] is executed is changeable in response to an explicit indicated user preference.
20. A method for dominant color extraction from video content encoded in a rendered color space (RGB) to produce, using perceptual rules in accordance with a user preference, a dominant color (DC) for emulation by an ambient light source (88), comprising: [0] Quantizing at least some pixel chromaticities (Cp) from said video content in said rendered color space to form a distribution of assigned colors (AC);
[1] Performing dominant color extraction from said distribution of assigned colors to produce a dominant color by extracting any of: [a] a mode of said distribution of assigned colors; [b] a median of said distribution of assigned colors; [c] a weighted average by chromaticity of said distribution of assigned colors; [d] a weighted average of said distribution of assigned colors using a pixel weighting function (W) that is a function of any of pixel position (i, j), chromaticity (x, y, R), and luminance (L); [2] Further deriving at least one of the luminance, the chromaticity, the temporal delivery, and the spatial extraction of said dominant color in accordance with respective perceptual rules to produce a preferred ambient broadcast, and where said respective perceptual rules are varied in character and effect by at least one of a plurality of possible explicit indicated user preferences; and where said respective perceptual rules comprise at least one of:
[I] a luminance perceptual rule (LPR) chosen from any of: [a] a luminance increase; [b] a luminance decrease; [c] a luminance floor; and [d] a luminance ceiling; [II] a chromaticity perceptual rule chosen from at least one of: [a] a simple chromaticity transform (SCT); [b] a weighted average using said pixel weighting function (PF8) so further formulated as to exhibit an influence from scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in said video content; [c] an extended dominant color extraction (EE8) using a weighted average where said pixel weighting function is formulated as a function of scene content that is obtained by assessing any of chromaticity and luminance for a plurality of pixels in said video content, with said pixel weighting function further formulated such that weighting is at least reduced for majority pixels (MP); [III] a temporal delivery perceptual rule (TDPR) chosen from at least one of: [a] a decrease in the rate of change in at least one of luminance and chromaticity of said dominant color; [b] an increase in the rate of change in at least one of luminance and chromaticity of said dominant color;
[IV] a spatial extraction perceptual rule (SEPR) chosen from at least one of: [a] giving greater weight in said pixel weighting function to scene content containing newly appearing features; [b] giving lesser weight in said pixel weighting function to scene content containing newly appearing features; [c] giving greater weight in said pixel weighting function to scene content from a selected extraction region; and [d] giving lesser weight in said pixel weighting function to scene content from a selected extraction region; [3a] Transforming said dominant color from said rendered color space to an unrendered color space (XYZ);
[3b] Transforming said dominant color from said unrendered color space to said second rendered color space, assisted by
[3c] matrix transformations of primaries (RGB, R'G'B') of said rendered color space and second rendered color space to said unrendered color space using first and second tristimulus primary matrices (M1, M2); and deriving a transformation of said color information into said second rendered color space (R'G'B') by matrix multiplication of said primaries of said rendered color space, said first tristimulus matrix, and the inverse of said second tristimulus matrix (M2)⁻¹.
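For readability only, the composition of the two tristimulus transformations recited in steps [3a] through [3c] can be written compactly; this restates the claim language and adds nothing to it:

$$
\begin{pmatrix} R' \\ G' \\ B' \end{pmatrix}
= M_2^{-1}\, M_1 \begin{pmatrix} R \\ G \\ B \end{pmatrix},
$$

where M1 takes the rendered primaries RGB to the unrendered color space XYZ, and the inverse matrix M2⁻¹ takes XYZ out to the second rendered color space R'G'B'.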
EP05756729A 2004-06-30 2005-06-28 Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences Withdrawn EP1763974A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US58419804P 2004-06-30 2004-06-30
US68501605P 2005-05-26 2005-05-26
PCT/IB2005/052152 WO2006003624A1 (en) 2004-06-30 2005-06-28 Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences

Publications (1)

Publication Number Publication Date
EP1763974A1 true EP1763974A1 (en) 2007-03-21

Family

ID=34971979

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05756729A Withdrawn EP1763974A1 (en) 2004-06-30 2005-06-28 Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences

Country Status (3)

Country Link
EP (1) EP1763974A1 (en)
JP (1) JP2008505384A (en)
WO (1) WO2006003624A1 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI280054B (en) * 2005-08-08 2007-04-21 Compal Electronics Inc Method for simulating the scenes of the image signals
BRPI0708430A2 (en) * 2006-03-01 2011-05-31 Koninkl Philips Electronics Nv method and device for controlling an ambient lighting element
JP2009531825A (en) * 2006-03-31 2009-09-03 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Combined ambient lighting control based on video and audio
BRPI0712894A2 (en) * 2006-06-13 2012-10-09 Koninkl Philips Electronics Nv fingerprint for a video stream, operable device for generating a fingerprint, methods for generating a fingerprint and for synchronizing a secondary media with a video stream on a device, software, data structure for an ambilight script, use of a fingerprint of a video stream, signal to communicate the identity of a video stream
ATE473618T1 (en) 2006-10-05 2010-07-15 Koninkl Philips Electronics Nv COLOR TRANSITION PROCESS FOR AN ENVIRONMENTAL OR GENERAL LIGHTING SYSTEM
CN101554059B (en) * 2006-12-07 2011-06-08 皇家飞利浦电子股份有限公司 An ambient lighting system for a display device
RU2468401C2 (en) * 2006-12-08 2012-11-27 Конинклейке Филипс Электроникс Н.В. Ambient illumination
CN101569207A (en) * 2006-12-21 2009-10-28 皇家飞利浦电子股份有限公司 Ergonomic lighting system
CN101606438A (en) * 2007-02-13 2009-12-16 Nxp股份有限公司 Vision display system and be used for the method for display video signal
WO2008146235A2 (en) * 2007-05-29 2008-12-04 Koninklijke Philips Electronics N.V. An ambience lighting system for a display device and a method of operating the ambience lighting system
TW200925491A (en) 2007-11-06 2009-06-16 Koninkl Philips Electronics Nv Light control system and method for automatically rendering a lighting atmosphere
KR20110022658A (en) * 2008-06-04 2011-03-07 코닌클리케 필립스 일렉트로닉스 엔.브이. Ambient illumination system, display device and method of generating an illumination variation and method of providing a data service
JP5323413B2 (en) * 2008-07-25 2013-10-23 シャープ株式会社 Additional data generation system
EP2514274A1 (en) * 2009-12-15 2012-10-24 TP Vision Holding B.V. Dynamic ambience lighting system
CN102770905B (en) 2010-02-22 2015-05-20 杜比实验室特许公司 System and method for adjusting display based on detected environment
HUP1000183D0 (en) * 2010-04-07 2010-06-28 Naturen Kft Controlling multicolor lighting based on image colors
DE102011111054A1 (en) 2011-08-24 2013-02-28 Deutsche Telekom Ag Method for controlling an optical output device
RU2488233C1 (en) * 2011-12-28 2013-07-20 Федеральное государственное бюджетное образовательное учреждение высшего профессионального образования Марийский государственный технический университет Method of generating external backlighting signal when viewing electronic image
FR2991545A1 (en) * 2012-06-05 2013-12-06 Fivefive Lighting device for computer peripherals, has digital terminal connected in wireless manner to Internet, where digital terminal includes memory, and luminaire and terminal are able to exchange information in partially wireless manner
WO2014083472A1 (en) 2012-11-27 2014-06-05 Koninklijke Philips N.V. Use of ambience light for removing black bars next to video content displayed on a screen
JP6334552B2 (en) 2012-11-27 2018-05-30 フィリップス ライティング ホールディング ビー ヴィ A method for generating ambient lighting effects based on data derived from stage performance
RU2015125551A (en) 2012-11-27 2017-01-11 Конинклейке Филипс Н.В. USE OF THE ENVIRONMENT FOR PROTECTION AGAINST COPYING VIDEO CONTENT DISPLAYED ON THE SCREEN
US9645395B2 (en) 2013-03-15 2017-05-09 Mark Bolas Dynamic field of view throttling as a means of improving user experience in head mounted virtual environments
US9293079B2 (en) 2013-03-15 2016-03-22 University Of Southern California Control of ambient and stray lighting in a head mounted display
US9628783B2 (en) 2013-03-15 2017-04-18 University Of Southern California Method for interacting with virtual environment using stereoscope attached to computing device and modifying view of virtual environment based on user input in order to be displayed on portion of display
US9581993B2 (en) 2014-02-11 2017-02-28 Honeywell International Inc. Ambient display for industrial operator consoles
CN106406504B (en) * 2015-07-27 2019-05-07 常州市武进区半导体照明应用技术研究院 The atmosphere rendering system and method for human-computer interaction interface
CN105611694B (en) * 2015-12-30 2018-10-16 北京经纬恒润科技有限公司 A kind of color configuration method and apparatus of atmosphere lamp
US10609794B2 (en) 2016-03-22 2020-03-31 Signify Holding B.V. Enriching audio with lighting
GB2557884A (en) * 2016-06-24 2018-07-04 Sony Interactive Entertainment Inc Device control apparatus and method
JP7080400B2 (en) * 2018-11-01 2022-06-03 シグニファイ ホールディング ビー ヴィ Choosing a method for extracting colors for light effects from video content
JP7080399B2 (en) * 2018-11-01 2022-06-03 シグニファイ ホールディング ビー ヴィ Determining light effects based on video and audio information depending on video and audio weights
CN112415922B (en) * 2020-10-21 2023-05-12 深圳供电局有限公司 Substation monitoring system based on streaming media
CN113542869A (en) * 2021-06-24 2021-10-22 北京小米移动软件有限公司 Display control method, display control device, and storage medium
CN117812789A (en) * 2023-12-29 2024-04-02 广州视声智能科技有限公司 Indoor illumination control system, method and control device based on Internet of things

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63164582A (en) * 1986-12-25 1988-07-07 Nec Corp Dimmer
JPH02253503A (en) * 1989-03-28 1990-10-12 Matsushita Electric Works Ltd Image staging lighting device
JPH06267664A (en) * 1993-03-10 1994-09-22 Toshiba Lighting & Technol Corp Lighting system for television set
JP4652691B2 (en) * 2002-02-06 2011-03-16 フィリップス ソリッド−ステート ライティング ソリューションズ インコーポレイテッド Method and apparatus for controlled light emission
GB0211898D0 (en) * 2002-05-23 2002-07-03 Koninkl Philips Electronics Nv Controlling ambient light
EP1522187B1 (en) * 2002-07-04 2010-03-31 Koninklijke Philips Electronics N.V. Method of and system for controlling an ambient light and lighting unit

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006003624A1 *

Also Published As

Publication number Publication date
WO2006003624A1 (en) 2006-01-12
JP2008505384A (en) 2008-02-21

Similar Documents

Publication Publication Date Title
US7894000B2 (en) Dominant color extraction using perceptual rules to produce ambient light derived from video content
US8063992B2 (en) Dominant color extraction for ambient light derived from video content mapped through unrendered color space
EP1763974A1 (en) Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences
US7932953B2 (en) Ambient light derived from video content by mapping transformations through unrendered color space
US7859595B2 (en) Flicker-free adaptive thresholding for ambient light derived from video content mapped through unrendered color space
US20070091111A1 (en) Ambient light derived by subsampling video content and mapped through unrendered color space
EP1704729B1 (en) Ambient light script command encoding
WO2007026283A2 (en) Ambient lighting derived from video content using adaptive extraction regions
Laine et al. Illumination-adaptive control of color appearance: a multimedia home platform application

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070130

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20070521