WO2021242325A1 - Interactive remote audience projection system - Google Patents

Interactive remote audience projection system Download PDF

Info

Publication number
WO2021242325A1
Authority
WO
WIPO (PCT)
Prior art keywords
audience
remote
reaction
event
site
Prior art date
Application number
PCT/US2020/070074
Other languages
French (fr)
Inventor
Tae Hong PARK
Original Assignee
Sei Consult Llc
STAACK, Christian
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sei Consult Llc, STAACK, Christian filed Critical Sei Consult Llc
Priority to PCT/US2020/070074 priority Critical patent/WO2021242325A1/en
Publication of WO2021242325A1 publication Critical patent/WO2021242325A1/en

Classifications

    • EFIXED CONSTRUCTIONS
    • E04BUILDING
    • E04HBUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H3/00Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons
    • E04H3/10Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons for meetings, entertainments, or sports
    • E04H3/14Gymnasiums; Other sporting buildings
    • EFIXED CONSTRUCTIONS
    • E04BUILDING
    • E04HBUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H3/00Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons
    • E04H3/10Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons for meetings, entertainments, or sports
    • E04H3/22Theatres; Concert halls; Studios for broadcasting, cinematography, television or similar purposes

Definitions

  • FIG. 18 illustrates the remote audience reaction transducer system configured to a seat.
  • FIG. 19 illustrates the pole type remote audience reaction transducer system.
  • FIG. 20 illustrates the pole type remote audience reaction transducer system configured to seats.
  • FIG. 21 illustrates the remote audience reaction transducer system with transducer stabilizer and offline/online switch on foldable seats, shown in side and front views.
  • FIGS. 22-25 illustrate the retractable and expandable remote audience reaction transducer system.
  • FIG. 26 illustrates the remote audience reaction transducer system set up in a banner configuration.
  • FIG. 27 illustrates remote audience reaction transducer system installed on a window.
  • FIG. 28 illustrates the music performance capture system.

DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 6 illustrates a summary of the four main modules operating in concert: (1) remote site module: audience reaction analysis and synthesis; (2) cloud module: spatialization, data stream processing, and management; (3) event site module: spatio-temporal multi-medium remote audience reaction projection; and (4) broadcasting infrastructure that broadcasts media content across a diversity of web-based and cable-based network receivers such as televisions, computers, or other devices.
  • the remote site module system in part operates as an edge compute station to analyze remote audience reactions and consists of at least two sub-modules: (1) a multi-medium analysis module and (2) a multi-medium synthesis and modulation module rendering low-latency audience reaction data streams.
  • the analysis module analyzes remote site environmental signals including voice, clapping, jumping, hand-waving, perspiration, and other common types of audience reactions.
  • the target medium is synthesized via at least one of the following units: (a) obfuscation unit, (b) sentiment analysis unit, (c) automatic medium classification unit, and (d) signal processing unit. These units are then used alone or in combination as prescribed through system settings for specific medium types including, but not limited to, sound, gesture, movement, temperature, humidity, and sentiment as shown in FIG. 7.
  • vocalizations are subject to sound obfuscation or "sound blurring" whereby words and sentences are obfuscated and made unintelligible while simultaneously preserving the sonic and temporal characteristics of the vocalization streams.
  • This example preserves the flow of audience sound feedback while masking vocalization meaning. It renders a continuous encoded audio stream that is sonically reflective of the vocalization but removes word articulation structures, addressing inadvertent transmission of "problematic" words - e.g. obscenities or private and sensitive information - via spectro-temporal signal processing and modulation as shown in FIG. 11.
  • the obfuscation ratio is optionally controlled, allowing anything from 100% obfuscation, where the medium is maximally obfuscated, to 0%, where no obfuscation is applied.
  • the input voice is analyzed for its closest timbral match from a large indexed set of voice sound building block templates - indexed, for example, from 0 to 9999.
  • the remote site module does not transmit any audio data per se but rather transmits the index (between 0 and 9999, for example) of the closest corresponding sound to the cloud, or directly to the event site module, where the voice sound is reconstructed with very low latency, since only index numbers representing the audio, rather than raw audio signals, are transmitted.
  • the voice is then reconstructed from the index value at the event site. This method preserves the sound characteristics while exploiting the masking effect that renders the resulting audio unintelligible and non-transcribable.
  • Each building block template index outlined above refers to a particular sound building block template; templates can be combined, sequenced, and juxtaposed to resynthesize a close approximation of the original sound, enabling extremely low-bandwidth data transmission, as sketched below.
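A minimal sketch of this index-based encoding, assuming a hypothetical pre-shared template set and a simple spectral-envelope distance; the template count, frame size, and feature choice are illustrative only and are not specified in the disclosure:

```python
import numpy as np

# Hypothetical pre-shared template set: 10,000 short sound "building blocks",
# e.g. 20 ms frames at 16 kHz (320 samples each), indexed 0-9999.
rng = np.random.default_rng(0)
TEMPLATES = rng.standard_normal((10_000, 320)).astype(np.float32)

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Coarse spectral envelope used for timbral matching."""
    mag = np.abs(np.fft.rfft(frame))
    return mag / (np.linalg.norm(mag) + 1e-9)   # scale-invariant

TEMPLATE_FEATS = np.stack([frame_features(t) for t in TEMPLATES])

def encode(frame: np.ndarray) -> int:
    """Return the index of the closest-sounding template (all that is transmitted)."""
    f = frame_features(frame)
    return int(np.argmin(np.linalg.norm(TEMPLATE_FEATS - f, axis=1)))

def decode(index: int) -> np.ndarray:
    """Reconstruct the frame at the event site by template lookup."""
    return TEMPLATES[index]

frame = rng.standard_normal(320).astype(np.float32)
idx = encode(frame)      # e.g. 4711 - sent to the cloud or event site module
approx = decode(idx)     # resynthesized: timbrally similar but unintelligible
```

With 20 ms frames, one 2-byte index per frame amounts to roughly 100 bytes per second in place of 32 kB/s of raw 16-bit audio, which is the kind of bandwidth reduction this paragraph describes.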
  • voice medium is subject to the voice classification unit where sensitive words - obscenities, for example - are automatically detected and replaced by obfuscated versions using either spectro-temporal signal processing and modulation or building block template reconstruction.
  • the voice medium is subject to a voice classification unit, where the system analyzes vocalizations and only transmits data streams that are non-textual vocal expressions such as ooh, ah, and ugh.
  • the system analyzes and recognizes non-vocal sounds such as clapping, stomping, table tapping, drumming sounds, horn sounds, and the like that are audience reactionary responses commonly heard during sports events.
  • the remote audience sound environment is subject to automatic voice classification where the remote site module only reacts to a specific audience member, namely a member with a ticket and assigned seat of a specific event.
  • remote audience medium gesture and motion is obfuscated and processed using image distortion techniques as shown in FIG. 12.
  • the methods outlined above for sound are rendered with movement, where the building block templates represent physical gestures and movement.
  • sensors such as cameras, motion detection sensors, or game controllers commonly found in homes can capture movement and gesture.
  • one or a plurality of remote site audience medium data streams - for example, audience sound reactions - are first processed at the remote site module and then further processed at the cloud module to render at least one reaction data stream that audio-visually fills the event site through the audience reaction projection system, utilizing custom and commonly available event site infrastructures including, but not limited to, sound reinforcement systems, audio channel/loudspeaker configurations, and seats.
  • the cloud module system generates audio data streams that are sono-spatially placed within two or a plurality of audio channels that in turn are projected via the event site module.
  • the sono-spatial processing is conducted with metadata specific to each event site’s specifications in order to spatially place audience sound according to their ticketed seating assignments or other spatial placement configuration that is predetermined or dynamically adjusted during the event.
  • the encoded audio stream rendered at the remote site module is transmitted to the cloud for dynamic and multichannel processing considering (1) one or a plurality of remote site audio encoded streams and (2) specifications of the event space as a function of at least seating arrangement, channel and loudspeaker configuration, and event space dimensions as shown in FIG. 8.
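As an illustration of the sono-spatial placement described in the preceding items, the sketch below maps a ticketed seat position to per-channel gains with a simple inverse-distance law; the loudspeaker layout, coordinate system, and gain law are assumptions, not details given in the disclosure:

```python
import numpy as np

# Hypothetical event site metadata: loudspeaker positions in venue
# coordinates (metres), normally loaded from the site's specification.
SPEAKERS = np.array([[0, 0], [60, 0], [60, 40], [0, 40]], dtype=float)

def seat_gains(seat_xy, rolloff: float = 1.0) -> np.ndarray:
    """Distance-based panning: channels near the assigned seat get more energy."""
    d = np.linalg.norm(SPEAKERS - np.asarray(seat_xy, float), axis=1)
    g = 1.0 / (d + 1.0) ** rolloff       # simple inverse-distance law
    return g / np.linalg.norm(g)         # constant-power normalization

def spatialize(mono_stream: np.ndarray, seat_xy) -> np.ndarray:
    """Render one remote member's mono reaction stream into N speaker channels."""
    return np.outer(seat_gains(seat_xy), mono_stream)

# Example: a reaction stream assigned to a seat near one corner is
# projected mostly from the two closest loudspeakers.
stream = np.sin(2 * np.pi * 440 * np.arange(1600) / 16000)
channels = spatialize(stream, seat_xy=(5.0, 35.0))   # shape (4, 1600)
```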
  • the cloud module is distributed between at least two cloud instances where remote audience data is divided and managed between at least two cloud instances. This mitigates data interruptions between cloud and event sites.
  • audience reaction data streams are received by the event site module and are projected to the event site venue via the audience reaction projection system consisting of (1) audience reaction sound projection and (2) audience reaction visualization projection modules where sonic, visual, and kinetic projection or a combination thereof is enabled as shown in FIG. 9.
  • the remote audience reaction data streams received by the event site module are processed and filtered and distributed via the remote audience reaction projection system to (1) a specific seat corresponding to the remote site audience ticket assignment, (2) groups of seats corresponding to seat configuration and seat assignment with associated medium channels, or (3) other event site fixtures such as hand rails and light posts.
  • the audience reaction data stream is a sound medium and is projected via the audience reaction sound projection system.
  • the remote audience sound reactions received and controlled by the event site module are projected through the event site sound reinforcement system reflecting in part or fully the desired virtual sono-spatial location at the event site.
  • the event site module further employs audio processing to augment spatial characteristics of the outputted audio signal to the event site where, for example, the multichannel audio rendered at the cloud module is projected at the event site according to seating associations of the remote audience member via the event site sound projection system.
  • the remote reaction data stream received at the event site module is of multi-medium type, including at least one non-sound medium such as gesture, movement, temperature, humidity, electrodermal measurements, or sentiment.
  • the remote audience non-sound reactions are projected and mapped via the event site's remote audience reaction sound and visual projection systems, which are combined and configured with respect to ticket and corresponding event site seating assignments.
  • the audience reaction transducer system enables audience reaction visual projection implementation driven by either sound or other mediums such as movement, gesture, temperature, humidity, or sentiment.
  • the audience reaction transducer system comprises single or multiple arrangements of transducer electrical coils configured on surfaces including fabrics and textile materials, enabling the transduction of audience reaction data from electrical energy to kinetic energy resulting in visual and physical changes (FIG. 15).
  • the audience reaction transducer system is excited by remote audience data streams of non-sound type, providing visual remote audience feedback to the event site.
  • the audience reaction transducer system described above is associated with seating assignments wherein audience data streams are projected to one or a group of associated seats in the event space (FIG. 16).
  • the remote audience reaction data stream is filtered and amplified to maximize transduction of electrical energy to kinetic energy and shape deformation via the audience reaction transducer system, where, for example, a filter such as a low-pass filter is configured to maximally induce material deformation correlating to the audience reaction energy, as sketched below.
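A minimal sketch of this conditioning step, assuming a second-order Butterworth low-pass over the rectified reaction stream; the disclosure names a low-pass filter but fixes no order, cutoff, or gain, so those values are illustrative:

```python
import numpy as np
from scipy.signal import butter, lfilter

def condition_drive_signal(reaction: np.ndarray, fs: float = 1000.0,
                           cutoff_hz: float = 15.0, gain: float = 8.0) -> np.ndarray:
    """Low-pass and amplify a reaction data stream so the coil-driven material
    deforms with the slow envelope of crowd energy rather than audio detail."""
    b, a = butter(2, cutoff_hz / (fs / 2))      # 2nd-order Butterworth low-pass
    envelope = lfilter(b, a, np.abs(reaction))  # rectify, then smooth
    drive = gain * envelope
    return np.clip(drive, 0.0, 1.0)             # keep within the amplifier's range
```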
  • the material is, for example, a fabric treated with color variations, fabric texture variants, and photoluminescent components and shapes that dynamically and visually reflect varied audience reactions captured from one or more remote sites.
  • This embodiment models visual dynamicity and diversity of the audiences from a visual perspective as commonly experienced on television or live media event programming situations and is driven by the audience reaction transducer system.
  • the audience reaction transducer system is treated with thermochromic ink that changes color in accordance with remote audience environmental medium changes, in turn changing the color of the seat, seating area, or other elements of the event site.
  • the audience reaction transducer system, treated with color-changing features, is programmable according to group or individual color requests, which in part can be used to form team colors or to project messages to event viewers, and is driven by the gesture transducer system.
  • the audience reaction transducer system includes a securing mechanism with a fastener or hook system to attach to seats, with an added stabilizing connector to secure the transducer against unwanted displacement (to withstand gusts of wind, for example) while enabling maximal flexibility for changing shape and vibration, as shown in FIG. 18.
  • a pole is configured on a seat or a plurality of seats or placed elsewhere at the event site and is wound in part, or fully, with electrical coil for transduction as shown in FIG. 19 and FIG. 20.
  • the audience reaction transducer system is an attachment to folding seats, wherein the audience reaction transducer system is attached at the bottom surface part of a seat and secured at the upper side of the seat as well as the lower side of the seat, where for the bottom side, a stabilizer (2103) is included to (1) keep the transducer system from dropping when the seat is unfolded (2101) and (2) keep the transducer system in place, when folded and in its upright position (2102), while allowing flexibility and margins for material to change shape when driven by audience reaction data streams as shown in FIG. 21.
  • in another aspect of the preceding embodiment, the system is configured to turn off when the seat is used by a physical audience member and the switch is not pressed (2104), and to turn on (2101) when the seat is not in use and is in its upright position with the switch pressed, as shown in 2106.
  • a retractable and expandable system controls the height of the audience reaction transducer system to maximize visibility when fully extended as shown in FIG. 22.
  • the audience reaction transducer system (1) is attached to the bottom side of a folding seat in its retracted state (2301); (2) is extended (2303) when the seat is in its upright (folded) position; and (3) is further secured at the bottom of the lower side of the seat with a stabilizer (2304) such as, but not limited to, a line, rope, or elastic band, as shown in FIG. 23, with 2305 illustrating a front view of the system when extended.
  • in another aspect of the preceding system, the extendable (2401)/retractable (2402) system is secured to the back of a seat or multiple seats rather than under a seat, with an optional turn (2405) on (2403)/off (2404) switching system as shown in FIG. 24, where turning the transducer towards or away from the stage area automatically turns the system on or off.
  • the extendable/retractable system is wound partially, or fully, with electrical coil for transduction, where objects such as balloons, cardboard cutouts, lighting, and mannequins can be attached, driven, and put in motion via the remote audience data streams, transducing electrical energy to kinetic energy as shown in FIG. 25.
  • the audience reaction transducer system is attached to at least two extendable/retractable units - for example, on two adjacent seats or on seats with at least one seat in between - resulting in a "banner" configuration that spans across the seats between the outer seats, as shown in FIG. 26.
  • seat surfaces including foldable seat surfaces are configured to act as soundboards to transduce cumulative remote site sound characteristics with seat level spatial sound projection accuracy.
  • the audience reaction transducer system is reusable, detachable, and re- attachable to other objects such as windows at the event site, for example, that are treated with paint and other materials that respond to remote audience reaction data streams as shown in FIG. 27.
  • each seat is configured with the audience reaction transducer system or alternatively with another sensor to detect solid objects such as a baseball landing on a seat after a homerun is struck, whereby, for example, the seat associated with the remote audience member is mailed the baseball as a souvenir.
  • the system determines ball-to-seat assignment by selecting the seat that last triggers object detection, as sketched below. For example, a ball bouncing across and triggering seats 4, 10, and 12, where one of seats 4, 10, or 12 and its associated remote client will be the recipient of the ball or associated credits, as chosen by the event organizers.
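A sketch of the "last trigger wins" selection, with a hypothetical ContactEvent record standing in for the seat-sensor messages (the actual message format is not specified in the disclosure):

```python
from dataclasses import dataclass

@dataclass
class ContactEvent:
    seat_id: int        # seat whose transducer/contact sensor fired
    timestamp: float    # event-site clock, seconds

def souvenir_seat(events: list[ContactEvent]) -> int:
    """Pick the seat that last registered object contact ('last trigger wins')."""
    return max(events, key=lambda e: e.timestamp).seat_id

# A ball bouncing across seats 4, 10, and 12: seat 12 triggered last,
# so its ticketed remote member receives the ball or the credits.
bounce = [ContactEvent(4, 10.01), ContactEvent(10, 10.35), ContactEvent(12, 10.62)]
assert souvenir_seat(bounce) == 12
```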
  • the event site module consists of an automatic audience reaction synthesizer system that is automatically or manually enabled during temporary network blackout windows, for example.
  • the dynamic audience synthesizer is triggered in accordance with highlight events such as real-time score changes, penalties, and real-time event commentaries, as well as non-highlight events such as transitions between points and breaks between play.
  • the audience sound that is projected is dynamically reconstructed using prior audience reaction medium types including but not limited to sound, movement, gesture, and sentiment data types from the event in question or other similar events stored in the cloud and shared with event site modules.
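A minimal sketch of this blackout behavior, assuming a pre-synthesized ambiance loop modeled from prior reactions; the frame size, fade amount, and dropout signaling (a missing frame arriving as None) are illustrative assumptions:

```python
import numpy as np

FRAME = 320                       # 20 ms at 16 kHz (assumed frame size)
# Stand-in for a synthesized crowd-ambiance loop modeled from prior reactions.
ambiance_bank = np.random.default_rng(1).standard_normal(16000) * 0.05

def next_frame(network_frame, t: int, fade: float = 0.25) -> np.ndarray:
    """Return the frame to project: live data when present, otherwise a slice
    of the synthesized ambiance loop (FIG. 14, frames 1401-1403)."""
    start = (t * FRAME) % (len(ambiance_bank) - FRAME)
    synth = ambiance_bank[start:start + FRAME]
    if network_frame is None:                    # dropout detected
        return synth                             # cover the gap seamlessly
    # Keep a low ambiance bed under live sound so transitions stay smooth.
    return (1 - fade) * network_frame + fade * synth
```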
  • audience reaction data triggers are automatically and dynamically generated through an event analysis system trained through machine learning on past and present audience audio, video, and medium ground-truth datasets that reflect historical audience reactions as a function of sound and visuals from performers, other audience members, and the event space itself.
  • the audience reaction data is dynamically generated through an event analysis system trained on professional events such as major league sports events, concerts, or other high-end professional events, then scaled and projected to events of different types and sizes, including but not limited to small-scale events such as school sports events, concerts, and theater performances.
  • the audience reactions are calibrated prior to event start or during the event where, in the case of pre-event calibration, "fans" supporting team A, then "fans" supporting team B, and so on, are asked to express their support at their remote locations.
  • the system captures the audience reaction, analyzes and synthesizes it, and automatically generates various typical audience reactions driven by events at the performance including, but not limited to, score changes, penalties, breaks, and highlight events that are analyzed automatically in real-time.
  • the synthesized data at the event site is balanced with streamed remote audience reaction data streams and broadcast to viewers and audience live or at a later time.
  • This aspect will mitigate network and broadcasting delays dynamically in a full-duplex data transmission loop that pertains to live events such as football games.
  • the music performed by the musician is instantaneously shared with remote sites, where only musical note information including but not limited to note number, velocity, duration, and instrument type is transmitted to, and received by, remote site modules.
  • the musical metadata is used at the remote site to resynthesize the music via a sound synthesizer, minimizing latency, synchronicity, and audience reaction timing issues and facilitating remote audience collaborative engagement - including chanting and singing - in real time, so that it can be projected back to the event site.
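A sketch of the note-metadata path under stated assumptions: a JSON encoding of the note fields named above (the actual wire format is not specified) and a one-oscillator synthesizer standing in for the remote-site sound synthesizer:

```python
import json
import numpy as np

def encode_note(note: int, velocity: int, duration: float, instrument: str) -> bytes:
    """Only high-level musical metadata leaves the event site - a few dozen
    bytes per note instead of an audio stream."""
    return json.dumps({"n": note, "v": velocity, "d": duration, "i": instrument}).encode()

def resynthesize(msg: bytes, fs: int = 16000) -> np.ndarray:
    """Minimal remote-site synthesizer: one sine tone per note event."""
    e = json.loads(msg)
    f = 440.0 * 2 ** ((e["n"] - 69) / 12)   # MIDI-style note number -> Hz
    t = np.arange(int(e["d"] * fs)) / fs
    env = np.exp(-3.0 * t)                  # simple decay envelope
    return (e["v"] / 127.0) * env * np.sin(2 * np.pi * f * t)

organ_hit = encode_note(note=72, velocity=100, duration=0.5, instrument="organ")
audio = resynthesize(organ_hit)   # played locally, in sync with the event
```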

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure describes an interactive remote audience projection system. The system enables projection of remote audience reactions into event sites, whereby, for example, a football stadium absent of audience members, but fully present with players, is transformed sonically, visually, and kinetically into a vibrant interactive audience-player stage.

Description

TITLE: INTERACTIVE REMOTE AUDIENCE PROJECTION SYSTEM
BACKGROUND
[0001] Audience participation in private and public event spaces - including sports events, recitals, socio-political events, conferences, concerts, religious gatherings, and social gatherings - comprises the building blocks of social interaction, community building, cultural evolution, education, and entertainment, as shown in the example sports event in FIG. 1. In circumstances where participation is difficult - e.g. geographical inaccessibility, limited seating, ticket costs, government restrictions - a system that enables the creation and production of a dynamic remote audience participation event environment has benefits for both audiences and event performers. With such "cyber-physical" technological infrastructures constantly in development, the realization of interactive remote-audience/onsite-performer interaction systems has been shown to be beneficial in the aforementioned situations. In particular, during global pandemics where social distancing measures can limit physical gatherings, the invention described here can offer much needed interactive remote audience participation, augmenting event experiences for viewers and performers alike.
SUMMARY OF THE INVENTION
[0002] An example embodiment of the current invention reduces fundamental gaps that exist between physical and remote audience participation in events such as sports events. It reduces these gaps for both audiences and performers alike - the audience participating and reacting remotely and performers performing and reacting at the event site, where bi-directional dynamic reactions by both parties elevate event-experiencing. The current invention enables projection of remote audience reactions into event sites, whereby, for example, a normally robustly attended stadium (FIG. 1) absent of audience members but fully present with performers (FIG. 2) is transformed into a vibrant interactive audience-player stage as shown in FIG. 3. These remotely located audience reactions - independent of time zones and geographical locations (FIG. 4) - are projected sonically, visually, and kinetically into the stadium, thereby further closing the gap between physical and virtual audience-performer interaction and dynamics.
[0003] One embodiment of the invention realizes remote audience reaction projection at the event site, coordinated and transceived between at least one remote site module, at least one cloud module, and at least one event site module. The remote site module pertains to sites with remote audiences; cloud pertains to server-side technology through one or more physical custom servers or cloud services or both; and event site pertains to one or more venue sites where performances take place and are broadcast to a wide variety of viewers as shown in FIG. 5.
[0004] One aspect of the system realizes remote audience reaction capture of at least one of the following mediums: sound, movement and gesture, temperature, humidity, electrodermal measurements, and sentiment. If more than one medium is involved, it will henceforth be referred to as multi-medium.
[0005] Another aspect of the system generates remote audience reaction data streams at the remote site module that (1) stream continuously while mitigating privacy concerns, (2) reduce the network bandwidth usage that commonly affects state-of-the-art audiovisual conferencing systems on the market, and (3) process audio to mitigate audio feedback using standard echo-cancelation techniques, exploiting the situational condition that "crowd noise" and remote audience sound are robustly distinguishable.
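The paragraph invokes standard echo cancelation; a minimal normalized-LMS sketch of that standard technique is shown below, with the received event broadcast as the adaptive filter's reference signal. The tap count and step size are illustrative, and the disclosure does not prescribe this particular algorithm:

```python
import numpy as np

def nlms_echo_cancel(mic: np.ndarray, ref: np.ndarray,
                     taps: int = 128, mu: float = 0.5) -> np.ndarray:
    """Normalized LMS adaptive filter: subtract the estimated echo of the
    event broadcast (ref) from the remote microphone signal (mic)."""
    w = np.zeros(taps)                     # adaptive filter weights
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = ref[n - taps:n][::-1]          # most recent reference samples
        echo_hat = w @ x                   # estimated echo at the microphone
        e = mic[n] - echo_hat              # residual = audience sound
        w += mu * e * x / (x @ x + 1e-9)   # NLMS weight update
        out[n] = e
    return out
```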
[0006] Another aspect of the present system and methods realizes capturing and recreating the "feel" of performer-audience interaction and experience through quasi-natural or "natural", interactive, instantaneous, uninterrupted exchange of audience-performer multi-medium reactions between a plurality of remote audience sites, event sites, and their performers.
[0007] Another embodiment of the system realizes additional computation of remote audience reaction data streams transmitted from at least one remote site module at a cloud module.
[0008] One aspect of [0007] is the computation of audience reaction data streams as a function of event site characteristics including, but not limited to, seating layout, venue size, number of audio channels, sound reinforcement specifications, and other elements such as onsite fixtures.
[0009] Another aspect of the invention is the event site module that receives the audience reaction data streams via the cloud module: it feeds these data streams to the remote audience reaction projection module that projects audience reactions at the event site sonically, visually, and kinetically, or a combination thereof.
[0010] One aspect of the embodiment of [0009] realizes remote site audience reaction capture focusing on sound and corresponding audience reaction sound projections at the performance event site.
[0011] Another aspect of the embodiment of [0009] realizes remote site audience reaction capture and analysis focusing on non-sounds (e.g. movement, temperature, humidity, electrodermal measurements, sentiment) and corresponding audience reaction visual projection at the performance event site.
[0012] In one example, the scenario in [0003] is accomplished partially via remote site environmental analysis and synthesis, where synthesized multi-medium audience reaction data is streamed from one or a plurality of associated remote sites to the cloud module. Each remote site involves at least one computing device including, but not limited to, smartphones, tablets, laptops, single board computers, standalone devices, or media devices. Such devices analyze, process, and transmit data to the cloud for further event site-specific processing and data transmission to the event site where remote audience reactions are transduced via sound, movement, and other media rendering sonic, visual, and kinetic projections as shown in FIG. 7.
[0013] Another aspect of the system realizes at least one or a plurality of event site module sub-systems that receive at least one reaction data stream channel including, but not limited to, reconstructed remote audience sound and movement data streamed to the event site module. The data stream can then be processed, amplified, and projected at the event site via the audience reaction projection system comprising one or more sound projection or visual projection sub-systems that transduce audience reactions to sound, visual, and kinetic projections or a combination thereof as shown in FIG. 9.
[0014] In another aspect of the system, audience reaction data streams generated at the remote site modules are transmitted and projected via the reaction projection system at the event site, according to virtual-to-physical onsite seating assignments or virtual-to-virtual seating assignments or a combination of virtual and physical seating assignments as shown in FIG. 10.
[0015] Another aspect of the current invention realizes systems and methods for synthesizing captured remote site environmental multi-mediums including, but not limited to, sound and movement, where remote site-specific mediums that are rendered are at least one of raw, partially, or entirely synthesized; partially, fully filtered, or unfiltered; partially or fully indexically encoded; partially, fully obfuscated or rendered without obfuscation; non-amplified, partially or fully amplified; or a combination of two or more of the aforementioned processes and methods as shown in FIG. 11 and FIG. 12.
[0016] Another aspect of the system realizes an audience sentiment analysis module to analyze audience sentiment states as a function of at least one of the following: vocalizations, word classification and analytics, voice dynamic range and spectral characteristics, voice fundamental frequency estimation, ambient temperature, skin temperature, ambient humidity, and movement/gesture characteristics and changes as shown in FIG. 13.
[0017] Another aspect of the system is that the audience reaction data streams rendered between the remote site module, cloud module, and event site module comprise at least one remote site audience reaction data stream or a plurality of audience reaction data streams (FIG. 4). These can be, for example, voice medium data streams from additional remote sites as shown in FIG. 10. This combined voice data stream is rendered as a resynthesized model of the changing nature of remote audience sound reactions that is then projected at the event site - e.g. a sports game, political gathering, classroom, or lecture setting. Analogously, non-voice data streams from one or more remote sites are rendered as resynthesized models of non-sound reactions (such as gestures) and projected at the event site.
[0018] In another embodiment, the event site audience reaction visual projection system is based on flexible material, such as plastic, metal, fabric or textile that covers fully or partially a single seat, groups of seats, or is installed on other fixtures at the event site such as lamp posts and hand rails rendering a remote audience reaction transducer module as shown in FIG. 15 and FIG. 16.
[0019] In another example as described in the [0018] embodiment, the material vibrates, changes visually, and changes in form and shape in response to at least one data stream channel that transduces remote audience reaction data streams to kinetic energy, resulting in shape changes of the flexible material. The response can be scaled across the seating area, projecting visual remote audience reaction at the event site - e.g. the cumulative roaring sound of a soccer match goal energizing a stadium sonically, graphically, and kinetically.
[0020] In one embodiment, the remote audience reaction transducer module is based on an electrical coil system to transduce electrical energy to kinetic energy that is driven by remote audience reaction data streams (FIG. 15).
[0021] In another embodiment, the remote audience reaction transducer module is based on an electrical coil system driven by amplified sound as described in [0018] to [0020], where the drive signal is a sinusoid (1702) with carrier frequency fc modulated (1703) with one or more modulator signals (1701) selected from the multi-medium audience reaction data stream. The modulator signal is at least one of sound, gesture and movement, temperature, humidity, or sentiment (1701) as shown in FIG. 17, where transduced reactions are projected to the event site (1704).
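A minimal sketch of the carrier modulation in FIG. 17, assuming simple amplitude modulation with a unit-range modulator envelope; the carrier frequency, sample rate, and modulation depth are illustrative values only:

```python
import numpy as np

def coil_drive(modulator: np.ndarray, fc: float = 60.0, fs: float = 8000.0,
               depth: float = 0.8) -> np.ndarray:
    """Amplitude-modulate a sinusoidal carrier (1702) with a reaction-derived
    modulator signal (1701), e.g. a sentiment or movement envelope in [0, 1]."""
    t = np.arange(len(modulator)) / fs
    carrier = np.sin(2 * np.pi * fc * t)             # carrier at frequency fc
    return (1.0 + depth * (modulator - 0.5)) * carrier

# Example: a slowly rising excitement envelope makes the coil-driven
# material vibrate with growing amplitude at the event site (1704).
env = np.linspace(0.0, 1.0, 8000)    # one second of rising reaction energy
signal = coil_drive(env)             # fed to the amplifier driving the coil
```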
[0022] In another embodiment, the remote audience reaction transducer module is based on a configurable flag, banner, projection screen, balloon, mannequin, or other lightweight material attached to a pole. This installation is modulated by the remote audience reaction data stream, which, in one example, is audience sound or audience movement reactions that are projected according to a specific audience-to-seat matrix.
[0023] In another embodiment, each seat is configured with a remote audience reaction transducer module that is not driven by an electrical signal but rather generates an electrical signal when contact is made with an object, or is configured with another sensor to detect object and surface contact - transduction of kinetic to electrical energy. This embodiment detects objects - for example, a baseball landing on a seat after a home run is struck - whereby the seat associated with the remote audience member is mailed the baseball as a souvenir and/or an alert, along with other credits, is sent electronically to the appropriate recipient or recipients.
[0024] Another embodiment of the current invention is the rendering of natural or artificially created audience ambiance streams, generated as artificial ambiance streams modeled from remote audience sound and non-sound related ambient analysis - sound, gesture, and movement, for example. Audience ambiance streams are generated partially or fully and are dynamically projected at the event site to minimize "shapes" or "silhouettes" of sound, movement, or gesture discontinuities during unexpected, awkward, and unwanted silences, while enabling smooth maintenance of ambiance to make cyber-physical event experiencing as close as possible to physical event experiencing. In the case of a network discontinuity, as shown in FIG. 14, the original frame (1401) that experiences dropout (1402) is replaced with a synthesized ambiance multi-medium stream (1403).
[0025] In another embodiment where musicians such as organists or event bands constitute key members of sports events like baseball games, music performed at the event site by the musicians is projected at the event site as per standard practices. At the same time, it is analyzed, decomposed, and simultaneously transmitted to remote site modules, whereby only high-level data such as note number, velocity, duration, and instrument type is sent. At the remote audience site, the music is resynthesized instantaneously, either in aggregate as one audio stream or disaggregated and modulated according to the location of each assigned seat of the remote audience member or members, or based on any chosen location at the event site, enabling synchronized remote audience interaction at all remote and event sites, such as chanting, singing along, or clapping in synchrony beyond space and time zones as shown in FIG. 28.
[0026] In another example as described in [0025], the instantaneous receipt of the music performed by the musician enables the collective and synchronized reaction of remote audience members during or at the end of a musical phrase where in one example, the audience collectively punctuates with a loud scream in an interactive manner.
[0027] In another embodiment, the remote site audience member or a plurality of remote audience members can select virtual or physical seats according to (1) proximity to specific groups or individuals (e.g. team A fans), (2) physical location at the event site, and (3) virtual location at the event site that is not part of its physical seating layout, which will result in sonic and/or kinetic impact and contribution of the remote audience's reaction projection to the site.
[0028] An additional aspect of the current invention includes seat pinpointing methods and systems utilizing spectrum outside visible light to uniquely tag each seat, which in turn is used to project reactionary audiovisual outputs with seat-level precision. In this embodiment each seat is tagged with, for example, paint invisible to the naked eye but visible to sensors such as infrared cameras.
[0029] In the system associated with [0028], cameras and visual projectors are automatically calibrated to pinpoint each seat in the event space such as a stadium, and in turn, controlled according to audience reaction associated with a seat-assigned ticketholder, or a plurality of ticket holders assigned to more than one seat. This system provides an additional method and design to dynamically project audience reactions driven by remote reaction data streams or by the event site automatic audience reaction analysis system driven by capturing the ebb-and-flow of sports events, for example.
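A sketch of the seat-pinpointing step under stated assumptions: the invisible seat tags appear as bright blobs in an infrared camera frame, and blob centroids are matched against the venue's seat chart. Thresholding and connected-component labeling stand in for whatever detector a real installation would use:

```python
import numpy as np
from scipy import ndimage

def find_seat_markers(ir_image: np.ndarray, threshold: float = 0.8):
    """Locate IR-paint seat tags in an infrared camera frame and return the
    centroid (row, col) of each connected bright blob."""
    mask = ir_image > threshold                    # tags glow under IR
    labels, count = ndimage.label(mask)            # connected components
    return ndimage.center_of_mass(ir_image, labels, range(1, count + 1))

# Calibration: marker centroids are matched against the seat chart (assumed
# available) so each ticketed reaction can be projected onto its own seat.
frame = np.zeros((480, 640))
frame[100:104, 200:204] = 1.0                      # one mock seat tag
print(find_seat_markers(frame))                    # ~[(101.5, 201.5)]
```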
[0030] In another embodiment, remote audiences and event site audiences in a particular location, such as a seating area preferred by the supporters of the home or guest team, are jointly rendered in the event site location, enabling joint or independent real-time responses of both event site and remote audiences, allowing, for example, independent interaction of event moderators with on-premise and remote audiences. Remote audiences can choose or be assigned virtual seats corresponding to their affinity preference, such as co-location with audiences of similar fan attitude. The co-location of event site audience and remote audience by affinity preference is thus reflected in the event site audio-visual projections.
[0031] In another embodiment of the invention, a custom remote site system is configured at a designated space at a remote site, where at least one member, and optionally more, congregates within the designated space to engage with and react to a program broadcast from an event site.
[0032] In the embodiment described in [0031], where the designated space is a room at a remote site such as a bar, restaurant, or alternate (remote) stadium, for example, the remote site module is configured to identify virtual ticket holders verified via near-field communication systems such as Bluetooth, wherein each member's MAC address broadcast over Bluetooth is utilized to register and enable participation at the remote venue.
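The following sketch illustrates the registration idea in [0032] under stated assumptions: the broadcast identifiers are gathered elsewhere by a Bluetooth scanning library, and the ticket registry and addresses shown are hypothetical (modern devices also randomize their addresses, which a real deployment would need to handle):

```python
# Registered virtual tickets, keyed by the identifier a member's device
# broadcasts (a MAC-like address here, purely for illustration).
TICKET_REGISTRY = {
    "AA:BB:CC:DD:EE:01": {"holder": "member-1", "seat": "A12"},
    "AA:BB:CC:DD:EE:02": {"holder": "member-2", "seat": "A13"},
}

def register_present_members(discovered_addresses):
    """Match addresses seen by a Bluetooth scan against the ticket registry.

    discovered_addresses would come from a BLE scanning library; it is
    passed in directly here to keep the sketch self-contained.
    """
    return {addr: TICKET_REGISTRY[addr]
            for addr in discovered_addresses if addr in TICKET_REGISTRY}

present = register_present_members(["AA:BB:CC:DD:EE:02", "11:22:33:44:55:66"])
print(present)  # only the registered member-2 is enabled to participate
```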
[0033] In another embodiment, a user's ticket is valid for a set period of time as determined by the ticket vendor.
[0034] In the embodiment described in [0031], the designated space is internally configured to include at least one of a microphone, a video camera, pressure-sensing floor mats, and other sensors to capture audience reaction inside the designated space.
[0035] In the embodiments described in [0031] and [0034], the example room is additionally configured with at least one microphone and camera external to the designated space to remove reactions originating outside the designated space from the audio reaction data stream generated at the remote site module.
[0036] In another embodiment of [0031], the remote space's walls are configured whereby the inner space surfaces are projected with virtual audience reactions (1) spatially organized according to virtual seating assignments and (2) audio-visually driven by at least one or a plurality of remote audience reaction data streams. These remote audience reaction data streams can be modulated such that the reaction experience reflects the attitude composition of the audience at the remote site, the attitude composition at the event site, or any combination of (e.g. team-fan) attitudes not necessarily present in either location.

BRIEF DESCRIPTION OF THE DRAWINGS
[0037] FIG. 1 illustrates a robust full stadium with audience and performers.
[0038] FIG. 2 illustrates an empty stadium with only performers and virtual seat assignments above/behind regular stadium seats (indicated by boxes with rounded corners).
[0039] FIG. 3 illustrates an example of the invention's remote audience reaction projection system.
[0040] FIG. 4 illustrates audiences remotely participating in an event from around the world.
[0041] FIG. 5 illustrates a simplified overview of invention’s main modules.
[0042] FIG. 6 illustrates a more detailed summary of the IRAPS system.
[0043] FIG. 7 illustrates a summary of the remote site module.
[0044] FIG. 8 illustrates a summary of the cloud module.
[0045] FIG. 9 illustrates a summary of the event site module and the multi-medium remote audience reaction projection system.
[0046] FIG. 10 illustrates remote audiences’ seat assignment to physical and virtual seats.
[0047] FIG. 11 illustrates obfuscation of vocalizations of the word hello.
[0048] FIG. 12 illustrates obfuscation of a hand gesture.
[0049] FIG. 13 illustrates the sentiment analysis block using AI.
[0050] FIG. 14 illustrates network dropout and automatic audience ambiance generation.
[0051] FIG. 15 illustrates the remote audience reaction transducer and electric coil core.
[0052] FIG. 16 illustrates the remote audience reaction transducer system attached to seats at an event site.
[0053] FIG. 17 illustrates the remote audience reaction transducer system projection of non-sound based audience reactions into the event site.
[0054] FIG. 18 illustrates the remote audience reaction transducer system configured to a seat.
[0055] FIG. 19 illustrates the pole type remote audience reaction transducer system.
[0056] FIG. 20 illustrates the pole type remote audience reaction transducer system configured to seats.
[0057] FIG. 21 illustrates the remote audience reaction transducer system with transducer stabilizer and offline/online switch on foldable seats, in side and front views.
[0058] FIGS. 22-25 illustrate the remote audience reaction transducer system in retractable and expandable configurations.
[0059] FIG. 26 illustrates the remote audience reaction transducer system set up in a banner configuration.
[0060] FIG. 27 illustrates remote audience reaction transducer system installed on a window.
[0061] FIG. 28 illustrates the music performance capture system.

DETAILED DESCRIPTION OF THE INVENTION
[0062] A more detailed description of example embodiments of the invention follows.
[0063] FIG. 6 illustrates a summary of the four main modules operating in concert: (1) the remote site module: audience reaction analysis and synthesis; (2) the cloud module: spatialization, data stream processing and management; (3) the event site module: spatio-temporal multi-medium remote audience reaction projection; and (4) the broadcasting infrastructure that broadcasts media content across a diversity of web-based and cable-based network receivers such as televisions, computers, or other devices.
[0064] The remote site module system operates in part as an edge compute station to analyze remote audience reactions and consists of at least two sub-modules: (1) a multi-medium analysis module and (2) a multi-medium synthesis and modulation module rendering low latency audience reaction data streams.
[0065] In one embodiment, the analysis module analyzes remote site environmental signals including voice, clapping, jumping, hand-waving, perspiration, and other types of common audience reactions. In the multi-medium synthesis module of the remote site module, the target medium (e.g. sound) is synthesized via at least one of the following units: (a) obfuscation unit, (b) sentiment analysis unit, (c) automatic medium classification unit, and (d) signal processing unit. These units are then used alone or in combination as prescribed through system settings for specific medium types including, but not limited to, sound, gesture, movement, temperature, humidity, and sentiment as shown in FIG. 7.
[0066] In one example of the remote site module embodiment, remote audience reactions such as vocalizations are subject to sound obfuscation or "sound blurring" whereby words and sentences are obfuscated and made unintelligible while simultaneously preserving the sonic and temporal characteristics of the vocalization streams. This example preserves the flow of audience sound feedback while masking vocalization meaning. It renders a continuous encoded audio stream that is sonically reflective of the vocalization but removes word articulation structures, addressing inadvertent transmission of "problematic" words - e.g. obscenities or private and sensitive information - via spectro-temporal signal processing and modulation as shown in FIG. 11.
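One way to approximate the "sound blurring" of [0066] is to keep each analysis frame's magnitude spectrum while randomizing its phase, preserving loudness and timbre contours while destroying articulation. The following is a minimal sketch of that idea, not the specification's exact spectro-temporal method; frame size, hop, and windowing are illustrative choices:

```python
import numpy as np

def blur_speech(x, frame=512, hop=256, seed=0):
    """Keep each frame's magnitude spectrum (loudness, timbre, timing)
    but randomize phases so words become unintelligible."""
    rng = np.random.default_rng(seed)
    win = np.hanning(frame)
    out = np.zeros(len(x))
    norm = np.zeros(len(x))
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * win
        mag = np.abs(np.fft.rfft(seg))
        phase = rng.uniform(-np.pi, np.pi, mag.shape)  # destroy articulation
        blurred = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
        out[start:start + frame] += blurred * win      # overlap-add
        norm[start:start + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)

voice = np.random.randn(16000)  # stand-in for 1 s of vocalization at 16 kHz
print(blur_speech(voice).shape)
```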
[0067] In another example case where medium obfuscation is enabled, the obfuscation ratio is optionally controlled, ranging from 100% medium obfuscation, where the medium is maximally obfuscated, to 0% medium obfuscation, where the medium is minimally obfuscated or no obfuscation is applied.
[0068] In another example of [0066] where the medium is sound, the input voice is analyzed for its closest timbral match from a significantly large indexed voice sound building block template set - for example, indexed from 0 to 9999. In this example, the remote site module does not transmit any audio data per se but rather transmits an index (between 0 and 9999, for example) of the corresponding closest sound to the cloud or directly to the event site module, where the voice sound is reconstructed with extremely small latency, as only index numbers representing the audio rather than raw audio signals are transmitted. The voice is then reconstructed using the index value at the event site. This method carefully considers the need to preserve the sound characteristics while at the same time considering the masking effect that will render the resulting audio unintelligible and non-transcribable.
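A minimal sketch of the index-based matching in [0068], assuming each sound building block is summarized by a small feature vector; the template set, feature dimension, and distance metric are hypothetical placeholders for the indexed voice sound templates described above:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical template set: 1000 sound building blocks, each summarized
# by an 8-dimensional spectral feature vector (10 bits suffice per index).
TEMPLATES = rng.standard_normal((1000, 8))

def nearest_template_index(feature_vec):
    """Transmit only the index of the timbrally closest template."""
    dists = np.linalg.norm(TEMPLATES - feature_vec, axis=1)
    return int(np.argmin(dists))

def encode_stream(feature_frames):
    return [nearest_template_index(f) for f in feature_frames]

# 100 analysis frames -> 100 ten-bit indexes, about 1 kbit in total,
# versus orders of magnitude more for raw 16-bit audio of the same span.
frames = rng.standard_normal((100, 8))
print(encode_stream(frames)[:5])
```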
[0069] In another example of [0066], a scenario with a 1000-entry articulation index list requires 10 bits per index, so a network with 28.8 kbps of bandwidth can carry approximately 2,880 indexes per second. As one of the goals of the invention is not to transmit precise articulations of words, sentences, and verbal communication but rather to capture and transmit the sonic reaction of remote audiences instantaneously, data transmission is achieved at significantly lower rates compared to state-of-the-art audio compressors.
Each building block template index as outlined in [0068] and [0069] corresponds to a particular sound building block template; templates can be combined, sequenced, and juxtaposed to resynthesize a close approximation of the original sound, enabling extremely low bandwidth data transmission.
[0071] In another example aspect of medium obfuscation, the voice medium is subject to the voice classification unit where sensitive words - obscenities, for example - are automatically detected and replaced by obfuscated versions using either spectro-temporal signal processing and modulation or building block template reconstruction.
[0072] In another example aspect of medium obfuscation, voice media are subject to a voice classification unit, where the system analyzes vocalizations and only transmits data streams that are non-textual vocal expressions such as ooh, ah, ugh, etc.
[0073] Yet in another aspect of the voice classification unit, the system analyzes and recognizes non-vocal sounds such as clapping, stomping, table tapping, drumming sounds, horn sounds, and the like that are audience reactionary responses commonly heard during sports events.
[0074] Yet in another aspect of the voice classification unit, the remote audience sound environment is subject to automatic voice classification where the remote site module only reacts to a specific audience member, namely a member with a ticket and assigned seat of a specific event.
[0075] In another embodiment of audience reaction obfuscation, remote audience medium gesture and motion is obfuscated and processed using image distortion techniques as shown in FIG. 12.
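As one hypothetical instance of the image distortion mentioned in [0075], block-averaging (pixelation) preserves gross gesture shapes while removing identifying detail; this sketch assumes single-channel frames and an illustrative block size:

```python
import numpy as np

def pixelate(frame, block=16):
    """Obfuscate a video frame by block-averaging: gross motion and
    gesture shapes survive, identifying detail does not."""
    h, w = frame.shape[:2]
    h2, w2 = h - h % block, w - w % block   # crop to a block multiple
    f = frame[:h2, :w2].astype(float)
    # Average over non-overlapping block x block tiles, then re-expand.
    tiles = f.reshape(h2 // block, block, w2 // block, block).mean(axis=(1, 3))
    return np.kron(tiles, np.ones((block, block)))

frame = np.random.randint(0, 256, (480, 640))  # stand-in for a camera frame
print(pixelate(frame).shape)  # (480, 640), heavily blurred content
```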
[0076] In another embodiment of codifying audience reaction into building block templates, the methods as outlined from [0068] to [0070] are rendered with movement, where the building block templates represent physical gestures and movements. In this example, sensors such as cameras, motion detection sensors, or game controllers commonly found in homes can capture movement and gesture.

In one embodiment of the cloud module (FIG. 6 and FIG. 8), one or a plurality of remote site audience medium data streams that are, for example, audience sound reactions are first processed at the remote site module and then further processed on the cloud module to render at least one reaction data stream to audio-visually fill the event site through the audience reaction projection system, utilizing custom and commonly available event site infrastructures including, but not limited to, sound reinforcement systems, audio channels/loudspeaker configurations, and seats.
[0077] In another aspect, where the rendered remote audience reaction is sound, the cloud module system generates audio data streams that are sono-spatially placed within two or a plurality of audio channels that in turn are projected via the event site module. The sono-spatial processing is conducted with metadata specific to each event site's specifications in order to spatially place audience sound according to their ticketed seating assignments or other spatial placement configuration that is predetermined or dynamically adjusted during the event. The encoded audio stream rendered at the remote site module is transmitted to the cloud for dynamic and multichannel processing considering (1) one or a plurality of remote site audio encoded streams and (2) specifications of the event space as a function of at least seating arrangement, channel and loudspeaker configuration, and event space dimensions as shown in FIG. 8.
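A simplified sketch of the sono-spatial placement in [0077]: per-loudspeaker gains derived from a seat position. A real system would use the event site's full metadata and panning laws; the distance-based rule and coordinates below are illustrative assumptions:

```python
import numpy as np

def channel_gains(seat_xy, speaker_xy, rolloff=1.0):
    """Distance-based panning: gains for each loudspeaker so an audience
    member's sound appears to come from their (virtual) seat position."""
    d = np.linalg.norm(speaker_xy - seat_xy, axis=1)
    g = 1.0 / (1.0 + rolloff * d)       # closer speakers get more signal
    return g / np.linalg.norm(g)        # constant-power normalization

# Four speakers at stadium corners (coordinates in meters, hypothetical)
speakers = np.array([[0, 0], [100, 0], [0, 60], [100, 60]], dtype=float)
print(channel_gains(np.array([25.0, 30.0]), speakers))
```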
[0078] In another aspect of the current invention, the cloud module is distributed between at least two cloud instances where remote audience data is divided and managed between at least two cloud instances. This mitigates data interruptions between cloud and event sites.
[0079] In one embodiment of the event site module, audience reaction data streams are received by the event site module and are projected to the event site venue via the audience reaction projection system consisting of (1) audience reaction sound projection and (2) audience reaction visualization projection modules where sonic, visual, and kinetic projection or a combination thereof is enabled as shown in FIG. 9.
[0080] In another embodiment, the remote audience reaction data streams received by the event site module are processed and filtered and distributed via the remote audience reaction projection system to (1) a specific seat corresponding to the remote site audience ticket assignment, (2) groups of seats corresponding to seat configuration and seat assignment with associated medium channels, or (3) other event site fixtures such as hand rails and light posts.
[0081] In one example, the audience reaction data stream is a sound medium and is projected via the audience reaction sound projection system.
[0082] In the example as described in [0081], the remote audience sound reactions received and controlled by the event site module are projected through the event site sound reinforcement system, reflecting in part or fully the desired virtual sono-spatial location at the event site.

[0083] In another example of [0082], the event site module further employs audio processing to augment spatial characteristics of the audio signal output to the event site where, for example, the multichannel audio rendered at the cloud module is projected at the event site according to seating associations of the remote audience member via the event site sound projection system.
[0084] In another example case, the remote reaction data stream received at the event site module is of multi-medium type, including at least one medium that is not sound, such as gesture, movement, temperature, humidity, electrodermal measurements, or sentiment.
[0085] In an aspect of the example as described in [0080], the remote audience non-sound reactions are projected and mapped via the event site's remote audience reaction sound and visual projection systems that are combined and configured with respect to ticket and corresponding event site seating assignments.
[0086] In one embodiment, the audience reaction transducer system enables audience reaction visual projection implementation driven by either sound or other mediums such as movement, gesture, temperature, humidity, or sentiment.
[0087] In one example of the embodiment described in [0086], the audience reaction transducer system comprises single or multiple arrangements of transducer electrical coils configured on surfaces including fabrics and textile materials, enabling the transduction of audience reaction data from electrical energy to kinetic energy resulting in visual and physical changes (FIG. 15).
[0088] In another example of the embodiment described in [0087], the audience reaction transducer system is excited by remote audience data streams of non-sound type, providing visual remote audience feedback to the event site.
[0089] In another example, the audience reaction transducer system in [0087] is associated with seating assignments wherein audience data streams are projected to one or a group of associated seats in the event space (FIG. 16).
[0090] In another example, as described in [0087], the remote audience reaction data stream is filtered and amplified to maximize transduction of electrical energy to kinetic energy and shape disfiguration via the audience reaction transducer system, where, for example, filtering such as a low-pass filter is configured to maximally induce material disfiguration correlating to the audience reaction energy.
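A minimal sketch of the filtering step in [0090], assuming SciPy's standard Butterworth design; the sample rate, cutoff, and rectify-then-smooth scheme are illustrative choices rather than the specification's prescribed settings:

```python
import numpy as np
from scipy.signal import butter, lfilter

def transducer_drive(reaction, fs=1000, cutoff=20.0):
    """Low-pass filter an audience-reaction signal so the coil receives
    the slow, high-energy envelope that best produces visible motion,
    rather than fast components the fabric cannot follow."""
    b, a = butter(2, cutoff, btype="low", fs=fs)
    envelope = lfilter(b, a, np.abs(reaction))   # rectify, then smooth
    return envelope / (np.max(np.abs(envelope)) + 1e-9)  # amplifier headroom

reaction = np.random.randn(5000)  # stand-in for 5 s of reaction data at 1 kHz
drive = transducer_drive(reaction)
print(drive.min(), drive.max())
```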
[0091] In another embodiment, where the example material is, for example, a fabric type, the material is treated via color variations, fabric texture variants, and photoluminescent components and shapes that dynamically and visually reflect varied audience reactions captured from one or more remote sites. This embodiment models the visual dynamicity and diversity of audiences from a visual perspective as commonly experienced in television or live media event programming and is driven by the audience reaction transducer system.

[0092] In another aspect of the current invention, the audience reaction transducer system is treated with thermochromic ink to modulate color in accordance with remote audience environmental medium changes, which will in turn change the color of the seat, seating area, or other elements of the event site.
[0093] In another aspect of the current invention, the audience reaction transducer system, treated with color changing features, is programmable according to a group, groups, or individual's color requests that in part can be used to form teams or can be used to project messages to event viewers and is driven by the gesture transducer system.
[0094] Another embodiment of the audience reaction transducer system includes a securing mechanism with a fastener or a hook system to attach to seats, with an added stabilizing connector to secure the transducer against unwanted displacement (e.g. to withstand gusts of wind) while enabling maximal flexibility for changing shape and vibration as shown in FIG. 18.
[0095] In another aspect of the audience reaction transducer system, a pole is configured on a seat or a plurality of seats or placed elsewhere at the event site and is wound in part, or fully, with electrical coil for transduction as shown in FIG. 19 and FIG. 20.
[0096] In another embodiment, the audience reaction transducer system is an attachment to folding seats, wherein the audience reaction transducer system is attached at the bottom surface of a seat and secured at the upper side as well as the lower side of the seat, where, for the bottom side, a stabilizer (2103) is included to (1) keep the transducer system from dropping when the seat is unfolded (2101) and (2) keep the transducer system in place when the seat is folded and in its upright position (2102), while allowing flexibility and margins for the material to change shape when driven by audience reaction data streams as shown in FIG. 21.
[0097] In another aspect of [0096], the system is configured to turn off when the seat is used by a physical audience member and the switch is not pressed (2104), and to turn on (2101) when seats are not being used and are in their upright position with the switch pressed, as shown in 2106.
[0098] In another embodiment of the audience reaction transducer system, a retractable and expandable system controls the height of the audience reaction transducer system to maximize visibility when fully extended as shown in FIG. 22.
[0100] In another embodiment of the invention, the audience reaction transducer system (1) is attached to the bottom side of a folding seat in its retracted state (2301); (2) is extended (2303) when the seat is in its upright (folded) position; and (3) is secured further at the lower side of the bottom of the seat with a stabilizer (2304) such as, but not limited to, a line, rope, or elastic band, as shown in FIG. 23, with 2305 illustrating a front view of the system when extended.

[0101] In another embodiment of the invention, the extendable (2401) / retractable (2402) system of [0098] is secured to the back of a seat or multiple seats rather than under a seat, with an optional turn (2405) on (2403)/off (2404) switching system as shown in FIG. 24, where turning the transducer towards or away from the stage area will automatically turn the system on or off.
[0102] In another aspect of [0098], the extendable/retractable system is wound partially, or fully, with electrical coil for transduction, where objects such as balloons, cardboard, lighting, and mannequins can be attached to be driven and put in motion via the remote audience data streams, transducing electrical energy to kinetic energy as shown in FIG. 25.
[0103] In another embodiment of the invention, the audience reaction transducer system is attached to at least two extendable/retractable units, where, for example, two seats next to each other or with at least one or more seats in between result in a "banner" configuration that spans across the seats between the outer seats as shown in FIG. 26.
[0104] In another embodiment of the audience reaction transducer system, seat surfaces including foldable seat surfaces are configured to act as soundboards to transduce cumulative remote site sound characteristics with seat level spatial sound projection accuracy.
[0105] In another embodiment of the audience reaction transducer system, the audience reaction transducer system is reusable, detachable, and re-attachable to other objects, such as windows at the event site, for example, that are treated with paint and other materials that respond to remote audience reaction data streams as shown in FIG. 27.
[0106] In another example of the audience reaction projection, each seat is configured with the audience reaction transducer system, or alternatively with another sensor, to detect solid objects such as a baseball landing on a seat after a home run is struck, whereby, for example, the remote audience member associated with the seat is mailed the baseball as a souvenir.
[0107] In the example of [0106], the system determines baseball-to-seat assignment by selecting the seat that last triggers object detection. For example, for a ball bouncing on and triggering seats 4, 10, and 12, any one of seats 4, 10, or 12 and its associated remote client may be the recipient of the ball or associated credits, as chosen by the event organizers.
[0108] In another aspect of the current invention, the event site module consists of an automatic audience reaction synthesizer system that is automatically or manually enabled, for example during temporary network blackout windows. The dynamic audience synthesizer is triggered in accordance with highlight events such as real-time score changes, penalties, and real-time event commentaries, as well as non-highlight events such as between-point transitions and breaks between play. The audience sound that is projected is dynamically reconstructed using prior audience reaction medium types including but not limited to sound, movement, gesture, and sentiment data from the event in question or other similar events stored in the cloud and shared with event site modules.
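The trigger logic of [0108] might be sketched as a mapping from detected event types to stored reaction material; the bank contents, event names, and intensity levels below are hypothetical placeholders:

```python
import random

# Hypothetical library of stored reaction clips, grouped by intensity.
REACTION_BANK = {
    "high": ["roar_01", "roar_02", "chant_01"],
    "medium": ["applause_01", "applause_02"],
    "low": ["murmur_01", "murmur_02"],
}

EVENT_INTENSITY = {          # highlight vs. non-highlight events
    "score_change": "high",
    "penalty": "medium",
    "between_points": "low",
    "break_in_play": "low",
}

def synthesize_reaction(event_type):
    """Pick a stored reaction clip matching the detected event's intensity;
    a real system would blend and spatialize rather than merely select."""
    level = EVENT_INTENSITY.get(event_type, "low")
    return random.choice(REACTION_BANK[level])

print(synthesize_reaction("score_change"))  # e.g. "roar_02"
```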
[0109] In another aspect associated with [0108], audience reaction data triggers are automatically and dynamically generated through an event analysis system trained through machine learning on past and present audience audio, video, and medium ground truth datasets that reflect historical audience interactions as a function of sound and visuals from performers, other audience members, and the event space itself.
[0110] In another aspect of the automatic audience reaction synthesizer, the audience reaction data is dynamically generated through an event analysis system trained on professional events such as major league sports events, concerts, or other high-end professional events, and is then scaled and projected to events of different types and sizes including but not limited to small scale events such as school sports events, concerts, and theater performances.
[0111] In another aspect of the automatic audience reaction synthesizer, the audience reactions are calibrated prior to event start or during the event where, in the case of prior-event calibration, "fans" supporting team A, then "fans" supporting team B, etc. are asked to express their support at their remote locations. The system in turn captures the audience reactions, analyzes and synthesizes them, and automatically generates various typical audience reactions driven by events at the performance including, but not limited to, score changes, penalties, breaks, and highlight events that are analyzed automatically in real-time.
[0112] In another aspect of the automatic audience reaction synthesizer, the synthesized data at the event site is balanced with streamed remote audience reaction data streams and broadcast to viewers and audiences live or at a later time. This aspect mitigates network and broadcasting delays dynamically in a full-duplex data transmission loop for live events such as football games.
[0113] In another embodiment where musicians such as organists are key elements of sports events like baseball games, music performed at the event site by the musicians is sonically projected at the event site and simultaneously transmitted to remote audiences where the music is instantaneously resynthesized enabling synchronized remote audience interaction (FIG. 28).
[0114] In an example situation of [0113], the music performed by the musician is instantaneously shared with remote sites where only musical note information, including but not limited to note number, velocity, duration, and instrument type, is transmitted to, and received by, remote site modules. The musical metadata is used at the remote site to resynthesize the music via a sound synthesizer, minimizing latency, synchronicity, and audience reaction timing issues and facilitating remote audience collaborative engagement - including chanting and singing - in real time, so that it can be projected back to the event site.
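Complementing the note-event encoder sketched after [0026], the remote-site resynthesis of [0114] might look as follows; the envelope shape and sample rate are illustrative assumptions, and a full synthesizer would also honor the instrument type field:

```python
import numpy as np

def synthesize_note(note, velocity, duration_ms, fs=22050):
    """Render one received note event as audio at the remote site.

    note: MIDI-style note number; velocity scales amplitude; timbre
    selection from the instrument type field is omitted for brevity.
    """
    freq = 440.0 * 2 ** ((note - 69) / 12)            # equal temperament
    t = np.arange(int(fs * duration_ms / 1000)) / fs
    env = np.minimum(1.0, 50 * t) * np.exp(-3 * t)    # quick attack, decay
    return (velocity / 127) * env * np.sin(2 * np.pi * freq * t)

# Resynthesize the two-note example from the encoder sketch
audio = np.concatenate([synthesize_note(60, 100, 500),
                        synthesize_note(64, 90, 500)])
print(audio.shape)  # ~1 s of audio ready for local playback
```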

CLAIMS

What is claimed is:
1. A system for automatically capturing, synthesizing, streaming, and projecting remote audiences' reactions comprising: a remote site module to dynamically analyze audience reactions and generate continuous audience reaction data streams; a cloud module to transceive audience reaction data streams for additional event site and remote site informed processing; and an event site module to project synthesized audience reactions via sonic, visual, and kinetic projections.
2. The system of Claim 1 wherein audience reaction medium includes at least one of the following: sound, movement, gesture, temperature, humidity, electrodermal measurements, and sentiment.
3. The system of Claim 1 wherein the event site is any event space or a plurality of concurrent event spaces that can include audience participation of an event remotely, physically at the event site, or a mixture of both.
4. The system of Claim 1 wherein one or a plurality of remote site modules automatically capture, compute, encode, and transmit remote audience reaction data streams to a cloud module.
5. The system of Claim 1 wherein a cloud module further processes, aggregates, and combines data streams captured from a single or plurality of remote audience modules, to spatio-temporally project a single or multichannel medium stream via the remote audience reaction projection system at the event site as a function of at least one of the following: event site seating layout, remote audiences seating assignments, event site sound reinforcement layout, and event site lighting system layout.
6. The system of Claim 1 wherein the event site module can automatically project remote audience reactions via the audience reaction projection system that produces sonic, visual, and kinetic projections.
7. The system of Claim 1 wherein the event site module can automatically project remote audience reactions whereby remote audience reactions are projected according to remote audience seating assignments at the event site.
8. The system of Claim 1 wherein the projections at the event site are based on the actual audience size of all or part of the participating remote site audiences, who can optionally be assigned to virtual seats that reflect the layout of physical seats and fixtures at the event site and to additional virtual seats that are placed virtually in, or outside, the physical event space.
9. The system of Claim 1 wherein remote audience reactions projected at the event site via the remote audience projection system are at least one of the following: (1) single- or multi-medium remote audience reaction data streams including, but not limited to, sound, movement, gesture, humidity, electrodermal measurements, and sentiment, (2) projections of audience reactions that are single- or multi-medium including sounds and non-sounds, (3) projections that are amplified through sound reinforcement systems, (4) projections that are kinetic visualizations, (5) projections that are graphical visualizations, and (6) an event program broadcast via standard live broadcasting systems.
10. The system of Claim 4, wherein the remote site module captures, analyzes and synthesizes target single- or multi-mediums to generate remote audience reaction data streams based on at least one of the following: (a) obfuscation unit, (b) sentiment analysis unit, (c) automatic medium classification unit, and (d) signal processing unit.
11. The system of Claim 4, wherein the remote site module captures and analyzes audience reaction including voice, clapping, jumping, hand-waving, hand-raising, standing up, sitting down, and other types of common audience reactions.
12. The system of Claim 4 wherein remote audience reaction data is obfuscated to mask potential sensitive information exposure including, but not limited to, disclosure of private information such as names, faces, gestures, and profanities, as well as language inappropriate for general viewership, in order to deliver and project remote audience reactions in a safe manner while simultaneously capturing the essence, energy, dynamicity, and interactive feel of action-and-response between audiences and performers.
13. The system of Claims 4 and 12 wherein the obfuscation ratio is optionally controlled, ranging, for example, from 100% obfuscation to 0% obfuscation, where the medium is minimally obfuscated or no obfuscation is applied.
14. The obfuscation system of Claim 12 wherein remote audience medium is sound that is obfuscated and processed in the time and frequency domain via spectral distortion, spectral reshaping, formant modulation, temporal envelope modulation, and temporal distortion.
15. The system of Claim 12 wherein remote audience medium is gesture and movement that is obfuscated and processed using image processing techniques.
16. The system of Claim 12 wherein remote audience reactions are indexically categorized and encoded as a sequence of symbolic indexical numbers that model and represent building block templates of remote audience reactions; wherein the resulting automatic index best-match is accomplished at one or more cloud modules or event site modules without transmitting raw media data - for example, sound, video, movement, temperature, humidity, and sentiment - between remote site, cloud module, and event site, depending on system configuration.
17. The system of Claim 16 wherein the matching building block template indexes and their associated timestamps are processed to render a reconstruction of remote audience reactions via at least one of the following: vocalizations, environmental sounds, movement, reaction, sentiment; where the resulting signal of at least one reaction data channel stream is sonically, visually, or physically positioned according to the event site seating assignments and sound reinforcement and lighting system resources and layouts.
18. The system of Claim 12 wherein, when multi-medium obfuscation is implemented, voice media are subject to the voice classification unit where sensitive words - obscenities, for example - are automatically detected and replaced by obfuscated versions using either spectro-temporal signal processing and modulation or building block template reconstruction.
19. The system of Claim 18 wherein the system analyzes vocalizations and only transmits data streams that are non-textual vocal expressions such as ooh, ah, ugh, etc.
20. The system of Claim 4 wherein the remote audience sound environment is subject to automatic voice and face classification where the remote site module only reacts to a specific audience member, namely a member with a ticket and assigned seat of a specific event.
21. The system of Claim 20 wherein the remote audience voice classification unit is trained using devices such as smartphones to enable automatic ground truth data generation, labeling, AI algorithm training, and performance improvement.
22. The system of Claim 20 wherein an automatic voice identification system is accomplished where spoken words and articulations are automatically segmented via automatic voice event detection and used to (1) create a dynamically growing ground truth training dataset and (2) train and recognize the audience member’s voice.
23. The system of Claim 4 wherein the remote audience member is subject to automatic movement/gesture classification where the remote site module captures audience gestures commonly found in audience-attended events including, but not limited to, hand waving, fist-pumping, clapping, face/head covering in disbelief, and arm/hand raising in celebration.
24. The system of Claim 4 wherein remote audience is subject to sentiment analysis where remote audience reaction sentiment data streams are generated considering remote site’s vocalization and language, temperature, gesture, movement, and humidity patterns.
25. The system of Claim 4, wherein the remote site audience member or a plurality of remote audience members select virtual or physical seats according to proximity to specific groups or individuals (e.g. team A fans).
26. The system of Claim 4, wherein the remote site audience member or a plurality of remote audience members select virtual or physical locations at the event site according to (1) virtual seats that enable limited artificial modulation, filtering, amplification or augmentation and (2) virtual location at the event site that is not part of its physical seating layout that will result in a controllable sonic, visual, or kinetic impact contribution to remote audience reaction projections.
27. The system in Claim 4 wherein a custom remote site system is configured at a designated space at a remote site (e.g. a room in a private residence, bar, restaurant, movie theatre, or alternate stadium) where at least one or more persons are congregated within the designated space to engage and react to a program at an event site.
28. The system in Claim 27 wherein the custom remote space system within the designated space is configured to recognize virtual ticket holders verified via near-field communication systems, for example Bluetooth, wherein members' unique identifiers, including MAC addresses broadcast over Bluetooth, are analyzed to register and enable participation at the remote venue.
29. The system in Claims 4 and 27 wherein a user's ticket is valid for a set period of time as determined by the ticket vendor.
30. The system in Claim 27 wherein the designated space is internally configured to include at least one of a microphone, video camera, pressure-sensing floor mat, temperature sensor, or humidity sensor to capture audience reaction inside the designated space, while simultaneously configured with at least one microphone and camera external to the designated space to remove external reactions from outside of the designated space.
31. The system in Claim 27 wherein the remote space's surfaces are configured whereby the inner surfaces are projected with virtual audience reactions (1) spatially organized according to virtual seating assignments and (2) audio-visually driven by remote audience reaction data streams from at least one or a plurality of remote site modules.
32. The system in Claim 5 wherein remote audience reaction data received from the remote sites is processed at a cloud module to render spatio-temporally accurate audience reactions via single data streams, multichannel audience reaction data streams, multi-medium, or medium-specific reaction data streams including, but not limited to sound, movement, temperature, humidity, electrodermal measurements, and sentiment data streams projected at event sites.
33. The system in Claim 5 wherein the method of spatiotemporal processing at the cloud module is rendered as a function of event site seating layout, size, building characteristics, and audiovisual projection layout.
34. The system in Claim 5 wherein the cloud module is divided into two or more sub-cloud modules where processing is divided amongst the sub-cloud modules that are standalone, synchronized, physical custom servers, or cloud server systems.
35. The system in Claim 5 wherein remote audience reaction data streams generated from the cloud sub-modules are combined to reconstruct and render at least one medium data channel that is processed for event site projection according to event site seating layout, size, building characteristics, and audiovisual projection configurations.
36. The system in Claim 5 wherein the cloud module renders a single or multichannel, single or multi-medium data stream that is transmitted to the event site module and synchronized at the event site module and remote audience reaction projection system to render a combination of sonic, visual, and kinetic projections.
37. The system of Claim 7 wherein the audiovisual projection at the event site is a program, such as a live sports event or a non-sports event, with or without event site physical audiences present, that is broadcast over industry standard broadcasting systems.
38. The system of Claim 6 wherein remote audience reaction data streams are further processed on the cloud module to deliver single or multichannel reaction data streams that are projected at the event site according to seat layout, associated remote audience member, and spatiotemporal processing to further acoustically and visually place the remote audience member at the event site.
39. The system of Claim 6 wherein at least one audiovisual change at the event site is driven by remote audience reaction data streams consisting of at least one of: audience reaction transducer system, a lighting transducer system (including light outside of the visual spectrum), or a sound reinforcement system.
40. The system of Claim 6, wherein audience reaction data streams are projected via the audience reaction projection system which consists of (1) audience reaction sound projection module and (2) audience reaction visual projection module where the projection system outputs sonically, visually, kinetically or a combination thereof.
41. The system of Claim 6 wherein the remote audience reaction data stream transduces electrical energy to mechanical energy that can be projected kinetically, visually, sonically, or a combination thereof.
42. The system of Claim 6 wherein the audiovisual projection is dynamically and automatically triggered and generated via the automatic audience reaction synthesizer in reaction to performance events and highlights such as scoring changes, event commentators’ voice analytics including, but not limited to, word analysis (with at least one of voice pitch, word count, voice spectral shape, and high-to-low frequency ratio), breaks between points, point penalties, and point celebrations.
43. The system of Claim 42 wherein the audience reaction projection at the event site is manually, semi-automatically, or automatically triggered in reaction to performance events and highlights such as during a concert, when a song ends and/or begins, or in situations where a guitar solo or other highlights in a musical performance occurs.
44. The system of Claim 42 where the manually, semi-automatically, or automatically triggered and generated audience reaction data stream is that of synthetic audience sound or multi-medium patterns including movement, temperature, or humidity type computed from historical event data resulting in adaptive multi-level response to event situations ranging in high resolution increments from low to high intensity outputs.
45. The system of Claim 42 wherein audience reaction data is automatically triggered and generated through an event analysis system trained through machine learning on past and present audience audio, video, and medium ground truth datasets that reflect historical audience interactions from professional events, scaled and projected to different types and sizes of events, such as school sports events, concerts, and theater performances.
46. The system of Claim 6 where the manually, semi-automatically, or automatically triggered audience reaction data is in online mode during communication blackouts between cloud module and event module and in offline mode when network is stable.
46. The system of Claim 6 wherein the manually, semi-automatically, or automatically triggered audience reaction data generation is in online mode during communication blackouts between the cloud module and the event module and in offline mode when the network is stable.
48. The system of Claim 6 wherein remote audience reaction is analyzed via an audience calibration method whereby remote audiences react through an audience calibration system in which audiences are asked to, for example, cheer for team A at least one time and cheer for team B at least one time.
49. The system of Claims 6 and 42 wherein audience multi-medium reactions in support of team A and team B are analyzed, modeled, and used as an audience reaction source to automatically trigger synthesized audience reactions at the event site, for example during a soccer match when team A scores a goal, where cheering sounds of fans associated with team A are focused and projected at the event site in accordance with seating assignments, such as fan blocks.
50. The system of Claim 48 wherein audience calibration occurs before, during, and/or after the event.
51. The system of Claim 6 wherein artificial ambient streams are generated in part or fully, and dynamically projected at the event site to minimize discontinuities between unexpected, sudden, or unwanted audience inactivity including but not limited to silence, motionlessness, or network outages; and to enable smooth continuation of audiovisual experience at the event site.
52. The system of Claim 51 wherein rendering of artificial ambiance stream is generated partially or fully, and dynamically projected at the event site during periods where audience sounds, movements, and gestures are not permitted as in the case, for example, of tennis matches, where audiences are encouraged to reserve vocalization during serves and between periods where a point has started and when a point ends.
53. The system of Claim 52 wherein the artificial ambiance streams are attenuated while remote audience sound projection is simultaneously blocked to aid in program production.
54. The system of Claim 12 wherein audience reaction projected at the event site through the remote audience reaction projection system outputting audience reaction data streams is not intelligible under full or partial obfuscation, or remains intelligible with no obfuscation, while preserving essential audience reaction characteristics.
55. The system of Claim 54 wherein the reaction data stream is a sound medium where essential characteristics including dynamic level, spectral shape, and other timbral characteristics are preserved to reflect the sonic "feeling," "silhouette," "shape" of the remote audience member by obfuscating actual expressions and reactions being articulated.
56. The system of Claim 55 wherein no sound is lost or removed but is automatically obfuscated and modulated by selectively obfuscating a predetermined list of sensitive words such as expletives, enabling generation of a continuous, uninterrupted audience reaction audio stream at the event site preserving voice characteristics including dynamic level, spectral shape, and other timbral characteristics to reflect the feeling of the remote audience member.
57. The system of Claim 6 wherein the event site audience reaction projection system and the remote audience reaction transducer system projects, for example, at least one of physical audience movement or gesture or non-sound mediums that are associated with one or a plurality of seats that are associated with tickets and seats assigned to remote audiences.
58. The system of Claim 6 wherein the remote audience reaction transducer system comprises single or multiple arrangements of electrical transducer coils configured on sheet surfaces including, but not limited to, fabrics, textiles, plastics, vinyl, and other materials that can be configured for the system of Claim 6.
59. The system of Claim 58 wherein the electrical coils are excited by remote audience reaction signals to drive the remote audience reaction transducer system, providing visual remote audience feedback to the event site.
60. The system of Claim 58 wherein the remote audience reaction data streams received at the event site module are processed via signal processing filters to maximally induce physical change on the remote audience reaction transducer system, where, for example, a low pass filter is applied to the signal to maximally induce change on the material correlating to the audience reaction dynamics.
61. The system of Claim 58 wherein the remote audience reaction transducer system is driven by an amplified carrier sinusoid with frequency fc modulated with one or more modulator signals comprising one or a mixture of multi-medium audience reaction data streams; the modulator signal being at least one of sound, gesture and movement, temperature, humidity, or sentiment.
62. The system of Claim 58, wherein the remote audience reaction data streams received by the event site module are processed, filtered, and projected via the remote audience reaction projection system to at least one of (1) a specific seat corresponding to the remote site audience ticket assignment, (2) groups of seats corresponding to seat configuration and seat assignment with associated audio channels, or (3) other event site fixtures such as hand rails and light posts.
63. The system of Claim 58 wherein, where the audience reaction transducer system material is, for example, of fabric type, the material is treated with color variations, fabric texture variants, and photoluminescent components and shapes that dynamically and visually react to audience reactions captured in real-time from one or more remote sites.
64. The system of Claim 58 wherein the audience reaction transducer system material is treated with thermochromic ink to dynamically change color in reaction to remote audience reactional changes, which will in turn change the color of the seat, seating area, or other elements of the event audience areas.
65. The system of Claim 58 wherein the audience reaction transducer system is treated with color changing features that are programmable according to team, group, or individual color preferences that can partially, or fully be used to form teams or can be used to project messages to viewers.
66. The system of Claim 58 wherein the audience reaction transducer system is secured to fixtures such as seats, including as a seat cover or via a hook system, with an added stabilizing connector to secure the transducer against unwanted displacement while enabling maximal flexibility for changing shape and vibration.
67. The system of Claim 58 wherein the audience reaction transducer system takes the shape of a pole which is configured on a seat or a plurality of seats or placed elsewhere and is wound in part, or fully, with electrical coil for transduction.
68. The system of Claim 58 wherein the audience reaction transducer system is attached to folding seats, wherein the audience reaction transducer system is attached on the bottom surface of a seat and secured at the upper side of the seat as well as the lower side of the seat, where, for the bottom side, an optionally adjustable stabilizer is included to (1) keep the transducer system from dropping when the seat is unfolded and (2) keep the transducer system in place when the seat is folded and in its upright position, while allowing flexibility and margins for the material to change shape when driven by audience reaction data streams.
69. The system of Claim 68 wherein the system is configured to turn off when the seat is used by a physical audience member and to turn on when seats are not being used and are folded in their default upright position.
70. A retractable and expandable system to control the height of the audience reaction transducer system to configure visibility.
71. The system of Claim 70 wherein the retractable/expandable system is configured partially or fully, with electrical coil windings for transduction to act as an alternative audience reaction transducer system where objects such as balloons, cardboards, mannequins, or custom shapes can be attached.
72. The system of Claim 68 wherein the audience reaction transducer system (1) is attached on the bottom side of a folding seat in its retracted state, (2) is extended when the seat is in its upright position, and (3) is secured further at the lower side of the bottom of the seat with a stabilizing connector that is, for example, an extendable, retractable, or fixed rope, line, or elastic band.
73. The system of Claims 68 and 69 wherein the extendable/retractable system is secured to the back of a seat or multiple seats with an optional turn switch system that turns the system on when the transducer faces the stage and off when it faces away from the stage.
74. A system wherein the audience reaction transducer system is attached to at least a pair of extendable/retractable systems of Claim 70 or pole units of Claim 67, where for example, positioned at two seats either adjacent to each other or with one or more seats in between, the transducer system results in a "banner” configuration that spans across the seats.
75. The system of Claim 58 wherein seat surfaces including foldable seat surfaces are configured to act as soundboards to transduce at least one multi-medium remote audience reactions with seat-level spatial sound projection accuracy.
76. The system of Claim 58 wherein the audience reaction transducer system is reusable, detachable, and re-attachable as a single or as multiple units to other objects such as windows at the event site, for example, that are optionally treated with paint and other materials that respond to remote audience reaction data streams.
77. The system of Claim 6 wherein each seat is configured with a sensor unit to detect object contact, implemented, for example, using the same audience reaction transducer system, equipped with a modified gesture projection unit, to transduce mechanical energy to electrical energy corresponding to the transient contact of a ball with the coil and seat.
78. The system of Claim 77 wherein the system determines seat contact identification according to first or last seat contact with object, or any seat that had contact with object in between.
79. The system of Claim 6 wherein music performance data is transmitted to remote audience site modules in encoded metadata form where the music - translated by decoding note number, velocity, duration, and instrument type - is resynthesized instantaneously and spatiotemporally at the remote sites, based on the location of each assigned seat of the remote audience member or members, or based on any chosen location at the event site enabling synchronized remote audience interaction at all remote sites.
80. The system of Claim 6 wherein seat pinpointing methods and systems utilize the visual spectrum or spectrum outside visible light to uniquely tag each seat, which in turn is used to project reactionary audiovisual outputs with seat-level precision.
81. The system of Claim 80 wherein each seat is tagged with paint that can be optionally invisible to the naked eye but visible to cameras such as infrared cameras.