WO2021242325A1 - Interactive remote audience projection system - Google Patents

Interactive remote audience projection system

Info

Publication number
WO2021242325A1
Authority
WO
WIPO (PCT)
Prior art keywords
audience
remote
reaction
event
site
Prior art date
Application number
PCT/US2020/070074
Other languages
English (en)
Inventor
Tae Hong PARK
Original Assignee
Sei Consult Llc
STAACK, Christian
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sei Consult Llc, STAACK, Christian filed Critical Sei Consult Llc
Priority to PCT/US2020/070074 priority Critical patent/WO2021242325A1/fr
Publication of WO2021242325A1 publication Critical patent/WO2021242325A1/fr

Classifications

    • EFIXED CONSTRUCTIONS
    • E04BUILDING
    • E04HBUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H3/00Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons
    • E04H3/10Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons for meetings, entertainments, or sports
    • E04H3/14Gymnasiums; Other sporting buildings
    • EFIXED CONSTRUCTIONS
    • E04BUILDING
    • E04HBUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H3/00Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons
    • E04H3/10Buildings or groups of buildings for public or similar purposes; Institutions, e.g. infirmaries or prisons for meetings, entertainments, or sports
    • E04H3/22Theatres; Concert halls; Studios for broadcasting, cinematography, television or similar purposes

Definitions

  • Audience participation in private and public event spaces - including sports events, recitals, socio-political events, conferences, concerts, religious gatherings, and social gatherings - comprises the building blocks of social interaction, community building, cultural evolution, education, and entertainment, as shown in the example sports event in FIG. 1.
  • Where participation is difficult - e.g. due to geographical inaccessibility, limited seating, ticket costs, or government restrictions - a system that enables the creation and production of a dynamic remote audience participation event environment benefits both audiences and event performers.
  • the realization of interactive remote-audience/onsite-performer interaction systems has been shown to be beneficial in the aforementioned situations.
  • the invention described here offers much-needed interactive remote audience participation, augmenting event experiences for viewers and performers alike.
  • An example embodiment of the current invention reduces fundamental gaps that exist between physical and remote audience participation in events such as sports events. It reduces these gaps for audiences and performers alike - the audience participating and reacting remotely and the performers performing and reacting at the event site - where bi-directional dynamic reactions by both parties elevate the event experience.
  • the current invention enables projection of remote audience reactions into event sites, whereby, for example, a normally well-attended stadium (FIG. 1) that is empty of audience members but fully present with performers (FIG. 2) is transformed into a vibrant interactive audience-player stage as shown in FIG. 3.
  • These remotely located audience reactions - independent of time zones and geographical locations (FIG. 4) - are projected sonically, visually, and kinetically into the stadium, thereby further closing the gap between physical and virtual audience-performer interaction and dynamics.
  • One embodiment of the invention realizes remote audience reaction projection at the event site, coordinated and exchanged between at least one remote site module, at least one cloud module, and at least one event site module.
  • the remote site module pertains to sites with remote audiences;
  • the cloud module pertains to server-side technology realized through one or more custom physical servers, cloud services, or both;
  • the event site module pertains to one or more venue sites where performances take place and are broadcast to a wide variety of viewers as shown in FIG. 5.
  • One aspect of the system realizes remote audience reaction capture of at least one of the following mediums: sound, movement and gesture, temperature, humidity, electrodermal measurements, and sentiment. Where more than one medium is involved, this is henceforth referred to as multi-medium.
  • Another aspect of the system generates remote audience reaction data streams at the remote site module that (1) stream continuously while mitigating privacy concerns, (2) reduce the network bandwidth usage that commonly affects state-of-the-art audiovisual conferencing systems on the market, and (3) process audio to mitigate audio feedback using standard echo-cancellation techniques, exploiting the situational condition that "crowd noise" and remote audience sound are robustly distinguishable. A low-bandwidth feature stream of this kind is sketched below.
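
As an illustration of the bandwidth and privacy point above, the sketch below reduces each captured audio frame to two scalar features rather than transmitting raw audio. The frame size, sample rate, and feature choice are illustrative assumptions, not the patent's prescribed encoding.

```python
# Minimal sketch of a low-bandwidth, privacy-mitigating reaction stream:
# only coarse per-frame features (RMS loudness and spectral centroid) are
# transmitted instead of intelligible raw audio.
import numpy as np

def reaction_features(frame: np.ndarray, sample_rate: int) -> tuple[float, float]:
    """Reduce one audio frame to two floats: loudness and spectral brightness."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return rms, centroid

# One 100 ms frame at 16 kHz: 1600 samples in, 2 floats out per frame,
# i.e. roughly a 200x reduction versus 16-bit PCM.
rng = np.random.default_rng(0)
frame = rng.standard_normal(1600) * 0.1
print(reaction_features(frame, 16000))
```
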
  • Another aspect of the present system and methods realizes capturing and recreating the "feel" of performer-audience interaction and experience through quasi-natural or "natural", interactive, instantaneous, uninterrupted exchange of audience-performer multi-medium reactions between a plurality of remote audience sites, event sites, and their performers.
  • Another embodiment of the system realizes additional computation of remote audience reaction data streams transmitted from at least one remote site module at a cloud module.
  • One aspect of [0007] is the computation of audience reaction data streams as a function of event site characteristics including, but not limited to, seating layout, venue size, number of audio channels, sound reinforcement specifications, and other elements such as onsite fixtures.
  • Another aspect of the invention is the event site module that receives the audience reaction data streams via the cloud module: it feeds these data streams to the remote audience reaction projection module, which projects audience reactions at the event site sonically, visually, kinetically, or in a combination thereof.
  • One aspect of the embodiment of [0009] realizes remote site audience reaction capture focusing on sound and corresponding audience reaction sound projections at the performance event site.
  • Another aspect of the embodiment of [0009] realizes remote site audience reaction capture and analysis focusing on non-sounds (e.g. movement, temperature, humidity, electrodermal measurements, sentiment) and corresponding audience reaction visual projection at the performance event site.
  • the scenario in [0003] is accomplished partially via remote site environmental analysis and synthesis, where synthesized multi-medium audience reaction data is streamed from one or a plurality of associated remote sites to the cloud module.
  • Each remote site involves at least one computing device including, but not limited to, smartphones, tablets, laptops, single board computers, standalone devices, or media devices.
  • Such devices analyze, process, and transmit data to the cloud for further event site-specific processing and data transmission to the event site where remote audience reactions are transduced via sound, movement, and other media rendering sonic, visual, and kinetic projections as shown in FIG. 7.
  • Another aspect of the system realizes at least one or a plurality of event site module sub-systems that receive at least one reaction data stream channel including, but not limited to, reconstructed remote audience sound and movement data streamed to the event site module.
  • the data stream can then be processed, amplified, and projected at the event site via the audience reaction projection system, comprising at least one sound projection or visual projection sub-system that transduces audience reactions to sonic, visual, and kinetic projections or a combination thereof as shown in FIG. 9.
  • audience reaction data streams generated at the remote site modules are transmitted and projected via the reaction projection system at the event site, according to virtual-to-physical onsite seating assignments or virtual-to-virtual seating assignments or a combination of virtual and physical seating assignments as shown in FIG. 10.
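
A minimal sketch of such an audience-to-seat assignment follows, assuming a simple ticket-to-seat table; the identifiers and data shapes are hypothetical, not the patent's schema.

```python
# Illustrative virtual-to-physical / virtual-to-virtual seat assignment:
# each remote ticket maps either to a physical seat at the event site or
# to a purely virtual placement.
from dataclasses import dataclass

@dataclass(frozen=True)
class SeatAssignment:
    ticket_id: str
    seat: str       # e.g. "section12/row4/seat7" or "virtual/overlay3"
    physical: bool  # False for purely virtual placements

ASSIGNMENTS = {
    "T-1001": SeatAssignment("T-1001", "section12/row4/seat7", physical=True),
    "T-1002": SeatAssignment("T-1002", "virtual/overlay3", physical=False),
}

def route_reaction(ticket_id: str, chunk: bytes) -> tuple[str, bytes]:
    """Tag a reaction-stream chunk with its projection target at the event site."""
    return ASSIGNMENTS[ticket_id].seat, chunk
```
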
  • Another aspect of the current invention realizes systems and methods for synthesizing captured remote site environmental multi-mediums including, but not limited to, sound and movement, where the remote site-specific mediums that are rendered are at least one of: raw, partially, or entirely synthesized; partially filtered, fully filtered, or unfiltered; partially or fully indexically encoded; partially or fully obfuscated, or rendered without obfuscation; non-amplified, partially or fully amplified; or a combination of two or more of the aforementioned processes and methods as shown in FIG. 11 and FIG. 12.
  • Another aspect of the system realizes an audience sentiment analysis module to analyze audience sentiment states as a function of at least one of the following: vocalizations, word classification and analytics, voice dynamic range and spectral characteristics, voice fundamental frequency estimation, ambient temperature, skin temperature, ambient humidity, and movement/gesture characteristics and changes as shown in FIG. 13.
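
The sketch below computes two of the listed sentiment inputs - voice dynamic range and an autocorrelation-based fundamental frequency estimate. The downstream classifier (the AI block of FIG. 13) is deliberately left open here, as the patent does not prescribe one.

```python
# Hedged sketch of two sentiment-analysis features named above; the
# percentile spread and autocorrelation method are illustrative choices.
import numpy as np

def dynamic_range_db(signal: np.ndarray) -> float:
    """Spread between loud and quiet portions of a vocalization, in dB."""
    hi, lo = np.percentile(np.abs(signal), [99, 10]) + 1e-9
    return float(20.0 * np.log10(hi / lo))

def estimate_f0(signal: np.ndarray, sr: int, fmin=80.0, fmax=400.0) -> float:
    """Crude fundamental frequency estimate via the autocorrelation peak."""
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo_lag, hi_lag = int(sr / fmax), int(sr / fmin)
    lag = lo_lag + int(np.argmax(corr[lo_lag:hi_lag]))
    return sr / lag
```
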
  • the audience reaction data streams rendered between the remote site module, cloud module, and event site module comprise at least one remote site audience reaction data stream or a plurality of audience reaction data streams (FIG. 4).
  • These can be, for example, voice medium data streams from additional remote sites as shown in FIG. 10.
  • This combined voice data stream is rendered as a resynthesized model of the changing nature of remote audience sound reactions that is then projected at the event site - e.g. a sports game, political gathering, classroom or lecture setting.
  • non-voice data streams from one or more remote sites are rendered as resynthesized models of non-sound reactions (such as gestures) and projected at the event site.
  • the event site audience reaction visual projection system is based on flexible material, such as plastic, metal, fabric, or textile, that fully or partially covers a single seat or groups of seats, or is installed on other fixtures at the event site such as lamp posts and hand rails, rendering a remote audience reaction transducer module as shown in FIG. 15 and FIG. 16.
  • the material vibrates, changes visually, and changes in form and shape in response to at least one data stream channel, transducing remote audience reaction data streams to kinetic energy and resulting in shape changes of the flexible material.
  • the response can be scaled across the seating area projecting visual remote audience reaction at the event site - e.g. the cumulative roaring sound of a soccer match goal energizing a stadium sonically, graphically, and kinetically.
  • the remote audience reaction transducer module is based on an electrical coil system to transduce electrical energy to kinetic energy that is driven by remote audience reaction data streams (FIG. 15).
  • the remote audience reaction transducer module is based on an electrical coil system driven by amplified sound as described in [0018] to [0020], where the drive signal is a sinusoid (1702) with carrier frequency fc modulated (1703) with one or more modulator signals (1701) selected from the multi-medium audience reaction data stream.
  • the modulator signal is at least one of sound, gesture and movement, temperature, humidity, or sentiment (1701) as shown in FIG. 17 where transduced reactions are projected to event site (1704).
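
Read as amplitude modulation, the FIG. 17 signal chain can be sketched as follows: a sinusoidal carrier at fc (1702) is multiplied (1703) by a slowly varying reaction signal (1701), and the product drives the coil (1704). The carrier frequency, sample rate, and synthetic modulator below are illustrative values, not specified in the patent.

```python
# Amplitude modulation sketch of the coil drive signal from FIG. 17.
import numpy as np

sr = 8000                # drive-signal sample rate (assumed)
t = np.arange(sr) / sr   # one second of samples
fc = 60.0                # carrier frequency fc in Hz (assumed)

modulator = 0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)  # stand-in reaction stream (1701)
carrier = np.sin(2 * np.pi * fc * t)                 # sinusoidal carrier (1702)
drive = modulator * carrier                          # modulation (1703) -> coil (1704)
```
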
  • the remote audience reaction transducer module is based on a configurable flag, banner, projection screen, balloon, mannequin, or other lightweight material attached to a pole. This installation is modulated by the remote audience reaction data stream, which, in one example, is audience sound or audience movement reactions that are projected according to a specific audience-to-seat matrix.
  • each seat is configured with the remote audience reaction transducer module that is not driven by an electrical signal but rather generates an electrical signal when contact is made with an object or configured with another sensor to detect object and surface contact - transduction of kinetic to electrical energy.
  • This embodiment detects objects: for example, a baseball landing on seats after a home run is struck, whereby the seat associated with the remote audience member is mailed the baseball as a souvenir and/or an alert is sent, along with other credits, to the appropriate recipient or recipients electronically.
  • audience ambiance streams are generated partially or fully and are dynamically projected at the event site to minimize "shapes" or "silhouettes" of sound, movement, or gesture discontinuities during unexpected, awkward, and unwanted silences, while enabling smooth maintenance of ambience to bring cyber-physical event experiencing as close as possible to physical event experiencing. In the case of a network discontinuity, as shown in FIG. 14, an original frame (1401) that experiences dropout (1402) is replaced with a synthesized ambience multi-medium stream (1403).
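
A minimal concealment sketch of the FIG. 14 idea, assuming dropped frames arrive as None and a pre-synthesized ambience buffer is available to loop through; the frame size, ambience source, and lack of crossfading are simplifying assumptions.

```python
# When a network frame (1401) drops out (1402), substitute a frame from a
# synthesized ambience buffer (1403) so the projected crowd never falls silent.
import numpy as np

FRAME = 1024
ambience = 0.05 * np.random.default_rng(1).standard_normal(FRAME * 100)
pos = 0

def conceal(frame: np.ndarray | None) -> np.ndarray:
    """Return the received frame, or looped ambience if it was dropped."""
    global pos
    if frame is not None:
        return frame
    out = ambience[pos:pos + FRAME]
    pos = (pos + FRAME) % (len(ambience) - FRAME)
    return out
```
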
  • music performed at the event site by the musicians is projected at the event site as per standard practices. At the same time, it is analyzed, decomposed, and simultaneously transmitted to remote site modules whereby only high-level data such as note number, velocity, duration, and instrument type is sent.
  • the music is resynthesized instantaneously, either in aggregate as one audio stream or disaggregated and modulated according to the location of each assigned seat of the remote audience member or members, or based on any chosen location at the event site, enabling synchronized remote audience interaction at all remote and event sites, such as chanting, singing along, or clapping in synchrony beyond space and time zones as shown in FIG. 28.
  • the instantaneous receipt of the music performed by the musician enables the collective and synchronized reaction of remote audience members during or at the end of a musical phrase where in one example, the audience collectively punctuates with a loud scream in an interactive manner.
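
The high-level music stream described above resembles MIDI-style event data. Below is a hedged sketch of one such note event with a compact wire encoding; the field widths and struct layout are assumptions, not the patent's format.

```python
# One note event carrying only the high-level fields named in the text:
# note number, velocity, duration, and instrument type.
import struct
from dataclasses import dataclass

@dataclass
class NoteEvent:
    note: int         # MIDI-style note number, 0-127
    velocity: int     # 0-127
    duration_ms: int  # 0-65535
    instrument: int   # e.g. a General MIDI program number

    def pack(self) -> bytes:
        # 1 + 1 + 2 + 1 = 5 bytes per note versus kilobytes of raw audio
        return struct.pack("<BBHB", self.note, self.velocity,
                           self.duration_ms, self.instrument)

    @classmethod
    def unpack(cls, data: bytes) -> "NoteEvent":
        return cls(*struct.unpack("<BBHB", data))

event = NoteEvent(note=60, velocity=100, duration_ms=250, instrument=1)
assert NoteEvent.unpack(event.pack()) == event
```
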
  • the remote site audience member, or a plurality of remote audience members, can select virtual or physical seats according to (1) proximity to specific groups or individuals (e.g. team A fans), (2) physical location at the event site, and (3) virtual location at the event site that is not part of its physical seating layout, which will result in sonic and/or kinetic impact and contribution of the remote audience's reaction projection to the site.
  • An additional aspect of the current invention includes seat pinpointing methods and systems utilizing spectra outside visible light to uniquely tag each seat, which in turn is used to project reactionary audiovisual outputs with seat-level precision.
  • each seat is tagged with paint, for example, invisible to the naked eye but visible to sensors such as infrared cameras.
  • cameras and visual projectors are automatically calibrated to pinpoint each seat in the event space such as a stadium, and in turn, controlled according to audience reaction associated with a seat-assigned ticketholder, or a plurality of ticket holders assigned to more than one seat.
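
An illustrative calibration sketch, assuming infrared tag centroids have already been detected in camera coordinates and roughly registered to the seating plan; a production system would likely estimate a full camera-to-plan homography first.

```python
# Match each planned seat position to the nearest detected IR tag so that
# projectors can later address individual seats.
import numpy as np

SEAT_IDS = ["A1", "A2", "B1", "B2"]
seat_plan = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def calibrate(detected: np.ndarray) -> dict[str, np.ndarray]:
    """Assign each planned seat the closest detected tag centroid."""
    mapping = {}
    for seat_id, planned in zip(SEAT_IDS, seat_plan):
        dists = np.linalg.norm(detected - planned, axis=1)
        mapping[seat_id] = detected[int(np.argmin(dists))]
    return mapping

# Example: tags detected slightly off their planned positions.
tags = np.array([[0.05, 0.02], [0.98, 0.01], [0.03, 1.04], [1.02, 0.97]])
print(calibrate(tags))
```
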
  • This system provides an additional method and design to dynamically project audience reactions driven by remote reaction data streams or by the event site automatic audience reaction analysis system driven by capturing the ebb-and-flow of sports events, for example.
  • remote audiences and event site audiences in a particular location are jointly rendered in the event site location, enabling joint or independent real-time responses of both event site and remote audiences, allowing, for example, independent interaction of event moderators with on-premise and remote audiences.
  • Remote audiences can choose or be assigned virtual seats corresponding to their affinity preference, such as being co-located with audience members of similar fan attitude. The co-location of event site audience and remote audience by affinity preference is thus reflected in the event site audio-visual projections.
  • a custom remote site system is configured at a designated space at a remote site where at least one and optionally more members are congregated within the designated space to engage and react to a broadcasted program at an event site.
  • the remote site module is configured to identify virtual ticket holders verified via near-field communication systems, for example Bluetooth, wherein a member's MAC address that is broadcast over Bluetooth is utilized to register and enable participation at the remote venue.
  • a user’s ticket is valid for a set period of time as determined by the ticket vendor.
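
A minimal sketch of this check-in flow, assuming a ticket table keyed by broadcast MAC address with a per-ticket validity window; the schema and the MAC-capture step are assumptions.

```python
# Admit a member to the remote venue only while their ticket is current.
from datetime import datetime, timezone

TICKETS = {
    "DC:A6:32:01:02:03": (datetime(2020, 5, 23, 18, tzinfo=timezone.utc),
                          datetime(2020, 5, 23, 23, tzinfo=timezone.utc)),
}

def admit(mac: str, now: datetime | None = None) -> bool:
    """True if the broadcast MAC maps to a ticket valid at time `now`."""
    now = now or datetime.now(timezone.utc)
    window = TICKETS.get(mac.upper())
    return window is not None and window[0] <= now <= window[1]
```
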
  • the designated space is internally configured to include at least one of a microphone, video camera, pressure-sensing floor mat, or other sensor to capture audience reaction inside the designated space.
  • the example room is additionally configured with at least one microphone and camera external to the designated space to remove reactions originating outside the designated space from the audio reaction data stream generated at the remote site module.
  • the remote space’s walls are configured whereby the inner space surfaces are projected with virtual audience reactions (1) spatially organized according to virtual seating assignments and (2) audio-visually driven by remote audience reaction data streams from at least one or a plurality of data streams.
  • These remote audience reaction data streams can be modulated such that the reaction experience reflects the attitude composition of the audience at the remote site, the attitude composition at the event site, or any combination of (e.g. team-fan) attitudes not necessarily present in either location.
  • FIG. 1 illustrates a robust full stadium with audience and performers.
  • FIG. 2 illustrates an empty stadium with only performers and virtual seat assignments above/behind regular stadium seats (indicated by boxes with rounded corners).
  • FIG. 3 illustrates an example of the invention’s remote audience reaction projection system that projects remote audience reactions.
  • FIG. 4 illustrates audiences remotely participating in an event from around the world.
  • FIG. 5 illustrates a simplified overview of the invention’s main modules.
  • FIG. 6 illustrates a more detailed summary of the IRAPS system.
  • FIG. 7 illustrates a summary of the remote site module.
  • FIG. 8 illustrates a summary of the cloud module.
  • FIG. 9 illustrates a summary of the event site module and the multi-medium remote audience reaction projection system.
  • FIG. 10 illustrates remote audiences’ seat assignment to physical and virtual seats.
  • FIG. 11 illustrates obfuscation of vocalizations of the word hello.
  • FIG. 12 illustrates obfuscation of a hand gesture.
  • FIG. 13 illustrates the sentiment analysis block using AI.
  • FIG. 14 illustrates network dropout and automatic audience ambiance generation.
  • FIG. 15 illustrates the remote audience reaction transducer and electric coil core.
  • FIG. 16 illustrates the remote audience reaction transducer system attached to seats at an event site.
  • FIG. 17 illustrates the remote audience reaction transducer system projection of non-sound based audience reactions into the event site.
  • FIG. 18 illustrates the remote audience reaction transducer system configured to a seat.
  • FIG. 19 illustrates the pole type remote audience reaction transducer system.
  • FIG. 20 illustrates the pole type remote audience reaction transducer system configured to seats.
  • FIG. 21 illustrates the remote audience reaction transducer system with transducer stabilizer and offline/online switch on foldable seats, in side as well as front views.
  • FIGS. 22-25 illustrate the remote audience reaction transducer system that is retractable and expandable.
  • FIG. 26 illustrates the remote audience reaction transducer system set up in a banner configuration.
  • FIG. 27 illustrates the remote audience reaction transducer system installed on a window.
  • FIG. 28 illustrates the music performance capture system.

DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 6 illustrates a summary of the four main modules operating in concert: (1) the remote site module: audience reaction analysis and synthesis; (2) the cloud module: spatialization, data stream processing, and management; (3) the event site module: spatio-temporal multi-medium remote audience reaction projection; and (4) the broadcasting infrastructure that broadcasts media content across a diversity of web-based and cable-based network receivers such as televisions, computers, or other devices.
  • the remote site module system in part operates as an edge compute station to analyze remote audience reactions and consists of at least two sub-modules: (1) a multi-medium analysis module and (2) a multi-medium synthesis and modulation module rendering low-latency audience reaction data streams.
  • the analysis module analyzes remote site environmental signals including voice, clapping, jumping, hand-waving, perspiration, and other types of common audience reactions.
  • the target medium is synthesized via at least one of the following units: (a) obfuscation unit, (b) sentiment analysis unit, (c) automatic medium classification unit, and (d) signal processing unit. These units are then used alone or in combination as prescribed through system settings for specific medium types including, but not limited to, sound, gesture, movement, temperature, humidity, and sentiment as shown in FIG. 7.
  • vocalizations are subject to sound obfuscation or "sound blurring" whereby words and sentences are obfuscated and made unintelligible while simultaneously preserving the sonic and temporal characteristics of the vocalization streams.
  • This example realizes preservation of the flow of audience sound feedback while masking vocalization meaning. It renders a continuous encoded audio stream that is sonically reflective of the vocalization but removes word articulation structures, addressing inadvertent transmission of "problematic" words - e.g. obscenities or private and sensitive information - via spectro-temporal signal processing and modulation, as shown in FIG. 11.
  • the obfuscation ratio is optionally controlled, allowing anything from 100% medium obfuscation, where audio is maximally obfuscated, down to 0%, where the medium is minimally obfuscated or no obfuscation is applied.
  • the input voice is analyzed for its closest timbral match from a significantly large indexed voice sound building block template set - for example, from 0 to 9999.
  • the remote site module does not transmit any audio data per se but rather transmits an index (between 0-9999 for example) of the corresponding closest sound to the cloud or directly to the event site module where the voice sound is reconstructed with extremely small latency as only index numbers representing the audio rather than raw audio signals are transmitted.
  • the voice is then reconstructed using the index value at the event site. This method carefully considers the need to preserve the sound characteristics while at the same time considering the masking effect that will render the resulting audio unintelligible and non-transcribable.
  • Each building block template index as outlined in [0068] and [0069] consists of a particular sound building block template that can be combined, sequenced, and juxtaposed to resynthesize a close approximation of the original sound to render extreme low bandwidth data transmission.
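
A hedged sketch of this index-based encoding: the sender finds the nearest entry in a bank shared with the receiver and transmits only its index; the receiver reconstructs from the same bank. Plain Euclidean distance on spectra and a random 10,000-entry bank stand in for the patent's indexed voice building block templates.

```python
# Transmit a single index (0-9999) per frame instead of raw audio.
import numpy as np

rng = np.random.default_rng(2)
TEMPLATES = rng.standard_normal((10000, 257))  # spectral bank shared by both sides

def encode(frame_spectrum: np.ndarray) -> int:
    """Sender side: index of the closest template."""
    return int(np.argmin(np.linalg.norm(TEMPLATES - frame_spectrum, axis=1)))

def decode(index: int) -> np.ndarray:
    """Receiver side: reconstruct an approximation from the shared bank."""
    return TEMPLATES[index]
```
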
  • voice medium is subject to the voice classification unit where sensitive words - obscenities, for example - are automatically detected and replaced by obfuscated versions using either spectro-temporal signal processing and modulation or building block template reconstruction.
  • the voice medium is subject to a voice classification unit, where the system analyzes vocalizations and only transmits data streams that are non-textual vocal expressions such as ooh, ah, ugh, etc.
  • the system analyzes and recognizes non-vocal sounds such as clapping, stomping, table tapping, drumming sounds, horn sounds, and the like that are audience reactionary responses commonly heard during sports events.
  • the remote audience sound environment is subject to automatic voice classification where the remote site module only reacts to a specific audience member, namely a member with a ticket and assigned seat of a specific event.
  • remote audience medium gesture and motion is obfuscated and processed using image distortion techniques as shown in FIG. 12.
  • the methods as outlined in [0068] to [0070] are rendered with movement, where the building block templates represent physical gestures and movements.
  • sensors such as cameras, motion detection sensors, or game controllers commonly found in homes can capture movement and gesture.
  • one or a plurality of remote site audience medium data streams - for example, audience sound reactions - are first processed at the remote site module and then further processed at the cloud module to render at least one or more reaction data streams that audio-visually fill the event site through the audience reaction projection system, utilizing custom and commonly available event site infrastructure including, but not limited to, sound reinforcement systems, audio channel/loudspeaker configurations, and seats.
  • the cloud module system generates audio data streams that are sono-spatially placed within two or a plurality of audio channels, which in turn are projected via the event site module.
  • the sono-spatial processing is conducted with metadata specific to each event site’s specifications in order to spatially place audience sound according to their ticketed seating assignments or other spatial placement configuration that is predetermined or dynamically adjusted during the event.
  • the encoded audio stream rendered at the remote site module is transmitted to the cloud for dynamic and multichannel processing considering (1) one or a plurality of remote site audio encoded streams and (2) specifications of the event space as a function of at least seating arrangement, channel and loudspeaker configuration, and event space dimensions as shown in FIG. 8.
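
One simple way to realize the sono-spatial placement described above is to derive per-loudspeaker gains from the assigned seat position and the venue's loudspeaker layout (both part of the event site metadata). The inverse-distance gain law below is an illustrative choice, not the patent's prescribed spatialization.

```python
# Per-channel gains so a remote stream sounds as if it originates from its
# assigned seat: louder in nearer loudspeakers, quieter in farther ones.
import numpy as np

speakers = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 30.0], [50.0, 30.0]])

def channel_gains(seat_xy: np.ndarray) -> np.ndarray:
    d = np.linalg.norm(speakers - seat_xy, axis=1) + 1.0
    g = 1.0 / d
    return g / np.linalg.norm(g)  # normalize to preserve overall power

print(channel_gains(np.array([40.0, 10.0])))  # weighted toward the near corner
```
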
  • the cloud module is distributed between at least two cloud instances where remote audience data is divided and managed between at least two cloud instances. This mitigates data interruptions between cloud and event sites.
  • audience reaction data streams are received by the event site module and are projected to the event site venue via the audience reaction projection system consisting of (1) audience reaction sound projection and (2) audience reaction visualization projection modules where sonic, visual, and kinetic projection or a combination thereof is enabled as shown in FIG. 9.
  • the remote audience reaction data streams received by the event site module are processed and filtered and distributed via the remote audience reaction projection system to (1) a specific seat corresponding to the remote site audience ticket assignment, (2) groups of seats corresponding to seat configuration and seat assignment with associated medium channels, or (3) other event site fixtures such as hand rails and light posts.
  • the audience reaction data stream is a sound medium and is projected via the audience reaction sound projection system.
  • the remote audience sound reactions received and controlled by the event site module are projected through the event site sound reinforcement system reflecting in part or fully the desired virtual sono-spatial location at the event site.
  • the event site module further employs audio processing to augment spatial characteristics of the outputted audio signal to the event site where, for example, the multichannel audio rendered at the cloud module is projected at the event site according to seating associations of the remote audience member via the event site sound projection system.
  • the remote reaction data stream received at the event site module is of multi-medium type, including at least one medium that is not sound, such as gesture, movement, temperature, humidity, electrodermal measurements, or sentiment.
  • the remote audience non-sound reactions are projected and mapped via the event site’s remote audience reaction sound and visual projection systems, which are combined and configured with respect to ticket and corresponding event site seating assignments.
  • the audience reaction transducer system enables audience reaction visual projection implementation driven by either sound or other mediums such as movement, gesture, temperature, humidity, or sentiment.
  • the audience reaction transducer system comprises single or multiple arrangements of transducer electrical coils configured on surfaces including fabrics and textile materials, enabling the transduction of audience reaction data from electrical energy to kinetic energy resulting in visual and physical changes (FIG. 15).
  • the audience reaction transducer system is excited by remote audience data streams of non-sound type, providing visual remote audience feedback to the event site.
  • the audience reaction transducer system in [0087] is associated with seating assignments wherein audience data streams are projected to one or a group of associated seats in the event space (FIG. 16).
  • the remote audience reaction data stream is filtered and amplified to maximize transduction of electrical energy to kinetic energy and shape disfiguration via the audience reaction transducer system, where, for example, filtering such as a low-pass filter is configured to maximally induce material disfiguration correlating to the audience reaction energy.
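
A minimal sketch of this conditioning chain using a standard Butterworth low-pass filter; the cutoff, filter order, and gain are illustrative values, not specified in the patent.

```python
# Keep the low-frequency energy that visibly moves fabric, then amplify.
import numpy as np
from scipy.signal import butter, lfilter

def condition(stream: np.ndarray, sr: int, cutoff=40.0, gain=8.0) -> np.ndarray:
    """Low-pass filter a reaction data stream and scale it for the coil."""
    b, a = butter(4, cutoff / (sr / 2), btype="low")
    return gain * lfilter(b, a, stream)
```
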
  • the material is treated via color variations, fabric texture variants, photoluminescent components and shapes that dynamically and visually reflect varied audience reactions captured from one or more remote sites.
  • This embodiment models visual dynamicity and diversity of the audiences from a visual perspective as commonly experienced on television or live media event programming situations and is driven by the audience reaction transducer system.
  • the audience reaction transducer system is treated with thermochromic ink whose color modulates in accordance with remote audience environmental medium changes, which in turn change the color of the seat, seating area, or other elements of the event site.
  • the audience reaction transducer system, treated with color-changing features, is programmable according to a group's, groups', or individual's color requests, which in part can be used to form teams or to project messages to event viewers, and is driven by the gesture transducer system.
  • the audience reaction transducer system includes a securing mechanism with a fastener or a hook system to attach to seats, with an added stabilizing connector to secure the transducer against unwanted displacement (e.g. to withstand gusts of wind) while enabling maximal flexibility for changing shape and vibration as shown in FIG. 18.
  • a pole is configured on a seat or a plurality of seats or placed elsewhere at the event site and is wound in part, or fully, with electrical coil for transduction as shown in FIG. 19 and FIG. 20.
  • the audience reaction transducer system is an attachment to folding seats, wherein the audience reaction transducer system is attached to the bottom surface of a seat and secured at both the upper and lower sides of the seat, where for the bottom side a stabilizer (2103) is included to (1) keep the transducer system from dropping when the seat is unfolded (2101) and (2) keep the transducer system in place when the seat is folded in its upright position (2102), while allowing flexibility and margins for the material to change shape when driven by audience reaction data streams as shown in FIG. 21.
  • In another aspect of [0097], the system is configured to turn off when the seat is used by a physical audience member and the switch is not pressed (2104), and to turn on (2101) when chairs are not being used and are in the upright position with the switch pressed, as shown in 2106.
  • a retractable and expandable system controls the height of the audience reaction transducer system to maximize visibility when fully extended as shown in FIG. 22.
  • the audience reaction transducer system (1) is attached to the bottom side of a folding seat in its retracted state (2301); (2) is extended (2303) when the seat is in its upright (folded) position; and (3) is further secured at the bottom side of the seat with a stabilizer (2304) such as, but not limited to, a line, rope, or elastic band, as shown in FIG. 23, with 2305 illustrating a front view of the system when extended.
  • In a variation of [0099], the extendable (2401)/retractable (2402) system is secured to the back of a seat or multiple seats rather than under a seat, with an optional turn (2405) on (2403)/off (2404) switching system as shown in FIG. 24, where turning the transducer towards or away from the stage area automatically turns the system on or off.
  • the extendable/retractable system is wound partially, or fully, with electrical coil for transduction, where objects such as balloons, cardboard, lighting, and mannequins can be attached to be driven and put in motion via the remote audience data streams, transducing electrical energy to kinetic energy as shown in FIG. 25.
  • the audience reaction transducer system is attached to at least two extendable/retractable units, where, for example, two seats next to each other, or seats with at least one or more seats in between, result in a "banner" configuration that spans across the seats between the outlier seats as shown in FIG. 26.
  • seat surfaces including foldable seat surfaces are configured to act as soundboards to transduce cumulative remote site sound characteristics with seat level spatial sound projection accuracy.
  • the audience reaction transducer system is reusable, detachable, and re- attachable to other objects such as windows at the event site, for example, that are treated with paint and other materials that respond to remote audience reaction data streams as shown in FIG. 27.
  • each seat is configured with the audience reaction transducer system or alternatively with another sensor to detect solid objects such as a baseball landing on a seat after a home run is struck, whereby, for example, the seat associated with the remote audience member is mailed the baseball as a souvenir.
  • the system determines baseball-to-seat assignment by selecting the seat that last triggers object detection. For example, if a bouncing ball triggers seats 4, 10, and 12, one of seats 4, 10, or 12 and its associated remote client will be the recipient of the ball or associated credits, as chosen by the event organizers.
  • the event site module consists of an automatic audience reaction synthesizer system that is automatically or manually enabled during temporary network blackout windows, for example.
  • the dynamic audience synthesizer is triggered in accordance with highlight events such as real-time score changes, penalties, and real-time event commentaries, and likewise according to non-highlight events such as transitions between points and breaks between play.
  • the audience sound that is projected is dynamically reconstructed using prior audience reaction medium types including but not limited to sound, movement, gesture, and sentiment data types from the event in question or other similar events stored in the cloud and shared with event site modules.
  • audience reaction data triggers are automatically and dynamically generated through an event analysis system trained through machine learning from past and present audience audio, video and medium ground truth datasets that reflect the reaction of historical audience interactions as a function of sound and visuals from performers, other audience members, and the event space itself.
  • the audience reaction data is dynamically generated through an event analysis system trained on professional events such as major league sports events, concerts, or other high-end professional events, and is then scaled and projected to events of different types and sizes including, but not limited to, small-scale events such as school sports events, concerts, and theater performances.
  • the audience reactions are calibrated prior to the event start or during the event, where, in the case of pre-event calibration, "fans" supporting team A, then "fans" supporting team B, etc. are asked to express their support at their remote locations.
  • the system captures the audience reaction, analyzes and synthesizes it, and automatically generates various typical audience reactions driven by events at the performance including, but not limited to, score changes, penalties, breaks, and highlight events that are analyzed automatically in real-time.
  • the synthesized data at the event site is balanced with streamed remote audience reaction data streams and broadcast to viewers and audience live or at a later time.
  • This aspect will mitigate network and broadcasting delays dynamically in a full-duplex data transmission loop that pertains to live events such as football games.
  • the music performed by the musician is instantaneously shared with remote sites, where only musical note information including, but not limited to, note number, velocity, duration, and instrument type is transmitted to, and received by, remote site modules.
  • the musical metadata is used at the remote site to resynthesize the music via a sound synthesizer, minimizing latency, synchronicity, and audience reaction timing issues, and facilitating remote audience collaborative engagement - including chanting and singing - in real time, so that it can be projected back to the event site.

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an interactive remote audience projection system. The system enables the projection of remote audience reactions into event sites whereby, for example, a soccer stadium from which the audience is absent, but where all the players are present, is transformed sonically, visually, and kinetically into a living interactive stage between the audience and the players.
PCT/US2020/070074 2020-05-23 2020-05-23 Interactive remote audience projection system WO2021242325A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2020/070074 WO2021242325A1 (fr) 2020-05-23 2020-05-23 Interactive remote audience projection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2020/070074 WO2021242325A1 (fr) 2020-05-23 2020-05-23 Interactive remote audience projection system

Publications (1)

Publication Number Publication Date
WO2021242325A1 true WO2021242325A1 (fr) 2021-12-02

Family

ID=78745156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/070074 WO2021242325A1 (fr) 2020-05-23 2020-05-23 Interactive remote audience projection system

Country Status (1)

Country Link
WO (1) WO2021242325A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210377615A1 (en) * 2020-06-01 2021-12-02 Timothy DeWitt System and method for simulating a live audience at sporting events

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7121621B1 (en) * 2005-04-04 2006-10-17 Scot A Starheim Information placard holder for a stadium seat
US20090186759A1 (en) * 2008-01-22 2009-07-23 Ping-Kun Lin Thermochromic material
US20100187372A1 (en) * 2006-11-06 2010-07-29 Andrei Smirnov Monitor support apparatus
WO2011031932A1 (fr) * 2009-09-10 2011-03-17 Home Box Office, Inc. Management and analysis of multimedia content based on audience actions and reactions
US20120051579A1 (en) * 2003-03-10 2012-03-01 Cohen Daniel E Sound and Vibration Transmission Pad and System
US20140007147A1 (en) * 2012-06-27 2014-01-02 Glen J. Anderson Performance analysis for combining remote audience responses
US20150046824A1 (en) * 2013-06-16 2015-02-12 Jammit, Inc. Synchronized display and performance mapping of musical performances submitted from remote locations
US20150054727A1 (en) * 2013-08-23 2015-02-26 Immersion Corporation Haptically enabled viewing of sporting events
US20150070516A1 (en) * 2012-12-14 2015-03-12 Biscotti Inc. Automatic Content Filtering
EP2930671A1 (fr) * 2014-04-11 2015-10-14 Microsoft Technology Licensing, LLC Dynamic adaptation of a virtual venue
US20160198223A1 (en) * 2012-12-26 2016-07-07 Livingrid Ltd. A method and system for providing and managing a social platform that visualizes virtual crowd
US20190090020A1 (en) * 2017-09-19 2019-03-21 Sony Corporation Calibration system for audience response capture and analysis of media content
US20200045396A1 (en) * 2018-08-02 2020-02-06 Igt Electronic gaming machine and method with selectable sound beams


Similar Documents

Publication Publication Date Title
CN101803336B (zh) Method and system for selective audio modification of video
CN106465008B (zh) Terminal audio mixing system and playback method
JP2003533235A (ja) Virtual performance device and method
US9979766B2 (en) System and method for reproducing source information
Connelly Digital radio production
WO2021242325A1 (fr) Interactive remote audience projection system
Rossetti et al. Live Electronics, Audiovisual Compositions, and Telematic Performance: Collaborations During the Pandemic
WO2021246104A1 (fr) Control method and control system
Mulder Making things louder: Amplified music and multimodality
WO2022163137A1 (fr) Information processing device, information processing method, and program
Miller Indeterminacy and Performance Practice in Cage's "Variations"
JP6951610B1 (ja) Speech processing system, speech processing device, speech processing method, and speech processing program
WO2021131326A1 (fr) Information processing device, information processing method, and computer program
WO2021124680A1 (fr) Information processing device and information processing method
JP7442979B2 (ja) Karaoke system
US20220109911A1 (en) Method and apparatus for determining aggregate sentiments
US11696088B1 (en) Method and apparatus to generate a six dimensional audio dataset
WO2021157638A1 (fr) Server device, terminal equipment, simultaneous interpretation speech transmission method, multiplexed speech reception method, and recording medium
US20220264193A1 (en) Program production apparatus, program production method, and recording medium
Botteldooren et al. Modifying and co-creating the urban soundscape through digital technologies
Maejima et al. Automatic Mapping Media to Device Algorithm that Considers Affective Effect
Filimowicz An audiovisual colocation display system
Chantler No Such Array: Developing a material and practice for electronic music performance
WO2022026425A1 (fr) System and method for aggregating audiovisual content
Klein Is classical music ‘boring’? A discussion of fidelity, virtuosity and performance in classical music recording

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937963

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937963

Country of ref document: EP

Kind code of ref document: A1