US10410491B2 - Method to generate a sound illusion profile to replicate a quantity of resources - Google Patents


Publication number
US10410491B2
Authority
US
United States
Prior art keywords
incident
severity level
sound
profile
illusion
Prior art date
Legal status
Active
Application number
US15/848,185
Other versions
US20190188985A1 (en)
Inventor
Scott G. Potter
Arthur E. Petela
Steven Gilmore
Anthony M. Kakiel
Current Assignee
Motorola Solutions Inc
Original Assignee
Motorola Solutions Inc
Priority date
Filing date
Publication date
Application filed by Motorola Solutions Inc filed Critical Motorola Solutions Inc
Priority to US15/848,185
Assigned to MOTOROLA SOLUTIONS, INC. (assignors: POTTER, SCOTT G.; PETELA, ARTHUR E.; GILMORE, STEVEN; KAKIEL, ANTHONY M.)
Publication of US20190188985A1
Application granted
Publication of US10410491B2
Legal status: Active
Anticipated expiration

Classifications

    • G08B15/00: Identifying, scaring or incapacitating burglars, thieves or intruders, e.g. by explosives
    • G06Q50/26: ICT specially adapted for government or public services
    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; recognition of animal voices
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/63: Speech or voice analysis techniques specially adapted for estimating an emotional state
    • H04R3/12: Circuits for distributing signals to two or more loudspeakers
    • G01S19/17: Satellite radio beacon positioning receivers for emergency applications
    • G08B27/00: Alarm systems in which the alarm condition is signalled from a central station to a plurality of substations
    • G08B27/001: Signalling to an emergency team, e.g. firemen
    • H04R2420/07: Applications of wireless loudspeakers or wireless microphones

Definitions

  • An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, or “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element.
  • the terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein.
  • the terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%.
  • the term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically.
  • a device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
  • some embodiments may be comprised of one or more generic or specialized electronic processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein.
  • an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising an electronic processor) to perform a method as described and claimed herein.
  • Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory.


Abstract

A method is provided that generates a sound illusion profile to replicate a quantity of resources arriving at an incident scene. A virtual partner or the like determines that an incident has escalated from a first severity level to a second severity level. The virtual partner determines a quantity of resources needed to deescalate the incident to the first severity level. A sound illusion profile is generated to replicate a sound associated with the quantity of resources. A signal is sent to one or more speaker devices in a region surrounding the location of the incident to generate an output sound based on the sound illusion profile.

Description

BACKGROUND OF THE INVENTION
Situations often occur where a police officer arrives on a scene alone and enters a building to address a situation. If the suspect inside the building believes the officer is alone, the suspect may show a higher level of resistance than he would if he believed there were many other police at, or approaching, the scene. This can put the lone officer at a higher risk for attack and injury.
Unfortunately, it often takes many minutes for additional officers to arrive at the incident scene. While waiting for backup, the police officer is in a potentially perilous situation. The suspect can escalate tensions very quickly, especially if he thinks that the officer is alone and therefore vulnerable.
Therefore a need exists for a way to keep a police officer safe prior to backup arriving.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, which together with the detailed description below are incorporated in and form part of the specification and serve to further illustrate various embodiments of concepts that include the claimed invention, and to explain various principles and advantages of those embodiments.
FIG. 1 depicts an incident scene in accordance with an exemplary embodiment of the present invention.
FIG. 2 depicts a flow chart in accordance with an exemplary embodiment of the present invention.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE INVENTION
An exemplary embodiment of the present invention solves many of the problems associated with the prior art. For example, a single public safety officer can be sent to an incident scene and the single officer can create an illusion that more public safety officers are currently at or arriving shortly to the incident scene.
Further, a virtual partner device assesses an officer risk level and automatically invokes the illusion of additional arriving officers. In addition, the arriving officer sound illusion is preferably dynamically customized based on context aware information in order to make the sound illusion more realistic and believable. An exemplary embodiment can be better understood with reference to the detailed description of FIGS. 1 and 2 below.
FIG. 1 depicts an incident scene 100 in accordance with an exemplary embodiment of the present invention. Incident scene 100 preferably comprises building 101, public safety officer 102, virtual partner 103, public safety vehicle 104, person of interest 105, and speakers 111-113.
Building 101 is a structure such as a home or apartment building. In an exemplary embodiment, building 101 is a residence, but could alternately be any type of structure, such as a garage, a shed, or a warehouse.
Public safety officer 102 is a person tasked with protecting public safety. In accordance with an exemplary embodiment, public safety officer 102 is a police officer, but public safety officer 102 can be a paramedic, an EMT, a firefighter, or any other person who could use safety and support while conducting public services in a dangerous situation. An exemplary embodiment also could provide additional support and protection for private citizens as well, such as security guards or other personnel.
Virtual partner 103 is a virtual personal assistant that assists public safety officer 102. Virtual partner 103 works with public safety officer 102 and communication devices used by public safety officer 102. Virtual partner 103 can also use services such as GPS, access county records, and assess the environment in and about incident scene 100. This helps virtual partner 103 customize the sound profile and make the sound profile sound as accurate and realistic as possible. Virtual partner 103 also preferably accesses servers and websites to determine, for example, weather such as precipitation and temperature to help in customizing the sound profile.
In accordance with an exemplary embodiment, virtual partner 103 is constantly assessing the officer situation and risk level by monitoring cameras and ambient sounds. The virtual partner preferably has the ability to automatically invoke the approaching officer sound illusion if it is determined that it may deescalate a situation.
Public safety vehicle 104 is a police car or other vehicle used by public safety officers. Public safety vehicle 104 is preferably equipped with sirens and speakers capable of playing different sound profiles. These sound profiles can include, but are not limited to, a standard car siren, the sound of one or more approaching car sirens, the sound of cars screeching to a stop on pavement or a dirt road, the sound of car doors opening and closing, the sound of police talking and shouting commands, and the sound of footsteps on various surfaces.
Person of interest 105 is a person that public safety officer 102 is interacting with. Person of interest 105 may be a suspect in a criminal investigation, a person that public safety officer 102 has encountered, a witness to a suspected crime, or a friend or relative of a criminal suspect.
Speakers 111-113 provide audio amplification of received signals. In an exemplary embodiment, the signals are received from virtual partner 103. Speakers 111-113 can be located in vehicles, be portable device speakers, or can be wireless speakers. Speakers 111-113 can be synced with each other to make a seamless sound illusion while utilizing each speaker's spatial location with respect to other speakers.
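The spatial synchronization described above can be illustrated with a short sketch. The function below is purely illustrative and not part of the disclosure: the simple inverse-distance gain model and speed-of-sound delay are assumptions. It computes a relative delay and gain per speaker so that a shared sound appears to originate near a chosen virtual source position.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 C


def spatial_playback_plan(speaker_positions, source_position):
    """For each speaker, compute a (delay_s, gain) pair so that a sound
    played across all speakers appears to originate near source_position.

    Speakers closer to the virtual source play sooner and louder; farther
    ones are delayed and attenuated, mimicking acoustic propagation.
    """
    distances = [math.dist(p, source_position) for p in speaker_positions]
    nearest = min(distances)
    plan = []
    for d in distances:
        delay = (d - nearest) / SPEED_OF_SOUND_M_S  # relative arrival time
        gain = (nearest / d) if d > 0 else 1.0      # inverse-distance falloff
        plan.append((delay, gain))
    return plan
```

For example, with speakers at (0, 0) and (10, 0) and a virtual source at (2, 0), the nearer speaker plays immediately at full gain while the farther one is delayed by about 17 ms and attenuated to a quarter of the level.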
FIG. 2 depicts a flow chart 200 in accordance with an exemplary embodiment of the present invention. In accordance with this exemplary embodiment, a single public safety officer 102 has exited his car 104 without turning on the siren, for example when responding to a domestic disturbance call inside an apartment building 101. As public safety officer 102 enters building 101, his virtual partner device 103 is collecting information on the environment, including facts such as that the stairs are carpeted and the hallways have wood floors. In one exemplary embodiment, public safety officer 102 decides to place a wireless speaker unit, such as wireless speaker 111, at the top of the interior stairs, “just in case”.
While at incident scene 100, at some point virtual partner 103 determines (201) whether an incident has escalated from a first severity level to a second severity level. If not, the process returns to step 201 to continue to monitor the situation.
If virtual partner 103 determines that an incident is escalating from a first severity level to a second severity level, the process continues. As an example of determining that the incident is escalating, public safety officer 102 may arrive at incident scene 100 and engage person of interest 105 with questions. Person of interest 105 is becoming belligerent and the situation is slowly escalating from a first severity level to a second severity level. Virtual partner 103 has been monitoring the situation and based on input to virtual partner 103 a decision is made by virtual partner 103 that this situation would benefit from a sound illusion of approaching additional public safety officers.
In a first exemplary embodiment, the determination that the incident has escalated is based upon sounds heard at the incident. The sounds can include, without limitation, a tone of voice, predetermined spoken words, or the sound of a weapon being drawn. In a further exemplary embodiment, escalation can be determined by an assessment of the number of distinct voices. Still further, escalation can be determined from video collected by virtual partner 103 at incident scene 100. The video analysis can use, for example, suspicious facial expressions or video detection of weapons.
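A minimal sketch of this kind of multi-cue escalation assessment might look as follows. The keyword list, weights, and threshold are hypothetical choices made for illustration; the disclosure only names the categories of input (tone of voice, predetermined words, weapon sounds, and the number of distinct voices).

```python
# Hypothetical escalation keywords; a deployed system would use a
# curated, predetermined phrase list.
ESCALATION_KEYWORDS = {"back off", "leave now", "i have a gun"}


def assess_severity(transcript: str, voice_count: int,
                    weapon_sound_detected: bool, raised_voice: bool) -> int:
    """Return severity 1 (baseline) or 2 (escalated) from audio-derived cues."""
    score = 0
    text = transcript.lower()
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        score += 2  # predetermined words spoken
    if weapon_sound_detected:
        score += 3  # sound of a weapon being drawn
    if raised_voice:
        score += 1  # escalated tone of voice
    if voice_count > 2:  # more distinct voices than officer plus subject
        score += 1
    return 2 if score >= 2 else 1
```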
Virtual partner 103 determines (203) a quantity of resources needed to deescalate the incident to the first severity level. The quantity of resources can be, for example, the number of responders needed to deescalate the incident to the first severity level. Alternately, the quantity of resources can be the number of vehicles needed to deescalate the incident to the first severity level or the number of weapons or equipment needed to deescalate the incident to the first severity level.
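As an illustrative sketch of step 203, the resource estimate could be a simple rule keyed to the severity gap. The specific rule (one responder per severity step per subject, one vehicle per two responders) and the field names are assumptions, not taken from the disclosure.

```python
def resources_needed(current_severity: int, target_severity: int,
                     subjects_on_scene: int) -> dict:
    """Estimate backup resources to replicate, using an illustrative rule:
    one extra responder per severity step per subject on scene, and one
    vehicle per two responders (rounded up)."""
    steps = max(current_severity - target_severity, 0)
    responders = steps * max(subjects_on_scene, 1)
    vehicles = -(-responders // 2)  # ceiling division
    return {"responders": responders, "vehicles": vehicles}
```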
Virtual partner 103 generates (205) a sound illusion profile to replicate a sound associated with the quantity of resources. The sound illusion profile can be generated using a variety of variables, such as the location of the incident or ambient noise parameters at the incident. The ambient parameters can include road conditions, such as whether the roads are paved or unpaved, or weather conditions.
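The context-aware profile assembly of step 205 could be sketched as a lookup over scene conditions. The clip identifiers below are hypothetical placeholders for recorded or synthesized sounds; the point is that road surface, weather, and flooring select matching variants so the illusion stays consistent with the environment.

```python
def build_sound_profile(vehicles: int, road: str, weather: str,
                        flooring: str) -> list:
    """Assemble an ordered list of clip names (hypothetical identifiers)
    matching the scene context: sirens and stops per arriving vehicle,
    then footsteps on the appropriate surface."""
    profile = []
    siren = "siren_approach"
    stop = "tires_gravel" if road == "unpaved" else "tires_pavement"
    if weather == "rain":
        stop += "_wet"  # wet-surface variant of the stopping sound
    for _ in range(vehicles):
        profile += [siren, stop, "door_open_close"]
    steps = "footsteps_carpet" if flooring == "carpet" else "footsteps_wood"
    profile.append(steps)
    return profile
```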
The sound illusion profile can be a conversation, preferably a computer-generated conversation. In this scenario, the computer-generated conversation includes, for example, conversations of police officers and a dispatcher dispatching officers to the incident scene.
In accordance with an exemplary embodiment, the computer-generated conversation includes a predetermined key word. This key word indicates to public safety officer 102 that this conversation is not an actual conversation between real-life officers, but rather a computer-generated conversation. This helps public safety officer 102 know whether the additional personnel and resources are actually on the scene or whether the chatter is intended to make person of interest 105 believe that additional personnel and resources are arriving at incident scene 100. Public safety officer 102 can interact with this computer-generated conversation, and virtual partner 103 will respond to the statements made by public safety officer 102.
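The predetermined key word mechanism can be sketched in a few lines. The key word itself and the message format are invented for illustration; only the idea of embedding an agreed marker in synthetic chatter comes from the disclosure.

```python
# Hypothetical predetermined key word known only to the officer.
SAFE_WORD = "lantern"


def make_dispatch_line(units: int) -> str:
    """Generate a synthetic dispatcher line embedding the key word so the
    officer can tell the chatter is computer generated."""
    return (f"Dispatch to all {SAFE_WORD} units: {units} cars en route "
            f"to your location, ETA two minutes.")


def is_synthetic(line: str) -> bool:
    """True if the line carries the predetermined key word marker."""
    return SAFE_WORD in line.lower()
```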
In one exemplary embodiment, virtual partner 103 selects and plays the sound of approaching sirens and can include car tires screeching to a halt, making person of interest 105 think that public safety officer 102 is receiving backup resources, thereby incentivizing person of interest 105 to deescalate the tension from a second severity level to a first, lower severity level. The sound profile can then continue with the sound of the opening and closing of doors and public safety officers exchanging instructions with each other. Sound profiles can be overlaid on each other in any combination to create an illusion such as several cars on scene with more approaching and police surrounding the building.
In accordance with an exemplary embodiment, virtual partner 103 can extend the sound profiles to play on other officers' portable communication devices. In this way, an individual supporting officer in a hallway could create a sound illusion of many more officers in the hallway.
Sound illusions can be extended onto a separate wireless speaker device that public safety officer 102 may place as he approaches a potential situation. For example, public safety officer 102 may proactively place a wireless speaker device in the hallway of an apartment building before knocking on a door in response to a call.
The ambient parameters can also include building structural conditions, such as type of flooring, for example hardwood floors or carpeted floors.
Virtual partner 103 sends (207) a signal to one or more speaker devices in a region surrounding the location of the incident.
In accordance with an exemplary embodiment, speakers 111, 112, and 113 generate (209) an output sound based on the sound illusion profile. If multiple public safety vehicles actually are on the scene, the car sound profiles are preferably synced with each other to further the illusion. For example, if the first public safety vehicle and the second public safety vehicle are on opposite sides of a building, the first public safety vehicle may initially play the sound of an approaching vehicle at a louder volume than the second public safety vehicle. This creates the illusion of the approaching car parking in between the first public safety vehicle and the second public safety vehicle. The approaching-vehicle speaker volumes are preferably slowly adjusted until they become equal in both the first public safety vehicle and the second public safety vehicle.
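The synchronized-volume aspect of step 209 can be sketched as a schedule that converges the two vehicles' speaker volumes. The step count, starting levels, and linear convergence to the midpoint are assumptions chosen for illustration; the patent requires only that the volumes end up equal:

```python
# Hypothetical sketch: vehicle A starts louder (nearer the "approach") and the
# two volumes converge linearly, suggesting a car parking between them.
def approach_volume_schedule(start_a: float, start_b: float,
                             steps: int) -> list[tuple[float, float]]:
    """Converge two speaker volumes to their common midpoint over `steps`."""
    target = (start_a + start_b) / 2.0
    schedule = []
    for k in range(1, steps + 1):
        frac = k / steps
        vol_a = start_a + (target - start_a) * frac
        vol_b = start_b + (target - start_b) * frac
        schedule.append((round(vol_a, 3), round(vol_b, 3)))
    return schedule

# Vehicle A begins at full volume, vehicle B at half volume.
schedule = approach_volume_schedule(1.0, 0.5, steps=5)
```

Driving both schedules from one clock (here, the shared `steps` index) is what keeps the two real vehicles synced; any timing skew between them would betray the illusion.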
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has,” “having,” “includes,” “including,” “contains,” “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more generic or specialized electronic processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising an electronic processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.

Claims (16)

We claim:
1. A method comprising:
determining that an incident has escalated from a first severity level to a second severity level;
determining a number of weapons needed to deescalate the incident to the first severity level;
generating a sound illusion profile to replicate a sound associated with the number of weapons; and
sending a signal to one or more speaker devices in a region surrounding the location of the incident to generate an output sound based on the sound illusion profile.
2. The method of claim 1, wherein the step of generating a sound illusion profile further comprises generating the sound illusion as a function of the location of the incident.
3. The method of claim 1, wherein the step of generating a sound illusion profile further comprises generating the sound illusion as a function of ambient parameters at the incident.
4. The method of claim 3, wherein the ambient parameters are road conditions.
5. The method of claim 3, wherein the ambient parameters are weather conditions.
6. The method of claim 3, wherein the ambient parameters are building structural conditions.
7. The method of claim 1, wherein the step of determining that an incident has escalated from a first severity level to a second severity level comprises determining that an incident has escalated from a first severity level to a second severity level based upon sounds heard at the incident.
8. The method of claim 7, wherein the sounds comprise a tone of voice.
9. The method of claim 7, wherein the sounds comprise predetermined words.
10. The method of claim 7, wherein the sounds comprise a detection of weapons being drawn.
11. The method of claim 1, wherein the step of determining that an incident has escalated from a first severity level to a second severity level comprises determining that an incident has escalated from a first severity level to a second severity level based upon an assessment of the number of distinct voices.
12. The method of claim 1, wherein the step of determining that an incident has escalated from a first severity level to a second severity level comprises determining that an incident has escalated from a first severity level to a second severity level based upon video collected at the incident.
13. The method of claim 12, wherein the video comprises suspicious facial expressions.
14. The method of claim 12, wherein the video comprises a detection of weapons.
15. A method comprising:
determining that an incident has escalated from a first severity level to a second severity level;
determining a quantity of resources needed to deescalate the incident to the first severity level;
generating a sound illusion profile to replicate a sound associated with the quantity of resources, wherein the sound illusion profile comprises computer-generated conversations; and
sending a signal to one or more speaker devices in a region surrounding the location of the incident to generate an output sound based on the sound illusion profile.
16. The method of claim 15, the method further comprising the step of inserting a key word into the computer-generated conversation, the key word indicating that the computer-generated conversation is the sound illusion profile.
US15/848,185 2017-12-20 2017-12-20 Method to generate a sound illusion profile to replicate a quantity of resources Active US10410491B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/848,185 US10410491B2 (en) 2017-12-20 2017-12-20 Method to generate a sound illusion profile to replicate a quantity of resources


Publications (2)

Publication Number Publication Date
US20190188985A1 US20190188985A1 (en) 2019-06-20
US10410491B2 true US10410491B2 (en) 2019-09-10

Family

ID=66813983

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/848,185 Active US10410491B2 (en) 2017-12-20 2017-12-20 Method to generate a sound illusion profile to replicate a quantity of resources

Country Status (1)

Country Link
US (1) US10410491B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11195398B1 (en) * 2019-04-17 2021-12-07 Kuna Systems Corporation Preventative and deterring security camera floodlight

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6577738B2 (en) 1996-07-17 2003-06-10 American Technology Corporation Parametric virtual speaker and surround-sound system
US8055245B2 (en) 2008-01-30 2011-11-08 Kyocera Corporation Mobile device with fake communication mode
US8630820B2 (en) 2009-08-24 2014-01-14 Strider, Inc. Methods and systems for threat assessment, safety management, and monitoring of individuals and groups
US9652975B1 (en) * 2014-08-01 2017-05-16 Thomas R. Riley Integrated building occupant protection system for persons and pets




Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: MOTOROLA SOLUTIONS, INC., ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:POTTER, SCOTT G.;PETELA, ARTHUR E.;GILMORE, STEVEN;AND OTHERS;SIGNING DATES FROM 20180108 TO 20180109;REEL/FRAME:044765/0305

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4