WO2017049574A1 - Facilitating smart voice routing for phone calls using incompatible operating systems at computing devices - Google Patents


Info

Publication number
WO2017049574A1
Authority
WO
WIPO (PCT)
Prior art keywords
call
external
audio
local
routing
Prior art date
Application number
PCT/CN2015/090680
Other languages
French (fr)
Inventor
Dujian WU
Xiaolan Huang
Yipeng YANG
Yun Liu
Edwin L. WANG
Original Assignee
Intel Corporation
Priority date
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to PCT/CN2015/090680 priority Critical patent/WO2017049574A1/en
Publication of WO2017049574A1 publication Critical patent/WO2017049574A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/6058Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/247Telephone sets including user guidance or feature selection means facilitating their use
    • H04M1/2473Telephone terminals interfacing a personal computer, e.g. using an API (Application Programming Interface)
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/02Details of telephonic subscriber devices including a Bluetooth interface

Definitions

  • Embodiments described herein generally relate to computers. More particularly, embodiments relate to facilitating smart voice routing for phone calls using incompatible operating systems at computing devices.
  • Figure 1 illustrates a computing device employing a smart audio-routing phone mechanism according to one embodiment.
  • Figure 2 illustrates a smart audio-routing phone mechanism according to one embodiment.
  • Figure 3B illustrates a transaction sequence for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
  • Figure 3C illustrates an audio codec module for facilitating audio routing for external device calls at computing devices that lack phone-capabilities according to one embodiment.
  • Figure 3D illustrates an audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
  • Figure 4 illustrates a method for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
  • Figure 5 illustrates computer environment suitable for implementing embodiments of the present disclosure according to one embodiment.
  • Figure 6 illustrates a method for facilitating dynamic targeting of users and communicating of messages according to one embodiment.
  • Embodiments provide for a novel technique for voice/audio routing for different phone calling scenarios for incompatible operating systems, such as those operating systems that do not provide native phone stack support.
  • A phone call voice path from a communication/broadband modem (e.g., a third generation (3G) modem, a fourth generation (4G) modem, etc.) may be routed to a computing device's input/output components, such as speaker and microphone, via an audio codec module ("audio codec" or simply "codec").
  • For a Bluetooth call, a phone call voice from a communication modem may be redirected twice, via the audio codec and an application processor (e.g., System on Chip (SoC)) separately, and finally routed to a Bluetooth headset.
  • Embodiments provide for a voice routing technique for facilitating circuit-switched phone calls on computing devices that may be equipped with communication/broadband modems ("communication modem" or "broadband modem") and installed with an incompatible operating system that does not support phone calling (e.g., a desktop version, etc.).
  • This novel voice routing technique addresses both regular calls and Bluetooth calls and further, unlike a soft phone call (such as one using Voice over Internet Protocol (VoIP)), provides for zero voice delay of the sort conventionally caused by network traffic.
  • It is contemplated that embodiments are not limited to any particular number and type of software applications, application services, customized settings, etc., or any particular number and type of computing devices, networks, deployment details, etc.; however, for the sake of brevity, clarity, and ease of understanding, throughout this document references are made to user interfaces, software applications, user preferences, customized settings, mobile computers (e.g., smartphones, tablet computers, etc.), communication media/networks (e.g., protocols, communication or input/output components, cloud networks, proximity networks, the Internet, etc.), but embodiments are not limited as such.
  • FIG 1 illustrates a computing device 100 employing a smart audio-routing phone mechanism 110 according to one embodiment.
  • Computing device 100 serves as a host machine for hosting smart audio-routing phone mechanism ( “smart phone mechanism” ) 110 that includes any number and type of components, as illustrated in Figure 2, to facilitate intelligent, dynamic, and automatic voice routing to facilitate phone calling using operating systems that do not support phone calling, as will be further described throughout this document.
  • Computing device 100 may include any number and type of data processing devices, such as large computing systems, such as server computers, desktop computers, etc., and may further include set-top boxes (e.g., Internet-based cable television set-top boxes, etc. ) , global positioning system (GPS) -based devices, etc.
  • Computing device 100 may include mobile computing devices serving as communication devices, such as cellular phones including smartphones, personal digital assistants (PDAs), tablet computers, laptop computers (e.g., Ultrabook™ systems, etc.), wearable devices, such as head-mounted displays (HMDs) (e.g., wearable glasses, such as Glass™, head-mounted binoculars, gaming displays, military headwear, etc.), and other wearable devices (e.g., smartwatches, bracelets, smartcards, jewelry, clothing items, etc.).
  • Computing device 100 may include an operating system (OS) 106 serving as an interface between hardware and/or physical resources of the computer device 100 and a user.
  • Computing device 100 further includes one or more processor (s) 102, memory devices 104, network devices, drivers, or the like, as well as input/output (I/O) sources 108, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, etc.
  • The term "user" may refer to an individual or a person or a group of individuals or persons using or having access to computing device 100.
  • FIG. 2 illustrates a smart audio-routing phone mechanism 110 according to one embodiment.
  • smart phone mechanism 110 may include any number and type of components, such as (without limitation) : detection/reception logic 201; evaluation/classification logic 203; audio routing logic 205; verification logic 207; application/execution logic 209; and communication/compatibility logic 211.
  • Computing device 100 is further shown as offering user interface 221 and hosting input/output sources 108 having capturing/sensing components 231 (e.g., sensors, detectors, microphones, etc. ) , such as microphone 241, and output sources 233 (e.g., display devices, speakers, etc. ) , such as speaker 243.
  • smart phone mechanism 110 may be hosted by computing device 100, such as a data processing/communication device including a mobile computer (e.g., smartphone, tablet computer, etc. ) , a wearable computer (e.g., wearable glasses, smart bracelets, smartcards, smart watches, HMDs, etc. ) , an Internet of Things (IoT) device (e.g., home security system, thermostat, washer/dryer, light panel, sprinkler system, etc. ) , and/or the like.
  • computing device 100 may be a larger data processing/communication machine, such as a server computer, a desktop computer, a laptop computer, etc.
  • Embodiments provide for a novel audio-routing technique as facilitated by smart phone mechanism 110 to work with such incompatible operation systems in facilitating phone calls without requiring additional hardware or causing voice delays, etc.
  • Using the novel audio-routing technique as facilitated by smart phone mechanism 110, different audio routing patterns may be selectively incorporated for different phone call scenarios; for example, in the case of a regular phone call, the phone call voice path from a communication modem (e.g., a 3G modem) may be routed to a speaker of output components 233 and a microphone of capturing/sensing components 231 via the codec.
  • the phone call voice path may be twice redirected from the communication modem via the codec and application processor, separately, and finally rerouted to the Bluetooth headset.
  • In one embodiment, an outgoing phone call is placed by a user at computing device 100, whether it be using a phone keypad provided through user interface 221 or dialing using an external device, such as a Bluetooth dialing device, etc.
  • this placement of the phone call by the user is detected by or received at detection/reception logic 201.
  • Similarly, if a phone call to computing device 100 is placed by a user at another device, such as computing device 270, this call may then be regarded as an incoming call and detected at or received by detection/reception logic 201.
  • In one embodiment, evaluation/classification logic 203 may then be triggered to evaluate the type of the placed phone call. For example, if the phone call is evaluated to be a local device call (such as one using local listening/speaking devices, and not using any external listening/speaking devices, such as a Bluetooth headset, etc.), evaluation/classification logic 203 may classify this type of phone call to be a local device-based call. Similarly, if the phone call is evaluated to be an external device call (such as one using external listening/speaking devices, such as a Bluetooth device, and not using local listening/speaking devices), evaluation/classification logic 203 may classify this type of phone call to be an external device-based call.
  • audio routing logic 205 may then be triggered to appropriately route the audio being transmitted due to the phone call. In one embodiment, if the phone call is classified as a local device-based call ( “local call” or “local device call” ) (whether the phone call is arriving or being dialed-out) , audio routing logic 205 is triggered to establish an audio path to local I/O sources 108, such as microphone 241 and speaker 243.
  • the audio codec is facilitated by audio routing logic 205 to route an AUDIO_IN signal and an AUDIO_OUT signal (e.g., 3G IN/OUT signals, 4G IN/OUT signals, etc. ) to speaker 243 and microphone 241, respectively, and mix the AUDIO_IN/OUT voice for recording as desired or necessitated.
  • For an external device-based call, audio routing logic 205 is triggered to establish an audio path to external device 280, while any audio paths to local microphone 241 and speaker 243 are switched off such that no audio needs to be transferred to/from local microphone 241 and speaker 243.
  • For example, the communication modem's AUDIO_IN signal is directed to the external device's AUDIO_OUT unit, while the external device's AUDIO_IN signal is routed to the communication modem's AUDIO_OUT unit.
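As a rough sketch of the classification and routing decision described above (all identifiers, such as `classify_call` and the path labels, are hypothetical illustrations, not names from the patent):

```python
from enum import Enum

class CallType(Enum):
    LOCAL = "local device call"        # on-board microphone/speaker in use
    EXTERNAL = "external device call"  # e.g., a Bluetooth headset in use

def classify_call(external_device_connected: bool) -> CallType:
    """Mirror evaluation/classification logic 203: a connected external
    listening/speaking device makes the call an external device call,
    otherwise it is a local device call."""
    return CallType.EXTERNAL if external_device_connected else CallType.LOCAL

def route_audio(call_type: CallType) -> dict:
    """Mirror audio routing logic 205: return the audio paths the codec
    should establish for the given call type."""
    if call_type is CallType.LOCAL:
        # Modem AUDIO_IN -> on-board speaker; on-board mic -> modem AUDIO_OUT
        return {"modem_AUDIO_IN": "speaker_243",
                "mic_241": "modem_AUDIO_OUT"}
    # External call: local mic/speaker are switched off; the modem's paths
    # are cross-connected with the external device instead.
    return {"modem_AUDIO_IN": "external_AUDIO_OUT",
            "external_AUDIO_IN": "modem_AUDIO_OUT",
            "mic_241": None,
            "speaker_243": None}
```

The `None` entries stand in for the "switched off" local paths; a real implementation would issue the corresponding codec register writes.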
  • application/execution logic 209 may then be used to execute the audio routes and the overall framework to facilitate the phone call, whether it be calling in or calling out.
  • Communication/compatibility logic 211 may be used to facilitate dynamic communication and compatibility between computing devices 100, 270, external device 280 (e.g., Bluetooth headset), database(s) 265, communication medium 260, etc., and any number and type of other computing devices (such as wearable computing devices, mobile computing devices, desktop computers, server computing devices, etc.), processing devices (e.g., central processing units (CPUs), graphics processing units (GPUs), etc.), networks (e.g., cloud networks, the Internet, Internet of Things, intranets, cellular networks, proximity networks, such as Bluetooth, Bluetooth Low Energy (BLE), Bluetooth Smart, Wi-Fi proximity, Radio Frequency Identification (RFID), Near Field Communication (NFC), Body Area Network (BAN), etc.), wireless or wired communications and relevant protocols (e.g., WiMAX, Ethernet, etc.), connectivity and location management techniques, software applications/websites (e.g., social and/or business networking websites, business applications, games and other entertainment applications, etc.), programming languages, etc., while ensuring compatibility with changing technologies, parameters, protocols, standards, etc.
  • Any use of a particular brand, word, term, phrase, name, and/or acronym, such as "phone", "phone call", "calling in", "calling out", "local device call", "external device call", "communication modem", "Bluetooth headset", "application processor", "AUDIO_IN", "AUDIO_OUT", "audio routing", "audio", "voice", "speaker", "microphone", "user", "user profile", "user preference", "rule", "policy", "sender", "receiver", "personal device", "smart device", "mobile computer", "wearable device", "cloud device", "cloud-based server computer", "third-party server computer", "remote processing system", etc., should not be read to limit embodiments to software or devices that carry that label in products or in literature external to this document.
  • FIG 3A illustrates a transaction sequence 300 for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
  • Transaction sequence 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc. ) , software (such as instructions run on a processing device) , or a combination thereof.
  • transaction sequence 300 may be performed by smart phone mechanism 110 of Figures 1-2.
  • the processes of transaction sequence 300 are illustrated in linear sequences for brevity and clarity in presentation; however, it is contemplated that any number of them can be performed in parallel, asynchronously, or in different orders. For brevity, many of the details discussed with reference to the previous Figures 1-2 may not be discussed or repeated hereafter.
  • the illustrated components 241, 243, 301, 303, 305 may be part of or hosted by computing device 100 of Figures 1-2. However, it is contemplated and to be noted that embodiments are not limited to any such components and that one or more components may be added, removed, or modified, as desired or necessitated, and thus, these components 241, 243, 301, 303, 305 are illustrated and discussed herein as examples and that embodiments are not limited as such.
  • For example, a 3G-based standard, such as a 3G-based communication modem, etc., may be employed; however, it is contemplated that embodiments are not limited to 3G, 4G, or any particular type of communication or broadband standards, communication modems, modules, protocols, etc.
  • smart phone mechanism 110 of Figures 1-2 may be used to route/reroute the audio such that one or more of the local I/O devices, such as microphone 241 and/or speaker 243, at computing device 100 are used for sensing and playing the audio.
  • Any recording 321 (for call answered) and any ringing 323 (for call ringing) (e.g., I2S_CODEC) may be facilitated by application/execution logic 209 of Figure 2 to be performed through one or more components of application processor 301, which may be part of or one of processor(s) 102 of Figure 1.
  • codec 303 may be used to route AUDIO_IN signal 311 (e.g., 3G audio signal, such as I2S_3G) and AUDIO_OUT signal 313 (e.g., 3G audio signal) to the computing device’s speaker 243 and microphone 241, respectively, and mix, through mixer 307, the voices of AUDIO_IN 311/AUDIO_OUT 313 for recording 321 as facilitated by application/execution logic 209 of Figure 2, as desired or necessitated.
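The mixing that mixer 307 performs in hardware so that both sides of the call land in one recording can be sketched in software terms as a sample-wise sum with 16-bit clamping (a hypothetical illustration; the function name and sample format are assumptions, not from the patent):

```python
def mix_for_recording(audio_in, audio_out, sample_min=-32768, sample_max=32767):
    """Mix the AUDIO_IN (far-end) and AUDIO_OUT (near-end) PCM sample
    streams sample-by-sample, clamping to the signed 16-bit range, so
    both voices appear in a single recorded stream."""
    mixed = []
    for a, b in zip(audio_in, audio_out):
        s = a + b
        mixed.append(max(sample_min, min(sample_max, s)))
    return mixed
```

For example, `mix_for_recording([1000, -2000], [500, 500])` yields `[1500, -1500]`, and a sum that overflows 16 bits is clamped rather than wrapped.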
  • A Bluetooth headset is used as an example of external communication device 280, but embodiments are not limited to Bluetooth, Bluetooth devices, or any other particular type of external device.
  • In one embodiment, audio routing logic 205 of Figure 2 may then be used to facilitate two redirections 351A, 351B of audio streams, such as first redirection 351A at application processor 301 and second redirection 351B at codec 303, such that any local I/O devices, such as microphone 241 and speaker 243, are switched off 353 and thus prevented from being part of the audio path; for example, microphone 241 may be switched off when external device 280 (e.g., Bluetooth headset, etc.) is connected and used for calling out, and similarly, speaker 243 may be switched off 353 as soon as a phone call is answered using external device 280.
  • audio codec 303 is triggered by audio routing logic 205 of Figure 2 to disconnect from microphone 241 and speaker 243 and redirect AUDIO_IN signal 311 and AUDIO_OUT signal 313 such that these signals 311, 313 do not reach microphone 241 and speaker 243 since the call is an external device call using an external communication device, such as external device 280 of Figure 2.
  • Microphone 241 is disconnected at connection segment 361, while speaker 243 is disconnected at connection segment 363.
  • In one embodiment, the operating system may not be aware of this audio path change, and the device list may be kept with the original playback/recording device map.
  • audio routing or redirection 370 is performed between one or more recording devices (also referred to as “record devices” ) 371 and one or more playback devices (also referred to as “play devices” or “playing devices” ) 381 which may be part of one or more of computing/communication devices, such as computing devices 100, 270, and/or other communication devices, such as external communication device 280 of Figure 2.
  • In one embodiment, audio from recording devices 371, originating at external device microphone 373 (e.g., Bluetooth headset microphone), is captured in audio buffer 1 391 and redirected to play devices 381 via built-in speaker 243.
  • audio from built-in microphone 241 is captured in audio buffer 2 393 and redirected over to external device speaker 383.
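One pass of this double buffering can be sketched as follows (a hypothetical illustration: the function and buffer names are assumptions; in the actual design the built-in speaker and microphone paths have already been rerouted to the modem by the codec):

```python
from collections import deque

def redirect_external_call(headset_mic_frames, builtin_mic_frames):
    """One pass of the redirection 370 between record and play devices:
    frames from the external (headset) microphone are queued in audio
    buffer 1 and played out the built-in speaker (which the codec has
    rerouted to the modem's uplink); frames from the built-in microphone
    (carrying the modem's far-end audio) are queued in audio buffer 2
    and played out the headset speaker."""
    buffer1 = deque(headset_mic_frames)   # headset mic -> built-in speaker
    buffer2 = deque(builtin_mic_frames)   # built-in mic -> headset speaker
    to_builtin_speaker = [buffer1.popleft() for _ in range(len(buffer1))]
    to_headset_speaker = [buffer2.popleft() for _ in range(len(buffer2))]
    return to_builtin_speaker, to_headset_speaker
```

A real implementation would run this as a continuous loop on the application processor rather than a single pass.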
  • a successful external device call is ensured even on operating systems that do not support phone calling.
  • The method of Figure 4 begins with either a calling in at block 401 or a calling out at block 402 of a phone call at a computing device.
  • a phone call is received by a communication modem/module (e.g., 3G communication modem, 4G communication modem, etc. ) at block 403.
  • a trigger is sent from the communication modem to an application processor.
  • a dial panel is opened to make the phone call.
  • The audio codec module at the computing device is triggered to set up an audio channel for the local device call, such as by: a) connecting an on-board speaker, such as speaker 243 of Figure 2, to the communication modem’s AUDIO_IN unit; and/or b) connecting an on-board microphone, such as microphone 241 of Figure 2, to the communication modem’s AUDIO_OUT unit.
  • The audio codec sets up an audio channel for the external device call, such as by: a) disconnecting, logically, the on-board speaker and microphone from the audio codec; and/or b) mapping, logically, the computing device’s built-in play device and recording device to the communication modem’s AUDIO_OUT unit and AUDIO_IN unit, respectively.
  • application processor sets up the audio channel for the external device call, such as by: a) connecting the external device’s AUDIO_IN unit to the computing device’s built-in play device which is redirected to the communication modem’s AUDIO_OUT unit by the audio codec; and/or b) connecting the external device’s AUDIO_OUT unit to the computing system’s built-in recording device which is redirected to the communication modem’s AUDIO_IN unit by the audio codec.
  • the phone call is initiated and continued.
  • A determination is made as to whether the phone call has ended. If not, the phone call continues at block 439. If yes, at block 443, a determination is made as to whether the external device is connected. If not, in one embodiment, at block 445, the previous audio channel is restored to normal, such as by: a) disconnecting the on-board speaker from the communication modem’s AUDIO_IN unit; and/or b) disconnecting the on-board microphone from the communication modem’s AUDIO_OUT unit.
  • If the external device is connected, the previous audio channel is restored as normal, such as by: a) un-mapping the computing device’s built-in play device from the communication modem’s AUDIO_OUT unit and mapping it to the on-board speaker as normal; and/or b) un-mapping the built-in recording device from the communication modem’s AUDIO_IN unit and mapping it to the on-board microphone as normal.
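The setup and restore steps for the external device channel amount to swapping entries in a device map; a minimal sketch (the map keys and function names are hypothetical, not from the patent):

```python
def setup_external_channel(audio_map):
    """Set up the external device call channel: logically disconnect the
    on-board speaker/microphone and map the built-in play/record devices
    to the modem's AUDIO_OUT/AUDIO_IN units."""
    audio_map = dict(audio_map)  # leave the caller's map untouched
    audio_map.update({
        "on_board_speaker": None,
        "on_board_microphone": None,
        "built_in_play_device": "modem_AUDIO_OUT",
        "built_in_record_device": "modem_AUDIO_IN",
    })
    return audio_map

def restore_channel(audio_map):
    """Restore the previous audio channel after the call ends: un-map the
    built-in devices from the modem and map them back to the on-board
    speaker/microphone as normal."""
    audio_map = dict(audio_map)
    audio_map.update({
        "built_in_play_device": "on_board_speaker",
        "built_in_record_device": "on_board_microphone",
    })
    return audio_map
```

Keeping both directions as pure map transformations makes the restore path trivially the inverse of the setup path.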
  • the phone call ends.
  • FIG. 5 illustrates an embodiment of a computing system 500 capable of supporting the operations discussed above.
  • Computing system 500 represents a range of computing and electronic devices (wired or wireless) including, for example, desktop computing systems, laptop computing systems, cellular telephones, personal digital assistants (PDAs) including cellular-enabled PDAs, set top boxes, smartphones, tablets, wearable devices, etc. Alternate computing systems may include more, fewer and/or different components.
  • Computing system 500 may be the same as, similar to, or include computing device 100 described in reference to Figure 1.
  • Computing system 500 includes bus 505 (or, for example, a link, an interconnect, or another type of communication device or interface to communicate information) and processor 510 coupled to bus 505 that may process information. While computing system 500 is illustrated with a single processor, it may include multiple processors and/or co-processors, such as one or more of central processors, image signal processors, graphics processors, and vision processors, etc. Computing system 500 may further include random access memory (RAM) or other dynamic storage device 520 (referred to as main memory) , coupled to bus 505 and may store information and instructions that may be executed by processor 510. Main memory 520 may also be used to store temporary variables or other intermediate information during execution of instructions by processor 510.
  • Computing system 500 may also include read only memory (ROM) and/or other storage device 530 coupled to bus 505 that may store static information and instructions for processor 510.
  • Data storage device 540 may be coupled to bus 505 to store information and instructions.
  • Data storage device 540, such as a magnetic disk or optical disc and corresponding drive, may be coupled to computing system 500.
  • Computing system 500 may also be coupled via bus 505 to display device 550, such as a cathode ray tube (CRT) , liquid crystal display (LCD) or Organic Light Emitting Diode (OLED) array, to display information to a user.
  • User input device 560 including alphanumeric and other keys, may be coupled to bus 505 to communicate information and command selections to processor 510.
  • Another type of user input device is cursor control 570, such as a mouse, a trackball, a touchscreen, a touchpad, or cursor direction keys, to communicate direction information and command selections to processor 510 and to control cursor movement on display 550.
  • Camera and microphone arrays 590 of computer system 500 may be coupled to bus 505 to observe gestures, record audio and video and to receive and transmit visual and audio commands.
  • Network interface (s) 580 may provide access to a LAN, for example, by conforming to IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards.
  • Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported.
  • In addition to, or instead of, communication via wireless LAN standards, network interface (s) 580 may provide wireless communication using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocols.
  • computing system 500 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances.
  • Examples of the electronic device or computer system 500 may include without limitation a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC) , a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, television, digital television, set top box,
  • Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parentboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC) , and/or a field programmable gate array (FPGA) .
  • logic may include, by way of example, software or hardware and/or combinations of software and hardware.
  • Embodiments may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments described herein.
  • references to “one embodiment” , “an embodiment” , “example embodiment” , “various embodiments” , etc., indicate that the embodiment (s) so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.
  • The term "coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.
  • the Command Execution Module 601 includes a central processing unit to cache and execute commands and to distribute tasks among the other modules and systems shown. It may include an instruction stack, a cache memory to store intermediate and final results, and mass memory to store applications and operating systems. The Command Execution Module may also serve as a central coordination and task allocation unit for the system.
  • The Adjacent Screen Perspective Module 607 could send data to the Screen Rendering Module to suggest, for example in shadow form, one or more target landing areas for the virtual object on that track to a user’s hand movements or eye movements.
  • the Direction of Attention Module 623 may be equipped with cameras or other sensors to track the position or orientation of a user's face or hands. When a gesture or voice command is issued, the system can determine the appropriate screen for the gesture. In one example, a camera is mounted near each display to detect whether the user is facing that display. If so, then the direction of attention module information is provided to the Object and Gesture Recognition Module 622 to ensure that the gestures or commands are associated with the appropriate library for the active display. Similarly, if the user is looking away from all of the screens, then commands can be ignored.
  • the Adjacent Screen Perspective Module 607 which may include or be coupled to the Device Proximity Detection Module 625, may be adapted to determine an angle and position of one display relative to another display.
  • a projected display includes, for example, an image projected onto a wall or screen. The ability to detect a proximity of a nearby screen and a corresponding angle or orientation of a display projected therefrom may for example be accomplished with either an infrared emitter and receiver, or electromagnetic or photo-detection sensing capability. For technologies that allow projected displays with touch input, the incoming video can be analyzed to determine the position of a projected display and to correct for the distortion caused by displaying at an angle.
  • An accelerometer, magnetometer, compass, or camera can be used to determine the angle at which a device is being held while infrared emitters and cameras could allow the orientation of the screen device to be determined in relation to the sensors on an adjacent device.
  • the Adjacent Screen Perspective Module 607 may, in this way, determine coordinates of an adjacent screen relative to its own screen coordinates. Thus, the Adjacent Screen Perspective Module may determine which devices are in proximity to each other, and further potential targets for moving one or more virtual objects across screens.
  • the Adjacent Screen Perspective Module may further allow the position of the screens to be correlated to a model of three-dimensional space representing all of the existing objects and virtual objects.
  • Example 2 includes the subject matter of Example 1, wherein the local call is based on one or more local input/output (I/O) devices coupled with the apparatus, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
  • Example 10 includes the subject matter of Example 9, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
  • Example 15 includes the subject matter of Example 9, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
  • Example 16 includes the subject matter of Example 9, further comprising: executing at least one of the first routing and the second routing as determined by the routing of the audio; and establishing communication with the external audio device and one or more computing devices as facilitated by the phone call.
  • Example 17 includes a system comprising a storage device having instructions, and a processor to execute the instructions to facilitate a mechanism to perform one or more operations comprising: detecting a phone call at a computing device; evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and determining routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
  • Example 18 includes the subject matter of Example 17, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
  • Example 19 includes the subject matter of Example 17 or 18, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
  • Example 20 includes the subject matter of Example 17, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
  • Example 21 includes the subject matter of Example 17 or 20, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
  • Example 22 includes the subject matter of Example 17, wherein the one or more operations comprise verifying the external audio device being in communication with the computing device, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated, wherein detecting the phone call further includes detecting the external audio device over a network including a proximity network.
  • Example 23 includes the subject matter of Example 17, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
  • Example 24 includes the subject matter of Example 17, wherein the one or more operations comprise: executing at least one of the first routing and the second routing as determined by the routing of the audio; and establishing communication with the external audio device and one or more computing devices as facilitated by the phone call.
  • Example 25 includes an apparatus comprising: means for detecting a phone call at a computing device; means for evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and means for determining routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
  • Example 26 includes the subject matter of Example 25, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
  • Example 27 includes the subject matter of Example 25 or 26, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
  • Example 28 includes the subject matter of Example 25, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
  • Example 29 includes the subject matter of Example 25 or 28, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
  • Example 30 includes the subject matter of Example 25, further comprising means for verifying the external audio device being in communication with the computing device, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated, wherein detecting the phone call further includes detecting the external audio device over a network including a proximity network.
  • Example 31 includes the subject matter of Example 25, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
  • Example 33 includes at least one non-transitory or tangible machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as claimed in any of claims or examples 9-16.
  • Example 34 includes at least one machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as claimed in any of claims or examples 9-16.
  • Example 35 includes a system comprising a mechanism to implement or perform a method as claimed in any of claims or examples 9-16.
  • Example 36 includes an apparatus comprising means for performing a method as claimed in any of claims or examples 9-16.
  • Example 37 includes a computing device arranged to implement or perform a method as claimed in any of claims or examples 9-16.
  • Example 38 includes a communications device arranged to implement or perform a method as claimed in any of claims or examples 9-16.
  • Example 40 includes at least one non-transitory or tangible machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method or realize an apparatus as claimed in any preceding claim.
  • Example 41 includes a system comprising a mechanism to implement or perform a method or realize an apparatus as claimed in any preceding claim.
  • Example 42 includes an apparatus comprising means to perform a method as claimed in any preceding claim.
  • Example 43 includes a computing device arranged to implement or perform a method or realize an apparatus as claimed in any preceding claim.
  • Example 44 includes a communications device arranged to implement or perform a method or realize an apparatus as claimed in any preceding claim.

Abstract

A mechanism is described for facilitating smart voice routing for phone calls using incompatible operating systems at computing devices according to one embodiment. A method of embodiments, as described herein, includes detecting a phone call at a computing device, and evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call. The method may further include determining routing of audio associated with the phone call, where routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.

Description

[Corrected under Rule 26, 03.12.2015] FACILITATING SMART VOICE ROUTING FOR PHONE CALLS USING INCOMPATIBLE OPERATING SYSTEMS AT COMPUTING DEVICES

FIELD
Embodiments described herein generally relate to computers. More particularly, embodiments relate to facilitating smart voice routing for phone calls using incompatible operating systems at computing devices.
BACKGROUND
With the rise in the use of computing devices, such as mobile computers, it is becoming increasingly common to make phone calls using computing devices as opposed to conventional phones. However, several operating systems still lack support for the phone function, which requires users to rely on add-ons, conventional phones, and/or the like, which is inefficient, time-consuming, inconvenient, and cumbersome.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
Figure 1 illustrates a computing device employing a smart audio-routing phone mechanism according to one embodiment.
Figure 2 illustrates a smart audio-routing phone mechanism according to one embodiment.
Figure 3A illustrates a transaction sequence for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
Figure 3B illustrates a transaction sequence for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
Figure 3C illustrates an audio codec module for facilitating audio routing for external device calls at computing devices that lack phone-capabilities according to one embodiment.
Figure 3D illustrates audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
Figure 4 illustrates a method for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment.
Figure 5 illustrates a computer environment suitable for implementing embodiments of the present disclosure according to one embodiment.
Figure 6 illustrates a method for facilitating dynamic targeting of users and communication of messages according to one embodiment.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth. However, embodiments, as described herein, may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.
Embodiments provide for a novel technique for voice/audio routing for different phone calling scenarios on incompatible operating systems, such as those operating systems that do not provide native phone stack support. For example, in one embodiment, a phone call voice path from a communication/broadband modem (“communication modem”, “broadband modem”, or “communication module”) (e.g., a third generation (3G) modem, fourth generation (4G) modem, etc.) may be routed to a computing device’s input/output components, such as the speaker and microphone, via an audio codec module (“audio codec” or simply “codec”). Similarly, for example, for a proximity external device-based call, such as a Bluetooth call using a Bluetooth device (e.g., a headset), the phone call voice from the communication modem may be redirected twice, via the audio codec and an application processor (e.g., a System on Chip (SoC)) separately, and finally routed to the Bluetooth headset. It is contemplated that the terms “voice” and “audio” may be referenced interchangeably throughout this document.
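The two voice paths just described can be sketched as a small Python model. This is an illustrative sketch only: the component names and the `voice_path` helper are assumptions for illustration, not part of any embodiment.

```python
from enum import Enum, auto

class CallType(Enum):
    """Hypothetical classification mirroring the two call scenarios above."""
    LOCAL = auto()      # device's own speaker and microphone
    EXTERNAL = auto()   # proximity audio device, e.g. a Bluetooth headset

def voice_path(call_type: CallType) -> list[str]:
    """Return the ordered chain of components the call audio traverses.

    A local call is routed from the communication modem straight through
    the audio codec to the local speaker/microphone; an external
    (Bluetooth) call is redirected twice -- via the codec and the
    application processor (SoC) -- before reaching the headset.
    """
    if call_type is CallType.LOCAL:
        return ["communication_modem", "audio_codec", "local_speaker_mic"]
    return ["communication_modem", "audio_codec",
            "application_processor", "bluetooth_headset"]
```

Note the extra hop for the external path: the audio is handed to the application processor before being forwarded to the headset, which is why the text speaks of the voice being "redirected twice".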
In one embodiment, a voice routing technique is provided for facilitating circuit-switching phone calls on computing devices that may be equipped with communication/broadband modems (“communication modem” or “broadband modem”) and installed with an incompatible operating system that does not support phone calling (e.g., the
Figure PCTCN2015090680-appb-000001
desktop version, etc.). This novel voice routing technique addresses both regular calls and Bluetooth calls; further, unlike a soft phone call (such as one using Voice over Internet Protocol (VoIP)), the novel technique eliminates the voice delay that is conventionally caused by network traffic.
It is contemplated and to be noted that embodiments are not limited to any particular number and type of software applications, application services, customized settings, etc., or any particular number and type of computing devices, networks, deployment details, etc. ; however, for the sake of brevity, clarity, and ease of understanding, throughout this document, references are made to user interfaces, software applications, user preferences, customized settings, mobile computers (e.g., smartphones, tablet computers, etc. ) , communication medium/network (e.g., protocols, communication or input/output components, cloud network, proximity network, the Internet, etc. ) , but that embodiments are not limited as such.
Figure 1 illustrates a computing device 100 employing a smart audio-routing phone mechanism 110 according to one embodiment. Computing device 100 serves as a host machine for hosting smart audio-routing phone mechanism ( “smart phone mechanism” ) 110 that includes any number and type of components, as illustrated in Figure 2, to facilitate intelligent, dynamic, and automatic voice routing to facilitate phone calling using operating systems that do not support phone calling, as will be further described throughout this document.
Computing device 100 may include any number and type of data processing devices, such as large computing systems (e.g., server computers, desktop computers, etc.), and may further include set-top boxes (e.g., Internet-based cable television set-top boxes, etc.), global positioning system (GPS)-based devices, etc. Computing device 100 may include mobile computing devices serving as communication devices, such as cellular phones including smartphones, personal digital assistants (PDAs), tablet computers, laptop computers (e.g., Ultrabook™ systems, etc.), e-readers, media internet devices (MIDs), media players, smart televisions, television platforms, intelligent devices, computing dust, head-mounted displays (HMDs) (e.g., wearable glasses, such as
Figure PCTCN2015090680-appb-000002
glass™, head-mounted binoculars, gaming displays, military headwear, etc.), and other wearable devices (e.g., smartwatches, bracelets, smartcards, jewelry, clothing items, etc.), and/or the like.
Computing device 100 may include an operating system (OS) 106 serving as an interface between hardware and/or physical resources of the computer device 100 and a user. Computing device 100 further includes one or more processor (s) 102, memory devices 104, network devices, drivers, or the like, as well as input/output (I/O) sources 108, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, etc.
It is to be noted that terms like “node” , “computing node” , “server” , “server device” , “cloud computer” , “cloud server” , “cloud server computer” , “machine” , “host machine” , “device” , “computing device” , “computer” , “computing system” , and the like, may be used interchangeably throughout this document. It is to be further noted that terms like “application” , “software application” , “program” , “software program” , “package” , “software package” , “code” , “software code” , and the like, may be used interchangeably throughout this document. Also, terms like “job” , “input” , “request” , “message” , and the like, may be used interchangeably throughout this document. It is contemplated that the term “user” may refer to an individual or a person or a group of individuals or persons using or having access to computing device 100.
Figure 2 illustrates a smart audio-routing phone mechanism 110 according to one embodiment. In one embodiment, smart phone mechanism 110 may include any number and type of components, such as (without limitation) : detection/reception logic 201;  evaluation/classification logic 203; audio routing logic 205; verification logic 207; application/execution logic 209; and communication/compatibility logic 211. Computing device 100 is further shown as offering user interface 221 and hosting input/output sources 108 having capturing/sensing components 231 (e.g., sensors, detectors, microphones, etc. ) , such as microphone 241, and output sources 233 (e.g., display devices, speakers, etc. ) , such as speaker 243.
In one embodiment, smart phone mechanism 110 may be hosted by computing device 100, such as a data processing/communication device including a mobile computer (e.g., smartphone, tablet computer, etc.), a wearable computer (e.g., wearable glasses, smart bracelets, smartcards, smart watches, HMDs, etc.), an Internet of Things (IoT) device (e.g., home security system, thermostat, washer/dryer, light panel, sprinkler system, etc.), and/or the like. In another embodiment, computing device 100 may be a larger data processing/communication machine, such as a server computer, a desktop computer, a laptop computer, etc. In one embodiment, computing device 100 may be in communication with one or more other computing devices, such as computing device 270 (e.g., mobile device, wearable device, IoT device, laptop computer, desktop computer, server computer, etc.), over communication medium 260, such as one or more networks (e.g., cloud network, the Internet, proximity network, such as Bluetooth, etc.). In one embodiment, computing device 100 is further shown to be in communication with one or more external communication devices, such as external communication device (“external device”) 280 (e.g., Bluetooth device, such as a Bluetooth headset, etc.), having one or more input/output components with audio sensing/broadcasting capabilities.
For example and in one embodiment, computing device 100 may serve as a server computer hosting smart phone mechanism 110 in its entirety while communicating one or more services offered by smart phone mechanism 110 to one or more personal devices (e.g., desktop computers, laptop computers, mobile computers, wearable devices, IoT devices, etc.), such as computing device 270, over communication medium 260, such as a cloud network. In another embodiment, computing device 100 itself may be a personal device, similar to or the same as computing device 270, where each computing device 100, 270 may include smart phone mechanism 110, either partially or entirely, as part of or in support of a software application, such as software application 271. In one embodiment, software application 271 may include a smart phone (SP) application based on smart phone mechanism 110 or, in another embodiment, software application 271 may include any other non-SP application, such as a web browser, etc.
For example, in case of software application 271 being a non-SP application, computing device 270 may provide certain communication/data processing features or components, such as  user interface 273 (e.g., mobile/IoT application-based interface, web browser, etc. ) , I/O components 275, communication logic 277, etc., but rely on one or more components of smart phone mechanism 110 at computing device 100 for further processing. For example and in one embodiment, in case of software application 271 being an SP application (which may be downloadable or accessible over communication medium 260) , SP-based software application 271 may be the same as or similar to smart phone mechanism 110 (such as having one or more components of smart phone mechanism 110) making it capable of performing one or more tasks of smart phone mechanism 110 at computing device 270.
In one embodiment, software application 271 may be capable of interacting with smart phone mechanism 110, via communication logic 277, over communication medium 260, while a user may access software application 271 via user interface 273 (e.g., mobile/IoT application interface, web browser, etc. ) . Further, as with computing device 100, computing device 270 may host I/O components 275, such as (without limitation) sensors, detectors, actuators, microphones, speakers, 2D/3D cameras, touchscreens, display devices, and/or the like.
Computing devices  100, 270 may be further in communication with one or more repositories or data sources or databases, such as database (s) 265, to obtain, communicate, store, and maintain any amount and type of data (e.g., voice/audio data, contact information, local/remote device data, communication modem/module data, data tables, data maps, media, metadata, templates, real-time data, historical contents, user and/or device identification tags and other information, resources, policies, criteria, rules, regulations, upgrades, profiles, preferences, configurations, etc. ) .
In some embodiments, communication medium 260 may include any number and type of communication channels or networks, such as cloud network, the Internet, intranet, Internet of Things ( “IoT” ) , proximity network, such as Bluetooth, etc. It is contemplated that embodiments are not limited to any particular number or type of computing devices, services or resources, databases, networks, etc.
In one embodiment, I/O source(s) 108 include capturing/sensing component(s) 231 and output component(s) 233 which, as will be further described below, may include any number and type of I/O components, such as sensor arrays, detectors, displays, etc. For example, capturing/sensing components 231 may include (without limitation) sensor arrays (such as context/context-aware sensors and environmental sensors, such as camera sensors, ambient light sensors, Red Green Blue (RGB) sensors, movement sensors, etc.), depth-sensing cameras, 2D cameras, 3D cameras, image sources, audio/video/signal detectors, microphones, eye/gaze-tracking systems, head-tracking systems, etc. Similarly, for example, output components 233 may include (without limitation) audio/video/signal sources, display planes, display panels, display screens/devices, projectors, display/projection areas, speakers, etc.
Capturing/sensing components 231 may further include one or more of vibration components, tactile components, conductance elements, biometric sensors, chemical detectors, signal detectors, electroencephalography, functional near-infrared spectroscopy, wave detectors, force sensors (e.g., accelerometers), illuminators, eye-tracking or gaze-tracking systems, head-tracking systems, etc., that may be used for capturing any amount and type of visual data, such as images (e.g., photos, videos, movies, audio/video streams, etc.), and non-visual data, such as audio streams or signals (e.g., sound, noise, vibration, ultrasound, etc.), radio waves (e.g., wireless signals, such as wireless signals having data, metadata, signs, etc.), chemical changes or properties (e.g., humidity, body temperature, etc.), biometric readings (e.g., fingerprints, etc.), brainwaves, brain circulation, environmental/weather conditions, maps, etc. It is contemplated that “sensor” and “detector” may be referenced interchangeably throughout this document. It is further contemplated that one or more capturing/sensing components 231 may further include one or more supporting or supplemental devices for capturing and/or sensing of data, such as illuminators (e.g., infrared (IR) illuminators), light fixtures, generators, sound blockers, etc.
It is further contemplated that, in one embodiment, capturing/sensing components 231 may further include any number and type of context sensors (e.g., linear accelerometer) for sensing or detecting any number and type of contexts (e.g., estimating horizon, linear acceleration, etc., relating to a mobile computing device, etc.). For example, capturing/sensing components 231 may include any number and type of sensors, such as (without limitation): accelerometers (e.g., linear accelerometers to measure linear acceleration, etc.); inertial devices (e.g., inertial accelerometers, inertial gyroscopes, micro-electro-mechanical systems (MEMS) gyroscopes, inertial navigators, etc.); and gravity gradiometers to study and measure variations in gravitational acceleration due to gravity, etc.
Further, for example, capturing/sensing components 231 may include (without limitations) : audio/visual devices (e.g., cameras, microphones, speakers, etc. ) ; context-aware sensors (e.g., temperature sensors, facial expression and feature measurement sensors working with one or more cameras of audio/visual devices, environment sensors (such as to sense background colors, lights, etc. ) , biometric sensors (such as to detect fingerprints, etc. ) , calendar maintenance and reading device) , etc. ; global positioning system (GPS) sensors; resource requestor; and trusted execution environment (TEE) logic. TEE logic may be employed separately or be part of resource requestor and/or an I/O subsystem, etc. Capturing/sensing  components 231 may further include voice recognition devices, photo recognition devices, facial and other body recognition components, voice-to-text conversion components, etc.
Similarly, for example, output components 233 may include dynamic tactile touch screens having tactile effectors as an example of presenting visualization of touch, where an embodiment of such may be ultrasonic generators that can send signals in space which, when reaching, for example, human fingers, can cause tactile sensation or a like feeling on the fingers. Further, for example and in one embodiment, output components 233 may include (without limitation) one or more of light sources, display devices and/or screens, audio speakers, tactile components, conductance elements, bone conducting speakers, olfactory or smell visual and/or non-visual presentation devices, haptic or touch visual and/or non-visual presentation devices, animation display devices, biometric display devices, X-ray display devices, high-resolution displays, high-dynamic range displays, multi-view displays, and head-mounted displays (HMDs) for at least one of virtual reality (VR) and augmented reality (AR), etc.
As previously discussed, several conventional operating systems, such as
Figure PCTCN2015090680-appb-000003
operating system desktop versions 7 and 8, etc., may not provide the necessary native phone stack support for phone calling. Conventional techniques offered to resolve these deficiencies are severely limited in their application, as they require additional third-party phone-enabled hardware equipment, such as Universal Serial Bus (USB) dongles, etc., which is cumbersome and adds cost for the user; further, such techniques are not without voice delays and do not support certain types of phone calls and equipment, such as Bluetooth devices, etc.
Embodiments provide for a novel audio-routing technique, as facilitated by smart phone mechanism 110, to work with such incompatible operating systems in facilitating phone calls without requiring additional hardware or causing voice delays, etc. For example, based on this novel audio-routing technique as facilitated by smart phone mechanism 110, different audio routing patterns may be selectively incorporated for different phone call scenarios; for instance, in case of a regular phone call, the phone call voice path from a communication modem (e.g., a 3G modem) may be routed to a speaker of output components 233 and a microphone of capturing/sensing components 231 via the codec. Similarly, in case of a Bluetooth-based phone call (such as one using a Bluetooth headset), the phone call voice path may be twice redirected from the communication modem, via the codec and application processor separately, and finally rerouted to the Bluetooth headset.
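One practical consequence of the second routing (compare Examples 15, 23, and 31) is that the local microphone and speaker are switched off while the external device carries the audio. A hedged sketch of that switching step follows; the dictionary-based device model and the `execute_routing` helper are assumptions for illustration only, not an API defined by the embodiments.

```python
def execute_routing(call_type: str,
                    local_io: dict[str, bool],
                    external_io: dict[str, bool]) -> None:
    """Enable the I/O pair matching the call type and switch off the other.

    For an external (e.g. Bluetooth headset) call, the local microphone
    and speaker are switched off so the audio flows only to and from the
    external device; for a local call the reverse applies.
    """
    external = (call_type == "external")
    for part in ("microphone", "speaker"):
        external_io[part] = external
        local_io[part] = not external

# Usage: an external call mutes the local microphone and speaker.
local_io = {"microphone": True, "speaker": True}
external_io = {"microphone": False, "speaker": False}
execute_routing("external", local_io, external_io)
```

After the call, `local_io` holds only `False` values and `external_io` only `True` values, matching the "switching off" language of the examples.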
Embodiments provide for resolving voice routing problems caused by the lack of phone features on certain operating systems for phone calls that are based on, for example, traditional circuit-switching techniques. For example, unlike existing hard-phone solutions, users do not need to spend money on buying extra hardware (e.g., a 3G USB dongle, etc.) and carry or install it for a phone call to work. Embodiments provide for a convenient, efficient, cost-free technique for making phone calls using native communication modems and, further, allowing for regular phone calls (such as without headsets, etc.) as well as other types of phone calls (such as using wired headsets, Bluetooth headsets, etc.).
In one embodiment, when an outgoing phone call is placed by a user at computing device 100, whether it be using a phone keypad provided through user interface 221 or dialing using an external device, such as a Bluetooth dialing device, etc., this placement of the phone call by the user is detected by or received at detection/reception logic 201. Similarly, if a phone call to computing device 100 is placed by a user at another device, such as computing device 270, this call may then be regarded as an incoming call and detected at or received by detection/reception logic 201.
On detecting a request for a call out or a call in, evaluation/classification logic 203 may then be triggered to evaluate the type of the placed phone call. For example, if the phone call is evaluated to be a local device call (such as using local listening/speaking devices, and not using any external listening/speaking devices, such as a Bluetooth headset, etc.), evaluation/classification logic 203 may classify this type of phone call to be a local device-based call. Similarly, if the phone call is evaluated to be an external device call (such as using external listening/speaking devices, such as a Bluetooth device, and not using local listening/speaking devices), evaluation/classification logic 203 may classify this type of phone call to be an external device-based call.
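For illustration only, the evaluation and classification described above may be sketched as a simple decision on whether an external listening/speaking device is connected; the function name, labels, and return format below are hypothetical and not part of any described embodiment.

```python
# Hypothetical sketch of the evaluation/classification step; the connection
# probe for the external (e.g., Bluetooth) device is assumed to exist elsewhere.
def classify_call(direction: str, external_device_connected: bool) -> dict:
    """Classify a detected call (incoming or outgoing) by its audio endpoint."""
    call_type = "external" if external_device_connected else "local"
    return {"direction": direction, "type": call_type}
```

In this sketch, a call placed with no external device attached is classified as a local device-based call, while the same call with a headset connected becomes an external device-based call.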
In one embodiment, upon classifying the phone call (e.g., outgoing phone call, incoming phone call, etc. ) , audio routing logic 205 may then be triggered to appropriately route the audio being transmitted due to the phone call. In one embodiment, if the phone call is classified as a local device-based call ( “local call” or “local device call” ) (whether the phone call is arriving or being dialed-out) , audio routing logic 205 is triggered to establish an audio path to local I/O sources 108, such as microphone 241 and speaker 243. For example, as illustrated with reference to Figure 3A, once the call is placed or answered, the audio codec is facilitated by audio routing logic 205 to route an AUDIO_IN signal and an AUDIO_OUT signal (e.g., 3G IN/OUT signals, 4G IN/OUT signals, etc. ) to speaker 243 and microphone 241, respectively, and mix the AUDIO_IN/OUT voice for recording as desired or necessitated.
Similarly, in one embodiment, if the phone call is classified as an external device-based call ( “external call” or “external device call” ) (whether the phone call is arriving or being dialed-out), audio routing logic 205 is triggered to establish an audio path to external device 280, while any audio paths to local microphone 241 and speaker 243 are switched off such that audio is not transferred to/from local microphone 241 and speaker 243. Instead, as described with reference to Figure 3B, the communication modem AUDIO_IN signal is directed to the external device AUDIO_OUT, while the external device AUDIO_IN is routed to the communication modem AUDIO_OUT.
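The two routing patterns for local and external device calls may be pictured as a small routing table; the endpoint names below are hypothetical labels for the signals discussed above, and `None` marks a switched-off local path. This is a sketch, not the described implementation.

```python
def build_audio_routes(call_type: str) -> dict:
    """Map audio sources to sinks for a local or external device call (sketch)."""
    if call_type == "local":
        # Local call: modem audio plays on the built-in speaker and the
        # built-in microphone feeds the modem.
        return {
            "MODEM_AUDIO_IN": "LOCAL_SPEAKER",
            "LOCAL_MICROPHONE": "MODEM_AUDIO_OUT",
        }
    # External call: twice-redirected paths; local mic/speaker switched off.
    return {
        "MODEM_AUDIO_IN": "EXTERNAL_AUDIO_OUT",
        "EXTERNAL_AUDIO_IN": "MODEM_AUDIO_OUT",
        "LOCAL_SPEAKER": None,
        "LOCAL_MICROPHONE": None,
    }
```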
In one embodiment, as audio routes are established by audio routing logic 205, verification logic 207 may continue to verify these routes and the status of the phone call and any components involved in the phone call. For example, it is contemplated that a phone call may start out as a local device call and end up being an external device call, such as when a user chooses to place a local device call at computing device 100 but, after a while, puts on their external device 280, which may convert the local device call into an external device call. Similarly, for example, an external device call may be converted into a local device call, such as if external device 280 runs out of battery, goes dead, or as desired or necessitated by the user. In one embodiment, verification logic 207 may continuously verify audio routes, connections, microphone 241, speaker 243, external device 280, the communication modem, the application processor, other components, etc.
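One way to picture this continuous verification is a periodic re-check that reclassifies the call when the external device appears or drops mid-call; the polling shape and names below are assumptions made for illustration.

```python
def verify_call_type(current_type: str, external_device_connected: bool):
    """Re-check the call type on one verification pass (illustrative sketch).

    Returns the (possibly new) call type and whether rerouting is needed,
    e.g., a local call becomes an external call when a headset is put on,
    and falls back to local if the headset runs out of battery or goes dead.
    """
    new_type = "external" if external_device_connected else "local"
    return new_type, new_type != current_type
```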
In one embodiment, upon verification of the audio routes, connections, components, etc., application/execution logic 209 may then be used to execute the audio routes and the overall framework to facilitate the phone call, whether it be calling in or calling out.
Communication/compatibility logic 211 may be used to facilitate dynamic communication and compatibility between computing devices 100, 270, external device 280 (e.g., Bluetooth headset), database(s) 265, communication medium 260, etc., and any number and type of other computing devices (such as wearable computing devices, mobile computing devices, desktop computers, server computing devices, etc.), processing devices (e.g., central processing unit (CPU), graphics processing unit (GPU), etc.), capturing/sensing components (e.g., non-visual data sensors/detectors, such as audio sensors, olfactory sensors, haptic sensors, signal sensors, vibration sensors, chemical detectors, radio wave detectors, force sensors, weather/temperature sensors, body/biometric sensors, scanners, etc., and visual data sensors/detectors, such as cameras, etc.), user/context-awareness components and/or identification/verification sensors/devices (such as biometric sensors/detectors, scanners, etc.), memory or storage devices, data sources, and/or database(s) (such as data storage devices, hard drives, solid-state drives, hard disks, memory cards or devices, memory circuits, etc.), network(s) (e.g., Cloud network, the Internet, Internet of Things, intranet, cellular network, proximity networks, such as Bluetooth, Bluetooth low energy (BLE), Bluetooth Smart, Wi-Fi proximity, Radio Frequency Identification (RFID), Near Field Communication (NFC), Body Area Network (BAN), etc.), wireless or wired communications and relevant protocols (e.g., [Figure PCTCN2015090680-appb-000004] WiMAX, Ethernet, etc.), connectivity and location management techniques, software applications/websites (e.g., social and/or business networking websites, business applications, games and other entertainment applications, etc.), programming languages, etc., while ensuring compatibility with changing technologies, parameters, protocols, standards, etc.
Throughout this document, terms like "logic" , “component” , “module” , “framework” , “engine” , “tool” , and the like, may be referenced interchangeably and include, by way of example, software, hardware, and/or any combination of software and hardware, such as firmware. Further, any use of a particular brand, word, term, phrase, name, and/or acronym, such as “phone” , “phone call” , “calling in” , “calling out” , “local device call” , “external device call” , “communication modem” , “Bluetooth headset” , “application processor” , “AUDIO_IN” , “AUDIO_OUT” , “audio routing” , “audio” , “voice” , “speaker” , “microphone” , “user” , “user profile” , “user preference” , “rule” , “policy” , “sender” , “receiver” , “personal device” , “smart device” , “mobile computer” , “wearable device” , “cloud device” , “cloud-based server computer” , “third-party server computer” , “remote processing system” , etc., should not be read to limit embodiments to software or devices that carry that label in products or in literature external to this document.
It is contemplated that any number and type of components may be added to and/or removed from smart phone mechanism 110 to facilitate various embodiments including adding, removing, and/or enhancing certain features. For brevity, clarity, and ease of understanding of smart phone mechanism 110, many of the standard and/or known components, such as those of a computing device, are not shown or discussed here. It is contemplated that embodiments, as described herein, are not limited to any particular technology, topology, system, architecture, and/or standard and are dynamic enough to adopt and adapt to any future changes.
Figure 3A illustrates a transaction sequence 300 for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment. Transaction sequence 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc. ) , software (such as instructions run on a processing device) , or a combination thereof. In one embodiment, transaction sequence 300 may be performed by smart phone mechanism 110 of Figures 1-2. The processes of transaction sequence 300 are illustrated in linear sequences for brevity and clarity in presentation; however, it is contemplated that any number of them can be performed in  parallel, asynchronously, or in different orders. For brevity, many of the details discussed with reference to the previous Figures 1-2 may not be discussed or repeated hereafter.
As an initial matter, in one embodiment, the illustrated components 241, 243, 301, 303, 305 may be part of or hosted by computing device 100 of Figures 1-2. However, it is contemplated and to be noted that embodiments are not limited to any such components and that one or more components may be added, removed, or modified, as desired or necessitated; thus, these components 241, 243, 301, 303, 305 are illustrated and discussed herein as examples, and embodiments are not limited as such. For example, although a 3G-based standard, such as a 3G-based communication modem, etc., is referenced throughout this document, embodiments are not limited to 3G, 4G, or any particular type of communication or broadband standards, communication modems, modules, protocols, etc.
For the illustrated local device call, in one embodiment, smart phone mechanism 110 of Figures 1-2 may be used to route/reroute the audio such that one or more of the local I/O devices, such as microphone 241 and/or speaker 243, at computing device 100 are used for sensing and playing the audio. In one embodiment, upon evaluating this call and classifying it as a local device call by evaluation/classification logic 203, any recording 321 for call answered and any ringing 323 for call ringing (e.g., I2S_CODEC) may be facilitated by application/execution logic 209 of Figure 2 to be performed through one or more components of application processor 301 which may be part of or one of processor (s) 102 of Figure 1.
Further, in one embodiment, in opening this audio path using microphone 241 and speaker 243, codec 303 may be used to route AUDIO_IN signal 311 (e.g., 3G audio signal, such as I2S_3G) and AUDIO_OUT signal 313 (e.g., 3G audio signal) to the computing device’s speaker 243 and microphone 241, respectively, and mix, through mixer 307, the voices of AUDIO_IN 311/AUDIO_OUT 313 for recording 321 as facilitated by application/execution logic 209 of Figure 2, as desired or necessitated.
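The mixing of the AUDIO_IN and AUDIO_OUT voices for recording could be sketched as a naive per-sample average of the two call directions; a real codec mixes in hardware, so the function below is only an illustration under assumed names and sample formats.

```python
def mix_for_recording(audio_in, audio_out):
    """Naively mix the two call directions, sample by sample, for recording.

    `audio_in` and `audio_out` are assumed to be equal-length sequences of
    normalized float samples; the mix is a simple average of the two voices.
    """
    return [(a + b) / 2 for a, b in zip(audio_in, audio_out)]
```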
Figure 3B illustrates a transaction sequence 350 for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment. Transaction sequence 350 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc. ) , software (such as instructions run on a processing device) , or a combination thereof. In one embodiment, transaction sequence 350 may be performed by smart phone mechanism 110 of Figures 1-2. The processes of transaction sequence 350 are illustrated in linear sequences for brevity and clarity in presentation; however, it is contemplated that any number of them can be performed in  parallel, asynchronously, or in different orders. For brevity, many of the details discussed with reference to the previous Figures 1-3A may not be discussed or repeated hereafter.
As an initial matter, in one embodiment, the illustrated components 241, 243, 301, 303, 305 may be part of or hosted by computing device 100 of Figures 1-2. However, it is contemplated and to be noted that embodiments are not limited to any such components and that one or more components may be added, removed, or modified, as desired or necessitated; thus, these components 241, 243, 301, 303, 305 are illustrated and discussed herein as examples, and embodiments are not limited as such. For example, although a 3G-based standard, such as a 3G-based communication modem, etc., is referenced throughout this document, embodiments are not limited to 3G, 4G, or any particular type of communication or broadband standards, communication modems, modules, protocols, etc. Similarly, a Bluetooth headset is used as an example of external communication device 280, but embodiments are not limited to Bluetooth, Bluetooth devices, or any particular type of external device.
In one embodiment, once the illustrated phone call is evaluated and classified by evaluation/classification logic 203 to be an external device call, audio routing logic 205 of Figure 2 may then be used to facilitate two redirections 351A, 351B of audio streams, such as a first redirection 351A at application processor 301 and a second redirection 351B at codec 303, such that any local I/O devices, such as microphone 241 and speaker 243, are switched off 353 and thus prevented from being part of the audio path. For example, microphone 241 may be switched off when external device 280 (e.g., Bluetooth headset, etc.) is connected and used for calling out and, similarly, speaker 243 may be switched off 353 as soon as a phone call is answered using external device 280.
Stated differently, in one embodiment, since the call is classified as an external device call and employs external device 280, no audio signal is transferred to/from the computing device’s microphone 241 and/or speaker 243 during the phone call. Instead, in one embodiment, the I/O components of external device 280 are used, where the external device AUDIO_IN signal 341 is redirected to the communication modem AUDIO_OUT signal 313, while the communication modem AUDIO_IN signal 311 is redirected to the external device AUDIO_OUT 343 through redirections 351A and 351B of audio signals at application processor 301 and codec 303, respectively.
Figure 3C illustrates an audio codec module 303 for facilitating audio routing for external device calls at computing devices that lack phone-capabilities according to one embodiment. Audio codec 303 may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc. ) , software (such as instructions run on a processing device) , or a  combination thereof. In one embodiment, various processes performed using audio codec 303 may be facilitated by smart phone mechanism 110 of Figures 1-2. For brevity, many of the details discussed with reference to the previous Figures 1-3B may not be discussed or repeated hereafter.
In one embodiment, as illustrated, audio codec 303 may be facilitated by audio routing logic 205 and application/execution logic 209 of smart phone mechanism 110 of Figure 2 to perform redirection of audio streams, such as AUDIO_IN signal 311 and AUDIO_OUT signal 313, relating to an external device call employing an external device 280 (e.g., Bluetooth headset) of Figure 3B. For example, once the external device call is answered, audio codec 303 is triggered by audio routing logic 205 of Figure 2 to disconnect from microphone 241 and speaker 243 and redirect AUDIO_IN signal 311 and AUDIO_OUT signal 313 such that these signals 311, 313 do not reach microphone 241 and speaker 243, since the call is an external device call using an external communication device, such as external device 280 of Figure 2.
As illustrated, in one embodiment, microphone 241 is disconnected at connection segment 361, while speaker 243 is disconnected at connection segment 363. In one embodiment, in the case of an operating system that does not support phone capabilities, the operating system may not be aware of this audio path change, and the device list may be kept with the original playback/recording device map.
Figure 3D illustrates an audio routing 370 at computing devices having operating systems that lack phone-capabilities according to one embodiment. Audio routing 370 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc. ) , software (such as instructions run on a processing device) , or a combination thereof. In one embodiment, audio routing 370 may be performed by smart phone mechanism 110 of Figures 1-2. The processes of audio routing 370 are illustrated in linear sequences for brevity and clarity in presentation; however, it is contemplated that any number of them can be performed in parallel, asynchronously, or in different orders. For brevity, many of the details discussed with reference to the previous Figures 1-3C may not be discussed or repeated hereafter.
For external device calls, in one embodiment, audio routing or redirection 370 is performed between one or more recording devices (also referred to as “record devices” ) 371 and one or more playback devices (also referred to as “play devices” or “playing devices” ) 381, which may be part of one or more computing/communication devices, such as computing devices 100, 270, and/or other communication devices, such as external communication device 280 of Figure 2. For example and in one embodiment, audio from recording devices 371, originating at external device microphone 373 (e.g., Bluetooth headset microphone), is captured in audio buffer 1 391 and redirected to play devices 381 via built-in speaker 243. Similarly, audio from built-in microphone 241 is captured in audio buffer 2 393 and redirected to external device speaker 383. As previously discussed with regard to Figure 3B, using this twice-redirection technique, a successful external device call is ensured even on operating systems that do not support phone calling.
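The two audio buffers above can be modeled as simple frame queues, with the redirection copying frames from each record device to the opposite side's play device. The route map and queue names below are hypothetical labels chosen for illustration, not the described implementation.

```python
from collections import deque

# Illustrative redirection map for an external device call: the headset
# microphone feeds the built-in play device (toward the modem), while the
# built-in microphone feeds the external device speaker.
ROUTES = {"ext_mic": "builtin_play", "builtin_mic": "ext_speaker"}

def pump_once(record_queues, play_queues):
    """Move one audio frame along each redirected route, if one is queued."""
    for src, dst in ROUTES.items():
        if record_queues[src]:
            play_queues[dst].append(record_queues[src].popleft())
```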
Figure 4 illustrates a method 400 for facilitating audio routing at computing devices having operating systems that lack phone-capabilities according to one embodiment. Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc. ) , software (such as instructions run on a processing device) , or a combination thereof. In one embodiment, method 400 may be performed by smart phone mechanism 110 of Figures 1-2. The processes of method 400 are illustrated in linear sequences for brevity and clarity in presentation; however, it is contemplated that any number of them can be performed in parallel, asynchronously, or in different orders. For brevity, many of the details discussed with reference to the previous Figures 1-3D may not be discussed or repeated hereafter.
Method 400 begins with either a calling in at 401 or a calling out at 402 of a phone call at a computing device. In one embodiment, for calling in, i.e., receiving a phone call at 401, the phone call is received by a communication modem/module (e.g., 3G communication modem, 4G communication modem, etc.) at block 403. At block 405, a trigger is sent from the communication modem to an application processor. In another embodiment, for calling out, i.e., placing a phone call at 402, a dial panel is opened at block 407 to make the phone call.
In one embodiment, at block 409, a determination is made as to whether there is an existing phone call coming in or going out. If yes, at block 411, the existing phone call, whether coming in or going out, is continued while the new phone call is rejected. If not, at block 413, an external device (e.g., Bluetooth headset) connection status is detected.
At block 415, in one embodiment, a determination is made as to whether the phone call is a local device call (e.g., local device call in, local device call out) or an external device call (e.g., external device call in, external device call out). If the phone call is determined to be a local device call in, at block 417, a user-defined or other default ring bell is played, which is routed to a built-in speaker at the computing device. Similarly, at block 419, if the phone call is determined to be a local device call out, a sound (e.g., beep) is played and routed to the built-in speaker at the computing device. Further, in one embodiment, if the phone call is determined to be an external device call in, a user-defined or other ring bell is played and routed to an external device AUDIO_OUT unit at block 421. Similarly, at block 423, if the phone call is an external device call out, a sound (e.g., beep) is played and routed to an external device AUDIO_IN unit.
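The four ring/beep routing branches at blocks 417-423 reduce to a small lookup; the function and destination labels below are illustrative names for the units discussed above, not part of the described embodiments.

```python
def ring_route(direction: str, external_device: bool):
    """Pick the tone and its destination for blocks 417-423 (sketch).

    `direction` is "in" for an incoming call (ring bell) and "out" for an
    outgoing call (beep); `external_device` reflects the block 413/415 check.
    """
    tone = "ring_bell" if direction == "in" else "beep"
    if not external_device:
        return tone, "BUILTIN_SPEAKER"          # blocks 417 and 419
    if direction == "in":
        return tone, "EXTERNAL_AUDIO_OUT"       # block 421
    return tone, "EXTERNAL_AUDIO_IN"            # block 423
```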
In one embodiment, at block 425, a determination is made as to whether the phone call is answered. If not, at block 427, the ring bell/beep sound stops playing and the process ends with an exit. If yes, the ring bell/beep sound stops at block 429 and method 400 continues at block 431, where a determination is made as to whether an external device (e.g., Bluetooth headset) is connected. If not, at block 433, the audio codec module at the computing device is triggered to set up an audio channel for the local device call, such as by: a) connecting an on-board speaker, such as speaker 243 of Figure 2, to the communication modem’s AUDIO_IN unit; and/or b) connecting an on-board microphone, such as microphone 241 of Figure 2, to the communication modem’s AUDIO_OUT unit.
Referring back to block 431, if yes, at block 435, the audio codec sets up an audio channel for the external device call, such as by: a) disconnecting, logically, the on-board speaker and microphone from the audio codec; and/or b) mapping, logically, the computing device’s built-in play device and recording device to the communication modem’s AUDIO_OUT unit and AUDIO_IN unit, respectively. In one embodiment, at block 437, the application processor sets up the audio channel for the external device call, such as by: a) connecting the external device’s AUDIO_IN unit to the computing device’s built-in play device, which is redirected to the communication modem’s AUDIO_OUT unit by the audio codec; and/or b) connecting the external device’s AUDIO_OUT unit to the computing system’s built-in recording device, which is redirected to the communication modem’s AUDIO_IN unit by the audio codec.
At block 439, in one embodiment, the phone call is initiated and continued. At block 441, a determination is made as to whether the phone call has ended. If not, the phone call continues at block 439. If yes, at block 443, a determination is made as to whether the external device is connected. If not, in one embodiment, at block 445, the previous audio channel is restored to normal, such as by: a) disconnecting the on-board speaker from the communication modem’s AUDIO_IN unit; and/or b) disconnecting the on-board microphone from the communication modem’s AUDIO_OUT unit. If yes, in one embodiment, at block 447, the previous audio channel is restored to normal, such as by: a) un-mapping the computing device’s built-in play device from the communication modem’s AUDIO_OUT unit and mapping it to the on-board speaker as normal; and/or b) un-mapping the built-in recording device from the communication modem’s AUDIO_IN unit and mapping it to the on-board microphone as normal. At block 449, the phone call ends.
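The channel setup at blocks 433/435 and its restoration at blocks 445/447 form matching connect/disconnect pairs, which can be summarized as follows; all unit names are assumed labels for illustration, and the teardown is modeled simply as the setup list reversed.

```python
def setup_channel(external_device_connected: bool):
    """Connections made when the call is answered (blocks 433/435, sketch)."""
    if not external_device_connected:
        # Local device call: wire the on-board transducers to the modem.
        return [("ONBOARD_SPEAKER", "MODEM_AUDIO_IN"),
                ("ONBOARD_MIC", "MODEM_AUDIO_OUT")]
    # External device call: logically map the built-in play/record devices
    # to the modem while the on-board speaker/mic are disconnected.
    return [("BUILTIN_PLAY_DEVICE", "MODEM_AUDIO_OUT"),
            ("BUILTIN_RECORD_DEVICE", "MODEM_AUDIO_IN")]

def restore_channel(external_device_connected: bool):
    """Undo the same connections when the call ends (blocks 445/447, sketch)."""
    return list(reversed(setup_channel(external_device_connected)))
```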
Figure 5 illustrates an embodiment of a computing system 500 capable of supporting the operations discussed above. Computing system 500 represents a range of computing and electronic devices (wired or wireless) including, for example, desktop computing systems, laptop computing systems, cellular telephones, personal digital assistants (PDAs) including cellular-enabled PDAs, set top boxes, smartphones, tablets, wearable devices, etc. Alternate computing systems may include more, fewer, and/or different components. Computing system 500 may be the same as, similar to, or include computing device 100 described in reference to Figure 1.
Computing system 500 includes bus 505 (or, for example, a link, an interconnect, or another type of communication device or interface to communicate information) and processor 510 coupled to bus 505 that may process information. While computing system 500 is illustrated with a single processor, it may include multiple processors and/or co-processors, such as one or more of central processors, image signal processors, graphics processors, and vision processors, etc. Computing system 500 may further include random access memory (RAM) or other dynamic storage device 520 (referred to as main memory) , coupled to bus 505 and may store information and instructions that may be executed by processor 510. Main memory 520 may also be used to store temporary variables or other intermediate information during execution of instructions by processor 510.
Computing system 500 may also include read only memory (ROM) and/or other storage device 530 coupled to bus 505 that may store static information and instructions for processor 510. Data storage device 540 may be coupled to bus 505 to store information and instructions. Data storage device 540, such as a magnetic disk or optical disc and corresponding drive, may be coupled to computing system 500.
Computing system 500 may also be coupled via bus 505 to display device 550, such as a cathode ray tube (CRT) , liquid crystal display (LCD) or Organic Light Emitting Diode (OLED) array, to display information to a user. User input device 560, including alphanumeric and other keys, may be coupled to bus 505 to communicate information and command selections to processor 510. Another type of user input device 560 is cursor control 570, such as a mouse, a trackball, a touchscreen, a touchpad, or cursor direction keys to communicate direction information and command selections to processor 510 and to control cursor movement on display 550. Camera and microphone arrays 590 of computer system 500 may be coupled to bus 505 to observe gestures, record audio and video and to receive and transmit visual and audio commands.
Computing system 500 may further include network interface (s) 580 to provide access to a network, such as a local area network (LAN) , a wide area network (WAN) , a metropolitan area  network (MAN) , a personal area network (PAN) , Bluetooth, a cloud network, a mobile network (e.g., 3rd Generation (3G) , etc. ) , an intranet, the Internet, etc. Network interface (s) 580 may include, for example, a wireless network interface having antenna 585, which may represent one or more antenna (e) . Network interface (s) 580 may also include, for example, a wired network interface to communicate with remote devices via network cable 587, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.
Network interface (s) 580 may provide access to a LAN, for example, by conforming to IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported.
In addition to, or instead of, communication via the wireless LAN standards, network interface (s) 580 may provide wireless communication using, for example, Time Division, Multiple Access (TDMA) protocols, Global Systems for Mobile Communications (GSM) protocols, Code Division, Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocols.
Network interface (s) 580 may include one or more communication interfaces, such as a modem, a network interface card, or other well-known interface devices, such as those used for coupling to the Ethernet, token ring, or other types of physical wired or wireless attachments for purposes of providing a communication link to support a LAN or a WAN, for example. In this manner, the computer system may also be coupled to a number of peripheral devices, clients, control surfaces, consoles, or servers via a conventional network infrastructure, including an Intranet or the Internet, for example.
It is to be appreciated that a lesser or more equipped system than the example described above may be preferred for certain implementations. Therefore, the configuration of computing system 500 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. Examples of the electronic device or computer system 500 may include without limitation a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC) , a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing  system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, television, digital television, set top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combinations thereof.
Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parentboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC) , and/or a field programmable gate array (FPGA) . The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware.
Embodiments may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments described herein. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories) , and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read Only Memories) , EEPROMs (Electrically Erasable Programmable Read Only Memories) , magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions.
Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection) .
References to “one embodiment” , “an embodiment” , “example embodiment” , “various embodiments” , etc., indicate that the embodiment (s) so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.
In the following description and claims, the term “coupled”, along with its derivatives, may be used. “Coupled” is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.
As used in the claims, unless otherwise specified, the use of the ordinal adjectives “first”, “second”, “third”, etc., to describe a common element merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
Figure 6 illustrates an embodiment of a computing environment 600 capable of supporting the operations discussed above. The modules and systems can be implemented in a variety of different hardware architectures and form factors including that shown in Figure 4.
The Command Execution Module 601 includes a central processing unit to cache and execute commands and to distribute tasks among the other modules and systems shown. It may include an instruction stack, a cache memory to store intermediate and final results, and mass memory to store applications and operating systems. The Command Execution Module may also serve as a central coordination and task allocation unit for the system.
The Screen Rendering Module 621 draws objects on the one or more screens for the user to see. It can be adapted to receive data from the Virtual Object Behavior Module 604, described below, and to render the virtual object and any other objects and forces on the appropriate screen or screens. Thus, the data from the Virtual Object Behavior Module would determine the position and dynamics of the virtual object and associated gestures, forces and objects, for example, and the Screen Rendering Module would depict the virtual object and associated objects and environment on a screen, accordingly. The Screen Rendering Module could further be adapted to receive data from the Adjacent Screen Perspective Module 607, described below, to depict a target landing area for the virtual object if the virtual object could be moved to the display of the device with which the Adjacent Screen Perspective Module is associated. Thus, for example, if the virtual object is being moved from a main screen to an auxiliary screen, the Adjacent Screen Perspective Module could send data to the Screen Rendering Module to suggest, for example in shadow form, one or more target landing areas for the virtual object on that screen, tracking a user’s hand movements or eye movements.
The Object and Gesture Recognition System 622 may be adapted to recognize and track hand and arm gestures of a user. Such a module may be used to recognize hands, fingers, finger gestures, hand movements and a location of hands relative to displays. For example, the Object and Gesture Recognition System could determine that a user made a body part gesture to drop or throw a virtual object onto one or the other of the multiple screens, or that the user made a body part gesture to move the virtual object to a bezel of one or the other of the multiple screens. The Object and Gesture Recognition System may be coupled to a camera or camera array, a microphone or microphone array, a touch screen or touch surface, or a pointing device, or some combination of these items, to detect gestures and commands from the user.
The touch screen or touch surface of the Object and Gesture Recognition System may include a touch screen sensor. Data from the sensor may be fed to hardware, software, firmware or a combination of the same to map the touch gesture of a user’s hand on the screen or surface to a corresponding dynamic behavior of a virtual object. The sensor data may be used with momentum and inertia factors to allow a variety of momentum behavior for a virtual object based on input from the user’s hand, such as a swipe rate of a user’s finger relative to the screen. Pinching gestures may be interpreted as a command to lift a virtual object from the display screen, to begin generating a virtual binding associated with the virtual object, or to zoom in or out on a display. Similar commands may be generated by the Object and Gesture Recognition System using one or more cameras without the benefit of a touch surface.
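By way of a non-limiting illustration, the mapping from sensor data to momentum behavior described above may be sketched as follows; the linear velocity estimate and the per-frame friction constant are illustrative assumptions, not a required implementation:

```python
# Hypothetical sketch: estimating swipe velocity from successive touch samples
# and applying an inertia/friction factor so a flicked virtual object coasts
# to rest over subsequent frames. All constants are illustrative assumptions.

def swipe_to_velocity(positions, timestamps):
    """Estimate swipe velocity (pixels/second) from successive touch samples."""
    if len(positions) < 2:
        return 0.0
    dx = positions[-1] - positions[0]
    dt = timestamps[-1] - timestamps[0]
    return dx / dt if dt > 0 else 0.0

def coast(velocity, friction=2.0, dt=0.016):
    """Apply a per-frame friction factor; repeated calls decay the velocity."""
    return velocity * max(0.0, 1.0 - friction * dt)
```

A renderer could call `coast` once per frame until the velocity falls below a threshold, yielding the momentum behavior described above.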
The Direction of Attention Module 623 may be equipped with cameras or other sensors to track the position or orientation of a user's face or hands. When a gesture or voice command is issued, the system can determine the appropriate screen for the gesture. In one example, a camera is mounted near each display to detect whether the user is facing that display. If so, then the Direction of Attention Module information is provided to the Object and Gesture Recognition System 622 to ensure that the gestures or commands are associated with the appropriate library for the active display. Similarly, if the user is looking away from all of the screens, then commands can be ignored.
The Device Proximity Detection Module 625 can use proximity sensors, compasses, GPS (global positioning system) receivers, personal area network radios, and other types of sensors, together with triangulation and other techniques to determine the proximity of other devices. Once a nearby device is detected, it can be registered to the system and its type can be determined as an input device or a display device or both. For an input device, received data may then be applied to the Object and Gesture Recognition System 622. For a display device, it may be considered by the Adjacent Screen Perspective Module 607.
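The registration step described above may be sketched, purely by way of example, as a capability-based registry; the class and method names below are illustrative assumptions rather than the module's actual design:

```python
# Hypothetical sketch: once the Device Proximity Detection Module detects a
# nearby device, it is registered under each capability it reports, so input
# data can be routed to gesture recognition and displays to the
# adjacent-screen handling described below.

class DeviceRegistry:
    def __init__(self):
        self.input_devices = []    # consumed by gesture recognition
        self.display_devices = []  # considered by adjacent-screen logic

    def register(self, name, is_input=False, is_display=False):
        """Record a detected device under each capability it reports."""
        if is_input:
            self.input_devices.append(name)
        if is_display:
            self.display_devices.append(name)
        return is_input or is_display
```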
The Virtual Object Behavior Module 604 is adapted to receive input from the Object and Velocity and Direction Module 603, and to apply such input to a virtual object being shown in the display. Thus, for example, the Object and Gesture Recognition System would interpret a user gesture by mapping the captured movements of a user’s hand to recognized movements, the Virtual Object Tracker Module would associate the virtual object's position and movements with the movements recognized by the Object and Gesture Recognition System, the Object and Velocity and Direction Module would capture the dynamics of the virtual object's movements, and the Virtual Object Behavior Module would receive the input from the Object and Velocity and Direction Module to generate data directing the movements of the virtual object to correspond to that input.
The Virtual Object Tracker Module 606 on the other hand may be adapted to track where a virtual object should be located in three-dimensional space in a vicinity of a display, and which body part of the user is holding the virtual object, based on input from the Object and Gesture Recognition Module. The Virtual Object Tracker Module 606 may for example track a virtual object as it moves across and between screens and track which body part of the user is holding that virtual object. Tracking the body part that is holding the virtual object allows a continuous awareness of the body part’s air movements, and thus an eventual awareness as to whether the virtual object has been released onto one or more screens.
The Gesture to View and Screen Synchronization Module 608 receives the selection of the view and screen or both from the Direction of Attention Module 623 and, in some cases, voice commands to determine which view is the active view and which screen is the active screen. It then causes the relevant gesture library to be loaded for the Object and Gesture Recognition System 622. Various views of an application on one or more screens can be associated with alternative gesture libraries or a set of gesture templates for a given view. As an example, in Figure 1A a pinch-release gesture launches a torpedo, but in Figure 1B the same gesture launches a depth charge.
The Adjacent Screen Perspective Module 607, which may include or be coupled to the Device Proximity Detection Module 625, may be adapted to determine an angle and position of one display relative to another display. A projected display includes, for example, an image projected onto a wall or screen. The ability to detect a proximity of a nearby screen and a corresponding angle or orientation of a display projected therefrom may for example be accomplished with either an infrared emitter and receiver, or electromagnetic or photo-detection sensing capability. For technologies that allow projected displays with touch input, the incoming video can be analyzed to determine the position of a projected display and to correct for the distortion caused by displaying at an angle. An accelerometer, magnetometer, compass, or camera can be used to determine the angle at which a device is being held while infrared emitters and cameras could allow the orientation of the screen device to be determined in relation to the sensors on an adjacent device. The Adjacent Screen Perspective Module 607 may, in this way, determine coordinates of an adjacent screen relative to its own screen coordinates. Thus, the Adjacent Screen Perspective Module may determine which devices are in proximity to each other, and further potential targets for moving one or more virtual objects across screens. The Adjacent Screen Perspective Module may further allow the position of the screens to be correlated to a model of three-dimensional space representing all of the existing objects and virtual objects.
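The relative-coordinate determination described above may be sketched, by way of a non-limiting example, as a planar translation and rotation; the function and parameter names are illustrative assumptions:

```python
# Hypothetical sketch: expressing a point from this screen's coordinate frame
# in an adjacent screen's frame, given the neighbor's offset and relative
# rotation as determined by the Adjacent Screen Perspective Module.
import math

def to_adjacent_coords(x, y, offset_x, offset_y, angle_deg):
    """Translate then rotate a 2D point into the adjacent screen's frame."""
    dx, dy = x - offset_x, y - offset_y
    a = math.radians(-angle_deg)  # undo the neighbor's rotation
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))
```

With coordinates expressed in the neighbor's frame, a target landing area for a moving virtual object can be positioned on that screen.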
The Object and Velocity and Direction Module 603 may be adapted to estimate the dynamics of a virtual object being moved, such as its trajectory, velocity (whether linear or angular), momentum (whether linear or angular), etc. by receiving input from the Virtual Object Tracker Module. The Object and Velocity and Direction Module may further be adapted to estimate dynamics of any physics forces, by for example estimating the acceleration, deflection, degree of stretching of a virtual binding, etc. and the dynamic behavior of a virtual object once released by a user’s body part. The Object and Velocity and Direction Module may also use image motion, size and angle changes to estimate the velocity of objects, such as the velocity of hands and fingers.
The Momentum and Inertia Module 602 can use image motion, image size, and angle changes of objects in the image plane or in a three-dimensional space to estimate the velocity and direction of objects in the space or on a display. The Momentum and Inertia Module is coupled to the Object and Gesture Recognition System 622 to estimate the velocity of gestures performed by hands, fingers, and other body parts and then to apply those estimates to determine momentum and velocities for virtual objects that are to be affected by the gesture.
The 3D Image Interaction and Effects Module 605 tracks user interaction with 3D images that appear to extend out of one or more screens. The influence of objects in the z-axis (towards and away from the plane of the screen) can be calculated together with the relative influence of these objects upon each other. For example, an object thrown by a user gesture can be influenced by 3D objects in the foreground before the virtual object arrives at the plane of the screen. These objects may change the direction or velocity of the projectile or destroy it entirely. The object can be rendered by the 3D Image Interaction and Effects Module in the foreground on one or more of the displays.
The following clauses and/or examples pertain to further embodiments or examples. Specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined with some features included and others excluded to suit a variety of different applications. Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method, or of an apparatus or system for facilitating hybrid communication according to embodiments and examples described herein.
Some embodiments pertain to Example 1 that includes an apparatus to facilitate smart voice routing for phone calls using incompatible operating systems at computing devices, comprising: detection/reception logic to detect a phone call at the apparatus; evaluation/classification logic to evaluate the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and audio routing logic to determine routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
Example 2 includes the subject matter of Example 1, wherein the local call is based on one or more local input/output (I/O) devices coupled with the apparatus, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
Example 3 includes the subject matter of Example 1 or 2, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
Example 4 includes the subject matter of Example 1, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
Example 5 includes the subject matter of Example 1 or 4, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
Example 6 includes the subject matter of Example 1, further comprising verification logic to verify the external audio device being in communication with the apparatus, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated, wherein the detection/reception logic is further to detect the external audio device over a network including a proximity network.
Example 7 includes the subject matter of Example 1, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
Example 8 includes the subject matter of Example 1, further comprising: application/execution logic to execute at least one of the first routing and the second routing as determined by the audio routing logic; and communication/compatibility logic to establish communication with the external audio device and one or more computing devices as facilitated by the phone call.
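By way of a non-limiting illustration, the classification and routing determination of Examples 1-8 may be sketched as follows; the class, field, and function names are assumptions for illustration and are not the claimed logic itself:

```python
# Hypothetical sketch: a call is classified as an external call only when an
# external audio device (e.g., a Bluetooth headset) is both connected and
# active; otherwise it is classified as a local call, and audio is routed
# accordingly, switching off local I/O for external calls.

from dataclasses import dataclass

@dataclass
class ExternalAudioDevice:
    name: str
    connected: bool = False
    active: bool = False

def classify_call(device: ExternalAudioDevice) -> str:
    """Evaluation/classification step: 'external' or 'local'."""
    return "external" if device.connected and device.active else "local"

def route_audio(device: ExternalAudioDevice) -> dict:
    """Audio routing step: select microphone/speaker and local I/O state."""
    if classify_call(device) == "external":
        return {"mic": "external", "speaker": "external", "local_io": "off"}
    return {"mic": "local", "speaker": "local", "local_io": "on"}
```

Under these assumptions, disconnecting or deactivating the headset falls back to the local microphone and speaker, mirroring the verification logic of Example 6.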
Some embodiments pertain to Example 9 that includes a method for facilitating smart voice routing for phone calls using incompatible operating systems at computing devices, comprising: detecting a phone call at a computing device; evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and determining routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
Example 10 includes the subject matter of Example 9, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
Example 11 includes the subject matter of Example 9 or 10, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
Example 12 includes the subject matter of Example 9, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
Example 13 includes the subject matter of Example 9 or 12, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
Example 14 includes the subject matter of Example 9, further comprising verifying the external audio device being in communication with the computing device, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated, wherein detecting the phone call further includes detecting the external audio device over a network including a proximity network.
Example 15 includes the subject matter of Example 9, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
Example 16 includes the subject matter of Example 9, further comprising: executing at least one of the first routing and the second routing as determined by the routing of the audio; and establishing communication with the external audio device and one or more computing devices as facilitated by the phone call.
Some embodiments pertain to Example 17 that includes a system comprising a storage device having instructions, and a processor to execute the instructions to facilitate a mechanism to perform one or more operations comprising: detecting a phone call at a computing device; evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and determining routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
Example 18 includes the subject matter of Example 17, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
Example 19 includes the subject matter of Example 17 or 18, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
Example 20 includes the subject matter of Example 17, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
Example 21 includes the subject matter of Example 17 or 20, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
Example 22 includes the subject matter of Example 17, wherein the one or more operations comprise verifying the external audio device being in communication with the computing device, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated, wherein detecting the phone call further includes detecting the external audio device over a network including a proximity network.
Example 23 includes the subject matter of Example 17, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
Example 24 includes the subject matter of Example 17, wherein the one or more operations comprise: executing at least one of the first routing and the second routing as determined by the routing of the audio; and establishing communication with the external audio device and one or more computing devices as facilitated by the phone call.
Some embodiments pertain to Example 25 that includes an apparatus comprising: means for detecting a phone call at a computing device; means for evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and means for determining routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
Example 26 includes the subject matter of Example 25, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
Example 27 includes the subject matter of Example 25 or 26, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
Example 28 includes the subject matter of Example 25, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
Example 29 includes the subject matter of Example 25 or 28, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
Example 30 includes the subject matter of Example 25, further comprising means for verifying the external audio device being in communication with the computing device, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated, wherein detecting the phone call further includes detecting the external audio device over a network including a proximity network.
Example 31 includes the subject matter of Example 25, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker, wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
Example 32 includes the subject matter of Example 25, further comprising: means for executing at least one of the first routing and the second routing as determined by the routing of the audio; and means for establishing communication with the external audio device and one or more computing devices as facilitated by the phone call.
Example 33 includes at least one non-transitory or tangible machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as claimed in any of claims or examples 9-16.
Example 34 includes at least one machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as claimed in any of claims or examples 9-16.
Example 35 includes a system comprising a mechanism to implement or perform a method as claimed in any of claims or examples 9-16.
Example 36 includes an apparatus comprising means for performing a method as claimed in any of claims or examples 9-16.
Example 37 includes a computing device arranged to implement or perform a method as claimed in any of claims or examples 9-16.
Example 38 includes a communications device arranged to implement or perform a method as claimed in any of claims or examples 9-16.
Example 39 includes at least one machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method or realize an apparatus as claimed in any preceding claims.
Example 40 includes at least one non-transitory or tangible machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method or realize an apparatus as claimed in any preceding claims.
Example 41 includes a system comprising a mechanism to implement or perform a method or realize an apparatus as claimed in any preceding claims.
Example 42 includes an apparatus comprising means to perform a method as claimed in any preceding claims.
Example 43 includes a computing device arranged to implement or perform a method or realize an apparatus as claimed in any preceding claims.
Example 44 includes a communications device arranged to implement or perform a method or realize an apparatus as claimed in any preceding claims.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.

Claims (21)

  1. An apparatus to facilitate smart voice routing for phone calls using incompatible operating systems at computing devices, comprising:
    detection/reception logic to detect a phone call at the apparatus;
    evaluation/classification logic to evaluate the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and
    audio routing logic to determine routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
  2. The apparatus of claim 1, wherein the local call is based on one or more local input/output (I/O) devices coupled with the apparatus, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
  3. The apparatus of claim 1 or 2, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
  4. The apparatus of claim 1, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
  5. The apparatus of claim 1 or 4, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
  6. The apparatus of claim 1, further comprising verification logic to verify the external audio device being in communication with the apparatus, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated,
    wherein the detection/reception logic is further to detect the external audio device over a network including a proximity network.
  7. The apparatus of claim 1, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker,
    wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
  8. The apparatus of claim 1, further comprising:
    application/execution logic to execute at least one of the first routing and the second routing as determined by the audio routing logic; and
    communication/compatibility logic to establish communication with the external audio device and one or more computing devices as facilitated by the phone call.
  9. A method for facilitating smart voice routing for phone calls using incompatible operating systems at computing devices, comprising:
    detecting a phone call at a computing device;
    evaluating the phone call including at least one of a local device-based phone call classified as a local call and an external device-based phone call classified as an external call; and
    determining routing of audio associated with the phone call, wherein routing includes at least one of a first routing of a first audio associated with the phone call being the local call and a second routing of a second audio associated with the phone call being the external call.
  10. The method of claim 9, wherein the local call is based on one or more local input/output (I/O) devices coupled with the computing device, wherein the one or more local I/O devices include at least one of a local microphone and a local speaker.
  11. The method of claim 10, wherein the local call comprises at least one of an incoming local call and an outgoing local call.
  12. The method of claim 9, wherein the external call is based on one or more external I/O devices coupled with an external audio device, wherein the one or more external I/O devices include at least one of an external microphone and an external speaker, wherein the external audio device includes a Bluetooth headset.
  13. The method of claim 12, wherein the external call comprises at least one of an incoming external call and an outgoing external call.
  14. The method of claim 9, further comprising verifying the external audio device being in communication with the computing device, wherein the local call is facilitated if the external audio device is disconnected or inactivated, and wherein the external call is facilitated if the external audio device is connected and activated,
    wherein detecting the phone call further includes detecting the external audio device over a network including a proximity network.
  15. The method of claim 9, wherein the first routing of the first audio comprises directing the first audio to or from at least one of the local microphone and the local speaker,
    wherein the second routing of the second audio comprises directing the second audio to or from at least one of the external microphone and the external speaker, wherein the second routing further comprises switching off at least one of the local microphone and the local speaker.
  16. The method of claim 9, further comprising:
    executing at least one of the first routing and the second routing as determined by the routing of the audio; and
    establishing communication with the external audio device and one or more computing devices as facilitated by the phone call.
  17. At least one machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as claimed in any of claims 9-16.
  18. A system comprising a mechanism to implement or perform a method as claimed in any of claims 9-16.
  19. An apparatus comprising means for performing a method as claimed in any of claims 9-16.
  20. A computing device arranged to implement or perform a method as claimed in any of claims 9-16.
  21. A communications device arranged to implement or perform a method as claimed in any of claims 9-16.
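The routing logic recited in method claims 9, 14, and 15 can be sketched in code. The following is a purely illustrative sketch, not the claimed implementation; the names `ExternalAudioDevice`, `classify_call`, and `route_audio` are invented here for illustration and do not appear in the application:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CallType(Enum):
    LOCAL = "local"        # claim 10: uses the local microphone/speaker
    EXTERNAL = "external"  # claim 12: uses an external device, e.g. a Bluetooth headset

@dataclass
class ExternalAudioDevice:
    connected: bool
    activated: bool

def classify_call(device: Optional[ExternalAudioDevice]) -> CallType:
    # Claim 14: the external call is facilitated only when the external
    # audio device is connected and activated; if it is disconnected or
    # inactivated, fall back to the local call.
    if device is not None and device.connected and device.activated:
        return CallType.EXTERNAL
    return CallType.LOCAL

def route_audio(call_type: CallType) -> dict:
    # Claim 15: the first routing directs audio to/from the local
    # microphone and speaker; the second routing directs audio to/from
    # the external microphone and speaker and switches off the local I/O.
    if call_type is CallType.LOCAL:
        return {"microphone": "local", "speaker": "local", "local_io_on": True}
    return {"microphone": "external", "speaker": "external", "local_io_on": False}
```

For example, a connected and activated headset yields an external call whose routing disables the local I/O devices, while a missing or inactive headset yields a local call routed through the device's own microphone and speaker.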
PCT/CN2015/090680 2015-09-25 2015-09-25 Facilitating smart voice routing for phone calls using incompatible operating systems at computing devices WO2017049574A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/090680 WO2017049574A1 (en) 2015-09-25 2015-09-25 Facilitating smart voice routing for phone calls using incompatible operating systems at computing devices

Publications (1)

Publication Number Publication Date
WO2017049574A1 true WO2017049574A1 (en) 2017-03-30

Family

ID=58385760


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005252809A (en) * 2004-03-05 2005-09-15 Nec Access Technica Ltd Premise system for ip phone and speech transfer method
CN102404662A (en) * 2011-12-19 2012-04-04 美律电子(深圳)有限公司 Earphone with double transmission interfaces
CN103458084A (en) * 2013-09-23 2013-12-18 廖大鸿 Mobile phone auxiliary device
CN104038625A (en) * 2013-03-05 2014-09-10 索尼移动通讯有限公司 Automatic routing of call audio at incoming call
US20150011264A1 (en) * 2013-07-02 2015-01-08 Nxp B.V. Mobile device

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 15904450; country of ref document: EP; kind code of ref document: A1)
NENP  Non-entry into the national phase (ref country code: DE)
122   EP: PCT application non-entry in European phase (ref document number: 15904450; country of ref document: EP; kind code of ref document: A1)