GB2320657A - Wireless audio and video conferencing and telephony - Google Patents

Wireless audio and video conferencing and telephony

Info

Publication number
GB2320657A
GB2320657A GB9724889A
Authority
GB
United Kingdom
Prior art keywords
video
signal
wireless
radio frequency
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9724889A
Other versions
GB9724889D0 (en)
Inventor
Timothy John Burke
Nancy Gamburd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Motorola Solutions Inc
Original Assignee
Motorola Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc
Publication of GB9724889D0
Publication of GB2320657A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/148 Interfacing a video terminal to a particular transmission medium, e.g. ISDN
    • H04N7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N2007/145 Handheld terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephonic Communication Services (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A wireless video access apparatus 100 and wireless videophone 120 provide audio and video teleconferencing and telephony via a communication channel 103 to network 140 such as a PSTN or ISDN. A signal received by the network interface 210 from the network 140 is converted into an audio signal and a baseband video signal. The audio and video signals are wireline or RF transmitted to one or more video displays 225 or televisions. Videophone 120 receives the video signal from video transponder 115 and the audio signal from wireless base station 110. A video camera or camcorder 230 generates an outgoing video signal and audio signal. The processor 190 performs compression, decompression and protocol encoding and decoding. The telephone 295 or wireless videophone 120 may be used for call set-up or audio input and output. The video camera output may be displayed on the video displays 225 or wireless videophone in a loop-back mode of operation.

Description

APPARATUS, METHOD AND SYSTEM FOR WIRELESS AUDIO AND VIDEO
CONFERENCING AND TELEPHONY
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to the following United States Patent Applications (collectively referred to as the "related applications"), each incorporated by reference herein, with priority claimed for all commonly disclosed subject matter:
Newlin et al., United States Patent Application Serial No. 08/658,792, filed June 5, 1996, entitled "Audio/Visual Communication System and Method Thereof", Motorola Docket No. PD05634AM (the "first related application");
Burke et al., United States Patent Application Serial No. 08/706,100, filed August 30, 1996, entitled "Apparatus, Method And System For Audio And Video Conferencing And Telephony", Motorola Docket No. PD05686AM (the "second related application");
Burke et al., United States Patent Application Serial No. 08/715,887, filed September 18, 1996, entitled "Videophone Apparatus, Method And System For Audio And Video Conferencing And Telephony", Motorola Docket No. PD05689AM (the "third related application");
Burke et al., United States Patent Application Serial No. 08/725,602, filed October 3, 1996, entitled "Apparatus, Method And System For Wireline Audio And Video Conferencing And Telephony", Motorola Docket No. PD05703AM (the "fourth related application");
and Burke et al., United States Patent Application Serial No. 08/726,329, filed October 3, 1996, entitled "Videophone Apparatus, Method And System For Wireline Audio And Video Conferencing And Telephony", Motorola Docket No. PD05725AM (the "fifth related application");
and Newlin et al., United States Patent Application Serial No. 08/735,295, filed October 22, 1996, entitled "Apparatus, Method And System For Multimedia Control And Communication", Motorola Docket No. PD05688AM (the "sixth related application").
FIELD OF THE INVENTION
This invention relates in general to audio and video communications and, more specifically, to an apparatus, method and system for wireless audio and video conferencing and telephony.
BACKGROUND OF THE INVENTION
Currently, audio and video (visual) conferencing capabilities are implemented as computer based systems, such as in personal computers ("PCs"), as stand-alone, "roll about" room systems, and as videophones. These systems typically require new and significant hardware, software, and programming, and may also require significant communications network connections, for example, multiple channels ("DS0s") of an Integrated Services Digital Network ("ISDN") connection or a T1/E1 connection.
For example, stand-alone, "roll about" room systems for audio and video conferencing typically require dedicated hardware at significant expense, in the tens of thousands of dollars, utilizing dedicated video cameras, television or video displays, microphone systems, and the additional video conferencing equipment. Such systems may also require as many as six (or more) contiguous ISDN B channels (or T1/E1 DS0s), each operating at 64 kbps (kilobits per second). Such communication network capability is also expensive and potentially unnecessary, particularly when the additional channels are not in continuous use.
Current audio/visual telephony or conferencing systems are also limited to providing such audio/visual functionality only at designated nodes, i.e., the specific system location, and are neither mobile nor distributed (having multiple locations). Stand-alone, "roll about" room systems allow such audio and video conferencing only within or at that particular physical location. Videophones are also currently limited to their installed locations. Similarly, PC based systems provide such functionality only at the given PC having the necessary network connections (such as ISDN) and having the specified audio/visual conferencing equipment, such as a video camera, microphone, and the additional computer processing boards which provide for the audio/visual processing. For other PCs to become capable of such audio/visual conferencing functionality, they must also be equipped with any necessary hardware, software, programming and network connections.
Such conventional audio/visual conferencing systems are also difficult to assemble, install, and use. For example, the addition of audio/visual functionality to a PC requires the addition of a new PC card, camera, microphone, the installation of audio/visual control software, and the installation of new network connections, such as ISDN. PC based systems typically require, at a minimum, ISDN basic rate interface service, consisting of 2 ISDN B channels (each operating at 64 kbps) plus one D channel (operating at 16 kbps). In addition, such network connectivity may require additional programming of the PC with necessary ISDN specific configuration information, such as configuration information specific to the central office switch type of the service provider and ISDN service profile identifier (SPID) information. Video conference call set up procedures typically are also difficult and complicated utilizing these current systems.
Conventional audio/visual telephony and conferencing equipment is also limited to communication with similar equipment at the far end (remote location). For example, videophone systems which utilize typical telephone systems ("POTS" (plain old telephone service)) transmit information in analog form, for example, as trellis code modulated data, at V.34 and V.34bis rates (e.g., highest data rates of approximately 28.8 to 33.6 kbps). Such POTS-based videophone systems would not be compatible with ISDN audio/visual conferencing and telephony systems which transmit information in digital form, such as utilizing Q.931 message signaling, Q.921 LAPD datalink, and Q.910 physical interface digital protocols, with data rates of 128 kbps (two B channels) or more (with additional channels or DS0s).
In addition, such current audio/visual telephony and conferencing equipment are relatively expensive and, in most instances, sufficiently expensive to be prohibitive for in-home or other consumer use. For example, the cost of roll about, room based systems is typically tens of thousands of dollars. PC based videoconferencing systems are also expensive, with costs in the thousands of dollars.
Current audio/visual telephony and conferencing equipment also does not provide for multiple, simultaneous video conferences from more than one location, and also is not portable. In addition, current systems (such as those in PCs) do not provide for multiplexed video conference sessions, in which the output video may include display of video input from several video cameras at multiple locations.
Accordingly, a need has remained for audio/visual conferencing and telephony systems, equipment, and methods which may operate at more than one designated node or location within the user premises, or may be portable or mobile, or may be configured as needed for additional locations. Such a system should be compatible for use with other existing video conferencing systems, should be user friendly, easy to install and use, and should be relatively less expensive for in-home purchase and use by consumers. In addition, such a system should be able to provide multiple video conferencing sessions which may originate from multiple locations.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating a first embodiment of a wireless video access apparatus and a first embodiment of a wireless video conferencing system in accordance with the present invention.
FIG. 2 is a detailed block diagram illustrating a second embodiment of a wireless video access apparatus and a second embodiment of a wireless video conferencing system in accordance with the present invention.
FIG. 3 is a detailed block diagram illustrating a third embodiment of a wireless video access apparatus and a third embodiment of a wireless video conferencing system in accordance with the present invention.
FIG. 4A is a block diagram illustrating a network interface, for a cable network, of a preferred apparatus embodiment in accordance with the present invention.
FIG. 4B is a block diagram illustrating a CATV RF transceiver for a network interface, for a cable network, of a preferred apparatus embodiment in accordance with the present invention.
FIG. 5A is a block diagram illustrating a network interface, for a wireline network, of a preferred apparatus embodiment in accordance with the present invention.
FIG. 5B is a block diagram illustrating an embodiment of an ISDN portion of a wireline network interface utilizing an ISDN S/T interface.
FIG. 5C is a block diagram illustrating an embodiment of an ISDN portion of a wireline network interface utilizing an ISDN U interface.
FIG. 6 is a block diagram illustrating a microprocessor subsystem of the preferred apparatus embodiment in accordance with the present invention.
FIG. 7 is a block diagram illustrating an audio/video compression and decompression subsystem of the preferred apparatus embodiment in accordance with the present invention.
FIG. 8 is a block diagram illustrating a user audio interface of the preferred apparatus embodiment in accordance with the present invention.
FIG. 9 is a block diagram illustrating an RF modulator of the preferred apparatus embodiment in accordance with the present invention.
FIG. 10 is a block diagram illustrating an RF demodulator of the preferred apparatus embodiment in accordance with the present invention.
FIG. 11 is a block diagram illustrating a camera interface of the preferred apparatus embodiment in accordance with the present invention.
FIG. 12 is a block diagram illustrating a radio frequency video transponder of a preferred apparatus embodiment in accordance with the present invention.
FIG. 13 is a block diagram illustrating an infrared video transponder of a preferred apparatus embodiment in accordance with the present invention.
FIG. 14 is a block diagram illustrating a radio frequency wireless videophone of a preferred apparatus embodiment in accordance with the present invention.
FIG. 15 is a block diagram illustrating an infrared wireless videophone of a preferred apparatus embodiment in accordance with the present invention.
FIG. 16 is a block diagram illustrating a wireless telephone base station of a preferred apparatus embodiment in accordance with the present invention.
FIG. 17 is a block diagram illustrating a wireless telephone audio transceiver of a preferred apparatus embodiment in accordance with the present invention.
FIG. 18 is a flow diagram illustrating the method of the preferred embodiment in accordance with the present invention.
FIG. 19 is a flow diagram illustrating the telephony and video conference control methodology of the preferred embodiment in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
As mentioned above, a need has remained for audio/visual conferencing and telephony systems, apparatus, and methods which may operate at more than one designated node or location within user premises, or may be portable or mobile, or may be configured as needed for additional locations. As illustrated in FIGs. 1 through 19 discussed below, the preferred embodiment of the invention provides for such wireless audio and visual conferencing and telephony capability at one or more locations within the user premises, may be portable or mobile, and may be configured as needed for additional locations. In addition, in accordance with the preferred embodiment, the audio/visual conferencing and telephony system utilizes equipment typically found in consumers' homes or premises, such as existing televisions, video cameras or camcorders, and telephones. In addition, such a system is designed to be compatible for use with other existing video conferencing systems, may be utilized over a variety of connected telecommunications networks (such as ISDN or POTS), is user friendly, easy to install and use, and should be relatively less expensive for in-home purchase and use by consumers.
FIG. 1 is a block diagram illustrating a first embodiment of a wireless video access apparatus 101 and a first embodiment of a wireless video conferencing system 100 in accordance with the present invention. The wireless video conferencing system 100 includes the wireless video access apparatus 101, a wireless telephone base station 110, a camera 230, a camera interface 235, and a video transponder 115. The wireless video conferencing system 100 may also include as an option one or more wireless videophone apparatuses 120, one or more telephones 295 and one or more video displays 225. The wireless video access apparatus 101 illustrated in FIG. 1 may also have a second and preferred embodiment as wireless video access apparatus 201 illustrated in FIG. 2, or a third embodiment as wireless video access apparatus 301 illustrated in FIG. 3, and as a consequence, as used herein, reference to any of the embodiments of the wireless video access apparatuses 101, 201 or 301 shall be
understood to mean and include the other apparatus embodiments or their equivalents.
Referring to FIG. 1, in accordance with the invention, the wireless video access apparatus 101 provides audio and video telephony and conferencing services over a first communication channel 103 which may be wireline, such as one or more twisted pair wires, coaxial cable, or hybrid fiber coaxial cable. Also in the preferred embodiment, the first communication channel 103 may be utilized for both digital and analog communications, such as ISDN and ordinary telephony commonly known as POTS. As discussed in the related applications, the first communication channel 103, in turn, is connected through a local digital (or analog) switch (not illustrated) or through a primary station (not illustrated) to a telecommunications network ("network") 140. The network 140, for example, may be a public switched telephone network ("PSTN") or an Integrated Services Digital Network ("ISDN"), a cable services network, or any combination of such existing or future telecommunications networks.
As discussed in greater detail below, depending upon the type of network 140 and corresponding type of network interface 210 utilized in the wireless video access apparatus 101, the wireless video access apparatus 101 of the present invention may be directly coupleable (through a local digital or analog switch of a network provider central office) to a network 140 such as ISDN or PSTN. As a consequence, that configuration of a wireless video access apparatus 101 may be utilized with currently existing telecommunications infrastructure, such as ISDN or PSTN. In contrast, for cable network connection, as disclosed in the second and third related applications, the wireless video access apparatus may communicate with an intervening primary station which then provides access both to a cable video services infrastructure and to a network, such as ISDN or PSTN, utilizing a protocol such as CACS (Cable Access Signaling) over a communication channel 103 (such as a preferred hybrid fiber coaxial cable). While use of CACS and the system disclosed in the second and third related applications have certain advantages, such as very high speed, low error rate, asynchronous packet data transfer with very high data throughput, utilizing on demand channel assignment, direct network connectivity was precluded. As a consequence, depending upon the desired implementation, direct network connectivity may be provided, such that the wireless video access apparatus 101 of the present invention also may be utilized, for video conferencing and telephony, directly with currently existing telecommunications network infrastructure, such as ISDN or PSTN, without further infrastructure requirements.
Continuing to refer to FIG. 1, the wireless video access apparatus 101 is coupleable to a first communication channel 103, for communication with a network 140, and is coupled to a second communication channel 227, typically located within or about the user (or subscriber) premises. For example, the second communication channel 227 may be an internal 75 Ohm coaxial cable typically utilized with cable television, or may be another form of communication channel, such as twisted pair or other wireline, wireless, or PLC (power line carrier, over existing premise AC power lines). A wireless telephone base station 110, and as an option one or more telephones 295, are connected to the wireless video access apparatus via a user interface 215. A wireless videophone apparatus 120, also referred to as a wireless videophone 120, and, as an option, one or more video displays 225, may be utilized to display the incoming video portion of an audio and video conferencing call or session (incoming in the sense of having been transmitted to the wireless video access apparatus 101 from another location), and preferably also includes a speaker for output of the incoming audio portion of an audio and video conferencing call or session. The video camera 230 is utilized to generate the outgoing video portion of an audio and video conferencing call or session (outgoing in the sense of being transmitted from the wireless video access apparatus 101 to another location), may also include a microphone for generation of the outgoing audio portion of an audio and video conferencing call or session, and is implemented utilizing an ordinary video camera or camcorder in the preferred embodiment. The camera interface 235 is utilized to modulate the video output signal from the video camera 230 for transmission on the second communication channel 227 to the wireless video access apparatus 101 and, as discussed in greater detail below, the camera interface 235 also may be directly incorporated within the video camera 230.
Continuing to refer to FIG. 1, the wireless video access apparatus 101 includes a network interface 210 (which may be for wireline or cable networks), a radio frequency (RF) modulator and demodulator 205 (also referred to as an RF modulator/demodulator 205), a user interface 215, and a processor arrangement 190. The network interface 210 is coupleable to the first communication channel 103 for reception of a first protocol signal from the network 140, to form a received protocol signal, and for transmission of a second protocol signal to the network 140, to form a transmitted protocol signal. These first and second protocol signals may have multiple layers and types of protocol encoding and modulation. First, such first and second protocol signals preferably include audio/video compression (and decompression) encoding (and decoding), preferably utilizing the International Telecommunications Union (ITU) H.32x series or family of protocols, such as H.320 utilized with digital services (ISDN), H.324 utilized with analog services (PSTN), H.323 utilized with LANs (local area networks), other H.32x protocols (such as H.321 and H.322), and other ITU protocols pertaining to audio/video and other data communication. In addition, in the preferred embodiment, additional protocol layers are employed, involving further encoding/decoding and/or modulation/demodulation of an H.32x encoded audio/video signal. In the preferred embodiment, for ISDN transmission and reception, ISDN protocols are utilized for encoding, decoding, framing, etc., of an H.32x encoded audio/video signal, utilizing, for example, Q.931 message signaling, Q.921 LAPD data link, and Q.910 physical layer (interface) digital protocols. Also in the preferred embodiment, for PSTN (POTS) transmission and reception, an H.32x encoded audio/video signal is further protocol encoded/decoded and modulated/demodulated utilizing the ITU V.x family or series of analog transmission protocols, such as V.34, V.34bis, or potential or proposed higher data rate analog protocols. For example, for an analog POTS transmission, the audio/video data may be compressed and formatted utilizing ITU H.323 or H.324 protocols, then further encoded and modulated utilizing ITU V.34 or V.34bis protocols.
For cable network transmission, the audio/video data may be compressed and formatted utilizing ITU protocols, and then further encoded and modulated utilizing the CACS protocol, as disclosed in the related applications. As discussed in greater detail below with reference to FIGs. 3 and 4, the network interface 210 is utilized to transmit and receive analog or digital video and audio information and data (generally referred to as data), in any given format, protocol, or modulation scheme compatible with the network 140 and any particular network connections. For example, when coupled to an ISDN via the first communication channel 103, the wireline network interface 210 will transmit and receive data in accordance with the ISDN series of protocols, such as the Q.x series.
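To make the layering concrete, the following sketch pairs each network type named above with the compression layer and transport protocols this description associates with it. It is an illustrative summary only; the enum, dataclass, and select_protocol_stack names are hypothetical and are not part of the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class NetworkType(Enum):
    ISDN = auto()   # digital wireline service
    POTS = auto()   # analog PSTN service
    CABLE = auto()  # hybrid fiber coax reached through a primary station

@dataclass
class ProtocolStack:
    av_codec: str       # ITU H.32x audio/video compression layer
    transport: tuple    # further encoding / modulation layers

def select_protocol_stack(network: NetworkType) -> ProtocolStack:
    """Map a network type to the protocol layering described above."""
    if network is NetworkType.ISDN:
        # H.320 carried over the ISDN signaling, data link and physical protocols
        return ProtocolStack("H.320", ("Q.931 signaling", "Q.921 LAPD", "Q.910 physical"))
    if network is NetworkType.POTS:
        # H.324 carried by an analog V.34 / V.34bis modem
        return ProtocolStack("H.324", ("V.34 / V.34bis modem",))
    # Cable: ITU-compressed audio/video carried by the CACS protocol
    return ProtocolStack("H.32x", ("CACS",))

if __name__ == "__main__":
    for net in NetworkType:
        print(net.name, select_protocol_stack(net))
```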
Also, as used herein, input and output directions are defined to avoid confusion between incoming and outgoing signals because, for example, an incoming signal to the wireless video access apparatus 101 from the network 140 will also be an outgoing signal from the wireless video access apparatus 101 when transmitted to a wireless videophone 120 or a video display 225 on the second communication channel 227. As a consequence, as used herein, input and output directions are defined at the interface between the wireless video access apparatus 101, on the one hand, and the second communication channel 227 or wireless telephone base station 110, on the other hand, as follows: an input signal, such as an input video or audio signal, is input to the wireless video access apparatus 101 from the second communication channel 227 (or, in the case of input audio, from the wireless telephone base station 110), and may originate, for example, from the video camera 230, and will be transmitted from the wireless video access apparatus 101 to the network 140; conversely, an output signal, such as an output video or audio signal, is output from the wireless video access apparatus 101 to the second communication channel 227 (or, in the case of output audio, to the wireless telephone base station 110), and may originate, for example, from a remote location via the network 140, is received by the wireless video access apparatus 101 via the first communication channel 103, and will be transmitted or output by the wireless video access apparatus 101 on the second communication channel 227 to the wireless videophone 120 (and/or a video display 225) or output to the wireless telephone base station 110.
Continuing to refer to FIG. 1, the RF modulator and demodulator 205 is utilized, first, to convert a baseband output video signal (from the processor arrangement 190) to a radio frequency output video signal, for initial transmission on the second communication channel 227 to the video transponder 115 (and/or one or more of the video displays 225), followed by retransmission by the video transponder 115 and reception of the output video signal by the wireless videophone 120; and second, to convert a radio frequency input video signal (from the camera interface 235) to a baseband input video signal, for input to the processor arrangement 190. As discussed in greater detail below, the video transponder 115 may be implemented as a radio frequency video transponder 115A (illustrated in FIG. 12) or as an infrared video transponder 115B (illustrated in FIG. 13). The user interface 215 is utilized for reception of a control signal of a plurality of control signals, such as a request to place a telephony call, a request to place an audio and video conference call, and other control signals such as alerting signals of incoming telephony or audio and video conference calls. In the preferred embodiment, the user interface 215 also routes the audio portion of an audio/video conference to and from the wireless telephone base station 110, which retransmits output audio to the wireless videophone 120 and which receives input audio from the wireless videophone 120. In addition, either or both the video transponder 115 and the wireless telephone base station 110 may also be incorporated within the wireless video access apparatuses 101 (or 201).
Continuing to refer to FIG. 1, the processor arrangement 190 is coupled to the network interface 210, to the radio frequency modulator/demodulator 205 and to the user interface 215. Depending upon the desired embodiment, the processor arrangement 190 performs a wide variety of functions, including audio/video compression and decompression, and protocol encoding and decoding, such as CACS protocol encoding and decoding (for cable network embodiments) and ITU Q.931 encoding and decoding (for ISDN embodiments). As explained in greater detail below, the processor arrangement 190 may be comprised of a single integrated circuit ("IC"), or may include a plurality of integrated circuits or other components connected or grouped together, such as microprocessors, digital signal processors, ASICs, associated memory (such as RAM and ROM), and other ICs and components. As a consequence, as used herein, the term processor arrangement should be understood to equivalently mean and include a single processor, or arrangement of processors, microprocessors, controllers, or some other grouping of integrated circuits which perform the functions discussed in greater detail below. For example, in the preferred embodiment, the processor arrangement 190 is implemented as illustrated in FIG. 2, and includes an audio/video compression and decompression subsystem 265 and a microprocessor subsystem 260. As discussed in greater detail below, the methodology of the present invention may be programmed and stored, as a set of program instructions for subsequent execution, in the processor arrangement 190 and its associated memory and other equivalent components. In the preferred embodiment, the processor arrangement 190 is utilized, in conjunction with a stored set of program instructions and in response to any control signals entered by the user or received from the network 140, first, to convert the received protocol signal (from the network interface 210) both to a baseband output video signal (to be modulated by the RF modulator/demodulator 205 and transmitted to a video transponder 115 for remodulation and wireless retransmission to a wireless videophone 120), and to an output audio signal (transmitted to the wireless telephone base station 110 and retransmitted to the wireless videophone 120, or combined with the baseband output video signal and modulated and transmitted to the wireless videophone 120); and second, to convert both a baseband input video signal (the demodulated input video signal having originated from the camera interface 235) and an input audio signal (from the wireless telephone base station 110, or combined with the baseband input video signal having originated from the video camera 230 and the camera interface 235), to the second protocol signal (to be modulated or formatted and transmitted by the network interface 210 to the network 140). The functions of each of the components of the wireless video access apparatus 101 are discussed in greater detail below.
FIG. 2 is a detailed block diagram illustrating a second embodiment of a wireless video access apparatus, namely, wireless video access apparatus 201, and illustrating a second embodiment of a wireless video conferencing system 200, in accordance with the present invention. The second apparatus embodiment, namely, the wireless video access apparatus 201 illustrated in FIG. 2, is the preferred apparatus embodiment of the invention, and is in all other respects equivalent to and may be utilized in a manner identical to the first embodiment, wireless video access apparatus 101, illustrated in FIG. 1.
Similarly, the second embodiment of the wireless video conferencing system, wireless video conferencing system 200, is also the preferred system embodiment of the present invention, and is in all other respects equivalent to and may be utilized in a manner identical to the first embodiment, wireless video conferencing system 100, illustrated in FIG. 1.
As illustrated in FIG. 2, the wireless video access apparatus 201 includes a microprocessor subsystem 260 and an audio/video compression and decompression subsystem 265, which form the processor arrangement 190 discussed above with reference to FIG. 1. Forming the network interface 210, the wireless video access apparatus 201 includes an ISDN interface 245 and a telephony interface 250; alternatively, the network interface 210 may also be implemented utilizing the cable network interface discussed in greater detail below with reference to FIG. 3. The wireless video access apparatus also includes a user audio interface 255 (which equivalently functions as the user interface 215 illustrated in FIG. 1); and an RF modulator 270 and RF demodulator 275 (which together equivalently function as the RF modulator/demodulator 205 illustrated in FIG. 1). In this preferred embodiment, the first communication channel 103 includes an ISDN or other digital line 105, coupleable to the ISDN interface 245, and a telephony (POTS) line 107, coupleable to the telephony interface 250. For cable network connection, the first communication channel 103 may be a coaxial cable, a fiber optical cable, or a hybrid fiber coaxial cable. Depending upon the desired embodiment, discussed below with reference to FIG. 4, both the ISDN interface 245 (and corresponding digital line 105) and the telephony interface 250 (and corresponding telephony line 107) do not need to be included, as one or the other is sufficient. For example, a user or subscriber who does not desire an ISDN connection may choose an implementation of the wireless video access apparatus 201 having only a telephony interface 250 (and corresponding telephony line 107), without an additional ISDN interface 245 (and corresponding digital line 105). The preferred embodiment of the wireless video access apparatus 201 illustrated in FIG. 2 also includes a line or connector 115 for connection to a television antenna or to cable television for input of a television broadcast, cable television or other video; a filter 285; and a directional coupler 290. The functions of each of these components are explained in greater detail below.
Also as illustrated in FIG. 2, the second embodiment of a video conferencing system 200 includes the wireless video access apparatus 201; a wireless telephone base station 110; a video transponder 115; a video camera 230; and a camera interface 235 (which also may be combined or incorporated within the video camera 230). The video conferencing system 200 may also include, as options, one or more telephones 295, and one or more televisions 240 (which equivalently function as the video displays 225 illustrated in FIG. 1).
Referring to FIG. 2, the wireless video access apparatus 201 provides both telephony (POTS) and audio/video conferencing service using the wireless videophone 120 for video display, for audio input and output, and for entry of control signals (which also may be entered via a telephone 295); and video camera 230 for video input. Output video may also be displayed via a television 240. When providing POTS service, the wireless video access apparatus 201 interfaces with the typical, existing twisted-pair cabling 294 in the user (or subscriber) premises so that any telephone in the user premises, such as a telephone 295, may be used, in addition to using the wireless videophone 120. In the preferred embodiment, the wireless video access apparatus 201 also provides line current and traditional "BORSHT" functions for typical (POTS) telephone service, as explained in greater detail below.
When providing video conferencing service, the wireless videophone 120 and/or any telephone 295 may be used for call (conference) establishment or set up and for audio input and output. The radio frequency output video signal (from the wireless video access apparatus 201) may be displayed on the wireless videophone 120 (and/or any of the televisions 240 connected to the second communication channel 227 (such as a CATV coaxial cable) within the user premises, using any channel (when not connected to cable TV) or using any vacant channel within the CATV downstream frequency band (for example, channel 3 or 4)). The radio frequency output video signal is originally received via the first communication channel 103 from the network 140 in a modulated or formatted digital form, such as digital data modulated and encoded utilizing one or more protocols such as CACS, H.32x, and Q.x or V.x, which may be referred to as a received or first protocol signal. The first protocol signal is received over the first communication channel 103, having been transmitted via, for example, the network 140, from another, second, user premises. The first protocol signal, typically consisting of encoded/modulated and compressed digital data, is received by the wireless video access apparatus 201, which decodes/demodulates and decompresses the data and converts it to an output audio signal and to a baseband output video signal, such as an NTSC/PAL composite video signal (NTSC being a video format typically utilized in North America and Japan, with PAL being a video format typically utilized in Europe). Other video formats may also be used, such as SECAM (typically used in France) or HDTV (high definition television formats). This baseband output video signal (on line 271) is then RF modulated (using RF modulator 270) onto an available video RF carrier to form a (first) radio frequency output video signal and injected into the second communication channel 227 (e.g., coaxial cable) at the user premises using a directional coupler 290 (preferably 4 port). The radio frequency output video signal is then sent to the video transponder 115, and may also be sent to all television receivers, such as televisions 240, within the user premises, such as a home or office. As discussed in greater detail below, the video transponder 115 remodulates the (first) radio frequency output video signal to a second frequency suitable for wireless retransmission to the wireless videophone 120, such as 900 MHz (for an RF frequency) or at an infrared (IR) frequency, forming a second output video signal. For example, a radio frequency output video signal transmitted on the second communication channel 227 on channel 3 or 4 (61.25 or 67.25 MHz) may then be remodulated to a second frequency, such as 900 MHz, suitable for wireless retransmission by the video transponder 115 to a wireless videophone 120. In the preferred embodiment, the video transponder 115 may be implemented as a radio frequency video transponder 115A or as an infrared video transponder 115B, discussed in greater detail below with reference to FIGs. 12 and 13. Correspondingly, in the preferred embodiment, the
wireless videophone 120 may be implemented as a radio frequency wireless videophone 120A or as an infrared wireless videophone 120B, discussed in greater detail below with reference to FIGs. 14 and 15. The directional coupler 290 is used in the preferred embodiment to provide directional signal injection while providing isolation with any connected CATV network (which may be coupled via line 115). The output audio signal is transmitted to the wireless telephone base station 110 (and to a telephone 295), for retransmission to the wireless videophone 120.
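As a rough illustration of the downstream frequency plan described above, the sketch below maps a CATV channel number to the carrier used on the in-home coaxial cable and to the 900 MHz frequency used for wireless retransmission. The table and function names are assumptions for illustration; only the example frequencies (channels 3 and 4 at 61.25 and 67.25 MHz, and 900 MHz retransmission) come from this description.

```python
# Example carrier frequencies named in this description (MHz).
CATV_CHANNEL_CARRIER_MHZ = {3: 61.25, 4: 67.25}   # vacant downstream channels
WIRELESS_RETRANSMIT_MHZ = 900.0                    # video transponder output (RF variant)

def downstream_plan(channel: int) -> dict:
    """Return the coax carrier and wireless retransmission frequency
    for an output video signal injected on the given CATV channel."""
    if channel not in CATV_CHANNEL_CARRIER_MHZ:
        raise ValueError("example plan only covers channels 3 and 4")
    return {
        "coax_carrier_mhz": CATV_CHANNEL_CARRIER_MHZ[channel],
        "transponder_output_mhz": WIRELESS_RETRANSMIT_MHZ,
    }

print(downstream_plan(3))  # {'coax_carrier_mhz': 61.25, 'transponder_output_mhz': 900.0}
```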
The video transponder 115 and wireless telephone base station 110, or their equivalents, may also be incorporated directly within the wireless video access apparatus 201 (or 101). In addition, for wireless video transmission independently of the second communication channel 227, the video transponder 115 may be omitted, with the RF modulator (coupled to an antenna) directly providing a radio frequency output video signal at wireless frequencies. Under such circumstances, for video transmission, the intervening modulation of the baseband output video signal to a first radio frequency output video signal may be unnecessary and may be omitted, with the baseband output video signal instead being modulated directly to either an infrared output video signal or a radio frequency output video signal (such as 900 MHz) suitable for wireless transmission. These variations are discussed in greater detail below with reference to FIG. 3.
The video signal originating in the user premises and to be transmitted via the network 140 to another, second user premises (or any other location), originates from a video camera (or camcorder) 230 that produces a video signal, such as an NTSC/PAL composite video signal, which is also preferably modulated on channel 3 or 4 (61.25 or 67.25 MHz).
This RF video signal from the video camera 230 is connected or coupled to a camera interface 235, which utilizes an offset mixer to shift the RF video signal (typically on a 61.25 or 67.25 MHz carrier) up to a spectrum higher than typical CATV frequencies, such as the 1.2 GHz or 900 MHz bands, to avoid interfering with the radio frequency output video signals or other CATV downstream channels. When the video access apparatus is not connected to CATV, such offset mixing may be unnecessary and the camera interface 235 may be omitted from the system 200, provided interference with the downstream radio frequency output video signals may be avoided (for example, utilizing downstream transmission on channel 9 and upstream (input) transmission on channel 3 or 4). For those video cameras 230 which may not include a modulator to shift the NTSC/PAL composite video signal to channel 3 or 4, such modulation may be incorporated into the camera interface 235; conversely, the functions of the camera interface 235 may also be incorporated directly into the video camera 230. The shifted (offset mixed) video signal from the camera interface 235 (or unshifted video signal directly from the camera 230, if CATV or other downstream interference is not an issue), referred to herein as a radio frequency input video signal, is then injected into the same second communication channel 227 (also connected to the televisions 240) and transmitted to the wireless video access apparatus 201. The wireless video access apparatus 201 receives the radio frequency input video signal via the directional coupler (preferably at 1.2 GHz or 900 MHz) and demodulates the signal to baseband using RF demodulator 275, to form a baseband input video signal (on line 272). The baseband input video signal is then combined with an input audio signal (from the wireless videophone 120 and received via the wireless telephone base station 110), and the baseband audio/video signal is converted to digital form and compressed, to form a second protocol signal, such as an H.32x encoded video signal, and is transmitted (to form a transmitted protocol signal, which preferably also has further encoding and/or modulation, such as a further Q.x or V.x encoded signal) via the first communication channel 103. In the preferred embodiment, by using a vacant video channel at 1.2 GHz or 900 MHz, interference with any applicable downstream and upstream video, television, or CATV services tends to be avoided. The 1.2 GHz or 900 MHz signal is also filtered out of the feed-through cable or link 287 by a low pass filter 285, so that the signal is highly attenuated before it may leave the wireless video access apparatus 201 through any cable which may be attached via line 115.
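A minimal sketch of the upstream offset mixing just described, assuming an idealized mixer that keeps only the upper mixing product; the 1138.75 MHz local-oscillator value and the function name are illustrative assumptions chosen so that a channel 3 camera carrier lands at 1.2 GHz, and are not specified in the patent.

```python
CATV_DOWNSTREAM_BAND_MHZ = (50.0, 750.0)   # typical CATV downstream spectrum
CAMERA_CARRIER_MHZ = {3: 61.25, 4: 67.25}  # camera/camcorder RF output on channel 3 or 4

def offset_mix_up(channel: int, lo_mhz: float = 1138.75) -> float:
    """Idealized offset mixer: keep the upper product, filter the image.
    With the assumed 1138.75 MHz LO, channel 3 (61.25 MHz) lands at 1200 MHz."""
    shifted = CAMERA_CARRIER_MHZ[channel] + lo_mhz
    low, high = CATV_DOWNSTREAM_BAND_MHZ
    assert not (low <= shifted <= high), "shifted carrier must clear the CATV downstream band"
    return shifted

print(offset_mix_up(3))  # 1200.0 MHz, well above typical CATV frequencies
```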
While the primary function of the wireless video access apparatus 101 (201 or 301) and the wireless video conferencing system 100 (200 or 300) is to provide full-duplex video communications, other secondary functions are also available in the preferred embodiment. For example, one such secondary function is a "loop back function" which allows the user to view the video from the video camera 230 on the wireless videophone 120 or on the screen of a television 240 or video display 225, such that the RF input video signal is demodulated (from 1.2 GHz or 900 MHz), remodulated onto a video RF carrier (that is tunable or receivable by the video transponder 115 or televisions 240), and utilized for an RF output video signal. Such a loop back feature is especially valuable for surveillance, such as for home security or for baby monitoring. Also, a picture-in-picture (or multiple window) function may be provided, in which a user may view a small window of the video from video camera 230 along with the received video from another location, for example, to provide baby monitoring within the small window while simultaneously watching a movie or video received from a CATV network, or to provide a self-view for viewer feedback concerning the positioning of the viewer's own video camera 230.
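A schematic sketch of the loop-back routing decision described above, under the assumption that the mode is exposed as a simple flag; the function and parameter names are illustrative and not taken from the patent.

```python
def route_input_video(baseband_input_video, loop_back: bool, send_to_network, rf_modulate):
    """Route demodulated camera video to the network for a call or, in
    loop-back mode, back to the RF modulator so it can be viewed locally
    on the televisions 240 or the wireless videophone 120."""
    if loop_back:
        rf_modulate(baseband_input_video)      # remodulate onto a tunable video carrier
    else:
        send_to_network(baseband_input_video)  # compress and transmit as usual

# Illustrative use: loop-back for baby monitoring / home surveillance.
route_input_video("camera frame", loop_back=True,
                  send_to_network=lambda v: print("to network:", v),
                  rf_modulate=lambda v: print("local RF out:", v))
```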
In addition, the wireless video access apparatus 101 (201 or 301) may be frequency agile, such that video conferencing may occur on any channel. While video conferencing on typically empty television or cable channels such as channels 3 or 4 may be preferable, in accordance with the present invention, video conferencing on additional channels is also feasible. For example, an existing video channel may be blanked out or eliminated, utilizing a notch filter, for any length of time, and the various input and output video signals inserted or overlaid into the now empty (filtered or muted) channel. Such frequency agility and injection of an audio/video signal, in the presence of existing programming, is one of many truly unique features of the present invention.
FIG. 3 is a detailed block diagram illustrating a third embodiment of a wireless video access apparatus, namely, wireless video access apparatus 301, and illustrating a third embodiment of a wireless video conferencing system 300, in accordance with the present invention. The third apparatus embodiment, namely, the wireless video access apparatus 301 illustrated in FIG. 3, is a preferred, completely wireless apparatus embodiment of the invention, and is in all other respects equivalent to and may be utilized in a manner identical to the first and second embodiments, wireless video access apparatuses 101 and 201, illustrated in FIGs. 1 and 2. Similarly, the third embodiment of the wireless video conferencing system, wireless video conferencing system 300, is also a preferred, completely wireless system embodiment of the present invention, and is in all other respects equivalent to and may be utilized in a manner identical to the first and second system embodiments, wireless video conferencing systems 100 and 200, illustrated in FIGs. 1 and 2.
Referring to FIG. 3, the wireless video conferencing system 300 includes the wireless video access apparatus 301, a wireless camera unit 302, and the wireless videophone apparatus 120. The wireless video access apparatus 301 is very similar to the other apparatus embodiments 101 and 201, utilizing the network interface 210, the user audio interface 255, the microprocessor subsystem 260, the audio/video compression and decompression subsystem 265, the RF modulator 270, and the RF demodulator 275, all in the same manner as discussed above. The wireless video access apparatus 301 differs from the other embodiments (101 and 201) in incorporating the wireless telephone base station 110 within the embodiment, and including a first RF transmitter 273 and an RF receiver 277 coupled to an antenna 276 for wireless video transmission and reception (rather than wireline video reception via the second communication channel 227). The first RF transmitter 273 and RF receiver 277 may be implemented utilizing known technology, and may also be incorporated within the RF modulator 270 and the RF demodulator 275, respectively.
Continuing to refer to FIG. 3, for output video transmission (as mentioned above), the intervening modulation of the baseband output video signal to a first radio frequency output video signal (followed by remodulation to a second frequency) may be unnecessary and may be omitted, with the baseband output video signal instead being modulated directly by the RF modulator 270 to a radio frequency output video signal (such as 900 MHz) suitable for wireless transmission via the first RF transmitter 273 and antenna 276. Similarly, for input video reception, the radio frequency input video signal from the camera interface 235 may also be transmitted by a second RF transmitter 291, also at a (non-interfering) frequency suitable for wireless transmission. In this embodiment, the video camera 230 and the camera interface 235 (operating as discussed above), along with the second RF transmitter 291, are incorporated within a wireless camera unit 302, which may be portable and is not required to be coupled to a wireline communication channel, such as the second communication channel 227. The radio frequency input video signal from the second RF transmitter 291 may be received by the RF receiver 277 (via antenna 276), and processed as discussed above with regard to the other embodiments. Not illustrated in FIG. 3, for infrared (rather than RF) transmission and reception, those skilled in the art may recognize that the RF modulator 270 and the RF transmitter 273, along with the RF demodulator 275 and the RF receiver 277, may be replaced by corresponding infrared components, such as those components discussed below with reference to FIG. 13 (driver circuit 590, IR diodes 595 with DC bias 597, for infrared transmission) and FIG. 15 (lens 625 and infrared detector 627, for infrared reception).
FIG. 4A is a block diagram illustrating a network interface 210, suitable for use with a cable network, of a preferred apparatus embodiment in accordance with the invention disclosed in the second related application. Such a network interface 210 for a cable network is also discussed in detail in the related applications. In addition, the CACS protocol utilized
in the preferred embodiment is also discussed in detail in the related applications. For such a cable embodiment, the network interface 210 consists of a CATV RF transceiver 243 and a communications ASIC 253; alternatively and equivalently, the communications ASIC could also be considered to be a part of the processor arrangement 190 (in addition to the audio/video compression and decompression subsystem and the microprocessor subsystem). FIG. 4B is a block diagram illustrating the CATV RF transceiver 243 of the preferred apparatus embodiment of the present invention. In the preferred embodiment, the CATV RF transceiver 243 is frequency agile, providing upconversion and downconversion of the CACS signals to and from any available CACS carrier, with frequency control provided by the microprocessor subsystem 260. Referring to FIGs. 4A and 4B, a first protocol signal, such as a CACS π/4-DQPSK modulated downstream carrier in the 50 - 750 MHz CATV band, is received from the first communication channel 103 and filtered in the filter 306 (having a 50 - 750 MHz bandwidth), and in heterodyne downconverter 311, is heterodyne downconverted to baseband, with this incoming baseband signal having in-phase ("I") and quadrature ("Q") components (or signals). The local oscillators for the heterodyne downconverter are provided by a frequency synthesizer subsystem 316. The I and Q components are then square root raised cosine ("SRRC") filtered in a first SRRC filter 321 to remove noise and other distortions. The filtered I and Q components are then mixed up to an intermediate frequency (IF) signal at 1.2 MHz, in the up mixer 326, for transfer to the communications ASIC 253 on bus 261 (or on another line connecting the up mixer 326 to the communications ASIC 253). In the preferred embodiment, the CACS carrier has a symbol rate of 384 kilosymbols/second and is transmitted with an excess bandwidth factor of 0.5, and with an occupied channel bandwidth of 600 kHz.
Continuing to refer to FIGs. 4A and 4B, a second protocol signal, such as a 768 kb/s TDMA burst, originating from the communications ASIC 253, is applied to a π/4-DQPSK waveform generator or modulator 331, which outputs baseband I and Q components (signals). The I and Q signals are SRRC filtered (in second SRRC filter 336) and then upconverted in RF upconverter 341 to the 5 - 40 MHz CATV upstream band, to form a transmit (or transmitted) protocol signal. As in the downconverter 311, local oscillators for the RF upconverter 341 are provided by the frequency synthesizer subsystem 316. The transmit power of the TDMA burst is programmable by the microprocessor 350 of the microprocessor subsystem 260 (discussed below with reference to FIG. 6) to provide network gain control, by a network 140, over any individual wireless video access apparatus 101, 201 or 301 connected to the network 140.
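The downstream and upstream figures quoted above are mutually consistent, as the short check below illustrates (a sketch only; the variable names are not from the patent): π/4-DQPSK carries 2 bits per symbol, so a 768 kb/s burst corresponds to the 384 kilosymbol/second CACS rate, and with the 0.5 excess bandwidth factor the occupied bandwidth is 384 kHz x 1.5 = 576 kHz, within the stated 600 kHz channel.

```python
BITS_PER_SYMBOL = 2          # pi/4-DQPSK: 2 bits per symbol
BURST_RATE_BPS = 768_000     # upstream TDMA burst rate quoted above
ROLL_OFF = 0.5               # excess bandwidth factor of the SRRC filtering

symbol_rate = BURST_RATE_BPS / BITS_PER_SYMBOL   # 384,000 symbols/s
occupied_bw_hz = symbol_rate * (1 + ROLL_OFF)    # 576,000 Hz

assert symbol_rate == 384_000
assert occupied_bw_hz <= 600_000   # fits within the stated 600 kHz channel
print(f"{symbol_rate/1e3:.0f} ksym/s, {occupied_bw_hz/1e3:.0f} kHz occupied")
```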
The communications ASIC 253 is utilized in the preferred apparatus embodiment to provide low-level baseband functions to support a cable network protocol such as CACS. Functionally, communications ASIC 253 may be separated into a receive section and a transmit section (not separately illustrated in FIG. 4B). In the receive section, the IF signal at 1.2 MHz (from the up mixer 326 of the CATV transceiver 243) contains the π/4-DQPSK modulated CACS signal. This downstream CACS π/4-DQPSK TDM signal is coherently demodulated, to provide baseband binary data as well as recovery of symbol and bit timing information. A TDM frame is then synchronized and decoded, time slot data is extracted, and error control checking is performed. Such supervisory data, as well as user data in the payload, is then made available to the microprocessor subsystem 260 via the bus 261, which may be an address/data bus. The user data may also be directly routed out of the communications ASIC 253 for delivery to the audio codec 410 (FIG. 8) or the audio/video compression and decompression subsystem 265 (FIG. 7). In the transmit section of the communications ASIC 253, control data originating from the microprocessor 350, and compressed audio and video data from the audio/video compression and decompression subsystem 265, are transferred to the communications ASIC 253, to create an audio/video data stream. The audio/video data stream is then formatted with synchronization and error control information, resulting in binary TDMA bursts, which are then transferred to the CATV transceiver 243 for subsequent modulation and transmission as a transmitted protocol signal over the first communication channel 103. In the preferred embodiment, the communications ASIC 253 also provides other functions to support the wireless video access apparatus 201, including TDMA time alignment, sleep mode control for low power operation, data buffering for rate control, and interrupt generation of POTS interface control signals.
FIG. 5A is a block diagram illustrating a network interface 210, for wireline networks, of the preferred apparatus embodiment in accordance with the present invention. As indicated above, such a network interface 210 for wireline preferably is comprised of both an ISDN (digital) interface 245 and a telephony (or analog) interface 250, although either alone (digital or analog interface) is sufficient. As discussed in greater detail below, the first and second protocol signals for wireline, preferably encoded utilizing H.32x and further encoded/modulated utilizing either Q.x or V.x protocols, are transported to and from the network 140 through one or both of these interfaces 245 and 250. Referring to FIG. 5A, utilizing an ISDN (digital) interface 245, connection to an ISDN or other digital network via line 105 is made through a jack 305 which may be, as discussed in greater detail below with reference to FIGs. 5B and 5C, for example, an RJ45 jack or an RJ11 jack, depending upon the service provided by the digital network.
Coupled to the jack 305 is an isolation transformer circuit 310, which is further coupled to an ISDN transceiver 315 (which, as discussed below, may be either an S/T transceiver 315a or a U transceiver 315b). The ISDN transceiver 315, in turn, is coupled to the microprocessor subsystem 260 via a synchronous serial interface portion of bus 261.
FIG. 513 is a block diagram illustrating an ISDN S/T interface 245a for use with pre-existing ISDN service. For example, a digital network service provider may typically bring a twisted pair line to the outside of a subscriber's premises, and install an ISDN interface. As a consequence, when there is a pre-existing ISDN NT1 interface, such as interface 306 (having an NT1 function for two to four wire conversion), appropriate connection to the existing NT1 interface should be made utilizing an ISDN S/T interface 245a. As a consequence, as illustrated in FIG. 5B, the jack 305 is implemented as an RJ45 jack 305a, the isolation transformer circuit 310 is implemenled.
as an S/T dual isolation transformer 310a, and the ISDN transceiver 315 is implemented as an ISDN S/T transceiver 315a (such as a Motorola MC145574 integrated circuit).
FIG. 5C is a block diagram illustrating an ISDN U interface 245b for use when there is no pre-existing ISDN service (having an installed NT1 interface). In this implementation, the jack 305 is implemented as an RJ11 jack 305b, the isolation transformer circuit 310 is implemented as a U isolation transformer 310b, and the ISDN transceiver 315 is implemented as an ISDN U transceiver 315b which also performs an NT1 function (such as a Motorola MC145572 integrated circuit).
Referring to FIG. 5A, for digital service, the ISDN interface 245 consists of an ISDN transceiver 315, such as the Motorola MC145574 or MC145572, and an isolation transformer circuit 310, which provide the layer one interface for the transportation of two 64 kbps B channels and one 16 kbps D channel between the network 140 termination (jack 305) and the microprocessor subsystem 260, preferably performing certain portions of the ISDN protocols, namely, the Q.910 physical layer and Q.921 LAPD data link protocols. The ISDN transceiver 315 provides the modulation/line transmit and demodulation/line receive functions, as well as activation, deactivation, error monitoring, framing, and bit and octet timing. The ISDN transceiver 315 interfaces with the microprocessor subsystem 260 over a synchronous serial interface (SSI) portion of the bus 261. As discussed in greater detail below, for such a wireline network embodiment, the microprocessor subsystem 260 performs the Q.931 message signaling ISDN protocol and provides overall control of all subsystems within a wireless video access apparatus 101, 201 or 301, while the audio/video compression and decompression subsystem 265 performs the H.32x protocols.
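As a point of reference for the data rates handled by this interface, basic rate ISDN offers two 64 kbps B channels for the H.32x audio/video payload plus a 16 kbps D channel for signaling such as Q.931. A minimal, purely illustrative Python check of that arithmetic:

# Basic-rate ISDN capacity handled by the layer-one interface described above.
B_CHANNEL_KBPS = 64
D_CHANNEL_KBPS = 16

user_payload_kbps = 2 * B_CHANNEL_KBPS            # two B channels for audio/video
total_2b_plus_d_kbps = user_payload_kbps + D_CHANNEL_KBPS

assert user_payload_kbps == 128                    # kbps available to the H.32x stream
assert total_2b_plus_d_kbps == 144                 # kbps including D-channel signaling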
Continuing to refer to FIG. 5A, for analog service, the telephony (or analog) interface 250 performs analog modem functions, operating, for example, as a V.34 or V.34bis modem.
Connection to an analog network, via a telephony (POTS) line 107, is made via a jack 320, which is typically an RJ11 jack. Connected to the jack 320 is a dial (or data) access arrangement (DAA) 325, which receives an analog signal transmitted on the analog telephony line 107. DAAs are known in the prior art and may be made of a variety of discrete components, including analog multiplexers, resistors, capacitors, and operational amplifiers, or may be embodied in whole or part as an integrated circuit, such as a Cermetek CH1837, and perform such functions as impedance matching, power level adjustment, isolation, surge voltage protection, and ring detection. Connected to the DAA 325 is a codec (coder-decoder) 330, such as a Motorola MC145500 integrated circuit (or, equivalently, an analog-to-digital (A/D) converter), which converts an analog signal received from the line 107 to sampled, digital form, and converts sampled, digital information to analog form for transmission over the line 107. The codec 330 is also referred to as a network codec 330, to distinguish it from a second codec, the audio codec 410, utilized in the user audio interface 255. The network codec 330 interfaces with a voice digital signal processor (DSP) 415 (of the user audio interface 255), also over a synchronous serial interface (SSI) portion of the bus 261. The network codec 330 performs V.x functions when in video mode, and voice functions when in telephony mode, as discussed in greater detail below. When utilized in this analog modem role (V.x functions), the voice DSP 415 operates in conjunction with the video processing DSP 365 (of the audio/video compression and decompression subsystem 265) utilizing a set of modem program instructions under the control of the microprocessor subsystem 260. The audio/video compression and decompression subsystem 265 also performs H.32x compression and decompression of the various input and output audio and video signals. This telephony interface 250 is used in the preferred embodiment for V.x modem functions during a video telephony call, and analog audio functions during a typical voice (POTS) call.
FIG. 6 is a block diagram illustrating a microprocessor subsystem 260 of the preferred apparatus embodiment in accordance with the present invention. The microprocessor subsystem 260 consists of a microprocessor 350 or other processing unit, such as the Motorola MC68LC302, and memory 360, which includes random access memory (RAM) and read-only memory (ROM), and in the preferred embodiment, also includes flash programmable memory (such as flash EPROM or E2PROM), with communication provided over the bus 261 to the network interface 210, the user audio interface 255 (and voice DSP 415), and the audio/video compression and decompression subsystem 265. The read only memory portion of memory 360 also utilizes flash programmable memory, such that the memory contents may be downloaded from the network 140. As a consequence, different versions of operating software (program instructions), such as upgrades, may be implemented without modifications to the wireless video access apparatus 201 and without user intervention.
Continuing to refer to FIG. 6, the microprocessor subsystem 260 provides device control and configuration and call processing, and is also used to implement an ISDN protocol stack when required for video calls, such as Q.931 message signaling. For wireline network applications, because the microprocessor subsystem interfaces with the ISDN interface 245 and the telephony interface 250 (via the voice DSP 415), a high speed data link may be established between the network 140 and the audio/video compression and decompression subsystem 265 using the microprocessor subsystem 260 as the data exchange and protocol conversion device. User audio, in the form of a pulse code modulated (PCM) data stream, may also be routed through the microprocessor 350 to the audio/video compression and decompression subsystem 265 from the voice DSP 415 of the user audio interface 255.
FIG. 7 is a block diagram illustrating an audio/video compression and decompression subsystem 265 of the preferred apparatus embodiment in accordance with the present invention. The audio/video compression and decompression subsystem 265 performs video compression of the baseband input video signal (originating from the video camera 230 and camera interface 235) and audio compression of the input audio signal (from the user audio interface 255), and decompression of the audio and the video data of the received, first protocol signal (the first protocol signal previously having been decoded and/or demodulated) for subsequent display on the wireless videophone 120 or television(s) 240, all preferably utilizing the H.32x family of protocols. The audio/video compression and decompression subsystem 265 includes a video processing digital signal processor (DSP) 365, a red-green-blue digital to analog converter 370, a red-green-blue analog to digital converter 390, an encoder 375, and an audio/video input processor 380. The video processing DSP (or video processing DSP subsystem) 365 is a high-speed programmable DSP (or DSP arrangement or subsystem, such as a Motorola DSP56303 with associated support components, including memory and a hardware acceleration ASIC (discussed below)), utilized to implement different video and audio compression and decompression algorithms, depending on the transmission rate and/or video conferencing standard at the remote end (i.e., the other premises with which the wireless video access apparatus 201 is communicating).
The program code for the video processing DSP 365 may also be downloaded from the microprocessor subsystem memory 360, which may also be downloaded by a service provider through the network 140. As a consequence, video functionality of the wireless video access apparatus 201, including new algorithms, may be changed or upgraded on-the-fly, also without any hardware changes and without user intervention.
Continuing to refer to FIG. 7, compressed audio/video data received from the network 140 (as, for example, H.32x encoded protocol signals), via the network interface 210 and the microprocessor subsystem 260, is transferred to the video processing DSP 365 where it is decompressed, with video also converted to red-green-blue ("RGB") digital video signals, and with decompressed audio transferred to the user audio interface 255 (or combined with the decompressed video signal for subsequent modulation and transmission to the televisions 240 and the wireless videophone 120). The RGB digital video signals are then converted to RGB analog signals by the RGB digital to analog ("D/A") converter 370, such as the Motorola MC44200. The analog RGB signals, along with a composite synchronization signal, are then applied to an encoder 375, preferably an NTSC/PAL encoder such as a Motorola MC13077, resulting in an NTSC/PAL composite video signal, which may also be referred to as a baseband output video signal. The NTSC/PAL composite video signal is then transferred to the RF modulator 270 for upconversion to a radio frequency (to form the radio frequency output video signal), followed by transmission on the second communications channel 227 to the video transponder 115 and display on a television 240.
For subsequent transmission over the network 140 of an input video signal (originating from the video camera 230 and the camera interface 235), a baseband input video signal, such as an NTSC/PAL composite video camera or camcorder signal, is received from the RF demodulator 275. The baseband input video signal is transferred to an audio/video input processor 380, such as a Motorola MC44011, which converts the baseband input video signal to analog RGB signals, while also providing a genlocked sampling clock for subsequent digitizing of the video signals. These input analog RGB signals are then converted to digital RGB signals by an RGB analog to digital converter 390, such as the Motorola MC44250, and transferred to the video processing DSP 365. The video processing DSP 365 compresses the digital RGB signals and audio data (from the user audio interface 255), preferably utilizing an H.32x protocol, and transfers the resulting data stream to the microprocessor subsystem 260 for additional analog or digital processing. It should be noted that as part of the H.32x protocol, audio information originating from the user audio interface 255 or from the video camera 230 (and camera interface 235) is compressed and combined with compressed video data before transmission to the network 140 via the network interface 210. For subsequent digital transmission, the microprocessor subsystem 260 encodes the compressed audio/video data, for example, utilizing the Q.931 ISDN message signaling protocol, and transfers the processed data to the network interface 210, such as ISDN interface 245, for additional ISDN protocol processing and transmission over the first communication channel 103. For subsequent cable network transmission, the microprocessor subsystem 260 and the communications ASIC 253 perform CACS protocol encoding. For subsequent analog transmission, the microprocessor subsystem 260, the voice DSP 415 (of the user audio interface 255) and the video processing DSP 365 encode the compressed audio/video data utilizing analog protocols such as the V.x series of protocols, and transfer the processed data to the telephony interface 250, for additional V.x protocol processing and transmission over the first communication channel 103. In the preferred embodiment, the audio/video compression and decompression subsystem 265 may also include additional random access memory for use by the video processing DSP 365 for partial or full storage of pixel data of an input/output video frame. Also in the preferred embodiment, a hardware acceleration ASIC is used to assist the video processing DSP 365 in processing speed intensive tasks, such as discrete cosine transforms associated with the compression and decompression processes.
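Taken together, the transmit direction described above amounts to a pipeline: digitize the baseband camera signal, compress audio and video, then hand the multiplexed stream to whichever network path (ISDN, cable, or analog modem) is in use. The Python sketch below is a structural outline only; each stub stands in for a hardware block and none of the names are real driver interfaces.

# Hedged sketch of the transmit-side flow described above. Each stage is a
# stub standing in for a hardware block (A/V input processor, RGB A/D, video
# processing DSP, microprocessor subsystem); none are actual driver APIs.

def digitize_rgb(baseband_video):
    # A/V input processor + RGB A/D stage: composite video -> digital RGB samples
    return {"rgb": baseband_video}

def h32x_compress(rgb_digital, audio_pcm):
    # Video processing DSP stage: H.32x-style audio/video compression and multiplexing
    return {"video": rgb_digital, "audio": audio_pcm}

def transmit_video_frame(baseband_video, audio_pcm, network_mode):
    # Route one compressed audio/video frame toward the selected network path.
    av_stream = h32x_compress(digitize_rgb(baseband_video), audio_pcm)
    if network_mode == "isdn":
        return ("q931", av_stream)    # microprocessor subsystem + ISDN interface 245
    if network_mode == "cable":
        return ("cacs", av_stream)    # microprocessor subsystem + communications ASIC 253
    if network_mode == "analog":
        return ("v.x", av_stream)     # voice DSP 415 + video DSP 365 as a V.x modem
    raise ValueError("unknown network mode: " + str(network_mode))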
FIG. 8 is a block diagram illustrating a user audio interface 255 of the preferred apparatus embodiment in accordance with the present invention. The user audio interface 255 is designed to interface with standard household telephone sets, including wireless devices and speaker phones, such as wireless telephone base station 110 and a telephone 295. The user audio interface 255 is intended to support both audio POTS calls and video calls. In the preferred embodiment, POTS calls are processed in a "transparent" mode, such that placing and receiving telephone calls occur as if no video call functions were present. Also in the preferred embodiment, video calls are processed as an exception, requiring a designated or predetermined dialing sequence entered by the user to invoke a video call.
Referring to FIG. 8, a SLIC (Subscriber Loop Interface Circuit) 400 provides "BORSHT" functions for telephone service within the user premises, such as that normally provided by a network central office, including: DC (direct current) power for the telephone (Battery); Overvoltage protection; Ring trip detection and facilitation of ringing insertion; Supervision features such as hook status and dial pulsing; Hybrid features such as two-wire differential to four-wire single-ended conversions and suppression of longitudinal signals at the two-wire input; and Testing. The SLIC 400 communicates with the telephone base station 110 and telephone 295 through an ordinary telephone line, such as twisted pair cabling 294, which has tip and ring lines. The ring generator 405 provides high-voltage AC (alternating current) signals to ring the telephones 295-1 through 295-n. Connected to the SLIC 400, the audio codec 410 provides analog-to-digital conversion for voice digitizing of the input (voice) audio signal originating from the microphone portion of one or more of the wireless videophones 120 or telephones 295, to form an input (PCM) digital voice data stream or signal, and digital-to-analog conversion for voice recovery from an output (PCM) digital voice data stream or signal (to create the output audio signal to the speaker portion of the wireless videophone 120 or the telephones 295), as well as band limiting and signal restoration for PCM systems. The output and input (PCM) digital voice data streams connect directly to the voice processing DSP 415. The voice processing DSP 415, such as a Motorola DSP56303, contains program memory and data memory to perform signal processing functions such as DTMF/dial pulse detection and generation, analog modem functions, call progress tone (dial tone, busy tone) generation, PCM-to-linear and linear-to-PCM conversion, and speech prompt playback. As indicated above, the voice processing DSP 415 also provides modem functions, such as V.x modem functions, to additionally support POTS or other analog-based video calls. The voice processing DSP 415 interfaces with the microprocessor subsystem 260 and network codec 330 over the bus 261. The memory 420 (connected to the voice processing DSP 415), in the preferred embodiment, includes high density read only memory (referred to as speech ROM) containing PCM encoded (or compressed) speech segments used for interaction with the user, such as in prompting the user for keypad DTMF or dial pulse entry when in the video calling mode. In addition, optional speech random access memory may be used for user voice storage functions, and electrically alterable, programmable non-volatile (flash) memory for storage of programs (and updates) or algorithms.
The user audio interface 255, in the preferred embodiment, operates in one of two modes: first, for telephony (POTS), and second, for video conferencing (calling). The telephony (POTS) mode is user transparent, as a default mode which is entered whenever the user goes off hook. As discussed in greater detail below, the video conferencing mode is entered as an exception, through the user entering (dialing) a specific, predetermined sequence which, in the preferred embodiment, is not recognized as a telephony sequence. In the telephony (POTS) mode, the voice processing DSP 415 generates the customary "dial" tone when the user telephone 295 or wireless videophone 120 goes off hook. The user then enters the dialing sequence via the keypad of a telephone 295 or wireless videophone 120, just as in known or customary telephone dialing. The voice processing DSP 415 decodes the dialing digits and stores them in a calling memory buffer of memory 420. Upon decoding the first two digits entered (which are not the first two digits of the specific predetermined video call sequence), the voice processing DSP 415 recognizes that the requested call is not a video call and, as a consequence, signals the microprocessor subsystem 260 to initiate a POTS call through the audio/video network 100 using the telephony (analog) interface 250. When the call is granted (by the network 140) and the audio link with the local digital or analog switch is established, the voice processing DSP 415 forwards the stored digits to the local digital or analog switch and connects the audio paths between the user's telephone(s) and the network 140. From this point on, the voice processing DSP 415 will not decode any dialed digits and will simply pass through the input and output PCM digital voice data stream, until the user's telephone goes on hook and the call is terminated.
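The digit-collection logic described above (buffer the dialed digits, and after the first two decide whether the call is ordinary POTS or the special video sequence) reduces to a small state machine. In the sketch below the video prefix is a configurable placeholder, since the actual predetermined digits are implementation-specific.

# Sketch of the dial-digit discrimination performed by the voice processing
# DSP, as described above. VIDEO_PREFIX is a placeholder; the predetermined
# two-digit video sequence is implementation-specific.

VIDEO_PREFIX = "**"     # placeholder for the predetermined video-call prefix

class DialCollector:
    def __init__(self):
        self.digits = []            # calling memory buffer (memory 420)

    def on_digit(self, digit):
        # Return the mode decision once two digits have been collected.
        self.digits.append(digit)
        if len(self.digits) < 2:
            return "collecting"
        if "".join(self.digits[:2]) == VIDEO_PREFIX:
            return "video"          # play the prompt sequence, enter video call mode
        return "pots"               # signal the microprocessor to initiate a POTS call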
Alternatively, for a telephony session, the user audio interface 255 may create or maintain a connection to a central office of a network 140, to provide transparency for telephony. Once the entry of the specific predetermined sequence for video mode is detected, the user audio interface 255 breaks or terminates the central office connection, and enters video mode, under local control of the wireless video access apparatus 201 (or 101).
As indicated above, the user initiates the video conferencing mode as an exception to the normal telephony mode, by entering a specific predetermined sequence which is recognized by the voice processing DSP 415 as a non-telephony sequence and, additionally in the preferred embodiment, as the predetermined sequence specific to the video mode. This methodology is also discussed below with reference to the flow chart of FIG. 19. For the video conference mode of the preferred embodiment, the first two digits of the specific, predetermined sequence are unique and specifically unused in a standard POTS call and, as a consequence, may specifically signal the voice processing DSP 415 to enter the video call mode. Alternatively, other specific, predetermined sequences could be programmed by the user for recognition as a video conference mode by the voice processing DSP 415. Immediately after decoding the two special digits or other specific predetermined sequence, the voice processing DSP 415 generates or plays a speech prompt sequence, such as "Please select a call option or press the '#' key for help", which is stored in the speech ROM portion of memory 420. The action taken by the voice processing DSP 415 will then depend upon the sequence entered or key pressed by the user following the initial prompt. For example, if the '#' key is pressed, the user may hear a menu of commands such as, for example, the following:
'To place a Directory call, press 1'; 'To update the call Directory, press 2'; 'To place a manual video call, press 3'; 'To mute the camera, press 4'; 'To view the camera on your television, press 5'; 'To hear this menu again, press #'. Thus, in the preferred embodiment, an automated and user-friendly prompting sequence is used to guide the user through placing a wireless video conference call. Once the entry is complete, the information is then passed from the voice processing DSP 415 to the microprocessor subsystem 260, which will then attempt to connect the call through the network 140. If successful, the audio paths (input and output audio signals) will be connected through to the telephone 295 and wireless videophone 120, the output video path will be connected through to the video transponder 115 and any television 240 (or other video displays 225), and the input video path will be connected from the camera interface 235 (originating from the video camera 230). The video call terminates when the wireless videophone 120 or telephone 295 goes on hook, or another control signal is entered via the user interface 215 or user audio interface 255.
It should be noted that in the preferred embodiment, a simple directory feature may be used to simplify the video calling process. For example, after the user goes off hook and presses the '*' key three times followed by a single digit '1', '2' ... '9', a call automatically may be placed using a sequence of numbers stored in the directory for that digit. This feature may be necessary or desirable under a variety of circumstances, for example, when an ISDN call may require the entry of two separate 10-digit numbers to connect the call through the network 140. Also as an option in the preferred embodiment, a more sophisticated system may store a simple name tag or other alphanumeric entry associated with the directory entry, created by the user, and played back to the user by the voice processing DSP 415. For example, a prompt in response to making a directory call may be: "To call 'grandma', press 1"; "To call 'mother', press 2"; "To call 'work', press 3", in which the speech segments "grandma", "mother", and "work" are spoken by the user, recorded and stored in memory 420. More sophisticated systems may include speaker/voice recognition techniques, to recognize the user selection, eliminating the need to press any keys on a telephone keypad or other manual entry of information into the user interface 215 or user audio interface 255. It should also be noted that video call control functions, such as camera muting, unmuting, and local playback (loop back), also may be selected with the same user interface. Other sophisticated systems may also include use of the wireless videophone 120, video display 225 or television 240 for on-screen visual display of a menu of options, with corresponding entry of user control signals, such as call control and placement information, occurring in a variety of ways, such as through the keypad of the wireless videophone 120 or telephones 295, through an infrared remote control link with the wireless video access apparatus 201 (101 or 301), or through the input video path via the second communication channel 227. In this manner, the keypad or remote control link, coupled with the video display, may effectively form a distributed graphical user interface for call control. These various methods of user prompting, on-screen display, and user feedback are especially useful to guide the user through the process of placing a video call, and help to make the wireless video conferencing system 200 (100 or 300) especially user-friendly.
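The one-digit directory feature can likewise be sketched as a simple lookup from a dialed digit to one or more stored numbers. The digit assignments and numbers below are illustrative placeholders only.

# Sketch of the directory-dialing feature described above. The directory
# contents are illustrative placeholders, not values from the text.

DIRECTORY = {
    "1": ["15551230001", "15551230002"],   # e.g., an ISDN call requiring two numbers
    "2": ["15551234567"],
    "3": ["15559876543"],
}

def directory_dial(digit):
    # Expand a single directory digit into the stored dialing sequence(s).
    numbers = DIRECTORY.get(digit)
    if numbers is None:
        return None     # prompt the user with an error or help message
    return numbers      # handed to the microprocessor subsystem to place the call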
In addition, these various methods also illustrate the "quadality" of the use of a wireless videophone 120 in the preferred embodiment, for telephony, for audio input and output, for video output, and for call control.
FIG. 9 is a block diagram illustrating an RF modulator 270 of the preferred apparatus embodiment in accordance with the present invention. The RF modulator 270 converts the baseband output video signal from the audio/video compression and decompression subsystem 265, such as an NTSC/PAL composite video signal, to a radio frequency output video signal, such as an amplitude modulated vestigial sideband RF signal, which is transmitted to the video transponder 115 and which may be viewed via the receiver of a user's television 240, for example, when tuned to channel 3 or 4. The RF modulator 270 may be implemented in a variety of ways, including through use of a video modulator 425, such as a Motorola MC1373, followed by a gain stage (amplifier) 430, utilized in the preferred embodiment to overcome losses from the directional coupler 290 which feeds the RF output video signal into the second communication channel 227, such as a coaxial cable system in the user premises. A switchable notch filter may also be used to remove current programming from a particular channel (RF video carrier), while inserting the radio frequency output video signal into the second communication channel 227.
FIG. 10 is a block diagram illustrating an RF demodulator 275 of the preferred apparatus embodiment in accordance with the present invention. In the preferred embodiment, the RF demodulator 275 is a full heterodyne receiver tuned to a specific channel in the 900 MHz band or 1.2 GHz band, to receive the radio frequency input video signal from the camera interface 235 (originating from the video camera 230). The radio frequency input video signal, fed into the RF demodulator 275 from the directional coupler 290, is bandpass filtered (at either 900 MHz or 1.2 GHz) in pre-filter 435, then mixed down to an intermediate frequency (IF) of, for example, 45 MHz, using the mixer 440 and a fixed reference oscillator 445. The signal is then surface acoustic wave (SAW) filtered by the SAW filter 450, or otherwise bandpass filtered, and transferred to a (color) TV IF subsystem 460, such as a Motorola MC44301, which provides amplification, AM detection (demodulation) and automatic fine tuning, resulting in a baseband input video signal (baseband composite input video signal). This baseband input video signal is then transferred to the audio/video compression and decompression subsystem 265 for further processing as discussed above.
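The receiver's frequency plan follows ordinary heterodyne arithmetic: the 900 MHz or 1.2 GHz carrier mixed with a fixed local oscillator produces the 45 MHz IF as the difference frequency. The low-side injection shown below is an assumed example; high-side injection would work equally well.

# Heterodyne arithmetic for the RF demodulator described above. The low-side
# local oscillator choice is an assumption made for illustration.

IF_MHZ = 45.0

def lo_for(rf_mhz, low_side=True):
    # Local-oscillator frequency that places the carrier at the 45 MHz IF.
    return rf_mhz - IF_MHZ if low_side else rf_mhz + IF_MHZ

assert lo_for(900.0) == 855.0                   # 900 MHz band, low-side injection
assert lo_for(1200.0) == 1155.0                 # 1.2 GHz band, low-side injection
assert lo_for(900.0, low_side=False) == 945.0   # high-side alternative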
FIG. 11 is a block diagram illustrating a camera interface 235 of the preferred apparatus embodiment in accordance with the present invention. The camera interface 235 is used in conjunction with a video camera (or camcorder) 230 that outputs its signal as an RF video carrier on channel 3 or 4 (61.25 or 67.25 MHz), and is used to upconvert the video carrier to an RF carrier at 900 MHz or 1.2 GHz without intervening demodulation and modulation of the video signal.
As mentioned above, the camera interface 235 may be omitted when the wireless video access apparatus 201 (or 101) is not connected to CATV services, in which case the video camera 230 may be directly connected to the second communication channel 227 (provided that interference with the RF output video signal may be avoided, for example, by having the RF input video signal from the video camera 230 on a different channel than the RF output video signal from the wireless video access apparatus 201). As illustrated in FIG. 11, the input video signal from the video camera 230 is mixed up to the required output frequency using an offset mixer 465, a fixed reference oscillator 470, and a bandpass filter 475. Although not illustrated in FIG. 11, if additional input video signals are desired from, for example, additional video cameras, the input video signals may also be multiplexed. This feature may be desirable, for example, when the system is to be used for surveillance of multiple points or locations, or when the user desires to transmit additional windows or screens within screens.
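The offset mixing is the mirror image of the receiver's downconversion: the channel 3 or 4 carrier from the camera (61.25 or 67.25 MHz) is summed with a fixed reference so that the result lands in the 900 MHz or 1.2 GHz band. The reference frequency in the sketch below is chosen purely as an example.

# Offset-mixing arithmetic for the camera interface described above. The
# 838.75 MHz reference value is an illustrative assumption.

REFERENCE_MHZ = 838.75      # assumed fixed reference oscillator 470

def upconvert(channel_carrier_mhz):
    # Sum product selected by the bandpass filter 475 after the offset mixer 465.
    return channel_carrier_mhz + REFERENCE_MHZ

assert upconvert(61.25) == 900.0    # channel 3 camera carrier -> 900 MHz band
assert upconvert(67.25) == 906.0    # channel 4 lands on a nearby carrier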
Alternatively, as mentioned above, the camera interface 235 may be directly incorporated within the video camera 230.
In addition, for those video cameras producing an NTSC/PAL composite video signal (rather than an RF video carrier on channel 3 or 4), an additional stage may be added within the camera interface 235 to modulate the NTSC/PAL composite video signal to an RF video carrier prior to offset mixing by offset mixer 465, or, in lieu of offset mixing, to directly modulate the NTSC/PAL composite video signal to 900 MHz or 1.2 GHz to form the RF input video signal.
FIG. 12 is a block diagram illustrating a radio frequency video transponder 115A of a preferred apparatus embodiment in accordance with the present invention (to be used in conjunction with a corresponding radio frequency wireless videophone 120A discussed below with reference to FIG. 14).
The radio frequency video transponder 115A receives the radio frequency output video signal broadcast throughout the user premises on the second communication channel 227, and in turn, the radio frequency video transponder 115A rebroadcasts that signal through an air interface to the wireless videophone, usually on a second or different frequency, such as 900 MHz. As illustrated in FIG. 12, a radio frequency output video signal on the second communication channel 227 is heterodyne converted to an intermediate frequency (IF), such as 45.75 MHz, using a mixer circuit 500, a frequency synthesizer (or phase lock loop) circuit 510 (such as a Motorola MC145220), and a converter 505 (which converts a square wave from the frequency synthesizer 510 to a sinusoid). The frequency synthesizer 510 may be programmed such that the video transponder 115A may receive on any standard CATV channel using channel select 515, which may be a microprocessor controller or programmable switches. Following a gain stage (or amplifier) 520, an IF bandpass filter 525 is used to remove unwanted mixing components and adjacent channel signals. From the IF, the NTSC encoded composite video signal is applied to another mixer 530, which translates the frequency up to the 900 MHz band, utilizing a separate, second frequency synthesizer (PLL) circuit 540 to perform the frequency translation, also with a second converter 535. The resulting signal is then bandpass filtered in filter 545, amplified in a second gain stage 550, and applied to an antenna 555 which radiates the output video signal, referred to as a second output video signal or a second frequency output video signal, through the air interface. Note that any other frequency band (second frequency) may be used which allows low-power (preferably unlicensed) operation of such wide-band signals.
FIG. 13 is a block diagram illustrating an infrared video transponder 115B of a preferred apparatus embodiment in accordance with the present invention (to be used in conjunction with a corresponding infrared wireless videophone 120B discussed below with reference to FIG. 15). Referring to FIG. 13, as in the RF version, a radio frequency output video signal on the second communication channel 227 is heterodyne converted to an intermediate frequency (IF), such as 45.75 MHz, using a mixer circuit 560, a frequency synthesizer (or phase lock loop) circuit 570 (such as a Motorola MC145220), and a converter 565 (which also converts a square wave from the frequency synthesizer 570 to a sinusoid).
Following a gain stage (or amplifier) 575 and an IF bandpass filter 580, the IF video signal is applied to a television IF processor 585, such as a Motorola MC44301. This IF processor 585 converts the IF signal to baseband, and also provides a video detector function. The output of the IF processor 585 is a composite video signal which is then applied to a driver circuit 590 and a chain of infrared emitter light emitting diodes (LEDs) 595. The IR LEDs convert the electrical, composite video signal into amplitude modulated infrared light, which is broadcast through an air interface, such as in the user premises. A DC bias circuit 597 is used to set the LEDs 595 at a linear point of operation.
FIG. 14 is a block diagram illustrating a radio frequency (RF) wireless videophone 120A of a preferred apparatus embodiment in accordance with the present invention (to be used in conjunction with a corresponding radio frequency video transponder 115A discussed above). As part of an RF receiver 608 within the RF wireless videophone 120A, the second output video signal broadcast from the radio frequency video transponder 115A which appears at the antenna 600 is prefiltered by a bandpass filter 603 centered at the desired carrier frequency, such as 900 MHz. The output audio signal from the wireless telephone base station 110 is also picked up by the antenna 600, and transferred to the audio transceiver subsystem 605 (discussed in greater detail below with reference to FIG. 17). Continuing to refer to FIG. 14, the bandpassed signal from filter 603 is then heterodyne converted to an intermediate frequency (IF), such as 45.75 MHz (utilizing a mixer 607, a converter 609, and a frequency synthesizer 610, as discussed above). The resulting signal is amplified in gain stage 612, bandpass filtered in filter 614, and applied to a television IF processor 616, such as a Motorola MC44301. This IF processor 616 converts the IF output video signal to a baseband output video signal, such as an NTSC/PAL composite video signal, and also provides various video detector and other video functions. The resulting baseband video signal is then applied to a display driver circuit 618, which provides horizontal and vertical component video information to drive a liquid crystal display (LCD) 620, such that the output video signal is then displayed on the LCD 620 or other comparable video display. In other embodiments, the functions of the IF processor 616, the display driver 618, and possibly other components of the RF receiver 608 may be combined into one IC, such as a Motorola MC44302.
FIG. 15 is a block diagram illustrating an infrared wireless videophone 120B of a preferred apparatus embodiment in accordance with the present invention (to be used in conjunction with a corresponding infrared video transponder 115B discussed above). As discussed above for the RF wireless videophone 120A, the output audio signal from the wireless telephone base station 110 is also picked up by an antenna 626, and transferred to the audio transceiver subsystem 605 (discussed in greater detail below with reference to FIG. 17). Referring to FIG. 15, forming an IR receiver, the optical signal broadcast from the IR video transponder 115B is received at the lens 625, where the optical signal is applied to an infrared detector diode circuit 627. This infrared detector diode circuit 627 converts the optical signal back into an electrical signal, recovering an amplitude modulated composite video signal, which is then amplified in gain stage 629 and applied to a display driver 631. The resulting signal is then displayed on a liquid crystal display (LCD) 633 or other comparable video display.
FIG. 16 is a block diagram illustrating a wireless telephone base station 110 of a preferred apparatus embodiment in accordance with the present invention. The input audio, output audio and control section of the wireless telephone base station 110 may be implemented using standard cordless or wireless telephone technology, such as CT-1, CT-2, DECT, etc. The components forming the wireless telephone base station 110, and utilized to implement a full-duplex audio telephony link, also may be integrated into the wireless video access apparatus 301, or may operate as a stand-alone device as shown in the systems illustrated in FIGs. 1 and 2. Referring to FIG. 16, the wireless telephone base station 110, preferably operating at 49 MHz or 900 MHz, is coupleable to the twisted-pair telephone cabling within the home through tip and ring lines 641 and 642, respectively. The twisted pair cabling, such as line 294 illustrated in FIG. 2, is coupleable to the wireless video access apparatus 101 (or 201) through the user interface 215 or the user audio interface 255, respectively, which process the input audio signal and the output audio signal, as discussed above. The wireless telephone base station 110 provides the interface with the two-wire telephone cable, converting those audio signals to and from radio frequency audio signals broadcast in the user premises.
Continuing to refer to FIG. 16, an audio signal from tip and ring lines 641 and 642 is applied to a network interface 640, which provides network isolation and signal conversion. The telephone interface 643, such as a Motorola MC34016, forms the interface towards the telephone line and performs all speech and line interface functions, such as DC and AC line termination, 2 to 4 wire conversion, automatic gain control and hookswitch control. The audio output of this telephone interface 643 drives a transmitter subsystem 645, preferably a low power 49 MHz FM transmitter IC, such as a Motorola MC2833. This single-chip FM transmitter subsystem, along with passive external components, provides amplification, FM modulation, and upconversion to the 49 MHz RF carrier frequency. The output of this transmitter subsystem 645 is a radio frequency signal containing audio output information (and accordingly is referred to herein as a radio frequency output audio signal), which is applied to a duplexer filter circuit 649 and antenna 651 for transmission of the output audio signal to the wireless videophone 120.
Correspondingly, the transmitted signal from the wireless videophone 120 is a radio frequency signal containing audio input information (and accordingly is referred to herein as a radio frequency input audio signal), and is received at the antenna 651, duplex filtered in duplexer 649, and applied to a narrowband-FM receiver 647, such as the Motorola MC3335.
This receiver 647 provides dual FM conversion with oscillator,
mixers, quadrature discriminator, and carrier detection circuitry.
The recovered audio output from the receiver 647 is applied to the telephone interface 643 (and network interface 640), for transmission of the input audio signal to the user interface 215 or user audio interface 255, for processing as discussed above.
FIG. 17 is a block diagram illustrating a wireless telephone audio transceiver 605 of a preferred apparatus embodiment in accordance with the present invention. The radio frequency input and output audio signals (broadcast to and from the wireless telephone base station 110) are input to (and output from) the audio transceiver 605 via an antenna, such as antenna 626 or antenna 600 discussed above, via the duplexer 655. Within the cordless videophone, a telephone subsystem 657, such as the Motorola MC13109, is used along with other external circuits to provide the audio transceiver functions, although other standard cordless telephone systems may also be utilized. The telephone subsystem 657, such as an MC13109, preferably integrates several of the functions into a single integrated circuit, including a dual conversion FM receiver, an audio compander function, a dual programmable PLL for frequency generation, and a low battery detector. The microcontroller 663 interfaces with the telephone subsystem 657 to provide transmit and receive frequency programming and transceiver control functions, keypad 665 input detection, DTMF generation, and other control functions. Transmit functions are implemented with the voltage controlled oscillator 667, which is programmed for the carrier frequency and provides the carrier frequency (e.g., 49 MHz) which is FM modulated for input audio, filtered in filter 670, and applied to an antenna 600 or 626. The output audio signal is broadcast to the user via speaker 659, while input audio is received from the user via microphone 661.
FIG. 18 is a flow diagram illustrating the method of the preferred embodiment of the present invention. As illustrated in FIG. 18, the method begins, start step 700, with receiving a first protocol signal, such as a Q.x or V.x encoded/modulated H.32x audio/video signal, to form a received protocol signal, step 705. In the preferred embodiment, step 705 is performed in the network interface 210. Next, in step 715, the received protocol signal is converted to a baseband output video signal and an output audio signal. In the preferred embodiment, step 715 is performed by the processor arrangement 190, or more particularly by the microprocessor subsystem 260 (and possibly voice DSP 415) and the audio/video compression and decompression subsystem 265. In the preferred embodiment utilizing a wireless videophone 120 or a telephone 295 for audio output and input, an important feature of the present invention is the independence of the output audio signal from the output video signal. In the event that a television 240 or other video display 225 is also to be used for audio output, the output audio signal may be combined with the baseband output video signal (rather than separating out the audio portion and separately routing it through the wireless telephone base station 110). Next, in step 725, the baseband output video signal (and possibly output audio signal as well) is modulated to form a radio frequency output video (and audio) signal, also referred to as a composite output video signal, and in step 735, the RF output video (and audio) signal is transmitted. In the preferred embodiment, steps 725 and 735 are performed by the RF modulator/demodulator 205 or the RF modulator 270. In addition, the output audio signal may also be a combination of both near end and far end (remote) audio, resulting in near and far end combined audio available at the video display. This combination would allow both recording and monitoring of the audio/video information, from both the near and far ends. Next, in step 745, the radio frequency output video signal is remodulated to a second frequency to form a second output video signal and, in step 750, the second output video signal is transmitted, with both steps 745 and 750 preferably performed by the video transponder 115.
Concurrently with steps 705, 715, 725, 735, 745 and 750 (involving receiving (at a local location) video conference information transmitted from another location, such as a remote location), in the preferred embodiment, steps 710, 720, 730 and 740 are also occurring (involving transmitting (from a local location) video conference information to another location, such as a remote location). In step 710, a radio frequency input video signal and an input audio signal are received. As indicated above, in the preferred embodiment, the input video signal and input audio signal are each independent of the other. In the preferred embodiment, the radio frequency input video signal from the camera interface 235 (or directly from the video camera 230) is received by the RF demodulator 275 or the RF modulator/demodulator 205, and an input audio signal is received via the wireless telephone base station 110 and transferred to either the user interface 215 or user audio interface 255. Alternatively, the input audio signal may also be received by a microphone in the video camera 230 and included as part of the RF input video signal from the camera interface 235. Next, preferably in the RF demodulator 275 or the RF modulator/demodulator 205, in step 720 the RF input video (and possibly audio) signal is demodulated to form a baseband input video (and possibly audio) signal. In step 730, the baseband input video signal and the input audio signal are converted to a second protocol signal, preferably by the processor arrangement 190, or more specifically by the audio/video compression and decompression subsystem 265, the microprocessor subsystem 260, and the voice DSP 415. In step 740, the second protocol signal is transmitted to form a transmitted protocol signal, preferably by the network interface 210. Following steps 750 and 740, when the video conference has been terminated, step 755, such as by going on hook, the process may end, return step 760, and if the video conference has not been terminated in step 755, the method continues, returning to steps 705 and 710.
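The concurrent receive and transmit paths of FIG. 18 can be summarized as a single loop that runs until the conference is torn down. The Python sketch below compresses steps 705 through 760 into stub functions named after the steps they represent; it is a structural outline, not actual control firmware.

# Structural outline of the FIG. 18 method (steps 705-760). Every function is
# a stub named after the step it represents; none are real firmware APIs.

def receive_protocol_signal():          return "H.32x bits"            # step 705
def convert_to_baseband(protocol):      return ("video", "audio")      # step 715
def modulate_rf(video):                 return "RF " + video           # step 725
def transmit_rf(rf_video):              pass                           # step 735
def transpond(rf_video):                pass                           # steps 745, 750

def receive_inputs():                   return ("RF video", "audio")   # step 710
def demodulate_rf(rf_video):            return "baseband video"        # step 720
def convert_to_protocol(video, audio):  return "H.32x bits"            # step 730
def transmit_protocol_signal(protocol): pass                           # step 740

def run_conference(still_active):
    while still_active():                        # step 755: loop until on hook
        video, audio = convert_to_baseband(receive_protocol_signal())
        rf_video = modulate_rf(video)
        transmit_rf(rf_video)
        transpond(rf_video)                      # video transponder 115 path
        rf_in, audio_in = receive_inputs()
        transmit_protocol_signal(convert_to_protocol(demodulate_rf(rf_in), audio_in))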
FIG. 19 is a flow diagram illustrating the telephony and video conference control methodology in accordance with the preferred embodiment of the present invention. FIG. 19 also illustrates the multiple roles of a telephone, such as a wireless videophone 120 or telephone 295, in the system of the present invention, including providing telephony (POTS), providing video call control, and providing the video and audio portion of the video conference. Referring to FIG. 19, beginning with start step 800, a request for service is detected, step 805, such as going off hook or receiving an incoming alert signal. Next, in step 810, a user indication or alert is provided, such as a dial tone or an incoming ring signal, and signaling information is collected, such as DTMF digits of a phone number or of a predetermined video call sequence.
When a video conference has been requested in step 815, such as through entry of the specific predetermined sequence or receipt of an incoming message from the network 140, then the method proceeds to step 835. When a video conference has not been requested in step 815, the method proceeds to request or set up a telephony call, such as generating DTMF tones and connecting an audio path between the user's telephone and the network 140, step 820, followed by entering the transparent telephony mode and transmitting audio (typically PCM) data to the network 140, step 825. The audio data will have been PCM encoded, and will have been transformed into an appropriate digital or analog format (e.g., ISDN, POTS, etc.) by the network interface 210 for transmission to the network 140. When the telephony call is terminated, step 830, the method may end, return step 860.
Continuing to refer to FIG. 19, when a video conference has been requested in step 815, the method proceeds to step 835 and initializes the video conference control system, such as playing an initial speech prompt as discussed above. Next, in step 840, the video input request type is collected and the corresponding requested service is performed, such as originating a video conference call using a directory, updating a video conference call directory, manually originating a video conference call, muting an input (audio or video), providing loop back (e.g., local self-view such as monitoring or other surveillance), playing help or error messages or menu options, or exiting the video conferencing control system. In step 845, a video conference call is requested or set up (such as for an incoming video call), and in step 850, the video conference mode is entered, with protocol encoded (e.g., H.32x and either Q.x or V.x protocols) audio and video data being transmitted to the network 140. When the video conference call is terminated in step 855, such as by going on hook, the method may end, return step 860.
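The step 840 service dispatch is essentially a lookup from the collected request type to the corresponding action. The handler descriptions in this sketch paraphrase the options listed above; the option names themselves are chosen here for illustration.

# Sketch of the FIG. 19 video-conference control dispatch (step 840). The
# option names are illustrative; each entry paraphrases an action from the text.

def handle_video_request(option):
    handlers = {
        "directory_call":   "originate a video conference call from a directory entry",
        "update_directory": "update the video conference call directory",
        "manual_call":      "collect digits and manually originate a video call",
        "mute_input":       "mute an audio or video input",
        "loop_back":        "route camera video to local displays (self-view)",
        "help":             "play help messages or menu options from speech ROM",
        "exit":             "exit the video conferencing control system",
    }
    return handlers.get(option, "play an error message and re-prompt the user")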
A particularly innovative feature of the various apparatus and system embodiments of the present invention is the "quadality" of the use of a wireless videophone 120 or telephone 295 in the preferred embodiment for telephony (POTS), for audio input and output, for video output, and for call control (for selecting either video or telephony modes). Another significant feature of the various embodiments of the present invention is the interoperability of both POTS telephony and ISDN telephony within the same device, such as a wireless videophone 120. As a consequence, when a wireless videophone 120 is being used for ISDN video conferencing, the method of the invention may include various modes for avoiding potential conflict with simultaneous POTS use. For example, during an ISDN video conference in which a wireless videophone 120 is being utilized for call control and for audio input and output, the method provides for avoiding a POTS conflict, such as that which could occur if an incoming POTS call were received. One alternative for avoiding such conflict would consist of "busying out" the POTS line 107 when such an ISDN video conference is in progress. Another alternative would consist of providing POTS priority for the audio portion of the video conference, such as enabling a user to simultaneously receive the POTS audio while the video conference is occurring (or the video link maintained), for example, to provide for potentially exigent or emergency situations (such as emergency calls) which would typically occur via POTS lines. Other alternatives may include providing a POTS caller identification (caller ID) functionality, such that caller ID FSK modulated data could be displayed on a caller ID unit or on a video display (either video display 225 or an LCD 620 or 633), allowing the user to determine whether the video conference should or should not be terminated. Such an alternative may be implemented, for example, through a call waiting (flash hook) system, or by returning the POTS line to an on hook status followed by a ringing signal and going off hook. Similar conflict resolution schemes may be implemented for situations of an existing POTS call in progress followed by an incoming ISDN video call. In addition, a local, non-network flash system may also be implemented, allowing the user to toggle between a POTS call and a concurrent ISDN video call.
Also as indicated above, such conflict resolution may also be implemented utilizing the combination of the keypad of a wireless videophone 120 or a telephone 295 and video display (either video display 225 or an LCD 620 or 633) as a graphical user interface, for entry of user control signals and for selection of potentially competing calls.
Network configuration is yet another function which may be performed via a wireless videophone 120 (or a telephone 295) and the user audio interface 255, especially utilizing menu options presented on an on-screen display (either video display 225 or an LCD 620 or 633). For example, as disclosed in the fourth related application, automatic ISDN configuration capabilities, for example, for ISDN parameters such as switch type and SPID, may be implemented within the processor arrangement 190 and executed by the user via control functionality (as options entered by the user via a wireless videophone 120, telephone 295 or other user interface 215). In addition, for POTS video conferencing capability, V.x or other modem configuration parameters (such as auto or manual answer) may also be configured as options entered by the user via a wireless videophone 120, telephone 295 or other user interface 215.
The auto answer modem option also generates another possible area of conflict for POTS telephony versus POTS video conferencing, especially if a user is utilizing a telephone answering machine on the telephony (POTS) line 107. In the preferred embodiment, to determine whether an incoming POTS call is for telephony or video conferencing, a carrier (such as a V.34 carrier frequency) detector may be implemented, such that if a carrier is found, the wireless video access apparatus 101, 201 or 301 proceeds with V.x protocols (such as training), and if no carrier is detected, the wireless video access apparatus 101, 201 or 301 assumes a voice (telephony) call and allows the telephone 295 (or answering machine) to ring and answer the incoming call.
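The incoming-call discrimination just described (probe briefly for a modem carrier and fall back to ordinary ringing if none is heard) reduces to a timed detection loop. In the sketch below, the detection window and the carrier-detect primitive are assumptions made for illustration.

# Sketch of the incoming POTS call discrimination described above. The
# detect_carrier callable and the 3-second window are illustrative assumptions.

import time

def classify_incoming_pots_call(detect_carrier, window_s=3.0):
    # Return "video" if a modem carrier is heard within the window, else "voice".
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if detect_carrier():        # e.g., energy detected near the V.34 carrier
            return "video"          # proceed with V.x training and handshaking
        time.sleep(0.05)
    return "voice"                  # let the telephone or answering machine answer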
Similarly, for ISDN telephony versus ISDN video conferencing, the wireless video access apparatus 101, 201 or 301 may detect an H.320 or other video protocol, and may provide a distinctive alert to indicate an incoming video call. If the user then goes off hook, the ISDN video call is connected, for example, using the Q.931 protocol. Correspondingly, if an answering machine goes off hook, the audio portion of the ISDN call may be passed through, allowing an audio message to be left during, for example, an H.320 video conference call.
Numerous advantages from the various wireless video access apparatuses 101, 201 and 301, and from the various wireless video conferencing systems 100, 200 and 300, are readily apparent. First, because the output video signal is modulated and transmitted over the second communications channel 227, such as over an entire coaxial cable within the user premises, the audio/visual conferencing and telephony system of the preferred embodiment may operate at more than one designated node or location within the user premises, for example, utilizing any wireless videophone, other videophone, or telephone and television, within the user premises, providing multiple viewing points and multiple participation points. Such broadcast capability of the video conferencing functionality is truly unique to the invention disclosed herein and in the second related application. In addition, the audio/visual conferencing and telephony system of the preferred embodiment may be mobile, utilizing the video camera 230 and camera interface 235 from a myriad of locations within the user premises and, indeed, from anywhere the second communications channel 227 (such as a coaxial cable) may reach. As a consequence, the user is not confined to a single location, such as at a PC or in a dedicated conference room, for video conferencing capability. In addition, the system may be configured as needed for additional locations, for example, simply by adding or removing televisions and video cameras.
Another significant feature of the present invention is the unique portability of the wireless videophone 120. In addition, in accordance with the preferred embodiment, the audio/visual conferencing and telephony system utilizes equipment typically found in consumers' homes or premises, such as existing televisions, video cameras or camcorders, and telephones. As a consequence, the system may be implemented at relatively low cost, especially compared to the currently available PC-based or stand-alone video conference systems. In addition, and in contrast with prior art video conferencing systems, the system of the present invention is designed to be compatible for use with other existing video conferencing systems, for example, those which may utilize either ISDN or POTS networks, rather than being solely compatible with one or the other (but not both). Moreover, the system of the present invention is user friendly, easy to install and use, and should be relatively less expensive for in-home purchase and use by consumers.
Yet another significant feature of the present invention is the centralization of the audio/video compression and decompression functions, and other video functions, in the wireless video access apparatus. This allows for a reduced cost of the wireless videophones, as duplication of such functionality may be avoided and all wireless videophones may share such video functionality. This also provides for ease and low cost of revisions and upgrades, as such revisions may be downloaded into the wireless video access apparatus, with no changes required of the wireless videophones.
Another interesting feature of the apparatus and system embodiments of the present invention is the multiple functionality of the user interface, for example, the dual use of a telephone or wireless videophone (as a user interface) for control of the video conference call and for the audio portion of the video conference call. This feature is also in stark contrast to the prior art systems, which typically require special switching and special network operations for call placement and call control. Such duality is in addition to the concomitant use of the wireless videophone or telephone for POTS service.
Yet another significant feature of the preferred embodiment of the present invention is the transparency of telephony operation, such that a user need not be aware of the video conferencing capability to place or receive a telephone call.
Other special features of the preferred embodiment of the present invention include the "loop back" operation, such that the same system may also be utilized for surveillance, such as baby monitoring, in addition to conferencing. With the multiplexing capability of the present invention, the video from multiple cameras may be looped back, for example, to provide simultaneous surveillance of multiple locations. Another significant feature of the present invention is the independence of the audio portion from the video portion of an audio/video conference. Moreover, the video conferencing capability illustrated is also protocol independent, such that a variety of communication protocols may be utilized and downloaded without user intervention.
From the foregoing, it will be observed that numerous variations and modifications may be effected without departing from the spirit and scope of the novel concept of the invention.
It is to be understood that no limitation with respect to the specific methods and apparatus illustrated herein is intended or should be inferred. It is, of course, intended to cover by the appended claims all such modifications as fall within the scope of the claims.

Claims (9)

What is claimed is:
1. A wireless video access apparatus, comprising: a network interface coupleable to a first communication channel for reception of a first protocol signal to form a received protocol signal and for transmission of a second protocol signal to form a transmitted protocol signal; a radio frequency modulator to convert a baseband output video signal to a radio frequency output video signal; a radio frequency transmitter coupled to the radio frequency modulator to transmit the radio frequency output video signal; a radio frequency receiver to receive a radio frequency input video signal; a radio frequency demodulator coupled to the radio frequency receiver to convert the radio frequency input video signal to a baseband input video signal; a user interface for reception of a first control signal of a plurality of control signals; and a processor arrangement, the processor arrangement coupled to the network interface, to the radio frequency modulator, to the radio frequency demodulator, and to the user interface, the processor arrangement responsive, through a set of program instructions, and in response to the first control signal, to convert the received protocol signal to the baseband output video signal and to an output audio signal, the processor arrangement further responsive to convert the baseband input video signal and an input audio signal to the second protocol signal.
2. The wireless video access apparatus of claim 1, wherein the wireless video access apparatus is coupled through a wireless link to a camera interface, the camera interface for reception of an input video signal and conversion of the input video signal to the radio frequency input video signal.
3. The wireless video access apparatus of claim 2 wherein the camera interface has a radio frequency transmitter to transmit the radio frequency input video signal.
4. The wireless video access apparatus of claim 2 wherein the wireless video access apparatus is further coupled, via the camera interface, to a video camera, the video camera providing the input video signal.
5. The wireless video access apparatus of claim 1, further comprising: a wireless telephone base station, coupled to the user interface, to transmit a radio frequency output audio signal and to receive a radio frequency input audio signal.
6. The wireless video access apparatus of claim 5 wherein the wireless telephone base station is coupled, via a wireless link, to a wireless videophone for entry of the plurality of control signals.
7. The wireless video access apparatus of claim 1 wherein the processor arrangement is further responsive to convert the received protocol signal to the baseband output video signal and the output audio signal, wherein the baseband output video signal and the output audio signal are independent.
8. The wireless video access apparatus of claim 1 wherein the processor arrangement is further responsive to convert the baseband input video signal and the input audio signal to the second protocol signal, wherein the baseband input video signal and the input audio signal are independent.
9. The wireless video access apparatus of claim 1 wherein the processor arrangement further comprises: a microprocessor subsystem; and an audio/video compression and decompression subsystem coupled to the microprocessor subsystem.
10. The wireless video access apparatus of claim 9 wherein the microprocessor subsystem further comprises: a microprocessor; and a memory coupled to the microprocessor.
GB9724889A 1996-11-27 1997-11-26 Wireless audio and video conferencing and telephony Withdrawn GB2320657A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US75718496A 1996-11-27 1996-11-27

Publications (2)

Publication Number Publication Date
GB9724889D0 GB9724889D0 (en) 1998-01-28
GB2320657A true GB2320657A (en) 1998-06-24

Family

ID=25046749

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9724889A Withdrawn GB2320657A (en) 1996-11-27 1997-11-26 Wireless audio and video conferencing and telephony

Country Status (6)

Country Link
AU (1) AU4520197A (en)
BR (1) BR9707105A (en)
DE (1) DE19751870A1 (en)
GB (1) GB2320657A (en)
ID (1) ID19728A (en)
RU (1) RU97120579A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2363036A (en) * 2000-05-31 2001-12-05 Nokia Mobile Phones Ltd Performing conferencing comprising routing established call in first network through channel in second network
GB2383504A (en) * 1998-06-03 2003-06-25 Orange Personal Comm Serv Ltd A video telephone for conferencing
GB2410160A (en) * 2004-01-15 2005-07-20 Jason Andrew Rees Base station for transmitting audio visual signal to a mobile device in a home network
US6985965B2 (en) * 2000-11-16 2006-01-10 Telefonaktiebolaget Lm Ericsson (Publ) Static information knowledge used with binary compression methods
EP1773054A3 (en) * 2005-10-07 2008-10-15 Samsung Electronics Co., Ltd. Method for performing video communication service and mobile communication terminal therefor
EP1746829A3 (en) * 2005-07-18 2009-04-22 LG Electronics Inc. Mobile communication terminal and method of video communications thereof
NO20120567A1 (en) * 2012-05-15 2013-11-18 Wiable As Wireless video conferencing system and method of installing the system.

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19929168A1 (en) * 1999-06-25 2000-12-28 Siemens Ag Integrated set-top-box telecommunications terminal for digital television

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994001824A1 (en) * 1992-07-06 1994-01-20 Shaw Venson M A single chip integrated circuit system architecture for video-instruction-set-computing
EP0702490A1 (en) * 1994-09-13 1996-03-20 PHILIPS ELECTRONIQUE GRAND PUBLIC (Sigle: PHILIPS E.G.P.) Cordless telephone equipped with an image processing device and a camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994001824A1 (en) * 1992-07-06 1994-01-20 Shaw Venson M A single chip integrated circuit system architecture for video-instruction-set-computing
EP0702490A1 (en) * 1994-09-13 1996-03-20 PHILIPS ELECTRONIQUE GRAND PUBLIC (Sigle: PHILIPS E.G.P.) Cordless telephone equipped with an image processing device and a camera

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2383504A (en) * 1998-06-03 2003-06-25 Orange Personal Comm Serv Ltd A video telephone for conferencing
US8463707B2 (en) 1998-06-03 2013-06-11 France Telecom Dynamic allocation of radio resources in a packet switched communications-system
GB2363036A (en) * 2000-05-31 2001-12-05 Nokia Mobile Phones Ltd Performing conferencing comprising routing established call in first network through channel in second network
GB2363036B (en) * 2000-05-31 2004-05-12 Nokia Mobile Phones Ltd Conference call method and apparatus therefor
US6985965B2 (en) * 2000-11-16 2006-01-10 Telefonaktiebolaget Lm Ericsson (Publ) Static information knowledge used with binary compression methods
GB2410160A (en) * 2004-01-15 2005-07-20 Jason Andrew Rees Base station for transmitting audio visual signal to a mobile device in a home network
EP1746829A3 (en) * 2005-07-18 2009-04-22 LG Electronics Inc. Mobile communication terminal and method of video communications thereof
US7893954B2 (en) 2005-07-18 2011-02-22 Lg Electronics Inc. Mobile communication terminal and method of video communications thereof
EP1773054A3 (en) * 2005-10-07 2008-10-15 Samsung Electronics Co., Ltd. Method for performing video communication service and mobile communication terminal therefor
US7999840B2 (en) 2005-10-07 2011-08-16 Samsung Electronics Co., Ltd. Method for performing video communication service and mobile communication terminal therefor
NO20120567A1 (en) * 2012-05-15 2013-11-18 Wiable As Wireless video conferencing system and method of installing the system.

Also Published As

Publication number Publication date
RU97120579A (en) 1999-10-27
GB9724889D0 (en) 1998-01-28
BR9707105A (en) 2005-01-25
DE19751870A1 (en) 1998-06-25
ID19728A (en) 1998-07-30
AU4520197A (en) 1998-06-04

Similar Documents

Publication Publication Date Title
US6011579A (en) Apparatus, method and system for wireline audio and video conferencing and telephony, with network interactivity
US6134223A (en) Videophone apparatus, method and system for audio and video conferencing and telephony
US5877821A (en) Multimedia input and control apparatus and method for multimedia communications
US5774857A (en) Conversion of communicated speech to text for transmission as RF modulated base band video
EP1116409B1 (en) Access control means, communication device, communication system and television receiver
US5374952A (en) Videoconferencing system
US5922047A (en) Apparatus, method and system for multimedia control and communication
US5534914A (en) Videoconferencing system
US6346964B1 (en) Interoffice broadband communication system using twisted pair telephone wires
US6201562B1 (en) Internet protocol video phone adapter for high bandwidth data access
JP2000505616A (en) Cordless phone backlinks for interactive television systems
US20070242755A1 (en) System for bi-directional voice and data communications over a video distribution network
GB2320657A (en) Wireless audio and video conferencing and telephony
GB2318021A (en) Wireline audio and video conferencing and telephony
GB2312591A (en) Automatically connecting TV viewers to information services
US7127733B1 (en) System for bi-directional voice and data communications over a video distribution network
GB2328832A (en) Apparatus,method and system for audio and video conferencing and telephony
CN1187090A (en) Apparatus, method and system for wireless audio and video conferencing and telephony
US5777664A (en) Video communication system using a repeater to communicate to a plurality of terminals
CN1244991A (en) Videophone apparatus, method and system for wireline audio and video conference and telephony
US5892537A (en) Audio-visual telecommunications unit designed to form a videophone terminal
GB2263844A (en) Communication systems
GB2318022A (en) Apparatus, method and system for wireline audio and video conferencing and telephony
JPH1174977A (en) Visitor notifying system
CN1232592A (en) Apparatus, method and system for wireline audio and video conferencing and telephony

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)