KR20150033337A - Terminal and operating method thereof - Google Patents

Terminal and operating method thereof

Info

Publication number
KR20150033337A
Authority
KR
South Korea
Prior art keywords
video call
mobile terminal
terminal
call data
data
Prior art date
Application number
KR20130113213A
Other languages
Korean (ko)
Inventor
김홍주
전병휘
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to KR20130113213A
Publication of KR20150033337A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40Support for services or applications
    • H04L65/403Arrangements for multi-party communication, e.g. for conferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/155Conference systems involving storage of or access to video conference sessions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/50Telephonic communication in combination with video communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)

Abstract

A mobile terminal comprises a user input unit, a display unit, and a control unit. The control unit displays an icon layout screen comprising a plurality of icons on the display unit, obtains through the user input unit a first user input for setting an application corresponding to an icon selected from among the icons, and, upon obtaining the first user input, displays on the display unit one or more setting buttons for configuring the application.

Description

TERMINAL AND OPERATING METHOD THEREOF

The present invention relates to a terminal and its operating method, and more particularly to the setting of an application in a terminal.

A terminal can be divided into a mobile terminal (mobile / portable terminal) and a stationary terminal according to whether the terminal can be moved. The mobile terminal can be divided into a handheld terminal and a vehicle mount terminal according to whether the user can directly carry the mobile terminal.

Such a terminal has various functions and may take the form of a multimedia device with multiple capabilities, such as capturing photos and moving pictures, playing music or video files, playing games, and receiving broadcasts.

In particular, the terminal can perform a video call function. A mobile terminal including a wireless wide area network communication module, such as an LTE module, can perform a video call with a counterpart terminal over a wireless wide area network.

However, if a mobile terminal does not include a wireless wide area network communication module, or is not subscribed to such a network, it cannot participate in a video call over the wireless wide area network.

In particular, wearable terminals such as watch-type terminals and glasses-type terminals are now being marketed. Although their wearable form factor makes them well suited to video calls, these wearable terminals in many cases do not include a wireless wide area network communication module and, as a result, cannot participate in a video call over a wireless wide area network.

Embodiments of the present invention provide a terminal that can participate in a video call over a wireless wide area network even when it does not include a wireless wide area network communication module or is not subscribed to a wireless wide area network, and a method of operating the terminal.

In one embodiment, a first mobile terminal comprises: a video call data acquisition unit for acquiring first video call data of a user of the first mobile terminal; a wireless local area network communication module for receiving second video call data from a second mobile terminal through a wireless local area network; a controller for generating third video call data using the first video call data and the second video call data; and a wireless wide area network communication module for transmitting the third video call data to a third mobile terminal through a wireless wide area network.

In another embodiment, a first mobile terminal may include: a video call data acquisition unit for acquiring first video call data of a user of the first mobile terminal; a wireless wide area network communication module for receiving second video call data from a second mobile terminal over a wireless wide area network; a controller for generating third video call data using the first video call data and the second video call data; and a wireless local area network communication module for transmitting the third video call data to a third mobile terminal through a wireless local area network.

In another embodiment, a first mobile terminal may include: a wireless local area network communication module for receiving, from a second mobile terminal through a wireless local area network, third video call data generated using first video call data received from a third mobile terminal via a wireless wide area network and second video call data of a user of the second mobile terminal; a speaker for outputting voice data in the third video call data; and a display unit for displaying images of a plurality of video call participants in the third video call data.
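The embodiments above describe the same relay pattern from different vantage points: a hub terminal combines its own user's video call data with data received from a local participant into combined "third" video call data, which is forwarded onward. The following is a minimal Python sketch of that combining step; all names (`VideoCallFrame`, `hub_step`, and the callables) are illustrative assumptions, not APIs from this specification.

```python
from dataclasses import dataclass

@dataclass
class VideoCallFrame:
    participant_id: str
    image: bytes   # one encoded video frame (placeholder)
    voice: bytes   # the matching audio chunk (placeholder)

def make_third_call_data(first: VideoCallFrame, second: VideoCallFrame) -> list:
    """Combine the hub user's own data (first) with data received over the
    WLAN from an indirect participant (second) into the composite 'third'
    video call data that is sent onward."""
    return [first, second]

def hub_step(capture_own, wlan_receive, wwan_send):
    """One relay cycle of the hub terminal: capture locally, receive over
    the WLAN, combine, and transmit the combined data over the WWAN."""
    first = capture_own()          # video call data acquisition unit
    second = wlan_receive()        # WLAN communication module (receive)
    third = make_third_call_data(first, second)
    wwan_send(third)               # WWAN communication module (transmit)
    return third
```

Here the "third video call data" is modeled simply as a list of frames; a real terminal would multiplex encoded audio/video streams rather than Python objects.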

The terminal according to various embodiments of the present invention provides a user interface that allows a user to intuitively and easily input configuration information of an application without complicated procedures.

FIG. 1 is a block diagram of a mobile terminal according to an embodiment of the present invention.
FIG. 2 illustrates a video call network topology according to an embodiment of the present invention.
FIG. 3 illustrates a video call network topology according to another embodiment of the present invention.
FIG. 4 is a flowchart illustrating an operation of a hub terminal according to an embodiment of the present invention.
FIG. 5 illustrates a user interface for adding a video call indirect participant terminal to a WWAN video call according to an embodiment of the present invention.
FIG. 6 shows a user interface for adding a video call indirect participant terminal to a WWAN video call according to another embodiment of the present invention.
FIG. 7 is a flowchart illustrating an operation of a hub terminal according to another embodiment of the present invention.
FIG. 8 is a diagram illustrating a packet flow according to an embodiment of the present invention.
FIG. 9 is a flowchart illustrating an operation of a mobile terminal according to an embodiment of the present invention.
FIG. 10 is a front view of a mobile terminal displaying images of a plurality of video call participants according to an embodiment of the present invention.
FIGS. 11 to 13 show the mobile terminal 100 displaying images of a plurality of video call participants according to an embodiment of the present invention.
FIGS. 14 and 15 show the mobile terminal 100 displaying images of a plurality of video call participants according to an embodiment of the present invention.
FIG. 16 shows a user input for rotating a polyhedron according to an embodiment of the present invention.
FIG. 17 is a flowchart illustrating a method of acquiring a user's image according to an embodiment of the present invention.
FIG. 18 is a ladder diagram illustrating a method in which a glasses-type terminal according to an embodiment of the present invention performs a video call in cooperation with a hub terminal.

Hereinafter, a mobile terminal related to the present invention will be described in detail with reference to the drawings. The suffixes "module" and "unit" for the components used in the following description are assigned or used interchangeably merely for ease of drafting the specification, and do not themselves have distinct meanings or roles.

The mobile terminal described in this specification may include a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a PDA (Personal Digital Assistant), a PMP (Portable Multimedia Player), and a navigation device. However, those skilled in the art will readily appreciate that the configurations according to the embodiments described herein may also be applied to fixed terminals such as digital TVs and desktop computers, except for configurations applicable only to mobile terminals.

Hereinafter, a structure of a mobile terminal according to an embodiment of the present invention will be described with reference to FIG.

1 is a block diagram of a mobile terminal according to an embodiment of the present invention.

The mobile terminal 100 includes a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. The components shown in FIG. 1 are not essential, and a mobile terminal having more or fewer components may be implemented.

Hereinafter, the components will be described in order.

The wireless communication unit 110 may include one or more modules for enabling wireless communication between the mobile terminal 100 and the wireless communication system or between the mobile terminal 100 and the network in which the mobile terminal 100 is located. For example, the wireless communication unit 110 may include a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short range communication module 114, and a location information module 115 .

The broadcast receiving module 111 receives broadcast signals and / or broadcast-related information from an external broadcast management server through a broadcast channel.

The broadcast channel may include a satellite channel and a terrestrial channel. The broadcast management server may refer to a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and a broadcast signal in which a data broadcast signal is combined with a TV or radio broadcast signal.

The broadcast-related information may refer to a broadcast channel, a broadcast program, or information related to a broadcast service provider. The broadcast-related information may also be provided through a mobile communication network. In this case, it may be received by the mobile communication module 112.

The broadcast-related information may exist in various forms. For example, it may take the form of an EPG (Electronic Program Guide) of DMB (Digital Multimedia Broadcasting) or an ESG (Electronic Service Guide) of DVB-H (Digital Video Broadcast-Handheld).

For example, the broadcast receiving module 111 may receive digital broadcast signals using digital broadcasting systems such as DMB-T (Digital Multimedia Broadcasting-Terrestrial), DMB-S (Digital Multimedia Broadcasting-Satellite), MediaFLO (Media Forward Link Only), DVB-H, and ISDB-T (Integrated Services Digital Broadcast-Terrestrial). Of course, the broadcast receiving module 111 may also be adapted to other broadcasting systems besides the digital broadcasting systems described above.

The broadcast signal and / or broadcast related information received through the broadcast receiving module 111 may be stored in the memory 160.

The mobile communication module 112 transmits and receives radio signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network. The radio signals may include a voice call signal, a video call signal, or various types of data according to transmission and reception of text/multimedia messages.

The wireless Internet module 113 is a module for wireless Internet access, and may be built in or externally attached to the mobile terminal 100. WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access) and the like can be used as wireless Internet technologies.

The short-range communication module 114 refers to a module for short-range communication. Bluetooth, Radio Frequency Identification (RFID), infrared data association (IrDA), Ultra Wideband (UWB), ZigBee, and the like can be used as a short range communication technology.

The position information module 115 is a module for obtaining the position of the mobile terminal, and a representative example thereof is a Global Position System (GPS) module.

Referring to FIG. 1, the A/V (Audio/Video) input unit 120 is for inputting an audio signal or a video signal, and may include a camera 121 and a microphone 122. The camera 121 processes image frames, such as still images or moving images, obtained by the image sensor in the video call mode or the photographing mode. The processed image frame can be displayed on the display unit 151.

The image frame processed by the camera 121 may be stored in the memory 160 or transmitted to the outside through the wireless communication unit 110. Two or more cameras 121 may be provided depending on the use environment.

The microphone 122 receives an external sound signal in a call mode, a recording mode, a voice recognition mode, or the like, and processes it into electrical voice data. In the call mode, the processed voice data can be converted into a form transmittable to a mobile communication base station through the mobile communication module 112 and output. Various noise reduction algorithms may be implemented in the microphone 122 to remove noise generated while receiving an external sound signal.

The user input unit 130 generates input data for the user to control the operation of the terminal. The user input unit 130 may include a keypad, a dome switch, a touch pad (static pressure/capacitance), a jog wheel, a jog switch, and the like.

The sensing unit 140 senses the current state of the mobile terminal 100, such as its open/closed state, its position, the presence or absence of user contact, and its orientation and acceleration/deceleration, and generates a sensing signal for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 takes the form of a slide phone, the sensing unit 140 can sense whether the slide phone is opened or closed. It can also sense whether the power supply unit 190 is supplying power, whether the interface unit 170 is connected to an external device, and the like. Meanwhile, the sensing unit 140 may include a proximity sensor 141.

The output unit 150 generates output related to sight, hearing, or touch, and may include a display unit 151, an audio output module 152, an alarm unit 153, and a haptic module 154.

The display unit 151 displays (outputs) information processed by the mobile terminal 100. For example, when the mobile terminal is in the call mode, a UI (User Interface) or a GUI (Graphic User Interface) associated with a call is displayed. When the mobile terminal 100 is in the video communication mode or the photographing mode, the photographed and / or received video or UI and GUI are displayed.

The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, and a 3D display.

Some of these displays may be transparent or light transmissive so that they can be seen through. This can be referred to as a transparent display, and a typical example of the transparent display is TOLED (Transparent OLED) and the like. The rear structure of the display unit 151 may also be of a light transmission type. With this structure, the user can see an object located behind the terminal body through the area occupied by the display unit 151 of the terminal body.

There may be two or more display units 151 according to the embodiment of the mobile terminal 100. For example, in the mobile terminal 100, a plurality of display portions may be spaced apart from one another or may be disposed integrally with each other, or may be disposed on different surfaces.

When the display unit 151 and a sensor for sensing a touch operation (hereinafter referred to as a 'touch sensor') form a mutual layer structure (hereinafter referred to as a 'touch screen'), the display unit 151 can be used as an input device in addition to an output device. The touch sensor may take the form of, for example, a touch film, a touch sheet, or a touch pad.

The touch sensor may be configured to convert a change in a pressure applied to a specific portion of the display unit 151 or a capacitance generated in a specific portion of the display unit 151 into an electrical input signal. The touch sensor can be configured to detect not only the position and area to be touched but also the pressure at the time of touch.

If there is a touch input to the touch sensor, the corresponding signal(s) are sent to the touch controller. The touch controller processes the signal(s) and transmits the corresponding data to the controller 180. In this way, the controller 180 can know which area of the display unit 151 has been touched.
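The path just described (touch sensor, then touch controller, then controller 180) can be sketched as a small pipeline. All names, the display dimensions, and the noise threshold below are illustrative assumptions, not values from this specification.

```python
from typing import Optional

DISPLAY_W, DISPLAY_H = 480, 800   # assumed display resolution

def touch_controller(raw_signal: dict) -> Optional[dict]:
    """Model of the touch controller: drop signals below a noise floor,
    then forward position, area, and pressure to the main controller."""
    if raw_signal["pressure"] < 0.05:      # assumed noise threshold
        return None
    return {"x": raw_signal["x"], "y": raw_signal["y"],
            "area": raw_signal["area"], "pressure": raw_signal["pressure"]}

def controller_180(event: dict) -> str:
    """Model of the controller deciding which area of the display unit
    was touched (here: a simple 2x2 grid of regions)."""
    col = 0 if event["x"] < DISPLAY_W / 2 else 1
    row = 0 if event["y"] < DISPLAY_H / 2 else 1
    return f"region-{row}{col}"
```

Note that, as the paragraph above states, the touch sensor can report not only position and area but also pressure; the sketch carries all three through the controller path.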

Referring to FIG. 1, a proximity sensor 141 may be disposed in an inner region of the mobile terminal surrounded by the touch screen, or in the vicinity of the touch screen. The proximity sensor 141 is a sensor that detects, without mechanical contact and using electromagnetic force or infrared rays, the presence of an object approaching a predetermined detection surface or an object existing near the detection surface. The proximity sensor 141 has a longer lifespan than a contact-type sensor, and its applicability is also broader.

Examples of the proximity sensor 141 include a transmission-type photoelectric sensor, a direct-reflection-type photoelectric sensor, a mirror-reflection-type photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is capacitive, it is configured to detect the proximity of the pointer by the change in the electric field as the pointer approaches. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.

Hereinafter, for convenience of explanation, the act of recognizing that the pointer is positioned over the touch screen without the pointer contacting it is referred to as a "proximity touch," and the act of actually bringing the pointer into contact with the touch screen is referred to as a "contact touch." The position at which the pointer is proximity-touched on the touch screen is the position at which the pointer lies vertically above the touch screen during the proximity touch.
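The distinction between a proximity touch and a contact touch can be expressed as a simple classifier over the pointer's distance from the screen surface. The threshold below is an assumed value for illustration only; the specification does not define a detection range.

```python
def classify_touch(distance_mm: float, max_proximity_mm: float = 20.0) -> str:
    """Classify a pointer state by its distance from the touch screen:
    zero (or negative) distance is a contact touch, a distance within the
    assumed detection range is a proximity touch, anything farther is no touch."""
    if distance_mm <= 0:
        return "contact touch"
    if distance_mm <= max_proximity_mm:
        return "proximity touch"
    return "no touch"
```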

The proximity sensor detects a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, and the like). Information corresponding to the detected proximity touch operation and the proximity touch pattern may be output on the touch screen.

The audio output module 152 may output audio data received from the wireless communication unit 110 or stored in the memory 160 in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like. The audio output module 152 also outputs sound signals related to functions performed in the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a receiver, a speaker, a buzzer, and the like.

The alarm unit 153 outputs a signal for notifying the occurrence of an event of the mobile terminal 100. Examples of events that occur in the mobile terminal include call signal reception, message reception, key signal input, and touch input. The alarm unit 153 may output a signal notifying the occurrence of an event in a form other than a video or audio signal, for example, by vibration. Since the video signal or the audio signal may also be output through the display unit 151 or the audio output module 152, these units may be classified as part of the alarm unit 153.

The haptic module 154 generates various tactile effects that the user can feel. A typical example of the haptic effect generated by the haptic module 154 is vibration. The intensity and pattern of the vibration generated by the haptic module 154 can be controlled. For example, different vibrations may be combined and output, or output sequentially.

In addition to vibration, the haptic module 154 can generate various other tactile effects, such as the effect of a pin arrangement moving vertically against the contacted skin surface, a jet or suction force of air through a jet port or suction port, a brush against the skin surface, contact with an electrode, an electrostatic force, and the effect of reproducing a sensation of cold or warmth using a heat-absorbing or heat-emitting element.

The haptic module 154 can be implemented not only to transmit a tactile effect through direct contact but also to let the user feel a tactile effect through the kinesthetic sense of a finger or arm. Two or more haptic modules 154 may be provided according to the configuration of the mobile terminal 100.

The memory 160 may store a program for the operation of the controller 180 and temporarily store input / output data (e.g., a phone book, a message, a still image, a moving picture, etc.). The memory 160 may store data on vibration and sound of various patterns outputted when a touch is input on the touch screen.

The memory 160 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), a magnetic memory, a magnetic disk, and an optical disk. The mobile terminal 100 may also operate in association with web storage that performs the storage function of the memory 160 over the Internet.

The interface unit 170 serves as a path to all external devices connected to the mobile terminal 100. The interface unit 170 receives data from an external device, receives power and delivers it to each component inside the mobile terminal 100, or transmits data from inside the mobile terminal 100 to an external device. For example, the interface unit 170 may include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video input/output (I/O) port, an earphone port, and the like.

The identification module is a chip that stores various kinds of information for authenticating the usage rights of the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. A device equipped with an identification module (hereinafter referred to as an "identification device") can be manufactured in a smart card format. Accordingly, the identification device can be connected to the terminal 100 through a port.

When the mobile terminal 100 is connected to an external cradle, the interface unit may serve as a path through which power from the cradle is supplied to the mobile terminal 100, or through which various command signals input by the user at the cradle are transmitted to the mobile terminal. The various command signals or the power input from the cradle may serve as a signal for recognizing that the mobile terminal has been correctly mounted on the cradle.

The controller 180 typically controls the overall operation of the mobile terminal. For example, it performs control and processing related to voice calls, data communication, video calls, and the like. The controller 180 may include a multimedia module 181 for multimedia playback. The multimedia module 181 may be implemented within the controller 180 or separately from the controller 180.

The controller 180 may perform a pattern recognition process for recognizing handwriting input or drawing input performed on the touch screen as characters and images, respectively.

The power supply unit 190 receives external power and internal power under the control of the controller 180 and supplies power necessary for operation of the respective components.

The various embodiments described herein may be embodied in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.

According to a hardware implementation, the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and other electronic units designed to perform the functions described herein. In some cases, such embodiments may be implemented by the controller 180.

According to a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that each perform at least one function or operation. The software code may be implemented by a software application written in a suitable programming language. The software code may be stored in the memory 160 and executed by the controller 180.

In particular, at least one of the mobile communication module 112 and the wireless Internet module 113 may correspond to a WWAN communication module for wireless wide area network (WWAN) communication as defined in IEEE 802.16 or Long Term Evolution (LTE). The short range communication module 114 may correspond to a WLAN communication module for wireless local area network (WLAN) communication such as infrared communication, Bluetooth, and Wi-Fi.

The wireless wide area network may correspond to an infrastructure network. The wireless wide area network 10 may correspond to a network in which data communication is charged.

The wireless local area network may correspond to an infrastructure network or an ad hoc network. The wireless local area network may correspond to a network in which data communication is not charged.

An infrastructure network is a network in which mobile terminals communicate through a base station. An ad hoc network is a network in which mobile terminals communicate directly without relaying through a base station. The base station may be referred to as an access point, a radio access station (RAS), a base transceiver station (BTS), a mobile multi-hop relay base station (MMR-BS), or the like, or may include all or some of their functions.
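The routing difference between the two network types can be captured in one function: in an infrastructure network, every frame's next hop is the base station, while in an ad hoc network it is the destination terminal itself. A minimal sketch with assumed names (the function and its string arguments are illustrative, not from the specification):

```python
def next_hop(network_type: str, src: str, dst: str, base_station: str = "AP") -> str:
    """Return the next hop for a frame from src to dst under each topology."""
    if network_type == "infrastructure":
        return base_station          # all traffic is relayed by the base station
    if network_type == "ad_hoc":
        return dst                   # terminals communicate directly
    raise ValueError(f"unknown network type: {network_type}")
```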

Next, a video call network topology according to an embodiment of the present invention will be described with reference to FIG. 2 to FIG.

FIG. 2 illustrates a video call network topology according to an embodiment of the present invention.

As shown in FIG. 2, the mobile terminal 100a and the mobile terminal 100b perform a video call through the wireless wide area network 10.

The mobile terminal 100a and one or more mobile terminals 100c belong to a local area 41. The mobile terminal 100b and one or more mobile terminals 100d belong to another local area 42.

One or more mobile terminals 100c access the mobile terminal 100a via the wireless local area network 20 and participate in a video call between the mobile terminal 100a and the mobile terminal 100b. When the wireless local area network 20 is an infrastructure network such as Wi-Fi, one or more mobile terminals 100c access the mobile terminal 100a through a base station such as an access point.

One or more mobile terminals 100d access the mobile terminal 100b through the wireless local area network 30 and participate in a video call between the mobile terminal 100a and the mobile terminal 100b. When the wireless local area network 30 is an infrastructure network such as Wi-Fi, one or more mobile terminals 100d access the mobile terminal 100b through a base station such as an access point.

The mobile terminal 100a and the mobile terminal 100b directly participate in a WWAN video call. Hereinafter, a mobile terminal that directly participates in a WWAN video call, regardless of whether it also operates as a video call hub, will be referred to as a video call direct participant terminal.

Also, the mobile terminal 100a and the mobile terminal 100b directly participate in a WWAN video call and operate as a video call hub. Hereinafter, a mobile terminal operating as a video call direct participant and a video call hub will be referred to as a hub terminal.

One or more mobile terminals 100c and one or more mobile terminals 100d participate indirectly in the WWAN video call through a hub terminal, without themselves operating as a video call hub. Hereinafter, such a mobile terminal will be referred to as a video call indirect participant terminal.
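The roles defined above, hub terminals on the WWAN and indirect participant terminals reaching the call only through their local hub over the WLAN, can be modeled as a small object graph. The class and method names below are illustrative assumptions used only to make the topology of FIG. 2 concrete:

```python
class Terminal:
    def __init__(self, name: str, has_wwan: bool = False):
        self.name = name
        self.has_wwan = has_wwan     # True for hub / direct participant terminals
        self.hub = None              # set for indirect participant terminals

    def join_via(self, hub: "Terminal") -> None:
        """Attach this terminal to a hub over the wireless local area network."""
        self.hub = hub

    def path_to_wwan(self) -> list:
        """Return the chain of terminal names traversed to reach the WWAN call."""
        if self.has_wwan:
            return [self.name]       # direct participant: no relay needed
        if self.hub is not None:
            return [self.name] + self.hub.path_to_wwan()
        raise RuntimeError(f"{self.name} cannot reach the WWAN video call")
```

For example, a watch-type terminal "100c" attached to hub "100a" would reach the WWAN call via the path `["100c", "100a"]`, mirroring the indirect participation described above.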

FIG. 3 illustrates a video call network topology according to another embodiment of the present invention.

As shown in FIG. 3, the wireless local area network 20 and the wireless local area network 30 may correspond to the ad hoc network 21 and the ad hoc network 31, respectively.

The mobile terminal 100a and the mobile terminal 100b can operate as a tethering hub through tethering.

One or more mobile terminals 100c can directly connect to the mobile terminal 100a, without going through a base station such as an access point, and participate in the video call between the mobile terminal 100a and the mobile terminal 100b.

Also, one or more mobile terminals 100d can directly connect to the mobile terminal 100b, without going through a base station such as an access point, and participate in the video call between the mobile terminal 100a and the mobile terminal 100b.

At this time, the connection created between the mobile terminal 100c and the mobile terminal 100a and the connection created between the mobile terminal 100d and the mobile terminal 100b may each be referred to as a direct wireless connection or a peer-to-peer (P2P) connection.

Referring to FIG. 2 and FIG. 3, the hub terminal and the video call participating terminal may have the structure of FIG. In particular, one or more video call participating terminals connected to the hub terminal may include a terminal that does not include a WWAN communication module and a terminal that includes a WWAN communication module. The terminal including the WWAN communication module may be a terminal subscribed to the WWAN 10 or a terminal not subscribed to the WWAN 10.

Referring to FIG. 2 and FIG. 3, through the video call network topology, a terminal that does not include a WWAN communication module can participate in a video call over the WWAN 10. A terminal that includes a WWAN communication module but is not subscribed to the WWAN 10 can also participate in a video call over the WWAN 10. In addition, a terminal that includes a WWAN communication module and is subscribed to the WWAN 10 can participate in a video call over the WWAN 10 for free or at low cost.

Next, the operation of the mobile terminal 100 operating as a tethering hub terminal according to an embodiment of the present invention will be described with reference to FIG. 4 and the subsequent figures.

FIG. 4 is a flowchart illustrating an operation of a hub terminal according to an embodiment of the present invention.

Particularly, FIG. 4 shows a method in which the hub terminal 100a provides video call data to the video call direct participating terminal 100b.

First, the hub terminal 100a participates in a WWAN video call with the video call direct participant terminal 100b via the wireless remote network 10 (S101).

The hub terminal 100a adds one or more video call indirect participant terminals 100c to the WWAN video call in step S105 so that the at least one video call indirect participant terminal 100c can participate in the WWAN video call.

Various embodiments for adding the video call indirect participant terminal 100c to the WWAN video call will be described with reference to FIGS. 5 and 6.

FIG. 5 illustrates a user interface for adding a video call indirect participant terminal to a WWAN video call according to an embodiment of the present invention.

In the embodiment of FIG. 5, the video call indirect participant terminal 100c may request participation in the WWAN video call from the hub terminal 100a through the wireless local area network 20. When the hub terminal 100a receives the participation request, it displays a notification user interface informing the user of the WWAN video call participation request of the video call indirect participant terminal 100c, together with a query user interface asking the user whether to allow the WWAN video call participation of the video call indirect participant terminal 100c. When the hub terminal 100a receives permission for the WWAN video call participation from the user through the displayed query, the hub terminal 100a adds the video call indirect participant terminal 100c to the WWAN video call.

FIG. 6 shows a user interface for adding a video call indirect participant terminal to a WWAN video call according to another embodiment of the present invention.

In the embodiment of FIG. 6, the hub terminal 100a displays a graphical user interface (GUI) element 51 that supports user selection of a search for peripheral terminals that may be added to the WWAN video call, a list 53 of peripheral terminals that may be added, and a GUI element 55 that supports user selection for adding a searched peripheral terminal to the WWAN video call. The GUI element 51 and the GUI element 55 may be buttons.

Once the GUI element 51 is selected, the hub terminal 100a searches for one or more peripheral terminals that may be added to the WWAN video call. In particular, the hub terminal 100a may search for the peripheral terminals using a received signal strength indicator (RSSI). Then, the hub terminal 100a displays the list 53 of the searched peripheral terminals. Upon request of a peripheral terminal, the hub terminal 100a may add that peripheral terminal to the list 53. When the GUI element 55 is selected, the hub terminal 100a adds the peripheral terminal corresponding to the selected GUI element 55 to the WWAN video call as a video call indirect participant terminal.
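The RSSI-based search described above may be sketched as follows. This is an illustrative Python sketch only; the function name, scan-result format, and the -70 dBm threshold are assumptions for illustration and are not part of the disclosed terminal.

```python
# Hypothetical sketch: filter peripheral terminals discovered over the WLAN
# by received signal strength (RSSI), keeping only nearby candidates for
# the list 53, sorted strongest-signal first.

def search_peripheral_terminals(scan_results, rssi_threshold_dbm=-70):
    """Return terminal names whose RSSI meets the threshold, strongest first."""
    nearby = [(name, rssi) for name, rssi in scan_results if rssi >= rssi_threshold_dbm]
    nearby.sort(key=lambda entry: entry[1], reverse=True)
    return [name for name, _ in nearby]

scan = [("100c-1", -45), ("100c-2", -82), ("100c-3", -63)]
print(search_peripheral_terminals(scan))  # ['100c-1', '100c-3']
```

A terminal with a strong RSSI is likely nearby, which is why RSSI can serve as a rough proximity filter for the candidate list.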

FIG. 4 will be described again.

The hub terminal 100a acquires call image data of the user of the hub terminal 100a through the camera 121 (S107).

The hub terminal 100a acquires call voice data of the user of the hub terminal 100a through the microphone 122 (S109).

The hub terminal 100a receives video call data from the at least one video call indirect participant terminal 100c via the wireless local area network 20 (S111). At this time, the video call data may include call video data and call voice data of the user of the video call indirect participant terminal 100c.

The hub terminal 100a determines one or more speaker terminals corresponding to a speaker and one or more non-speaker terminals not corresponding to a speaker, based on the video call data of the hub terminal 100a and the one or more video call indirect participant terminals 100c (S115).

In one embodiment, the hub terminal 100a may determine the one or more speaker terminals using the call video data of the plurality of terminals 100. Specifically, the hub terminal 100a performs motion tracking of a face object in the call video data of the hub terminal 100a and the one or more video call indirect participant terminals 100c, determines a terminal corresponding to call video data in which movement of the face object exists as a speaker terminal, and determines a terminal corresponding to call video data in which movement of the face object does not exist as a non-speaker terminal. At this time, the face object may include at least one of eyes, lips, and a mouth.

In another embodiment, the hub terminal 100a may determine the one or more speaker terminals using the call voice data of the hub terminal 100a and the call voice data of the at least one video call indirect participant terminal 100c. Specifically, the hub terminal 100a may determine a terminal corresponding to call voice data whose volume is equal to or greater than a reference level as a speaker terminal, and determine a terminal corresponding to call voice data whose volume is less than the reference level as a non-speaker terminal.
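The volume-based determination of step S115 may be sketched as follows. This is an illustrative Python sketch under assumed conditions: normalized floating-point audio samples and a reference level of 0.2 are assumptions, not values from the disclosure.

```python
# Hedged sketch of step S115: classify terminals as speaker / non-speaker by
# comparing the average volume of their call voice data to a reference level.

def classify_speakers(call_voice, reference_level=0.2):
    """call_voice maps terminal id -> list of normalized audio samples."""
    speakers, non_speakers = [], []
    for terminal_id, samples in call_voice.items():
        avg_volume = sum(abs(s) for s in samples) / len(samples)
        (speakers if avg_volume >= reference_level else non_speakers).append(terminal_id)
    return speakers, non_speakers

voice = {"100a": [0.5, -0.4, 0.6], "100c": [0.01, -0.02, 0.0]}
speakers, silent = classify_speakers(voice)
print(speakers, silent)  # ['100a'] ['100c']
```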

The hub terminal 100a generates WWAN call video data, which corresponds to the call video data to be transmitted to the video call direct participant terminal 100b through the wireless remote network 10, using the call video data of the hub terminal 100a and the call video data of the at least one video call indirect participant terminal 100c (S117).

In one embodiment, the WWAN call video data may include video data of a speaker terminal and video data of a non-speaker terminal. In this case, step S115 may be omitted. In another embodiment, the WWAN call video data may include video data of a speaker terminal but not video data of a non-speaker terminal. Accordingly, even when the bandwidth allocated to the WWAN call video data is limited, a smooth video call can be performed.

In an embodiment, when the area occupied by a face in each image frame of the call video data is smaller than a reference value and the area occupied by the background is relatively large, the hub terminal 100a may crop part of the background to increase the relative area occupied by the face. Accordingly, when the video call indirect participant terminal 100c is a watch-type terminal or a glasses-type terminal having low data processing performance and a small display screen, such a terminal does not need to recognize faces in real time itself; since the hub terminal, which has relatively high data processing performance, recognizes the face in real time on its behalf, the video call indirect participant terminal 100c can effectively show the face of the conversation partner on a small display screen even with low data processing performance.

When the WWAN call video data includes call video data of a plurality of terminals, the hub terminal 100a may generate the WWAN call video data by combining the call video data of the plurality of terminals 100. At this time, the plurality of terminals may include the hub terminal 100a and the one or more video call indirect participant terminals 100c.

There may be a limit to the bandwidth allocated to the WWAN call video data. Accordingly, in order to transmit the WWAN call video data within the limited bandwidth, the hub terminal 100a may resize at least one of the call video data of the plurality of terminals 100, reduce its frame size, degrade its image quality, or lower its frame rate.
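The frame-rate reduction mentioned above can be illustrated with simple arithmetic. The following Python sketch is illustrative only; the per-frame size of 8 kbit and the 96 kbps budget are assumed values, not figures from the disclosure.

```python
# Illustrative arithmetic for step S117's bandwidth limit: given N video
# sources and a total WWAN budget, lower each source's frame rate until the
# combined rate fits within the budget.

def fit_frame_rate(num_sources, fps, kbits_per_frame, budget_kbps):
    """Reduce the shared frame rate until all sources fit the WWAN budget."""
    while fps > 1 and num_sources * fps * kbits_per_frame > budget_kbps:
        fps -= 1
    return fps

# Four sources at 12 fps, 8 kbit/frame -> 384 kbps; fit into a 96 kbps budget.
print(fit_frame_rate(4, 12, 8, 96))  # 3
```

With one source the same budget needs no reduction, which mirrors the single-user case described in the surrounding paragraphs.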

The hub terminal 100a may combine call video data of a plurality of terminals 100 in an analog domain or in a digital domain.

When the hub terminal 100a combines the call video data of the plurality of terminals 100 in the analog domain, the WWAN call video data is generated such that each image frame of the WWAN call video data includes image frames of the plurality of terminals.

In particular, transmission of a single call video data packet stream for a single user may be allowed to the hub terminal 100a for the WWAN video call. That is, when the frame rate for the video call is 12 frames per second, transmission of a call video data stream of 12 frames per second may be allowed to the hub terminal 100a for the WWAN video call; that is, the frame rate of the WWAN call video data may be 12 frames per second. At this time, in order to transmit call video data for a plurality of users, each image frame of the WWAN call video data may be divided into a plurality of frame regions, and an image frame of the call video data of the user of the hub terminal 100a and image frames of the call video data of the users of the at least one video call indirect participant terminal 100c may be respectively placed in the plurality of frame regions. For example, if each image frame of the WWAN call video data is divided into four frame regions, the image frame of the call video data of the user of the hub terminal 100a may be placed in the upper-left frame region, the image frame of the call video data of the user of a first video call indirect participant terminal 100c in the upper-right frame region, the image frame of the call video data of the user of a second video call indirect participant terminal 100c in the lower-left frame region, and the image frame of the call video data of the user of a third video call indirect participant terminal 100c in the lower-right frame region.
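The four-region composition described above can be sketched as follows. This is an illustrative Python sketch: image frames are modeled as 2-D lists of pixel values, which is an assumption made purely for demonstration.

```python
# Sketch of the analog-domain composition: each WWAN image frame is divided
# into four frame regions, and one user's image frame is placed in each
# quadrant (upper-left, upper-right, lower-left, lower-right).

def compose_quad_frame(frames):
    """Tile four equally sized frames into one 2x2 composite frame."""
    (tl, tr), (bl, br) = (frames[0], frames[1]), (frames[2], frames[3])
    top = [row_l + row_r for row_l, row_r in zip(tl, tr)]
    bottom = [row_l + row_r for row_l, row_r in zip(bl, br)]
    return top + bottom

a = [["a"] * 2] * 2          # hub terminal 100a's user
b = [["b"] * 2] * 2          # first indirect participant's user
c = [["c"] * 2] * 2          # second indirect participant's user
d = [["d"] * 2] * 2          # third indirect participant's user
composite = compose_quad_frame([a, b, c, d])
print(composite[0])  # ['a', 'a', 'b', 'b']
```

The composite stream still carries 12 frames per second, so it satisfies the single-stream limit while conveying four users' images.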

When the hub terminal 100a combines the call video data of the plurality of terminals 100 in the digital domain, it is not necessary to divide each image frame of the WWAN call video data into a plurality of regions; instead, the packet stream of the call video data of the hub terminal 100a and the packet stream of the call video data of the video call indirect participant terminal 100c may be multiplexed to generate the WWAN call video data.

Specifically, transmission of a plurality of call video data streams for a plurality of users may be allowed to the hub terminal 100a for the WWAN video call. That is, when the frame rate for the video call is 12 image frames per second, each call video data stream may include 12 image frames per second, and transmission of four call video data streams may be allowed to the hub terminal 100a for a WWAN video call of four users. When the frame rate is not lowered, transmission of 12 image frames per second for each of the four users' call video data may be allowed to the hub terminal 100a, in which case the frame rate of the WWAN call video data is 48 image frames per second in total. On the other hand, when the frame rate is lowered, transmission of 3 image frames per second for each of the four users' call video data may be allowed to the hub terminal 100a, in which case the frame rate of the WWAN call video data is 12 image frames per second in total. In this case, the hub terminal 100a does not need to divide each image frame to be transmitted into a plurality of regions; instead, the packet stream of the call video data of the hub terminal 100a and the packet stream of the call video data of the video call indirect participant terminal 100c can be multiplexed to generate the WWAN call video data.
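The digital-domain multiplexing above can be sketched as follows. This Python sketch is illustrative only: the source-tagged tuple format is an assumption, standing in for whatever packet headers a real stream would use to let the receiver demultiplex.

```python
# Hedged sketch of digital-domain multiplexing: instead of tiling image
# frames, frames from several sources are interleaved into one packet
# stream, each tagged with its source terminal.

def multiplex_streams(streams, frames_per_source):
    """Interleave the first frames_per_source frames of each tagged stream."""
    muxed = []
    for i in range(frames_per_source):
        for source_id, frames in streams.items():
            muxed.append((source_id, frames[i]))
    return muxed

# Four sources each lowered to 3 frames/s -> 12 multiplexed frames/s total.
streams = {u: [f"{u}-f{i}" for i in range(3)] for u in ("A", "B", "C", "hub")}
muxed = multiplex_streams(streams, 3)
print(len(muxed), muxed[0])  # 12 ('A', 'A-f0')
```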

The hub terminal 100a generates WWAN call voice data, which corresponds to the call voice data to be transmitted to the video call direct participant terminal 100b through the wireless remote network 10, using the call voice data of the hub terminal 100a and the call voice data of the at least one video call indirect participant terminal 100c (S119).

In one embodiment, the WWAN call voice data may include voice data of the speaker terminal and voice data of the non-speaker terminal. In this case, step S115 may be omitted.

In another embodiment, the WWAN call voice data may include voice data of the speaker terminal but not voice data of the non-speaker terminal. Accordingly, even when the bandwidth allocated to the WWAN call voice data is limited, a smooth video call can be performed.

When the WWAN call voice data includes call voice data of a plurality of terminals, the hub terminal 100a can synthesize the call voice data of the plurality of terminals 100 to generate WWAN call voice data. At this time, the plurality of terminals may include the hub terminal 100a and one or more video call indirect participant terminals 100c.

The bandwidth allocated to the hub terminal 100a for the WWAN video call may be limited. Therefore, in order to transmit the call voice data of the plurality of terminals within the limited bandwidth, the hub terminal 100a may degrade the sound quality of at least one of the call voice data of the plurality of terminals 100 or lower its bit rate.

The hub terminal 100a may combine the voice data of a plurality of terminals 100 in the analog domain or in the digital domain.

Specifically, transmission of a single call voice data packet stream for a single user may be allowed to the hub terminal 100a for the WWAN video call. That is, when the bit rate for the video call is 64 kbps, transmission of a 64 kbps call voice data packet stream may be allowed to the hub terminal 100a for the WWAN video call. At this time, in order to transmit call voice data for a plurality of users, the hub terminal 100a may mix the call voice of the user of the hub terminal 100a and the call voice of the user of the video call indirect participant terminal 100c in the analog domain to generate the WWAN call voice data.
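The analog-domain mixing above can be sketched as follows. This is an illustrative Python sketch; representing the call voices as equal-length lists of normalized floating-point samples is an assumption for demonstration.

```python
# Minimal sketch of analog-domain voice mixing for the single-stream case:
# the users' call voices are summed sample by sample and clipped, producing
# one combined voice signal that fits a single 64 kbps-style stream.

def mix_voices(tracks):
    """Sum equal-length sample lists and clip to the [-1.0, 1.0] range."""
    return [max(-1.0, min(1.0, sum(samples))) for samples in zip(*tracks)]

hub_voice = [0.25, 0.5, -0.25]
indirect_voice = [0.25, 0.75, -0.5]
print(mix_voices([hub_voice, indirect_voice]))  # [0.5, 1.0, -0.75]
```

Because the voices are combined before encoding, the receiver hears all users without any change to the single-stream limit.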

When the hub terminal 100a combines the call voice data of the plurality of terminals 100 in the digital domain, the packet stream of the call voice data of the hub terminal 100a and the packet stream of the call voice data of the video call indirect participant terminal 100c can be multiplexed to generate the WWAN call voice data.

Specifically, transmission of a plurality of call voice data packet streams for a plurality of users may be allowed to the hub terminal 100a for the WWAN video call. That is, when the voice bit rate for a voice call is 64 kbps, each call voice data stream may have a bit rate of 64 kbps, and transmission of four call voice data streams may be allowed to the hub terminal 100a for a WWAN video call of four users.

When the bit rate is not lowered, transmission at 64 kbps may be allowed to the hub terminal 100a for each of the four users' call voice data, in which case the bit rate of the WWAN call voice data is 256 kbps in total. When the bit rate is lowered, transmission at 16 kbps may be allowed to the hub terminal 100a for each of the four users' call voice data, in which case the bit rate of the WWAN call voice data is 64 kbps in total. In this case, the hub terminal 100a multiplexes the packet stream of the call voice data of the hub terminal 100a and the packet stream of the call voice data of the video call indirect participant terminal 100c to generate the WWAN call voice data.
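The bit-rate budgeting above reduces to simple division. The following Python sketch is illustrative only; the helper name is hypothetical and an even split across users is an assumed policy.

```python
# Illustrative arithmetic for the voice bit-rate budget: with a total WWAN
# budget shared by several users, each call voice stream is re-encoded at an
# evenly divided per-stream bit rate before the packet streams are multiplexed.

def per_stream_bitrate(total_budget_kbps, num_users):
    """Evenly divide the WWAN voice budget across the users' streams."""
    return total_budget_kbps // num_users

print(per_stream_bitrate(64, 4))   # 16 kbps each when the budget is 64 kbps
print(per_stream_bitrate(256, 4))  # 64 kbps each when no lowering is needed
```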

Then, the hub terminal 100a transmits the WWAN video call data to the video call direct participating terminal 100b through the wireless remote network 10 (S121). At this time, the WWAN video call data may include WWAN call video data and WWAN call voice data.

FIG. 7 is a flowchart illustrating an operation of a hub terminal according to another embodiment of the present invention.

Particularly, FIG. 7 shows a method in which the hub terminal 100a according to the embodiment of the present invention provides video call data to the video call indirect participant terminal 100c.

First, the hub terminal 100a participates in a WWAN video call with the video call direct participant terminal 100b via the wireless remote network 10 (S301).

The hub terminal 100a adds one or more video call indirect participant terminals 100c to the WWAN video call (S305), so that the at least one video call indirect participant terminal 100c can participate in the WWAN video call. The step S305 may be the same as or similar to the step S105, and thus a detailed description thereof will be omitted.

The hub terminal 100a acquires call image data of the user of the hub terminal 100a through the camera 121 (S307).

The hub terminal 100a acquires call voice data of the user of the hub terminal 100a through the microphone 122 (S309).

The hub terminal 100a receives video call data from the at least one video call indirect participant terminal 100c through the wireless local area network 20 (S311). At this time, the video call data may include call video data and call voice data of the user of the video call indirect participant terminal 100c.

The hub terminal 100a receives WWAN video call data from the video call direct participant terminal 100b via the wireless remote network 10 (S313). When the video call direct participant terminal 100b is also a hub terminal, the WWAN video call data may include video call data of a plurality of terminals 100.

The hub terminal 100a determines one or more speaker terminals corresponding to a speaker and one or more non-speaker terminals not corresponding to a speaker, based on the video call data of the hub terminal 100a, the video call direct participant terminal 100b, and the at least one video call indirect participant terminal 100c (S315). The step S315 may be the same as or similar to the step S115, and thus a detailed description thereof will be omitted.

The hub terminal 100a generates WLAN call video data, which corresponds to the call video data to be transmitted to another video call indirect participant terminal 100c through the wireless local area network 20, using the call video data of the hub terminal 100a, the video call direct participant terminal 100b, and the at least one video call indirect participant terminal 100c (S317).

The step S317 may be the same as or similar to the step S117, and thus a detailed description thereof will be omitted.

In particular, the WLAN call video data may include video data of the speaker terminal but not video data of the non-speaker terminal. Accordingly, even when the bandwidth allocated to the WLAN call video data is limited, a smooth video call can be performed. In addition, when the video call indirect participant terminal 100c is a watch-type terminal or a glasses-type terminal having low data processing performance and a small display screen, a comfortable video call can be provided to the user without such a terminal needing to filter out the speaker itself.

The hub terminal 100a generates WLAN call voice data, which corresponds to the call voice data to be transmitted to another video call indirect participant terminal 100c through the wireless local area network 20, using the call voice data of the hub terminal 100a, the video call direct participant terminal 100b, and the one or more video call indirect participant terminals 100c (S319).

The step S319 may be the same as or similar to the step S119, and thus a detailed description thereof will be omitted.

In particular, the WLAN call voice data may include voice data of the speaker terminal but not voice data of the non-speaker terminal. Accordingly, even when the bandwidth allocated to the WLAN call voice data is limited, a smooth video call can be performed. Also, when the video call indirect participant terminal 100c is a watch-type terminal or a glasses-type terminal having low data processing performance, a comfortable video call can be provided to the user without the terminal needing to filter out the speaker itself.

Then, the hub terminal 100a transmits the WLAN video call data to the video call indirect participant terminal 100c through the wireless local area network 20 (S321). At this time, the WLAN video call data may include the WLAN call video data and the WLAN call voice data.

The packet flow according to the embodiment of the present invention will now be described with reference to FIG.

FIG. 8 is a diagram illustrating a packet flow according to an embodiment of the present invention.

Specifically, FIG. 8 shows a case where the hub terminals C and D combine the video call data of a plurality of terminals in the digital domain; that is, the hub terminals C and D multiplex the packet streams of the video call data, which include the video data and the voice data of the plurality of terminals.

Referring to FIG. 8, the mobile terminal A generates a video call packet stream A and transmits it to the hub terminal C via the WLAN.

The pad type terminal B generates a video call packet stream B and transmits it to the hub terminal C via the WLAN.

The hub terminal C generates a video call packet stream C, multiplexes the video call packet stream A of the terminal A, the video call packet stream B of the terminal B, and the video call packet stream C to generate a multiplexed video call packet stream A+B+C, and transmits the multiplexed video call packet stream A+B+C to the hub terminal D via the WWAN.

The watch-type terminal E generates a video call packet stream E and transmits it to the hub terminal D through the WLAN.

The glasses-type mobile terminal F generates a video call packet stream F and transmits it to the hub terminal D through the WLAN.

The hub terminal D generates a video call packet stream D, multiplexes the video call packet stream D, the video call packet stream E of the terminal E, and the video call packet stream F of the terminal F to generate a multiplexed video call packet stream D+E+F, and transmits the multiplexed video call packet stream D+E+F to the hub terminal C via the WWAN.

The hub terminal C receives the multiplexed video call packet stream D+E+F, multiplexes the video call packet stream B of the terminal B, the video call packet stream C of the terminal C, and the multiplexed video call packet stream D+E+F to generate a multiplexed video call packet stream B+C+D+E+F, and transmits the multiplexed video call packet stream B+C+D+E+F to the terminal A via the WLAN.

The hub terminal C also multiplexes the video call packet stream A of the terminal A, the video call packet stream C of the terminal C, and the multiplexed video call packet stream D+E+F to generate a multiplexed video call packet stream A+C+D+E+F, and transmits the multiplexed video call packet stream A+C+D+E+F to the terminal B via the WLAN.

The hub terminal D receives the multiplexed video call packet stream A+B+C, multiplexes the video call packet stream D of the terminal D, the video call packet stream F of the terminal F, and the multiplexed video call packet stream A+B+C to generate a multiplexed video call packet stream A+B+C+D+F, and transmits the multiplexed video call packet stream A+B+C+D+F to the terminal E via the WLAN.

The hub terminal D also multiplexes the video call packet stream D of the terminal D, the video call packet stream E of the terminal E, and the multiplexed video call packet stream A+B+C to generate a multiplexed video call packet stream A+B+C+D+E, and transmits the multiplexed video call packet stream A+B+C+D+E to the terminal F via the WLAN.
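The forwarding rule of FIG. 8 can be summarized as: each hub multiplexes every stream it holds except the destination terminal's own stream. The following Python sketch models this rule; representing stream sets as Python sets of terminal labels is purely an illustrative assumption.

```python
# Sketch of the FIG. 8 forwarding rule: when relaying to one of its WLAN
# terminals, a hub forwards all local and remote streams it holds except
# the stream originating from the destination terminal itself.

def streams_for(destination, local_streams, remote_streams):
    """Streams a hub forwards to one of its WLAN terminals, sorted by label."""
    return sorted((local_streams | remote_streams) - {destination})

# Hub C holds local streams {A, B, C} and the remote stream set {D, E, F}.
print(streams_for("A", {"A", "B", "C"}, {"D", "E", "F"}))  # ['B', 'C', 'D', 'E', 'F']
print(streams_for("B", {"A", "B", "C"}, {"D", "E", "F"}))  # ['A', 'C', 'D', 'E', 'F']
```

This matches the streams B+C+D+E+F delivered to terminal A and A+C+D+E+F delivered to terminal B in the description above.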

Next, the operation of the mobile terminal 100 operating as the video call indirect participant terminal 100c or 100d according to an embodiment of the present invention will be described with reference to FIG. 9.

FIG. 9 is a flowchart illustrating an operation of a mobile terminal according to an embodiment of the present invention.

Referring to FIG. 9, the mobile terminal 100 operating as the video call indirect participant terminal 100c or 100d receives WLAN video call data from the hub terminal 100a or 100b through the WLAN (S501).

The mobile terminal 100 outputs WLAN call voice data in the WLAN video call data (S503).

The mobile terminal 100 displays images of a plurality of video call participants in the WLAN video call data (S505).

Various embodiments for displaying images of a plurality of video call participants will be described with reference to FIGS. 10 to 16.

FIG. 10 is a front view of a mobile terminal displaying images of a plurality of video call participants according to an embodiment of the present invention.

Referring to FIG. 10, the mobile terminal 100 may divide the call image display area of the display unit 151 into a plurality of sub areas, and display images of a plurality of video call participants in the plurality of sub areas, respectively.

FIGS. 11 to 13 show the mobile terminal 100 displaying images of a plurality of video call participants according to an embodiment of the present invention.

Particularly, FIGS. 11 to 13 show a mobile terminal having a small display screen displaying images of a plurality of video call participants. FIGS. 11 to 13 exemplify a watch-type terminal, and the embodiments of FIGS. 11 to 13 may be suitable for a watch-type terminal but may also be applied to other types of terminals.

Referring to FIGS. 11 to 13, the mobile terminal 100 may generate a virtual polyhedron, arrange the images of the plurality of video call participants on a plurality of faces of the polyhedron, and display one of the plurality of faces of the polyhedron on the display unit 151.

FIGS. 14 and 15 show the mobile terminal 100 displaying images of a plurality of video call participants according to an embodiment of the present invention.

In particular, FIGS. 14 and 15 show a mobile terminal having a small display screen displaying images of a plurality of video call participants. FIGS. 14 and 15 exemplify a glasses-type terminal, and the embodiments of FIGS. 14 and 15 may be suitable for a glasses-type terminal but may also be applied to other types of terminals.

Referring to FIG. 14, the mobile terminal 100 may generate a plurality of virtual pages having a plurality of bookmarks, arrange the images of the plurality of video call participants on the plurality of virtual pages, and display one of the plurality of virtual pages.

Referring to FIG. 15, the mobile terminal 100 may display an image of one participant among the plurality of video call participants on one of the left-eye and right-eye display units, and display images of the remaining participants on the other of the left-eye and right-eye display units.

The mobile terminal 100 selects one participant among the plurality of video call participants (S507).

In one embodiment, the mobile terminal 100 may select one of a plurality of video call participants based on user input.

When the images of the plurality of video call participants are displayed as shown in FIG. 10, the mobile terminal 100 may receive a user input for selecting one of the plurality of sub areas and select the participant corresponding to the selected sub area. At this time, the user input for selecting one of the plurality of sub areas may be a touch.

When the images of the plurality of video call participants are displayed as shown in FIGS. 11 to 13, the mobile terminal 100 may receive a user input for rotating the polyhedron, rotate the polyhedron according to the user input, and select the participant corresponding to the front face of the polyhedron. FIG. 16 shows a user input for rotating a polyhedron according to an embodiment of the present invention. As shown in FIG. 16, when the user input corresponds to a touch on the front face of the polyhedron followed by a touch movement, the mobile terminal 100 may rotate the polyhedron according to the direction of the touch movement.
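The rotation interaction above may be sketched as follows. This is an illustrative Python sketch under an assumed model: the polyhedron is reduced to a ring of four side faces, and the drag-direction labels are hypothetical.

```python
# Hedged sketch of the FIG. 16 interaction: a touch-and-drag on the front
# face rotates the virtual polyhedron, bringing a neighboring participant's
# face to the front of the display.

def rotate_faces(faces, drag_direction):
    """Rotate a ring of faces one step left or right; index 0 is the front."""
    if drag_direction == "left":
        return faces[1:] + faces[:1]   # the next face comes to the front
    if drag_direction == "right":
        return faces[-1:] + faces[:-1]
    return faces

ring = ["participant1", "participant2", "participant3", "participant4"]
print(rotate_faces(ring, "left")[0])   # participant2
print(rotate_faces(ring, "right")[0])  # participant4
```

After each rotation, the participant at the front of the ring is the one the terminal would select as described above.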

In another embodiment, the mobile terminal 100 can recognize a speaker and select the participant recognized as the speaker among the plurality of video call participants. In particular, such speaker-based participant selection may be applied to the embodiment of FIG. 15.

In another embodiment, when the hub terminal transmits only the video data of a participant recognized as a speaker, the mobile terminal 100 may select the participant corresponding to the transmitted video data among the plurality of video call participants.

The mobile terminal 100 displays an image of the selected participant (S509). The mobile terminal 100 can display the image of the selected participant on most of the entire area of the display unit 151, for example, half or more of the entire area. The mobile terminal 100 can display the image of the selected participant without displaying the images of the unselected participants.

In particular, in the embodiment of FIG. 15, on the display unit on which an image of one participant among the plurality of video call participants is displayed, the mobile terminal 100 can display the image of the selected participant without displaying the images of the unselected participants. Referring to FIG. 15, the mobile terminal 100 may display the image of the participant selected from the plurality of video call participants on one of the left-eye and right-eye display units, and display the images of the remaining, unselected participants on the other of the left-eye and right-eye display units.

On the other hand, a glasses-type terminal has difficulty acquiring an image of its user. Referring to FIGS. 17 and 18, embodiments in which a video call can be performed even when a user uses a glasses-type terminal will be described.

FIG. 17 is a flowchart illustrating a method of acquiring a user's image according to an embodiment of the present invention.

The glasses-type mobile terminal 100 senses the movement of the user's pupils (S701). Since the mobile terminal 100 has the form of glasses, it can easily detect the movement of the user's pupils by using an image sensor such as the camera 121 focused on the user's eyes.

The eyeglass-type mobile terminal 100 senses the movement of the mobile terminal 100 through a motion sensor (S703).

The eyeglass-type mobile terminal 100 estimates the motion of the user's head based on the motion of the mobile terminal (S705).

The mobile terminal 100 of the eyeglass type acquires the user's voice through the microphone 122 (S707) and recognizes the voice (S709).

The eyeglass-type mobile terminal 100 estimates the mouth shape of the user based on the recognized voice (S711).

The glasses-type mobile terminal 100 generates a current image of the user by applying the movement of the pupils, the estimated motion of the head, and the estimated mouth shape to a photograph of the user acquired in advance (S713).

Then, the eyeglass-type mobile terminal 100 transmits the generated user image to the hub terminal through the WLAN (S715).
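The pipeline of steps S701 to S713 can be sketched as follows. This is an illustrative assumption only: the toy viseme table, function names, and the dict-based "image" are placeholders, not the patent's actual image-synthesis method.

```python
# Toy viseme map: recognized speech sound -> estimated mouth shape (assumption)
VOICE_TO_MOUTH = {"a": "open", "m": "closed", "o": "round"}

def estimate_mouth_shape(recognized_phoneme):
    # S711: estimate the mouth shape from the recognized voice
    return VOICE_TO_MOUTH.get(recognized_phoneme, "neutral")

def generate_user_image(base_photo, pupil_motion, head_motion, mouth_shape):
    # S713: apply the three estimates to a previously acquired photograph;
    # here the "image" is just a record of what was applied
    return {
        "photo": base_photo,
        "pupil": pupil_motion,
        "head": head_motion,
        "mouth": mouth_shape,
    }

def video_call_frame(base_photo, pupil_motion, head_motion, phoneme):
    # pupil_motion comes from the eye-facing camera (S701) and head_motion
    # from the motion sensor (S703-S705); both are passed in here
    mouth = estimate_mouth_shape(phoneme)  # S707-S711
    return generate_user_image(base_photo, pupil_motion, head_motion, mouth)
```

The resulting frame would then be sent to the hub terminal over the WLAN (S715).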

FIG. 18 is a ladder diagram illustrating a method in which the eyeglass-type terminal and the hub terminal cooperate to perform a video call when the eyeglass-type terminal according to an embodiment of the present invention is used.

The eyeglass-type terminal 100c senses that it is being worn (S901).

When wearing is detected, the eyeglass-type terminal 100c transmits a host authority transfer request to the hub terminal 100a through the WLAN 20 (S903).

When the hub terminal 100a receives the host authority transfer request from the eyeglass-type terminal 100c, it turns off the display unit 151 while maintaining its operation as the tethering hub and the operation of the camera 121 (S905), stops the operation of the microphone 122 (S907), and transfers the host authority, such as the authority to add or delete indirect video call participant terminals, to the eyeglass-type terminal 100c (S909).
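The hub-side handoff of steps S905 to S909 amounts to a small state change on the hub terminal. The following is a minimal sketch under assumed attribute names; the patent does not define this API.

```python
class HubTerminal:
    """Minimal state sketch of the hub's host-authority handoff (S905-S909).
    All attribute names are illustrative assumptions."""

    def __init__(self):
        self.display_on = True
        self.camera_on = True
        self.microphone_on = True
        self.tethering_hub = True
        self.host = "hub"

    def on_host_transfer_request(self, requester):
        # S905: keep tethering and the camera running, turn the display off
        self.display_on = False
        # S907: stop the microphone (call voice now comes from the glasses)
        self.microphone_on = False
        # S909: hand over host authority (add/delete indirect participants)
        self.host = requester
```

After the handoff, the hub keeps supplying video and connectivity while the eyeglass-type terminal controls the call.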

Then, the eyeglass-type terminal 100c acquires the user's call voice data (S911) and transmits the acquired call voice data to the hub terminal 100a (S913).

The hub terminal 100a acquires the user's call video data (S915), generates WWAN video call data including the call voice data from the eyeglass-type terminal 100c and the call video data of the hub terminal 100a (S917), and transmits the WWAN video call data to the WWAN video call direct participant terminal through the WWAN 10 (S919).
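Steps S917 and S919 combine the two media sources and forward them. The sketch below is an illustrative assumption (the dict "packet" and the injected `send` callable stand in for the real WWAN media path).

```python
def build_wwan_call_data(glasses_voice, hub_video):
    # S917: combine the eyeglass terminal's call voice with the hub's
    # call video into one WWAN video-call payload
    return {"audio": glasses_voice, "video": hub_video}

def hub_uplink(glasses_voice, hub_video, send):
    # S919: transmit the combined data to the WWAN direct participant;
    # `send` is a placeholder for the WWAN transmit function
    packet = build_wwan_call_data(glasses_voice, hub_video)
    send(packet)
    return packet
```

Design note: the hub never needs the user's voice from its own microphone here, which is why S907 stops it.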

According to an embodiment of the present invention, the above-described method can be implemented as processor-readable code on a medium on which a program is recorded. Examples of processor-readable media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices; the method may also be implemented in the form of a carrier wave (e.g., transmission over the Internet).

The mobile terminal described above is not limited to the configurations and methods of the embodiments described above; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.

Claims (17)

In the first mobile terminal,
A video call data acquisition unit for acquiring first video call data of a user of the first mobile terminal;
A wireless local area network communication module for receiving second video call data from a second mobile terminal through a wireless local area network;
A controller for generating third video call data using the first video call data and the second video call data; And
And a wireless wide area network communication module for transmitting the third video call data to a third mobile terminal through a wireless wide area network
A first mobile terminal.
The method according to claim 1,
Wherein each of the first video call data, the second video call data, and the third video call data includes video data and audio data
A first mobile terminal.
3. The method of claim 2,
The control unit
Wherein the control unit divides each image frame of the third video call data into a plurality of areas, and generates the third video call data so that an image frame of the first video call data is arranged in one of the plurality of areas and an image frame of the second video call data is arranged in another one of the plurality of areas
A first mobile terminal.
The method of claim 3,
The control unit
Reduces an image frame of the first video call data,
Reduces an image frame of the second video call data, and
Generates the third video call data by arranging the reduced image frame of the first video call data in one of the plurality of areas and the reduced image frame of the second video call data in another one of the plurality of areas
A first mobile terminal.
3. The method of claim 2,
The control unit
Generates audio data of the third video call data by mixing audio data of the first video call data and audio data of the second video call data in an analog domain
A first mobile terminal.
The method according to claim 1,
The control unit
Determining a speaker terminal and a non-speaker terminal among the first mobile terminal and the second mobile terminal based on the first video call data and the second video call data,
Wherein the third video call data includes video call data of the speaker terminal and does not include video call data of the non-speaker terminal
A first mobile terminal.
The method according to claim 1,
The control unit
Participates in a wireless wide area network video call with the third mobile terminal through the wireless wide area network using the wireless wide area network communication module, and
Adds the second mobile terminal to the wireless wide area network video call through the wireless local area network using the wireless local area network communication module
A first mobile terminal.
The method according to claim 1,
The video call data obtaining unit
A camera for acquiring image data of the user of the first mobile terminal; And
A microphone for acquiring voice data of the user of the first mobile terminal
A first mobile terminal.
In the first mobile terminal,
A video call data acquisition unit for acquiring first video call data of a user of the first mobile terminal;
A wireless remote network communication module for receiving second video call data from a second mobile terminal over a wireless remote network;
A controller for generating third video call data using the first video call data and the second video call data; And
And a wireless local area network communication module for transmitting the third video call data to a third mobile terminal through a wireless local area network
A first mobile terminal.
10. The method of claim 9,
Wherein the wireless local area network communication module receives the fourth video call data from the fourth mobile terminal through the wireless local area network,
The control unit generates third video call data using the first video call data, the second video call data, and the fourth video call data
A first mobile terminal.
11. The method of claim 10,
The control unit
Determining a speaker terminal and a non-speaker terminal among the first mobile terminal, the second mobile terminal and the third mobile terminal based on the first video call data and the second video call data,
Wherein the third video call data includes video call data of the speaker terminal and does not include video call data of the non-speaker terminal
A first mobile terminal.
In the first mobile terminal,
A wireless local area network communication module for receiving, from the second mobile terminal through the wireless local area network, third video call data including first video call data that the second mobile terminal received from a third mobile terminal and second video call data of a user of the second mobile terminal;
A speaker for outputting voice data in the third video call data; And
And a display unit for displaying images of a plurality of video call participants in the third video call data
A first mobile terminal.
13. The method of claim 12,
Further comprising a control unit for selecting one participant among a plurality of video call participants and displaying an image of the selected participant on the display unit
A first mobile terminal.
14. The method of claim 13,
Wherein the first mobile terminal is a clock type terminal,
The control unit
Generates a virtual polyhedron,
Arranges images of the plurality of video call participants on a plurality of faces of the virtual polyhedron,
And displays one of the plurality of faces of the virtual polyhedron on the display unit
A first mobile terminal.
15. The method of claim 14,
The control unit
Acquiring a user input for rotating the virtual polyhedron,
Rotating the virtual polyhedron according to the user input,
And selects a participant corresponding to the front face of the virtual polyhedron in accordance with the rotation of the virtual polyhedron
A first mobile terminal.
14. The method of claim 13,
Wherein the first mobile terminal is a glasses type terminal,
The control unit
Creating a plurality of virtual pages each having a plurality of bookmarks,
Arranging images of a plurality of video call participants in the plurality of virtual pages,
Displaying one page of the plurality of virtual pages on the display unit
A first mobile terminal.
17. The method of claim 16,
An image sensor for detecting a pupil movement of a user of the first mobile terminal;
A motion sensor for sensing movement of the first mobile terminal;
Further comprising a microphone for acquiring a user voice of the first mobile terminal,
The control unit
Estimating head movement of the user of the first mobile terminal based on the terminal movement,
Estimating a user mouth shape of the first mobile terminal based on the user voice,
Generating a current user image by applying the pupil movement, the estimated user's head movement, and the estimated user mouth shape to a pre-acquired user's photograph,
Generating fourth video call data of a user of the first mobile terminal using the current user video,
And transmitting the fourth video call data to the second terminal through the wireless local area network using the wireless local area network communication module
A first mobile terminal.
KR20130113213A 2013-09-24 2013-09-24 Terminal and operating method thereof KR20150033337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130113213A KR20150033337A (en) 2013-09-24 2013-09-24 Terminal and operating method thereof


Publications (1)

Publication Number Publication Date
KR20150033337A true KR20150033337A (en) 2015-04-01

Family

ID=53030703

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130113213A KR20150033337A (en) 2013-09-24 2013-09-24 Terminal and operating method thereof

Country Status (1)

Country Link
KR (1) KR20150033337A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180111033A (en) * 2017-03-31 2018-10-11 주식회사 하이시스테크놀로지 Device and method for bidirectional providing image information using bluetooth



Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination