WO2011147561A2 - Mobile unit, method for operating the same and network comprising the mobile unit - Google Patents

Mobile unit, method for operating the same and network comprising the mobile unit Download PDF

Info

Publication number
WO2011147561A2
WO2011147561A2 (PCT/EP2011/002575; application EP2011002575W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
mobile unit
user
projector
user interface
Prior art date
Application number
PCT/EP2011/002575
Other languages
French (fr)
Other versions
WO2011147561A3 (en)
Inventor
Chao Zhang
Original Assignee
Chao Zhang
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DE102010052668 (published as DE102010052668A1)
Application filed by Chao Zhang
Publication of WO2011147561A2
Publication of WO2011147561A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 - Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/16 - Constructional details or arrangements
    • G06F 1/1613 - Constructional details or arrangements for portable computers
    • G06F 1/163 - Wearable computers, e.g. on a belt
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 - Detection arrangements using opto-electronic means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F 3/0425 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected

Definitions

  • the mobile unit further adapts its caching strategy to the available bandwidth of the wireless connection and predicts connectivity.
  • when the user is on a high-bandwidth network, e.g. a Wi-Fi network (WLAN based on IEEE 802.11), the system knows that there is no need to pre-cache much content.
  • when the user is on a low-bandwidth network, like 2G GSM, the traffic may get stuck, hindering the user from comfortable browsing; the mobile unit therefore uses free bandwidth, when available, to pre-cache content based on its prediction strategy.
  • the mobile unit is also capable of providing internet content to the user if she or he approaches a null region, i.e. a region without network coverage.
  • based on previous experience (the user taking the underground every morning from Monday to Friday) or based on information on signal coverage in the region in which the user currently is, the mobile unit pre-caches data before it enters the null region: it recognizes that signal coverage could be a problem in the immediate future and starts pre-loading content.
  • the mobile unit can be incorporated entirely into a single housing which then constitutes the wearable device.
  • the system can also be split into two or more components, e.g. into a very small wearable device comprising the projector and the optical user interface, and a main unit comprising the further elements, like the control unit, the battery, etc. This allows placing the main unit e.g. in an inside pocket of a jacket and clipping the compact wearable device to the outside of the clothes, with a cable connecting the two units.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention relates to a mobile unit, comprising a projector as an output unit, an optical user interface and a control unit coupled to the projector and the optical user interface, wherein the projector is configured to display data and the optical user interface is configured to scan gestures made by at least one pointing object of a user and wherein the control unit is configured to interpret data related to the scanned gestures as a user input.

Description

Mobile Unit, Method for operating the same
and Network comprising the Mobile Unit
Mobile consumption of digital media, especially from the Internet, has been increasing steadily. This is indicated by several trends. Smartphones make browsing websites easy on a mobile phone and have been emerging as some of the most successful media consumption devices. Similarly, the rise of eBook readers marks an increasing popularity of consuming readable material. Gaming and watching online videos on the go also drive mobile media consumption.
One common challenge facing all these users and use cases is the conflicting requirements for the device they prefer to use. Consumption of media is often hampered by the small size of the display screen; at the same time, the requirement of mobility demands a smaller device. There have already been many attempts in the market to marry these contradictory requirements. There were attempts to make the display slightly bigger by removing the physical keypad. eBook readers are light, portable devices, albeit with a black-and-white screen. There is also a new category of devices called tablet computers.
The object of the invention is to provide a mobile unit, a method for operating the same, a network and a system that allow consumption of media content without the restrictions of a small display and also without the restrictions of a large device.
In order to solve this object, the invention provides a mobile unit comprising a projector as an output unit, an optical user interface and a control unit coupled to the projector and the optical user interface, wherein the projector is configured to display data, the optical user interface is configured to scan gestures made by at least one pointing object of a user, and the control unit is configured to interpret data related to the scanned gestures as a user input.
The invention is based on providing media content to consumers on the go with a system that combines a large display with a single small device, and the interactivity of digital content with the easiness of a hard copy. The projector allows projecting images onto any surface that is currently available to the user; this can be the surface of a table or a sheet of paper placed on the outside of a briefcase. The optical user interface allows interpreting gestures made with one or more pointing objects, like the user's hand(s), finger(s) or other object(s), as the user's input. A simple example is projecting a document for reading: the pages can be flipped with a simple wave of the hand. In summary, a very compact system incorporates the possibility of displaying content on a large "screen" while at the same time allowing interaction with it.
According to a very compact embodiment of the invention, at least the projector and the optical user interface, and preferably also the control unit, are components of a single device accommodated in, or attached to, a housing of the device. Preferably, the projector of the mobile unit is configured to project data onto a projection area, and the optical user interface is configured to scan gestures inside the projection area. The reliability of data input may be increased by scanning solely a certain area, i.e. the projection area: distracting gestures made by the user for other reasons, e.g. while explaining during a talk, do not lead to misinterpreted user input.
In order to detect gestures within a specific part of the projection area, the exact location of interaction relative to the projection area needs to be determined. To this end, the projector may be configured to project a border or a pattern along with the data image, the border preferably surrounding the projected data, or the pattern preferably overlying the projected data, respectively. It is then possible to correlate the location of the pointing object(s) with the projection area. Advantageously, the optical user interface is a 2D- or 3D-scanner, preferably a laser scanner, which further preferably uses an IR radiation source. 2D-scanners are widely available and reliable devices, allowing an economic design of the mobile unit. 3D-scanners, however, open a further dimension to be used for user input gestures. Thereby the user interaction with the mobile unit can be more complex, yet the user can interact with the mobile unit in a more intuitive and easier way while inputting complex user commands. A laser scanner with an IR radiation source scans the gestures with light that is invisible to the human eye. Consequently, the user is not hindered by the scanning light beam during his work.
According to another embodiment, the optical user interface comprises a camera, preferably an infrared camera. A camera also allows conveniently and reliably scanning the gestures which the user makes with his pointing object(s). If a stereo camera is used, a 3D image of the gestures can easily be obtained. To improve the quality of the projected data it is desirable to automatically adjust the display to the specific circumstances. Therefore, the optical user interface is preferably configured to detect the projection area and its shape, and the projector is configured to adjust the projection of the data based on the detected shape of the projection area. Thus, unevenness in the projection area can be leveled out.
It is further advantageous if the projector and the optical user interface of the mobile unit are integrated in a common housing of a wearable device, having a clip for affixing the housing. Preferably, all sub-units of the mobile unit, e.g. the projector, the camera/scanner, the control unit/main unit (processor, memory, etc.), the power supply, etc. are integrated in a common housing.
According to a further embodiment, the power supply (e.g. a battery) and the control unit are located in a first housing, and the projector and the camera are located in a second housing. The second housing may be a small and lightweight wearable device, because projectors, e.g. pico projectors, and optical user interfaces, e.g. modern 3D laser scanners, are small in size and light in weight. Since the power source, e.g. a rechargeable battery, is usually heavier and larger, it may be advantageous to keep it in the first housing as a separate device that is connected to the wearable device by a suitable cable. The user is hardly hindered by the small wearable device, allowing a comfortable usage of the mobile unit. The clip allows the user to clip the projector to her/his chest so as to conveniently project the image on a surface located in front of her/him. According to a further embodiment, the projector and the camera are integrated in a common housing that is connectable to a further electronic device, such as a notebook, netbook or smartphone; the control unit and the power supply are part of the latter. In other words, the hardware of the further electronic device may be shared by the device itself and the mobile unit according to this embodiment.
Further preferably, the mobile unit comprises a storage device, configured to cache data that has been received by a transceiver via a wireless connection, before the data is requested by the user.
The operation of temporarily storing, i.e. caching, data before it is requested by the user shall be referred to as pre-caching.
This "intelligent caching" allows providing data to the user even if the network connection is bad or interrupted without losing the comfort of online access to e.g. internet data.
Advantageously, the control unit is configured to determine an amount and/or a content of pre-cached data based on at least one user profile, wherein further preferably, the control unit is configured to determine a content of pre-cached data based on a user profile determined for a user of the mobile unit. In a scenario of personalized usage of the mobile unit, e.g. the user is taking the underground every morning and is reading a newspaper, the respective content data, i.e. the newspaper, is downloaded before he or she enters the underground and loses the network connection. Advantageously, the mobile unit recognizes the user's behavior and learns how to provide the best service to him or her.
A further advantageous approach is to configure the control unit so as to determine the amount of pre-cached data based on at least one parameter of the wireless connection. Preferably, the data rate and/or the quality of service is used as a parameter of the wireless connection. It is advantageous if the storage device of the mobile unit is configured to cache a small amount of data if the at least one parameter of the wireless connection is good, and a large amount of data if the at least one parameter is bad. This "intelligent connectivity" improves the overall performance of the mobile unit. E.g., when the user is connected to a high-speed network, like WLAN, 3G HSDPA or the like, there is very little caching. When the user, however, is on a low-bandwidth network, like 2G GSM, the traffic is smoothed out over time by caching a larger amount of data. Another parameter usable by the mobile unit is the network quality, represented by the parameters of the wireless connection. E.g., when the quality of service or the bandwidth goes down, caching is increased, and vice versa.
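As an illustration only, the following is a minimal sketch of how such a connection-adaptive pre-caching budget could be computed. The ConnectionState fields, the nominal rate and the inverse scaling rule are assumptions made for this example; the patent names the parameters (data rate, quality of service, battery power) but no concrete policy.

    from dataclasses import dataclass

    @dataclass
    class ConnectionState:
        bandwidth_kbps: float   # measured downlink data rate
        quality: float          # quality-of-service score in [0, 1]
        battery_level: float    # remaining battery in [0, 1]

    def precache_budget_mb(conn: ConnectionState,
                           min_mb: float = 5.0,
                           max_mb: float = 200.0) -> float:
        """How many megabytes to pre-cache: the better the connection,
        the smaller the budget (content can be fetched on demand); the
        worse the connection, the larger the budget. Low battery reduces
        caching across the board."""
        # Normalize bandwidth against a nominal "good" rate (~3G HSDPA).
        link_score = min(conn.bandwidth_kbps / 3600.0, 1.0) * conn.quality
        budget = min_mb + (max_mb - min_mb) * (1.0 - link_score)
        if conn.battery_level < 0.2:   # conserve energy when battery is low
            budget *= conn.battery_level / 0.2
        return budget

    # Fast WLAN, full battery -> small budget; weak 2G link -> large budget.
    print(precache_budget_mb(ConnectionState(20000.0, 1.0, 1.0)))  # 5.0
    print(precache_budget_mb(ConnectionState(100.0, 0.4, 1.0)))    # ~197.8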
Further advantageously, the amount of cached data may be based on information on signal coverage in the region in which the user is currently located. If the mobile unit recognizes that signal coverage could be a problem in the immediate future, it starts pre-loading content. A still further parameter is the battery power. Preferably, there will be less caching in the mobile unit if battery power is low.
The object of the invention is also solved by a network for providing content to a plurality of mobile units as defined above, the network comprising a server, wherein the mobile units are connectable via an air interface to the server that provides a content delivery service to the mobile units, and wherein the server generates a content-based user profile and provides preferred data to the mobile units based on that profile, wherein the preferred data is designated to be cached by a mobile unit before the data is requested by the user (pre-caching). Advantageously, the server or a plurality of servers determine content-related user profiles, e.g. by statistical analysis of access frequency, time, date, etc. of a certain content. For example, a newspaper will be accessed by a lot of mobile units mainly in the morning hours and on working days. Consequently, the server within the network will provide such preferred data to the mobile units and recommend it for pre-caching. The object of the invention is also solved by a system comprising a first and a second mobile unit according to the invention, wherein the control unit of the first mobile unit is configured to interpret a gesture made by at least one pointing object of a user of the first mobile unit in such a way that data that is presently displayed by the first mobile unit is selected and sent to the second mobile unit. A quick and easy cooperation between users of different mobile units may be established in the aforementioned system. The connection between the mobile units may be established by known measures, e.g. via Bluetooth, an IR data link, or via a server that provides a packet data service to both mobile units.
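A toy sketch of the first unit's "select and send" behavior. The patent leaves the transport open (Bluetooth, IR data link, or a server providing a packet data service); a plain TCP socket stands in here, and the gesture name and item format are invented for the example.

    import json
    import socket

    def send_selected(item: dict, peer_host: str, peer_port: int = 5001) -> None:
        """Push the currently displayed data item to the second mobile unit."""
        payload = json.dumps(item).encode("utf-8")
        with socket.create_connection((peer_host, peer_port)) as sock:
            # Length-prefixed frame so the receiver knows where the item ends.
            sock.sendall(len(payload).to_bytes(4, "big") + payload)

    def on_gesture(gesture: str, displayed_item: dict, peer_host: str) -> None:
        # A flick of the pointing object towards the peer is interpreted as
        # "select the displayed data and send it to the second unit".
        if gesture == "flick_towards_peer":
            send_selected(displayed_item, peer_host)

    # on_gesture("flick_towards_peer",
    #            {"type": "page", "uri": "doc://report/3"}, "192.168.0.42")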
Advantageously, the control unit of the second mobile unit is configured to interpret a gesture made by the pointing object(s) of a user of the first mobile unit in such a way that data that is received by the first mobile unit is inserted into data that is displayed by the second mobile unit. Cooperation of users, especially when working together during a meeting, may thus be improved.
The object of the invention is also solved with a method for operating a mobile unit comprising a projector as an output unit, an optical user interface and a control unit coupled to the projector and the optical user interface, the method comprising the steps of: displaying data with the projector, scanning gestures made by at least one pointing object of a user with the optical user interface, and interpreting data that is related to the scanned gestures with the control unit so as to determine a user input. Regarding the advantages achieved with this method, reference is made to the above comments relating to the mobile unit according to the invention. In order to generate a plurality of events, different actions in the software of the mobile unit need to be defined and triggered at the user's desire. This can be accomplished by evaluating characteristics of the motion of the pointing object(s), especially the point in time, duration, speed and/or location of the motion in relation to the scanned gestures performed by the user. Based on the evaluation, a pre-defined action is associated with these characteristics. Thus, it is possible for a user to generate a desired event simply by performing a characteristic gesture.
For a more appealing interaction with the projected data, the user is given a visual and/or acoustic feedback upon triggering a new action and/or transition from one action to another.
The type of feedback may also depend on the determined location of the pointing object in the projection area. Thus, it is possible to let the feedback change in accordance with different pre-defined areas within the projection area. Preferably, the gestures made by the pointing object(s) of a user are scanned in 3D, and 3D-scan-data is interpreted with the control unit so as to determine the user input. The optical user interface forms a 3D image of the user's pointing object(s); these data allow interpreting the gestures made by the user very precisely.
It is expedient to use a specific coordinate system (x,y,Z) for gathering 3D information about the user's gestures. While (x,y) denotes the location of a pointing object as virtually projected on the projection area, Z represents the distance of the pointing object from the optical user interface. Further preferably, the mobile unit comprises a storage device, configured to cache data that has been received by a transceiver via a wireless connection, wherein the data is requested via the air interface before the data is requested by the user, and an amount of cached data is determined based on a bandwidth of the wireless connection, the content of the data and/or at least one user profile. Preferably, the mobile unit generates the user profile. Content is cached based on previous behavior of the user and/or on the previous behavior of users who have used the mobile unit and may have accessed the same or similar content. This "intelligent caching" based on the history of usage makes the presumably desired content available very quickly. Regarding the further advantages achieved, reference is again made to the above comments relating to the mobile unit.
An exemplary embodiment of the invention will now be described with reference to the enclosed drawings. In the drawings,
- Figure 1 schematically shows exemplary user scenarios; - Figure 2 schematically shows the mobile unit according to the invention in a network according to the invention; and
- Figure 3 schematically shows the hardware architecture of the mobile unit according to an embodiment of the invention.
Figure 1 schematically shows a plurality of exemplary user scenarios, wherein a user, who is sitting at a table, uses a mobile unit according to the invention. The mobile unit preferably is a single device including several functional components in, or attached to, a housing of the device.
According to a first scenario, the mobile unit is a wearable device that is clipped to the user's chest. The mobile unit comprises a scanner and a projector, the projector projecting data onto a projection area. The user may be a mobile user, sitting e.g. in a train or airplane.
According to further scenarios, for instance at home or at the office, the mobile unit is standing on a table or is clipped or mounted to a lamp or to another suitable framework.

Figure 2 schematically shows a scenario in which the mobile unit according to the invention is connected to the Internet via a 3G or Wi-Fi connection or another suitable wireless or wired link. A plurality of services, such as web browsing, different applications or a download of documents, may be provided to the mobile unit.

The hardware architecture of the mobile unit is illustrated in Figure 3. The mobile unit comprises a projector, an optical user interface such as a camera or a scanner, a power unit and a control unit. The different components of the mobile unit may be integrated in a common housing or may be distributed to more than one housing. By way of example only, the projector and the scanner may be integrated in a lightweight wearable device, while the control unit and the power unit, e.g. a rechargeable battery, are integrated in an additional housing. Many different ways of distributing the components among housings are conceivable.
The mobile unit has an instant-on characteristic. This means that the mobile unit is ready to use within a very short time of being switched on. The instant-on characteristic may be achieved, for example, by storing the lightweight operating system of the mobile unit in a suitable flash or other non-volatile memory. As a result, a short boot-up time may be achieved.
The projector is a so-called pico projector, also known as a pocket or micro projector, which is distinguished by its small dimensions combined with good image quality. Provision is made that the mobile unit or a wearable device can be clipped to the clothing of a user so that it can project pictures on a surface located in front of the user. This surface can be a table top or a sheet of paper placed on a briefcase.
The projector of the mobile unit can be provided with a stabilizing unit which provides information used for controlling the projector. The stabilizing unit can be a laser light source which obtains information on the surface onto which the pictures are projected. Since the device is expected to be used mainly in highly mobile scenarios, the display has to be stable in spite of the motion of the user or the projection surface. The stabilizing unit ensures that the projector always offers an in-focus display. So, even if the surface moves or has curves on it, the displayed image always remains focused.
The optical user interface allows gathering information which is used as command input for the system. Preferably, the optical user interface comprises a camera which detects the motion of a pointing object, like the user's hand or finger, on the projected display and interprets it as different user inputs. For example, while the projected display is a page of a book, a simple motion with the right hand towards the left side will be interpreted as flipping to the next page. Another example is clicking by pointing the finger into the projected display. These intuitive user interfaces will be combined with familiar interaction with digital interfaces, such as clicking on an icon by placing a finger on its projection.

When the user interacts with a specific part of the 2D projection area, the exact location of the interaction within the projection area needs to be determined so that a corresponding action can be initiated. For this, a rectangular border surrounding the projected data image, or a pattern overlying the projected data image, will be projected along with the data image, either in the visible or in the IR spectrum. The contour and the position of the pointing object of any kind can be independently or jointly detected using one or more cameras operating in the visible or IR spectrum. The position of the pointing object with respect to the border or the pattern, respectively, will be used to determine the exact location of interaction. Details of these techniques will be explained further below.

The user can generate multiple events to trigger different actions in the software of the mobile unit by simply moving the pointing object. Based on different characteristics of the motion of the pointing object (especially the point in time, duration, speed and/or location of the motion), different events are defined. For example, the pointing object remaining at the same position for more than a predefined time threshold defines a click or press-down. A fast motion away from a certain location could trigger a release event. Another example of triggering a release event is the pointing object leaving the projection area.

A visual and/or an acoustic feedback will be given to the user upon triggering a new action (event) and/or a transition from one action to another. For example, hovering of the pointing object in an area may be indicated by a special pattern (such as a ripple, a specially colored circle, etc.) projected on the tip of the pointing object. A transition from a click to a press-down may be indicated by another pattern, such as a ripple converging to the tip of the pointing object. The feedback may also depend on the location of the pointing object in the projection area. For example, when the user's pointing object enters different areas, like text boxes, an active projection area, etc., the visual feedback pattern may change accordingly.

In a more sophisticated embodiment, the optical user interface is capable of determining the location and movement of multiple pointing objects, preferably in a 3D interaction space. Multiple pointing objects could be multiple fingers, multiple hands, or other objects. Depending on the relative location and motion of these pointers in the 2D or 3D interaction space, different gestures can be defined.
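A sketch of how these motion-based events might be classified from pointer-tip samples delivered by the optical user interface. The thresholds are placeholder values; the patent names the characteristics (point in time, duration, speed, location) but no numbers.

    import math
    from collections import deque

    # Illustrative thresholds: the concrete values are assumptions.
    DWELL_SECONDS = 0.8     # pointer held still this long -> click/press-down
    DWELL_RADIUS = 10.0     # allowed jitter (pixels) while dwelling
    RELEASE_SPEED = 900.0   # pixels/second; faster motion away -> release

    class GestureEventDetector:
        """Classify pointer-tip samples (x, y, t) into user-input events."""

        def __init__(self, projection_area):
            self.area = projection_area       # (x0, y0, x1, y1) in pixels
            self.samples = deque(maxlen=64)   # recent (x, y, t) samples

        def update(self, x, y, t):
            """Feed one scanned sample; return an event name or None."""
            x0, y0, x1, y1 = self.area
            if not (x0 <= x <= x1 and y0 <= y <= y1):
                self.samples.clear()
                return "release"              # pointer left the projection area
            self.samples.append((x, y, t))
            if len(self.samples) < 2:
                return None
            (px, py, pt), (cx, cy, ct) = self.samples[-2], self.samples[-1]
            speed = math.hypot(cx - px, cy - py) / max(ct - pt, 1e-6)
            if speed > RELEASE_SPEED:
                return "release"              # fast motion away from the spot
            recent = [s for s in self.samples if ct - s[2] <= DWELL_SECONDS]
            if (ct - recent[0][2] >= 0.95 * DWELL_SECONDS and
                    all(math.hypot(sx - cx, sy - cy) <= DWELL_RADIUS
                        for sx, sy, _ in recent)):
                self.samples.clear()          # emit the click only once
                return "click"                # pointer dwelled at one spot
            return None

Fed with samples at, say, 30 Hz, the detector maps a dwell to a click/press-down, a fast motion away to a release, and leaving the projection area to a release, mirroring the events described above.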
The camera is adapted for creating a 2D or a 3D image of the pointing object(s) of the user. As an alternative, the optical user interface comprises a laser, preferably an IR laser, and a receiver for receiving reflected radiation. In any case, the image and/or the sequence of images provided by the optical user interface is interpreted by the computer in order to determine which command input the user intended to make.
Preferably, the scanner is a 3D-scanner for gathering 3D information about the user's gestures. By way of example, the user can click a certain button by simply approaching it from the area above the projected keyboard. Further, a gesture like pointing with a finger into a text may be interpreted as a zoom command. By way of example, the user may approach a text that is projected onto a sheet of paper from a region above the projected text and may point to a certain area by moving his or her finger in a direction perpendicular to the projection surface.
In order to gather the required 3D information about the user's gestures, the location of the pointing object(s) in the 3D interaction space is monitored, i.e. the coordinates of the location of the pointing object(s) and their change over time are constantly determined. After evaluation of the corresponding motion data, a respective 3D gesture will be associated with a pre-defined action (event), which will then be triggered.

To this end, a specific coordinate system (x,y,Z) is used, where the coordinates x and y denote the location of the tip of a pointing object as virtually projected on the projection area. This is done by matching a scanned image of the projection area with the reference image that was projected, so that the position of the pointing object on the reference image can be evaluated. Determining the coordinates x and y may be assisted by scanning a border and/or a pattern projected along with the data image, as described above. The Z-coordinate represents the distance of the tip of the pointing object from the camera. Based on the changes in (x,y,Z) over time, specific user gestures can be identified. For example, (x,y) remaining the same while Z increases indicates the motion of the pointing object towards a specific location in the projection area; this may be interpreted as a click at that location or as zooming in.

With regard to the hardware required for determining the 2D or 3D position of a pointing object, ideally a customized projector is used which can beam both in the visible spectrum and in the IR spectrum. The customized projector may comprise one or more IR laser diodes in addition to the usual RGB diodes. In an alternative embodiment, a normal pico projector and an IR projector, or a normal pico projector and a combination of a DOE (diffractive optical element) and an IR source, are used. In any case, along with the visible image, a special pattern will be projected on the projection area, so that the pattern and the projection area perfectly overlap. The pattern may be composed of one or more wavelengths in the IR spectrum and will not obstruct the user's view of the projected visible image. A scanner of the optical user interface will scan the pattern reflected back by the pointing object(s) in the interaction space. The scanner is sensitive only to the corresponding wavelengths of the pattern in the IR spectrum and is immune to the projected image and ambient light. The distortion of the captured pattern will be used to determine a real-time 3D depth map of the projection area and the interaction space (using methods such as triangulation). This technique can be used to determine the distance of each reflecting point from the optical user interface at any time.
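A compact sketch of the two steps just described: triangulating depth from the distortion (here, pixel disparity) of the reflected IR pattern, and recognizing the click/zoom gesture from an (x,y,Z) track. The focal length, baseline and thresholds are illustrative assumptions, as is the simple pinhole/triangulation model.

    def depth_from_disparity(disparity_px: float,
                             focal_px: float = 1400.0,
                             baseline_m: float = 0.05) -> float:
        """Distance of a reflecting point from the unit via triangulation:
        Z = f * B / d for a pattern shifted by d pixels against its
        reference position, with focal length f (pixels) and a
        projector-to-scanner baseline B (metres)."""
        return focal_px * baseline_m / max(disparity_px, 1e-6)

    def is_push(track, xy_tolerance_px: float = 8.0, min_dz_m: float = 0.03):
        """Detect the gesture described above from (x, y, Z) samples:
        (x, y) stays roughly constant while Z (distance from the camera)
        increases, i.e. the fingertip moves towards the projection
        surface. Interpreted as a click there or as zooming in."""
        (ax, ay, az), (bx, by, bz) = track[0], track[-1]
        steady = (abs(bx - ax) <= xy_tolerance_px and
                  abs(by - ay) <= xy_tolerance_px)
        return steady and (bz - az) >= min_dz_m

    # A fingertip approaching the surface at a fixed (x, y):
    print(is_push([(120, 200, 0.45), (121, 201, 0.48), (120, 202, 0.52)]))  # True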
According to an alternative method for determining the position of a pointing object, a border is projected around the projection area, and the position of the pointer with respect to the border is mapped. The border is projected either in the visible spectrum or in the IR spectrum. If the border is projected in the visible spectrum, a camera operating in the visible spectrum captures the border. In addition to this visible-spectrum camera, an IR camera is used to capture the pointing object without any interference from the projected image in the visible spectrum. The images captured by the two cameras are matched to determine the relative position of the pointing object in the projection area. If the border is projected in the IR spectrum, ideally a single customized projector is used that can beam both in the visible and in the IR spectrum. The customized projector may comprise one or more IR laser diodes in addition to the usual RGB diodes. In an alternative embodiment, both a normal pico projector and an IR projector, or a normal pico projector and a combination of a DOE (diffractive optical element) and an IR source, are used. One or more IR cameras detect both the border and the pointing object. The detection of a pointing object can be made more precise by adding an illumination source operating in a wavelength range that is different from the range used for projecting the border.
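By way of an illustrative sketch only (the use of OpenCV and all names are assumptions, not part of the original disclosure), mapping the position of the pointer with respect to the detected border amounts to a perspective transformation:

    import numpy as np
    import cv2  # OpenCV, chosen here merely as one possible implementation

    def pointer_position(border_corners, pointer_px, width=800, height=600):
        # border_corners: the four corners of the projected border as detected in
        # camera pixels, ordered top-left, top-right, bottom-right, bottom-left.
        src = np.float32(border_corners)
        dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
        H = cv2.getPerspectiveTransform(src, dst)
        # Map the detected pointer from camera pixels into the coordinate frame
        # of the projected image.
        p = np.float32([[pointer_px]])  # shape (1, 1, 2)
        return cv2.perspectiveTransform(p, H)[0, 0]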
In general, the Z-coordinate of the pointing object may be determined by evaluating the size of the pointing object as seen by the camera. Another feature for which the camera is used is the geometric adaptation of the projected picture, offering a better view for the user. The camera can detect the projection area and its shape. Based on the size and orientation of the projection surface, as scanned by the camera, the projected display is adjusted. Specifically, the projector can be adapted to adjust the size and/or orientation of the projected data. For example, it can be ensured that the shape of the projection area is always a rectangle. This is preferably achieved by pre-warping the image before projecting it. Further, it may be an option to switch between landscape and portrait display. The term "control unit" is used for the hardware components that are required for controlling the operation of the mobile unit, in particular the connectivity to the internet, the projector and the optical user interface.
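By way of an illustrative sketch only (a strongly simplified keystone correction, under the assumption that the camera detects where the projector's frame lands on the surface), the pre-warping could be realized as follows:

    import numpy as np
    import cv2

    def prewarp(frame, observed_quad, out_width, out_height):
        # observed_quad: the quadrilateral into which the projector's frame is
        # distorted on the tilted surface, as detected by the camera. Warping
        # the image with the mapping from this quadrilateral back to an ideal
        # rectangle compensates the distortion, so that the projected result
        # appears rectangular to the user.
        rect = np.float32([[0, 0], [out_width, 0],
                           [out_width, out_height], [0, out_height]])
        H = cv2.getPerspectiveTransform(np.float32(observed_quad), rect)
        return cv2.warpPerspective(frame, H, (out_width, out_height))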
For presenting media content, a standard interface for each type of media may be defined, including newspapers, books and videos. In contrast to the known presentation of internet content, where a standard browser displays everything from written text to streaming movies, a dedicated interface for each type of media may be used. For example, newspapers are delivered in a more interactive and intuitive interface that resembles the experience of reading a real hard copy rather than browsing a website. This is enabled by a customized media operating system, which opens application interfaces for delivering customized content.
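By way of an illustrative sketch only (all class and method names are hypothetical, not part of the original disclosure), the application interfaces opened by such a media operating system could share a common contract per media type:

    from abc import ABC, abstractmethod

    class MediaInterface(ABC):
        # One dedicated presentation per media type instead of a generic browser.

        @abstractmethod
        def render(self, projector):
            """Draw the current view of the content via the projector."""

        @abstractmethod
        def on_gesture(self, gesture):
            """React to an interpreted user gesture (page turn, zoom, ...)."""

    class Newspaper(MediaInterface):
        def __init__(self, pages):
            self.pages, self.current = pages, 0

        def render(self, projector):
            projector.show(self.pages[self.current])

        def on_gesture(self, gesture):
            # A swipe turns the page, mimicking the handling of a hard copy.
            if gesture == "swipe_left" and self.current < len(self.pages) - 1:
                self.current += 1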
The mobile unit is designed for seamless availability of internet content. This includes the ability of fast handover from one service to another. However, the mobile unit according to the embodiment of the invention is also capable of offline browsing of the media if the network connection is completely lost. While fast handover is established, the mobile unit intelligently caches data for the user, thereby allowing him or her to access internet data even if the connection is completely interrupted. Caching is based on the context and the content of the data the user downloads from the internet. For example, newspapers that the user prefers to read can be downloaded every morning at a fixed time and can therefore be made accessible even when the user is riding the underground, where there is no connectivity. Similarly, the last read books or single chapters can be cached locally. The mobile unit also predicts actions of the user. If the user consumes a certain media item and previous users, after consuming the same media, tended to e.g. follow a link provided in the media, then the system pre-caches the content of the link in order to make it quickly available to the user.
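By way of an illustrative sketch only (the data structures and the 0.6 probability threshold are assumptions), the habit-based and prediction-based caching could be combined like this:

    import datetime

    def plan_precache(now, habits, current_media, link_stats, threshold=0.6):
        # habits: (weekday, hour, url) tuples learned from the user, e.g. the
        # newspaper that is downloaded every weekday morning before the commute.
        urls = [url for (weekday, hour, url) in habits
                if weekday == now.weekday() and hour == now.hour]
        # link_stats: for the media currently consumed, the links that previous
        # users tended to follow, with their observed follow probability.
        urls += [url for (url, probability) in link_stats.get(current_media, [])
                 if probability >= threshold]
        return urls

    # Example call: plan_precache(datetime.datetime.now(), habits,
    #                             "newspaper/today", link_stats)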
The mobile unit further adapts its caching strategy to the available bandwidth of the wireless connection and predicts connectivity. When the user is linked to a Wi-Fi network, e.g. a WLAN based on IEEE 802.11, and is not mobile, the system knows that there is no need to pre-cache much content. When the user is on a low-bandwidth network, like 2G GSM, traffic may stall, hindering the user from browsing comfortably. By increasing the amount of cached data, downlink traffic is smoothed out over time. When there is no proactive request from the user to download content, the mobile unit uses the free bandwidth to pre-cache content based on its prediction strategy. The mobile unit is also capable of providing internet content to the user if she or he approaches a null region, i.e. an area without any service. Based on previous experience (e.g. the user taking the underground every morning from Monday to Friday) or based on information on signal coverage in the region in which the user currently is, the mobile unit pre-caches data before it enters the null region. The mobile unit recognizes that signal coverage could be a problem in the immediate future and starts pre-loading content.
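By way of an illustrative sketch only (the network categories and megabyte figures are invented for the example):

    def cache_budget_mb(network, user_is_mobile, entering_null_region):
        # Decide how aggressively to pre-cache, based on the current link and
        # the predicted connectivity.
        if entering_null_region:
            return 500  # e.g. about to enter the underground: load up the cache
        if network == "wifi" and not user_is_mobile:
            return 20   # stable high-bandwidth link: little pre-caching needed
        if network in ("2g", "gsm"):
            return 200  # smooth the bursty downlink traffic out over time
        return 100      # default for mobile 3G connectivity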
Another parameter that is taken into account by the mobile unit is the available battery power. When battery power is low, less caching is performed in order to relieve the strain on the operating system. The mobile unit can be incorporated entirely into a single housing, which then constitutes the wearable device. However, the system can also be split into two or more components, e.g. into a very small wearable device comprising the projector and the optical user interface, and a main unit which comprises the further elements, like the control unit, the battery, etc. This allows the main unit to be placed, e.g., in an inside pocket of a jacket while the compact wearable device is clipped to the outside of the clothes, with a cable connecting the two units.
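Referring back to the battery-dependent caching, by way of an illustrative sketch only (the 15 % cut-off and the linear scaling are assumptions):

    def effective_budget_mb(base_budget_mb, battery_fraction):
        # Below a critical charge, stop background caching entirely; above it,
        # scale the budget linearly, reaching the full budget at 50 % charge.
        if battery_fraction < 0.15:
            return 0
        return int(base_budget_mb * min(1.0, battery_fraction / 0.5))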

Claims

1. A mobile unit, comprising a projector as an output unit, an optical user interface and a control unit coupled to the projector and the optical user interface, wherein the projector is configured to display data and the optical user interface is configured to scan gestures made by at least one pointing object of a user, and wherein the control unit is configured to interpret data related to the scanned gestures as a user input.
2. The mobile unit according to claim 1, wherein at least the projector and the optical user interface, and preferably also the control unit, are components of a single device accommodated in, or attached to, a housing of the device.
3. The mobile unit according to claim 1 or 2, wherein the projector is configured to project data onto a projection area, and the optical user interface is configured to scan gestures inside the projection area.
4. The mobile unit according to claim 3, wherein the projector is configured to project a border or a pattern along with the data, the border preferably surrounding the projected data, or the pattern preferably overlying the projected data, respectively.
5. The mobile unit according to any of the preceding claims, wherein the optical user interface is a 2D- or 3D-scanner, preferably a laser scanner which preferably uses an IR radiation source.
6. The mobile unit according to any of claims 1 to 4, wherein the optical user interface comprises a camera, preferably an infrared camera.
7. The mobile unit according to any of claims 3 to 6, wherein the optical user interface is configured to detect the projection area and its shape, the projector being configured to adjust the projection of the data based on the detected shape of the projection area.
8. The mobile unit according to any of the preceding claims, wherein the projector and the optical user interface are integrated in a common housing of a wearable device having a clip for affixing the housing.
9. The mobile unit according to any of the preceding claims, further comprising a storage device configured to cache data that has been received by a transceiver via a wireless connection before the data is requested by the user (pre-caching).
10. The mobile unit according to claim 9, wherein the control unit is configured to determine an amount and/or a content of pre-cached data based on at least one user profile.
11. The mobile unit according to claim 9 or claim 10, wherein the control unit is configured to determine an amount of pre-cached data based on at least one parameter of the wireless connection, wherein the parameter is preferably a data rate and/or a quality of service of the wireless connection.
12. The mobile unit according to claim 10 or claim 11, wherein the storage device is configured to cache a small amount of data if at least one parameter of the wireless connection is good, and to cache a large amount of data if the at least one parameter of the wireless connection is bad.
13. A network adapted for providing content to a plurality of mobile units, comprising a server, wherein the mobile units are connectable via an air interface to the server that provides a content delivery service to the mobile units, and wherein the server generates a content-based user profile and provides preferred data to the mobile units based on the content-based user profile, wherein the preferred data is designated to be cached by a mobile unit before the data is requested by the user (pre-caching).
14. A system comprising a first and a second mobile unit according to any of claims 1 to 11, wherein the control unit of the first mobile unit is configured to interpret a gesture made by at least one pointing object of a user of the first mobile unit in such a way that data that is presently displayed by the first mobile unit is selected and sent to the second mobile unit.
15. The system according to claim 14, wherein the control unit of the second mobile unit is configured to interpret a gesture made by the pointing object(s) of a user of the first mobile unit in such a way that data that is received by the first mobile unit is inserted into data that is displayed by the second mobile unit.
16. A method for operating a mobile unit comprising a projector as an output unit, an optical user interface and a control unit coupled to the projector and the optical user interface, the method comprising the steps of: a) displaying data with the projector, b) scanning gestures made by at least one pointing object of a user with the optical user interface, and c) interpreting data that is related to the scanned gestures with the control unit so as to determine a user input.
17. The method according to claim 16, wherein, in relation to the scanned gestures, characteristics of the motion of the pointing object(s), especially the point in time, duration, speed and/or location of the motion, are evaluated and associated with pre-defined actions in the software of the mobile unit based on the evaluation.
18. The method according to claim 17, wherein a visual and/or an acoustic feedback is given to the user upon triggering a new action and/or a transition from one action to another.
19. The method according to claim 18, wherein the type of feedback depends on the determined location of the pointing object in the projection area.
20. The method according to any of claims 16 to 19, wherein the gestures made by the pointing object(s) of a user are scanned in 3D, and 3D scan data is interpreted with the control unit so as to determine the user input.
21. The method according to claim 20, wherein a specific coordinate system (x,y,Z) is used, where (x,y) denotes the location of a pointing object as virtually projected on the projection area, and Z represents the distance of the pointing object from the optical user interface.
22. The method according to any of claims 16 to 21, wherein the mobile unit further comprises a storage device configured to cache data that has been received by a transceiver via a wireless connection, wherein the data is requested via the air interface before the data is requested by the user, and an amount of cached data is determined based on a bandwidth of the wireless connection, the content of the data and/or at least one user profile.
23. The method according to claim 22, wherein the mobile unit generates the user profile.
PCT/EP2011/002575 2010-05-28 2011-05-24 Mobile unit, method for operating the same and network comprising the mobile unit WO2011147561A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE102010021912 2010-05-28
DE102010021912.6 2010-05-28
DE102010052668.1 2010-11-26
DE201010052668 DE102010052668A1 (en) 2010-11-26 2010-11-26 Mobile unit used in media operating system, has projector which displays data, optical user interface that scans gestures made by pointing object of user and control unit that interprets data related to scanned gestures as user input

Publications (2)

Publication Number Publication Date
WO2011147561A2 true WO2011147561A2 (en) 2011-12-01
WO2011147561A3 WO2011147561A3 (en) 2012-04-12

Family

ID=44626958

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/002575 WO2011147561A2 (en) 2010-05-28 2011-05-24 Mobile unit, method for operating the same and network comprising the mobile unit

Country Status (1)

Country Link
WO (1) WO2011147561A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9740338B2 (en) 2014-05-22 2017-08-22 Ubi interactive inc. System and methods for providing a three-dimensional touch screen

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030174125A1 (en) * 1999-11-04 2003-09-18 Ilhami Torunoglu Multiple input modes in overlapping physical space
US20030132921A1 (en) * 1999-11-04 2003-07-17 Torunoglu Ilhami Hasan Portable sensory input device
JP2009245392A (en) * 2008-03-31 2009-10-22 Brother Ind Ltd Head mount display and head mount display system
US20100053151A1 (en) * 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd In-line mediation for manipulating three-dimensional content on a display device

Also Published As

Publication number Publication date
WO2011147561A3 (en) 2012-04-12


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 250313)

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11724957

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 11724957

Country of ref document: EP

Kind code of ref document: A2