WO2015070623A1 - Information interaction - Google Patents


Info

Publication number
WO2015070623A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
input information
user
piece
information
Application number
PCT/CN2014/081494
Other languages
French (fr)
Inventor
Lin Du
Kuifei Yu
Original Assignee
Beijing Zhigu Rui Tuo Tech Co., Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Zhigu Rui Tuo Tech Co., Ltd
Publication of WO2015070623A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00: Details not covered by groups G06F3/00-G06F13/00 and G06F21/00
    • G06F1/16: Constructional details or arrangements
    • G06F1/1613: Constructional details or arrangements for portable computers
    • G06F1/163: Wearable computers, e.g. on a belt
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/34: User authentication involving the use of external additional devices, e.g. dongles or smart cards
    • G06F21/35: User authentication involving the use of external additional devices, e.g. dongles or smart cards, communicating wirelessly
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03: Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/031: Protect user input by software means

Definitions

  • This application relates to the technical field of device interaction and, more particularly, to information interaction with a device.
  • A screen lock is usually provided on a mobile or wearable device to save energy and prevent accidental operation; the screen may be unlocked in an encrypted or unencrypted way.
  • In the encrypted case, a user usually needs to remember special passwords, patterns, actions, etc. Although this ensures safety, such information is easily forgotten, which is inconvenient for the user.
  • The same problem exists whenever information such as a password must be inputted before further operation.
  • Some identifying information can be directly embedded into a digital carrier by the digital watermarking technique, without influencing the use of the original carrier or being detected and modified easily.
  • The digital watermarking technique is applicable in many areas, such as copyright protection, anti-counterfeiting, authentication, and information hiding. If digital watermarking can help users enter a password and acquire the corresponding authorization safely and secretly, the above-mentioned problem, that authentication cannot be carried out because the user has forgotten the password, can be solved, thereby enhancing user experience.
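The description treats the watermarking scheme abstractly. As a minimal illustration of the underlying idea, embedding a short payload (such as input information) invisibly in an image, here is a least-significant-bit sketch; the patent does not prescribe any particular algorithm, and all names below are illustrative:

```python
# Minimal sketch of the digital-watermark idea: hide a short payload
# (e.g. input information such as a password) in the least significant
# bits of an image's pixel values. Illustration only; a production
# watermark would be far more robust than plain LSB embedding.

def embed_watermark(pixels, payload):
    """Embed each bit of `payload` (bytes) into the LSB of one pixel."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the LSB with a payload bit
    return out

def extract_watermark(pixels, n_bytes):
    """Recover `n_bytes` bytes of payload from the pixel LSBs."""
    payload = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        payload.append(byte)
    return bytes(payload)

# Example: hide the (hypothetical) unlock code "1234" in a flat grey image.
image = [128] * 64                      # 64 grey pixels
marked = embed_watermark(image, b"1234")
print(extract_watermark(marked, 4))     # b'1234'
```

Note that each pixel changes by at most 1 grey level, which is why the carrier image remains usable and the mark is imperceptible.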
  • An example aim of this application is to provide a method for information interaction.
  • this application provides a method, comprising:
  • this application provides a method, comprising:
  • this application provides a device, comprising:
  • a processor coupled to the memory, that executes the executable modules to perform operations of the device, the executable modules comprising:
  • an image acquisition module configured to acquire an image related to a device, the image comprising at least one digital watermark
  • an information acquisition module configured to acquire at least one piece of input information corresponding to the device included in the at least one digital watermark
  • an information providing module configured to send the at least one piece of input information to the device.
  • this application provides a wearable device; the wearable device contains the device for information interaction in the above-mentioned third example embodiment.
  • this application provides a device, comprising:
  • a processor that executes executable modules to perform operations of the device, the executable modules comprising:
  • a watermark embedding module configured to embed at least one digital watermark into an image related to the device for information interaction, wherein the at least one digital watermark comprises at least one piece of input information corresponding to the device for the information interaction;
  • an image providing module configured to provide the image to an external device
  • an information input module configured to receive the at least one piece of input information provided from the external device
  • an execution module configured to execute a corresponding operation according to the at least one piece of input information received by the information input module.
  • this application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
  • this application provides a device for information interaction, comprising a processing device and a memory, the memory storing executable instructions, and the processing device being connected with the memory through a communication bus, and when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, and the device for information interaction executes operations comprising:
  • this application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
  • this application provides a device for information interaction, comprising a processing device and a memory, the memory storing executable instructions, and the processing device being connected with the memory through a communication bus, wherein, when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, and the device for information interaction performs operations comprising:
  • At least one technical solution of the embodiment of this application acquires an image related to a device and obtains input information contained in the image, and then automatically provides the input information to the device. Therefore, the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.
  • FIG. 1 is an example flow diagram of a method for information interaction of an embodiment of this application;
  • FIG. 2a is an example diagram of the corresponding image in a method for information interaction of an embodiment of this application;
  • FIG. 2b is an example diagram of the corresponding image in a method for information interaction of an embodiment of this application;
  • FIG. 3a is an example flow diagram of another method for information interaction of an embodiment of this application;
  • FIG. 3b is an example flow diagram of another method for information interaction of an embodiment of this application;
  • FIG. 4a is an example diagram of a light spot pattern used in a method for information interaction of an embodiment of this application;
  • FIG. 4b is an example diagram of an eye fundus pattern acquired by a method for information interaction of an embodiment of this application;
  • FIG. 5 is an example structural diagram of a first device for information interaction of an embodiment of this application;
  • FIG. 6a is an example structural diagram of another first device for information interaction of an embodiment of this application;
  • FIG. 6b is an example structural diagram of still another first device for information interaction of an embodiment of this application;
  • FIG. 7a is an example structural diagram of a position detection module in a first device for information interaction of an embodiment of this application;
  • FIG. 7b is an example structural diagram of a position detection module in another first device for information interaction of an embodiment of this application;
  • FIG. 7c and FIG. 7d are example diagrams of the corresponding optical paths of the position detection modules during position detection in embodiments of this application;
  • FIG. 8 is an example diagram showing a first device for information interaction of an embodiment of this application applied to glasses;
  • FIG. 9 is an example diagram showing another first device for information interaction of an embodiment of this application applied to glasses;
  • FIG. 10 is an example diagram showing still another first device for information interaction of an embodiment of this application applied to glasses;
  • FIG. 11 is an example structural diagram of another device for information interaction of an embodiment of this application;
  • FIG. 12 is an example diagram of a wearable device of an embodiment of this application;
  • FIG. 13 is an example flow diagram of a method for information interaction of an embodiment of this application;
  • FIG. 14 is an example structural diagram of a second device for information interaction of an embodiment of this application;
  • FIG. 15 is an example structural diagram of an electronic terminal of an embodiment of this application;
  • FIG. 16 is an example structural diagram of another second device for information interaction of an embodiment of this application;
  • FIG. 17 is an example schematic application scenario of a device for information interaction of an embodiment of this application.
  • A user often needs to use various kinds of input information in daily life, where input information is the information required to be inputted to a device to complete a certain operation, for example:
  • user authentication information, such as a user password or a specific hand gesture, required by the screen-locking interfaces of various electronic devices;
  • a user password required when logging in to accounts of some websites or applications;
  • password information required by some access control devices; etc.
  • The technical solution provided in the following embodiments of this application can help the user acquire the input information without having to remember it, and complete the corresponding operations automatically.
  • A "user environment" is an operational environment related to the user, for example the operational environment of an electronic terminal system entered after the user has logged in through the user login interface of an electronic terminal (such as a cell phone or a computer).
  • The operational environment of the electronic terminal system generally comprises multiple applications; for example, after entering the operational environment of the cell phone system through the screen-locking interface, the user can start the applications (such as phone, e-mail, messaging, and camera applications) corresponding to various functional modules in the system.
  • The "user environment" can also be the operational environment of a certain application that the user enters after logging in through the login interface of that application. The operational environment of the application may itself comprise multiple next-level applications (such as the cell phone applications in the above cell phone system), which, after starting, may in turn comprise further next-level functions such as phone calling, contacts, and call records.
  • the embodiment of this application provides a method for information interaction, comprising:
  • S120 (image acquisition step): acquiring an image related to a device, the image containing at least one digital watermark;
  • S140 (information acquisition step): acquiring at least one piece of input information contained in the at least one digital watermark and corresponding to the device;
  • S160 (information providing step): providing the at least one piece of input information to the device.
  • The embodiment of this application acquires an image related to a device, obtains the input information contained in the image, and provides the input information automatically to the device. Therefore, the device can be operated as needed without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.
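The three steps S120/S140/S160 can be sketched end to end as follows; every class and function name here is hypothetical, standing in for the camera, watermark extractor, and target device that the patent describes abstractly:

```python
# Sketch of the S120/S140/S160 flow on the wearable (e.g. smart glasses)
# side. All names are illustrative, not from the patent.

def acquire_image(camera):
    """S120: photograph the image the user is looking at."""
    return camera.capture()

def acquire_input_information(image, extractor):
    """S140: extract the input information from the image's watermark."""
    return extractor.extract(image)

def provide_input_information(device, info):
    """S160: send the input information to the device (e.g. to unlock it)."""
    return device.receive(info)

class FakeCamera:
    def capture(self):
        return "lock-screen-image"

class FakeExtractor:
    def extract(self, image):
        # Pretend the watermark in this image carries the unlock code.
        return "1234" if image == "lock-screen-image" else None

class FakeDevice:
    def __init__(self):
        self.unlocked = False
    def receive(self, info):
        self.unlocked = (info == "1234")
        return self.unlocked

device = FakeDevice()
img = acquire_image(FakeCamera())
info = acquire_input_information(img, FakeExtractor())
provide_input_information(device, info)
print(device.unlocked)  # True
```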
  • S120 (image acquisition step): acquiring an image related to a device.
  • the image related to a device can be, for example, an image displayed on the device, such as an image displayed on the electronic terminal screen of the cell phone, computer, etc.
  • the image is a login interface of a user environment displayed on the device.
  • the image is a screen-locking interface 110 displayed on the device.
  • The image can also be, for example, an image displayed on another device, or a static image printed on an object (such as paper or a wall), provided that the image is related to the device mentioned above.
  • the image is an image displayed on a picture posted near a door
  • the device is an electronic access control device for the door
  • the digital watermark of the image contains the input information for the electronic access control device (such as password information for opening the door).
  • The object seen by the user can be photographed through a smart glasses device; for example, when the user sees the image, the smart glasses device photographs the image.
  • the image can be acquired through other devices, or through the interaction with a device displaying the image.
  • S140 (information acquisition step): acquiring at least one piece of input information which is contained in the at least one digital watermark and corresponds to the device.
  • The digital watermark in the image can be analyzed, for example using the user's personal private key together with a public or private watermark extraction method, to extract the input information.
  • The image can also be sent externally, for example to a cloud server and/or a third-party authority, and the input information in the at least one digital watermark can be extracted by the cloud server or the third-party authority.
  • the input information is the login information about the user environment.
  • the image is the login interface of a website displayed on an electronic device
  • the input information is the login information, such as the user name, password, etc., corresponding to the website.
  • the input information is the unlock information about the screen-locking interface.
  • Fig. 2a shows the screen-locking interface of a touch screen cell phone device.
  • the user needs to draw a corresponding track on the cell phone screen to unlock the cell phone, so that the user can enter the user environment of the cell phone system for further operations.
  • The screen-locking interface is embedded with the digital watermark; this implementation acquires the corresponding unlock information through the digital watermark and sends it back to the device, so that the device unlocks automatically after receiving the unlock information.
  • S160 (information providing step): providing the input information to the device.
  • The input information can be sent directly to the device through local interaction between the devices.
  • the input information can also be sent to a remote end server, and the remote end server provides the input information to the device.
  • In this way, the input information for a device can be acquired naturally and conveniently and provided to the device, so that the corresponding function of the device can be started conveniently without manual user operations.
  • In addition to the steps of the embodiment shown in Fig. 1, in order to improve operational security, before the information acquisition step S140 the method also comprises:
  • S130 (authorization determining step): determining whether the user is an authorized user, and conducting the information acquisition step only when the user is an authorized user.
  • the user can be a user who is using the device.
  • the user After the user acquires the pattern related to a device and having digital watermarks, in order to guarantee the security of user information, the user needs to be authorized, so that only authorized users can acquire the corresponding input information in the digital watermark, while the unauthorized users can only see the image.
  • the authorization determining step SI 30 can be conducted at the remote end (such as a cloud server), that is, the corresponding user information can be sent to the remote end and the determined results can be sent back to a local spot after being determined by the remote end; it can also be directly conducted locally.
  • the authorization determining step can also be set before the information providing step.
  • In that case, before the information providing step, the method also comprises:
  • S150 (authorization determining step): determining whether the user is an authorized user, and conducting the information providing step only when the user is an authorized user.
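The authorization gate of S130/S150 can be sketched as a simple check that releases the extracted input information only to authorized users; the registry and names below are hypothetical, and the patent notes the check may equally run on a remote (cloud) server:

```python
# Sketch of the authorization determining step (S130/S150): the
# watermark's input information is released only to an authorized user;
# unauthorized users only ever see the image itself. This local check is
# illustrative; the same determination could be made at a remote end.

AUTHORIZED_USERS = {"alice"}   # hypothetical registry of authorized users

def acquire_input_information_if_authorized(user, extract, image):
    """Run the information acquisition step only for authorized users."""
    if user not in AUTHORIZED_USERS:
        return None            # unauthorized: no input information released
    return extract(image)

secret = acquire_input_information_if_authorized(
    "alice", lambda img: "1234", "lock-screen-image")
denied = acquire_input_information_if_authorized(
    "bob", lambda img: "1234", "lock-screen-image")
print(secret, denied)  # 1234 None
```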
  • the authorization determining step can also be conducted at the remote end or locally.
  • The first device for information interaction described in the embodiment of this application is a device provided at the user's side; a person skilled in the art will appreciate that the first device for information interaction can also be a cloud device, such as a server.
  • In that case, the input information in the image can be extracted locally or at another server side after the image sent from the user's side is received, the user authorization determining step is conducted, and the input information is then sent to the device related to the image after the user is confirmed to be an authorized user.
  • the method also comprises:
  • S180 (projecting step): projecting the input information to the eye fundus of the user.
  • In this way, on one hand, the user can know the corresponding information; on the other hand, when a device is unable to receive the input information (for example because of communication problems), the user can input it into the device manually according to the acquired input information.
  • the input information needs to be converted into corresponding display content.
  • the corresponding input information can be provided to the user by projecting the input information to the user's eye fundus.
  • The projection can be performed by directly projecting the input information to the user's eye fundus through a projection module.
  • the input information can be projected directly to the user's eye fundus without intermediate display, therefore only the user himself can acquire the input information, while other people are unable to see the information, thereby guaranteeing the information security for the user.
  • The projection can also be performed by displaying the input information at a position visible only to the user (for example on the display surface of smart glasses), and projecting the input information to the user's eye fundus through the display surface.
  • the projecting step comprises:
  • an information projecting step: projecting the input information; and a parameter adjusting step: adjusting at least one projection imaging parameter of the optical path between the projection position and the user's eyes until the input information is imaged clearly on the user's eye fundus.
  • the parameter adjusting step comprises:
  • the imaging parameter comprises the focal length, direction of optical axis, etc. of the optical device.
  • the input information can be properly projected to the eye fundus of the user through this adjustment, for example, the input information is imaged clearly on the user's eye fundus by adjusting the focal length of the optical devices.
  • A three-dimensional display effect of the input information can also be achieved by projecting the same input information to the two eyes with a certain deviation (parallax); this can be achieved, for example, by adjusting the optical-axis parameter of the optical device.
  • the projecting step S180 also comprises:
  • The projected input information is pre-processed so that it carries a reverse deformation opposite to the deformation introduced by the curved optical device; after passing through the curved optical device, the reverse deformation offsets the device's deformation, so that the input information received on the user's eye fundus appears as intended.
  • In some cases, the input information projected to the user's eye need not be aligned with the image; for example, when the user needs to input a set of password information such as "1234" in a certain order into an input box displayed in the image, the set of information only needs to be projected to the user's eye fundus so that the user can see it.
  • When the input information is information generated by completing a specific action at a specific position, for example information generated by drawing a specific track at a specific position on the screen displaying the image,
  • the input information needs to be aligned with the image for display. Therefore, in a possible implementation of the embodiment of this application, in the projecting step S180, the input information can be projected to the user's eye fundus after being aligned with the image seen by the user.
  • the method also comprises:
  • a position detecting step for detecting the position of the user's gazing point relative to the user;
  • the projecting step S180 aligns the projected input information, on the eye fundus of the user, with the image seen by the user according to the position of the user's gazing point relative to the user.
  • the position corresponding to the user's gazing point is the position of the image.
  • a depth sensor such as infrared distance measurement
  • Detecting the current gazing point of the user through method iii) comprises:
  • an eye fundus image collection step for collecting an image of the user's eye fundus
  • an adjustable imaging step for adjusting at least one imaging parameter of the optical path between the image collection position of the eye fundus and the user's eye until the clearest image is collected;
  • an image processing step for analyzing the collected image of the eye fundus, obtaining the imaging parameters of the optical path between the image collection position of the eye fundus corresponding to the clearest image and the eye as well as at least one optical parameter of the eye, and calculating the position of the user's current gazing point relative to the user.
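The adjustable imaging step above amounts to sweeping an imaging parameter and keeping the setting that produces the clearest fundus image. A minimal sketch, assuming grey-level variance as the sharpness score (the patent does not prescribe a particular metric):

```python
# Sketch of the adjustable-imaging step: sweep an imaging parameter
# (e.g. focal length) and keep the setting whose fundus image is
# sharpest. Sharpness is scored here by grey-level variance, a common
# contrast metric; metric and numbers are illustrative only.

def sharpness(image):
    """Variance of pixel values as a simple contrast/sharpness score."""
    mean = sum(image) / len(image)
    return sum((p - mean) ** 2 for p in image) / len(image)

def find_clearest(images_by_setting):
    """Return the parameter setting whose image scores sharpest."""
    return max(images_by_setting, key=lambda s: sharpness(images_by_setting[s]))

# Hypothetical sweep: three focal-length settings, the middle one in focus.
sweep = {
    9.0:  [100, 102, 101, 103],        # blurred: low contrast
    10.0: [20, 200, 30, 190],          # in focus: high contrast
    11.0: [90, 110, 95, 105],          # blurred again
}
best = find_clearest(sweep)
print(best)  # 10.0
```

The known optical-path parameters recorded at this best setting are what the image processing step then uses to compute the eye's optical parameters.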
  • An image presented on the "eye fundus" is primarily an image presented on the retina; it can be an image of the eye fundus itself, or an image of another object projected onto the eye fundus, such as the light spot pattern mentioned below.
  • the clearest image of the eye fundus can be obtained when the optical device is in a certain position or state by adjusting the focal length of an optical device on the optical path between the eye and the collection position and/or its position in the optical path.
  • the adjustment can be continuous and in real time.
  • The optical device can be a lens with adjustable focal length, which completes the adjustment of its focal length by changing its own refractive index and/or shape. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one surface of the lens, for example by increasing or decreasing the liquid medium in a cavity formed by two transparent layers; or 2) the focal length is adjusted by changing the refractive index of the lens, for example, when the lens is filled with a specific liquid crystal medium, by adjusting the voltage on the corresponding electrodes of the liquid crystal medium to change the arrangement of the medium and thereby the refractive index of the lens.
  • The optical device can also be a set of lenses whose overall focal length is adjusted by changing the relative positions of the lenses in the set.
  • One or more lenses in the set may themselves be lenses with adjustable focal length as mentioned above.
  • The optical path parameters of the system can also be changed by adjusting the position of the optical device on the optical path.
  • the image processing step further comprises:
  • the clearest image can be collected through adjustment in the adjustable imaging step, but the clearest image needs to be found out through the image processing step, so that the optical parameters of the eyes can be calculated according to the clearest image and the known optical path parameters.
  • the image processing step may also comprise:
  • the projected light spot may have no specific pattern and is only used for illuminating the eye fundus.
  • the projected light spot may also comprise patterns with rich features. Rich features of the pattern can be convenient for detecting, increasing the accuracy of detection.
  • Fig. 4a shows an example drawing of a light spot pattern P, which pattern can be formed by a light spot pattern generator, such as a frosted glass;
  • Fig. 4b shows the image of eye fundus collected with a light spot pattern P projected.
  • The light spot may be an infrared light spot invisible to the eye. In that case, in order to reduce interference from other spectra, light outside the eye-invisible band can be filtered out of the projected light spot.
  • the method of the embodiment of this application may also comprise the following steps:
  • The analysis result comprises, for example, features of the collected image, such as the contrast of image features and texture features.
  • The projection can be stopped periodically when the observer gazes at one point continuously.
  • The projection can be stopped when the observer's eye fundus is bright enough, and the distance from the eyes to the focus point of the eyes' current view can then be detected through eye fundus information.
  • the brightness of projected light spot can also be controlled according to the ambient light.
  • the image processing step may also comprise:
  • Conducting calibration of the eye fundus image comprises acquiring at least one reference image corresponding to the image presented on the eye fundus. Specifically, the collected images are compared with the reference image, and the comparison is calculated to obtain the clearest image.
  • the clearest image can be the obtained image having the minimum difference from the reference image.
  • The difference between the currently obtained image and the reference image can be calculated using an existing image processing algorithm, for example a classical phase-difference automatic focusing algorithm.
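The calibration-based selection described above can be sketched as picking, among the collected images, the one with the minimum difference from the reference image; the sum-of-absolute-differences metric below is illustrative (the text names classical phase-difference autofocus as one existing option):

```python
# Sketch of calibration-based clearest-image selection: compare each
# collected fundus image against a calibrated reference image and keep
# the one with the minimum difference. The metric here is the sum of
# absolute pixel differences; illustrative only.

def image_difference(img, ref):
    """Sum of absolute per-pixel differences between two images."""
    return sum(abs(a - b) for a, b in zip(img, ref))

def clearest_by_reference(candidates, ref):
    """Return the candidate image closest to the reference."""
    return min(candidates, key=lambda img: image_difference(img, ref))

reference = [0, 255, 0, 255]                 # calibrated reference pattern
candidates = [
    [60, 180, 70, 170],                      # defocused
    [5, 250, 10, 245],                       # near-perfect match
    [120, 130, 125, 128],                    # heavily blurred
]
best_img = clearest_by_reference(candidates, reference)
print(best_img)  # [5, 250, 10, 245]
```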
  • The optical parameters of the eye may also comprise the direction of the optical axis of the eye, obtained from eye characteristics at the time the clearest image is collected.
  • The eye characteristics can be acquired from the clearest image or by other means.
  • The gazing direction of the user's eyes can be obtained from the directions of the optical axes of the eyes.
  • The directions of the optical axes of the eyes can be obtained from eye fundus characteristics at the time the clearest image is obtained; determining the directions of the optical axes through eye fundus characteristics offers higher accuracy.
  • the size of the light spot pattern may be larger than or smaller than the visible area in the eye fundus, wherein:
  • The directions of the optical axes of the eyes and the line-of-sight direction of the observer can be determined from the position of the light spot pattern on the obtained image relative to the original light spot pattern (obtained through image calibration).
  • The direction of the optical axis of the eye can also be obtained from the characteristics of the pupil at the time the clearest image is obtained.
  • The characteristics of the pupil can be acquired from the clearest image or by other means. Obtaining the direction of the optical axis of the eye from pupil characteristics is existing technology, and is therefore not described here.
  • The method of the embodiment of this application may also comprise a calibration step for the direction of the optical axis of the eye, in order to determine that direction more precisely.
  • The known imaging parameters comprise fixed imaging parameters and real-time imaging parameters, where the real-time imaging parameters are the parameter information of the optical device at the time the clearest image is acquired; this parameter information can be obtained by recording it in real time when the clearest image is acquired.
  • The position of the eyes' gazing point can then be obtained by combining these parameters with the calculated distance from the eye to the focus point of the eye.
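As a worked example of how a focus distance can follow from the recorded optical parameters, the Gaussian thin-lens relation 1/f = 1/u + 1/v gives the object distance u from the eye's equivalent focal length f and the fixed image distance v to the retina; this simplified model and its numbers are illustrative, not formulas stated in the patent:

```python
# Illustrative use of the thin-lens equation 1/f = 1/u + 1/v to recover
# the distance u to the eye's focus point from the equivalent focal
# length f of the eye and the (fixed) image distance v to the retina.
# Simplified optical model with hypothetical numbers.

def focus_distance(f_mm, v_mm):
    """Object distance u (mm) from focal length f and image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

# Example: retina 17 mm behind the lens; when the eye's equivalent focal
# length is 16.5 mm, the gaze is focused 561 mm away.
u = focus_distance(16.5, 17.0)
print(round(u))  # 561
```

Combining such a distance with the gaze direction yields the gazing-point position relative to the user, as the step above describes.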
  • The input information can be projected to the user's eye fundus three-dimensionally in the projecting step S180.
  • the three-dimensional projection can be realized by adjusting the projection position in such a manner that the user can see the information with parallax, thereby forming the three-dimensional display effect of the same projection information.
  • the input information comprises three-dimensional information respectively corresponding to the user's two eyes, and in the projecting step, the corresponding input information can be projected to the two eyes of the user respectively. That is, the input information comprises left eye information corresponding to the user's left eye and right eye information corresponding to the user's right eye, wherein the left eye information can be projected to the user's left eye and the right eye information can be projected to the user's right eye, so that the input information seen by the user has a suitable three-dimensional display effect and brings a better user experience.
  • the user can see the three-dimensional space information through the three-dimensional projection.
  • the input information can only be inputted correctly when the user makes a specific hand gesture at a specific position in the three-dimensional space.
  • the user sees the three-dimensional input information and thus knows the specific position and the specific hand gesture, so that the user can make the hand gesture prompted by the input information in the specific position, while other people are unable to know the spatial information even if they see the hand gesture made by the user, thereby improving the secrecy effect of the input information.
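The parallax-based three-dimensional display described above can be sketched as follows. This is an illustrative model only, not part of the embodiment: the pinhole geometry, the parameter names, and the default interpupillary distance are all assumptions for illustration.

```python
# Illustrative sketch: computing the horizontal parallax between the
# left-eye and right-eye images so that the fused input information
# appears at a chosen virtual depth. Pinhole model; assumed parameters.

def parallax_shift(depth_m, ipd_m=0.065, plane_m=0.5):
    """Horizontal offset (in metres) between the two eye images so the
    fused image appears at depth_m when the projection plane is plane_m
    away. From similar triangles: shift / ipd = (depth - plane) / depth."""
    return ipd_m * (depth_m - plane_m) / depth_m

# An object meant to appear 2 m away needs a shift of about 4.9 cm;
# an object at the projection plane itself needs no shift.
shift = parallax_shift(2.0)
```

Projecting the left image shifted by half this amount one way and the right image the other way yields the depth cue described above.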
  • the embodiment of this application provides a first device for information interaction 500, comprising:
  • an image acquisition module 510 used for acquiring an image related to a device, the image containing at least one digital watermark;
  • an information acquisition module 520 used for acquiring at least one piece of input information which is contained in the at least one digital watermark and corresponds to the device;
  • an information providing module 530 used for providing the at least one piece of input information to the device.
  • the device of the embodiment of this application acquires an image related to a device, obtains the input information contained in the image, and automatically provides the input information to the device. Therefore the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves the user experience.
  • the image acquisition module 510 comprises an image collection sub-module 511 used for acquiring the image by photographing.
  • the image collection sub-module 511 can be, for example, a camera of smart glasses used for photographing the image seen by the user.
  • the image acquisition module 510 comprises:
  • a first communication sub-module 512 used for obtaining the image by receiving the same.
  • the image can be acquired by another device and then sent to the device of the embodiment of this application; alternatively, the image can be acquired through interaction with a device displaying the image (that is, the device transmits the displayed image information to the device of embodiment of this application).
  • there are various forms of the information acquisition module 520, for example:
  • the information acquisition module 520 comprises: an information extraction sub-module 521 used for extracting the input information from the image.
  • the information extraction sub-module 521 can analyze the at least one digital watermark in the image by means of a personal private key and/or a public or private watermark extraction method, and extract the input information.
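Watermark extraction of this kind can be sketched with a minimal least-significant-bit (LSB) scheme. The embodiment does not specify the watermarking algorithm; LSB embedding is used here only as a hedged stand-in to show how input information hidden in pixel data can be recovered.

```python
# Minimal LSB watermark sketch (illustrative only; not the embodiment's
# actual watermarking method).

def embed_lsb(pixels, bits):
    """Hide a bit sequence in the LSBs of a flat list of 8-bit pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to b
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden bits from the watermarked pixels."""
    return [p & 1 for p in pixels[:n_bits]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. bits of an input code for the device
stego = embed_lsb([200, 13, 77, 54, 91, 128, 255, 6], msg)
assert extract_lsb(stego, 8) == msg
```

A real deployment would use a robust, key-dependent watermark rather than raw LSBs, matching the private-key analysis mentioned above.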
  • the information acquisition module 520 comprises: a third communication sub-module 522 used for:
  • the image can be sent externally, for example to a cloud server and/or a third-party authority; after the cloud server or the third-party authority extracts the input information contained in the at least one digital watermark, the input information is sent back to the third communication sub-module 522 of the embodiment of this application.
  • the functions of the first communication sub-module 512 and the third communication sub-module 522 can be achieved through the same communication module.
  • the device 500 also comprises:
  • an authorization determination module 550 used for determining whether the user is an authorized user and starting corresponding operations when the user is an authorized user; specifically, the information acquisition module 520 can acquire the input information only when the user is an authorized user.
  • the authorization determination module 550 determines whether the user is an authorized user, and starts corresponding operations when the user is an authorized user. In the embodiment of this application, after the input information is extracted through the image, the user needs to be authorized, so that the input information can be provided to the device only when the user is an authorized user.
  • the authorization determination by the authorization determination module 550 can also be conducted at the remote end (such as a cloud server), that is, sending the corresponding user information to the remote end, and sending the result back to the local end after the determination.
  • the authorization determination module 550 comprises:
  • a second communication sub-module 551 used for:
  • the second communication sub-module 551 can be a separate communication interface device, or can be the same module as the first communication sub-module 512 and/or the third communication sub-module 522.
  • the device may comprise no authorization determination module.
  • in addition to providing the input information to the device to perform corresponding operations, in order to ensure that the user can see the input information secretly, as shown in Fig. 6b, the device 500 also comprises:
  • a projection module 560 used for projecting the input information to the user's eye fundus.
  • on one hand, the user can know the corresponding information; on the other hand, in occasions where some devices are unable to receive the input information (for example, when there are communication problems), the user can input the information into the device manually according to the acquired input information.
  • the projection module 560 comprises:
  • an information projecting sub-module 561 used for projecting the input information
  • a parameter adjustment sub-module 562 used for adjusting at least one projection imaging parameter of the optical path between the projection position and the user's eyes, until the input information is imaged clearly on the user's eye fundus.
  • the parameter adjustment sub-module 562 comprises:
  • At least one adjustable lens device with the focal length thereof being adjustable and/or the position thereof on the optical path between the projection position and the user's eyes being adjustable.
  • the projection module 560 comprises:
  • a curved spectral device 563 used for transmitting the input information to the user's eye fundus respectively corresponding to the positions of pupil in different directions of optical axis of the eye.
  • the projection module 560 comprises:
  • a reversed deformation processing sub-module 564 used for conducting reversed deformation processing on the input information corresponding to the positions of the pupil in different directions of the optical axis of the eye, so that the eye fundus receives the input information to be presented.
  • the projection module 560 comprises:
  • an alignment and adjustment sub-module 565 used for aligning the projected input information with the image seen by the user on the user's eye fundus.
  • the device 500 also comprises:
  • a position detection module 540 used for detecting the position of the user's gazing point relative to the user
  • the alignment and adjustment sub-module 565 is used for aligning the projected input information and the image seen by the user on the user's eye fundus according to the position of the user's gazing point relative to the user.
  • there are various embodiments of the position detection module 540, such as devices corresponding to methods i) to iii) in the method embodiments.
  • the embodiment of this application further illustrates the position detection module corresponding to method iii) through the implementations corresponding to Figs. 7a to 7d, 8 and 9:
  • the position detection module 700 comprises:
  • an image collection sub-module for eye fundus 710 used for collecting an image on the user's eye fundus
  • an adjustable imaging sub-module 720 used for adjusting at least one imaging parameter of the optical path between the image collection position of the eye fundus and the user's eye until the clearest image is collected;
  • an image processing sub-module 730 used for analyzing the collected image of the eye fundus, obtaining the imaging parameters of the optical path between the eye fundus image collection position corresponding to the clearest image and the eye as well as at least one optical parameter of the eye, and calculating the position of the user's current gazing point relative to the user.
  • by analyzing and processing the image on the eye fundus, the position detection module 700 obtains the optical parameters of the eye when the image collection sub-module for eye fundus 710 obtains the clearest image, and thus the current position of the eyes' gazing point can be obtained by calculation.
  • the image presented on the "eye fundus” is primarily the image presented on the retina, which can be the image of the eye fundus itself or the image of another object projected to the eye fundus.
  • the eye can be human eyes or the eye of other animals.
  • the image collection sub-module for eye fundus 710 is a micro camera; in another possible implementation of the embodiment of this application, the image collection sub-module for eye fundus 710 can also use a sensing imaging device directly, such as a CCD or CMOS device.
  • the adjustable imaging sub-module 720 comprises: an adjustable lens device 721 located on the optical path between the eyes and the image collection sub-module for eye fundus 710, with the focal length thereof being adjustable and/or the position thereof in the optical path being adjustable. The equivalent focal length of the system from the eyes to the image collection sub-module for eye fundus 710 can be adjusted through the adjustable lens device 721, and the image collection sub-module for eye fundus 710 can acquire the clearest image on the eye fundus at a certain position or state of the adjustable lens device 721 through adjustment of the adjustable lens device 721.
  • the adjustable lens device 721 can be adjusted continuously and in real time in the detection process.
  • the adjustable lens device 721 can be: a lens with adjustable focal length, used for completing the adjustment of the focal length thereof by adjusting the refractive index and/or shape thereof. Specifically: 1) adjusting the focal length by adjusting the curvature of at least one surface of the lens with adjustable focal length, for example, by increasing or decreasing the liquid medium in a cavity composed of two transparent layers; 2) adjusting the focal length by changing the refractive index of the lens with adjustable focal length, for example, when the lens with adjustable focal length is filled with a specific liquid crystal medium, the arrangement of the liquid crystal medium can be adjusted by adjusting the voltage of the corresponding electrode of the liquid crystal medium, thereby changing the refractive index of the lens with adjustable focal length.
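Both tuning routes above, changing surface curvature and changing refractive index, can be illustrated with the textbook thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2). This is a standard approximation used here for illustration, not the device's actual optical model.

```python
# Thin-lens lensmaker's equation sketch (illustrative approximation only).

def focal_length(n, r1_m, r2_m):
    """Thin-lens focal length (m) for refractive index n and surface radii
    r1_m, r2_m. Usual sign convention: r2 is negative for a biconvex lens."""
    power = (n - 1.0) * (1.0 / r1_m - 1.0 / r2_m)
    return 1.0 / power

# Route 1): increasing curvature (smaller radii) shortens the focal length.
f_flat = focal_length(1.5, 0.10, -0.10)    # 0.10 m
f_curved = focal_length(1.5, 0.05, -0.05)  # 0.05 m

# Route 2): raising the refractive index at fixed curvature also shortens it.
f_hi_n = focal_length(1.6, 0.10, -0.10)
```

Either adjustment moves the equivalent focal length of the eye-to-camera system, which is what the adjustable lens device 721 exploits.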
  • the adjustable lens device 721 comprises: a set of lenses composed of multiple lenses, used for completing the adjustment of focal length of the set of lenses by adjusting the relative positions between the lenses in the set of lenses.
  • the set of lens can also comprise the lenses with adjustable imaging parameters, such as focal length.
  • the optical path parameters of the system can be further changed by adjusting the position of the lens device 721 on the optical path.
  • the adjustable imaging sub-module 720 can also comprise: a spectroscopic unit 722, used for forming the light transmission paths between the eye and the observation object as well as between the eye and the image acquisition sub-module for eye fundus 710. This folds the optical path, reducing the system volume while avoiding influencing other viewing experiences of the user as far as possible.
  • the spectroscopic unit can comprise: a first spectroscopic unit located between the eye and the observation object, used for transmitting the light from the observation object to the eye and used for transferring the light from the eye to the image acquisition sub-module for eye fundus.
  • the first spectroscopic unit can be a spectroscope, a spectroscopic optical waveguide (comprising optical fibers) or other suitable spectroscopic devices.
  • the image processing sub-module 730 of the system comprises an optical path calibrating unit, used for calibrating the optical path of the system; for example, an alignment calibration for the optical axis of the optical path is performed to guarantee the accuracy of the measurement.
  • the image processing sub-module 730 comprises:
  • an image analysis unit 731 used for analyzing the image acquired by the image acquisition sub-module for eye fundus to find the clearest image
  • a parameter calculation unit 732 used for calculating the optical parameters of the eyes based on the clearest image as well as the known imaging parameters of the system when the clearest image is acquired.
  • the optical parameters of the eyes can be obtained by calculating based on the clearest image and the known optical path parameters of the system.
  • the optical parameters of the eye herein can comprise an optical axis direction of the eye.
  • the system can also comprise: a projection sub-module 740 used for projecting the light spot to the eye fundus.
  • the functions of the projection sub-module can be implemented through a micro-projector.
  • the light spot projected herein may have no specific pattern and is only used for illuminating the eye fundus.
  • the projected light spot can comprise a pattern with rich features.
  • the rich features of the pattern can facilitate detection and increase the accuracy of detection.
  • Fig. 4a is a diagram of a light spot pattern P, and the pattern can be formed by a light spot pattern generator, such as a frosted glass;
  • Fig. 4b shows the eye fundus image taken when the light spot pattern P is projected.
  • the light spot can be an infrared light spot invisible to eyes.
  • a transmission filter for light invisible to eyes can be arranged on the emergent surface of the projection sub-module.
  • a transmission filter for light invisible to eyes can be arranged on the incident surface of the image acquisition sub-module for eye fundus.
  • the image processing sub-module 730 can also comprise:
  • a projection control unit 734 used for controlling the brightness of the projected light spot of the projection sub-module 740 based on the obtained results of the image analysis unit 731.
  • the projection control unit 734 can adaptively adjust the brightness based on the features of the image obtained by the image collection sub-module for eye fundus 710.
  • the features of the image herein comprise the contrast of the image features as well as the texture feature, etc.
  • a special condition of controlling the brightness of the projected light spot of the projection sub-module 740 is turning on or turning off the projection sub-module 740; for example, the projection sub-module 740 can be turned off periodically when the user continues to focus on a point.
  • the light emitting source can be turned off when the user's eye fundus is bright enough, and the distance from the eyes' current gazing point to the eyes can then be detected using only the eye fundus information.
  • the projection control unit 734 can further control the brightness of projected light spot of the projection sub-module 740 based on an ambient light.
  • the image processing sub-module 730 can also comprise: an image calibration unit 733 used for performing calibration on the eye fundus image to obtain at least one reference image corresponding to the image presented on the eye fundus.
  • the image analysis unit 731 compares the image acquired by the image collection sub-module for eye fundus 710 with the reference image and performs calculation, thereby acquiring the clearest image.
  • the clearest image can be an obtained image having the minimum difference from the reference image.
  • the difference between the current obtained image and the reference image can be calculated by an existing image processing algorithm such as a classical automatic focusing algorithm of phase difference.
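The "minimum difference from the reference image" criterion can be sketched as follows. The sum-of-squared-differences measure and all sample values here are illustrative assumptions; the embodiment only requires some existing difference-based autofocus algorithm.

```python
# Sketch of picking the clearest candidate fundus image: the one with the
# minimum difference from the calibrated reference image (illustrative).

def image_difference(img, ref):
    """Sum of squared pixel differences between two equal-size images
    represented as flat lists of intensities."""
    return sum((a - b) ** 2 for a, b in zip(img, ref))

def clearest_image(candidates, ref):
    """Index of the candidate (one per lens setting) closest to ref."""
    diffs = [image_difference(c, ref) for c in candidates]
    return diffs.index(min(diffs))

ref = [10, 50, 90, 50, 10]                     # calibrated reference image
cands = [[12, 47, 80, 55, 14],                 # slightly defocused
         [10, 49, 89, 51, 10],                 # near-perfect focus
         [30, 30, 60, 60, 30]]                 # badly defocused
best = clearest_image(cands, ref)
```

The lens setting that produced the winning candidate then supplies the real-time imaging parameters used in the later distance calculation.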
  • the parameter calculation unit 732 can comprise:
  • a determination subunit 7321 for the direction of optical axis of eye used for obtaining the direction of optical axis of eye according to the characteristics of eye when the clearest image is acquired.
  • the characteristics of eyes herein can be obtained from the clearest image or by other means.
  • the gazing direction of line-of-sight of the user's eye can be obtained according to direction of optical axis of eye.
  • the determination subunit 7321 for the direction of optical axis of eye can comprise: a first determination subunit used for obtaining the direction of optical axis of eye according to the characteristics of the light spot when the clearest image is acquired. Compared with the direction of optical axis of eye obtained through the characteristics of pupil and the surface of eyeball, the direction of optical axis of eyes can be determined by the characteristics of the eye fundus at a higher accuracy.
  • the size of the light spot pattern may be larger than or smaller than the visible area of the eye fundus, wherein:
  • the classical matching algorithm of feature points can be used to determine the direction of optical axis of eyes by detecting the position of the light spot pattern in the image relative to the eye fundus; and
  • the direction of optical axis of eyes can be determined through the position of the light spot pattern in the obtained image relative to the original light spot pattern (obtained by the image calibration unit), thereby determining the line-of-sight direction of the user.
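Locating the light spot pattern within the captured image, as both cases above require, can be sketched with exhaustive template matching. This 1-D, sum-of-squared-differences version is a hedged stand-in for the "classical matching algorithm of feature points"; real implementations work in 2-D and on feature points.

```python
# Illustrative template matching: find where the known spot pattern sits
# in a slice of the captured fundus image (minimum squared error).

def find_pattern(image_row, pattern):
    """Return the offset at which pattern best matches image_row."""
    best_off, best_err = 0, float("inf")
    for off in range(len(image_row) - len(pattern) + 1):
        err = sum((image_row[off + i] - p) ** 2
                  for i, p in enumerate(pattern))
        if err < best_err:
            best_off, best_err = off, err
    return best_off

row = [0, 0, 5, 9, 5, 0, 0, 0]        # captured intensities (example)
offset = find_pattern(row, [5, 9, 5])  # where the projected spot landed
```

The offset of the detected pattern relative to its calibrated position is what the unit converts into an optical-axis direction.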
  • the determination subunit 7321 for the direction of optical axis of eye can comprise: a second determination subunit used for obtaining the direction of optical axis of eye according to the characteristics of the pupil when the clearest image is acquired.
  • the characteristics of pupil herein can be obtained from the clearest image or by other means. Obtaining the direction of optical axis of eye through characteristics of pupil is prior art, therefore it is not explained here.
  • the image processing sub-module 730 can also comprise: a calibration unit 735 for the direction of optical axis of eye used for calibrating the direction of optical axis of eye to more precisely determine the direction of optical axis of eye mentioned above.
  • the known imaging parameters of the system comprise the fixed imaging parameters and the real-time imaging parameters, wherein the real-time imaging parameters are the information about the parameters of the adjustable lens device when the clearest image is obtained, and the information about the parameters can be obtained by real-time recording when the clearest image is acquired.
  • the distance from eyes' gazing point to the eye can be obtained by calculation as follows, particularly:
  • Fig. 7c is a diagram of the eye imaging, and by combining a lens imaging formula in the classical optics theory, the formula 1) can be obtained from Fig. 7c:
  • do and de are respectively the distance from the current observation object 7010 of eyes and from a real image 7020 on the retina to the eye-equivalent lens 7030; fe is an equivalent focal length of the eye-equivalent lens 7030; and X is a line-of-sight direction of the eye (which may be obtained through the direction of optical axis of eye).
  • Fig. 7d is a diagram of the distance from the eyes' gazing point to the eye obtained based on the known optical parameters of the system and the optical parameters of the eyes; the light spot 7040 in Fig. 7d is converted into a virtual image (not shown in Fig. 7d) through the adjustable lens device 721. Assuming that the distance from the virtual image to the lens is x (not shown in Fig. 7d), the following system of equations can be obtained by combining with formula (1):
  • dp is the optical equivalence distance from the light spot 7040 to the adjustable lens device 721; di is the optical equivalence distance from the adjustable lens device 721 to the eye-equivalent lens 7030; and fp is the focal length value of the adjustable lens device 721.
  • the position of eyes' gazing point can be easily obtained based on the distance from the observation object 7010 to the eye obtained according to the above-mentioned calculations as well as the direction of optical axis of eye recorded previously, thereby providing a basis for a subsequent further interaction related to the eye.
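The calculation outlined above can be sketched with the textbook thin-lens relation. This is a hedged illustration under simplified sign conventions, not the embodiment's exact system of equations, and all numeric values are examples.

```python
# Illustrative gaze-distance sketch using thin-lens optics (assumed model).

def virtual_image_distance(dp, fp):
    """Distance x of the virtual image of a spot placed dp in front of an
    adjustable lens of focal length fp, using the thin-lens form
    1/dp - 1/x = 1/fp (valid here for dp < fp, virtual upright image)."""
    return 1.0 / (1.0 / dp - 1.0 / fp)

def gazing_point(eye_pos, direction, distance):
    """3-D gazing point from the eye position, the unit optical-axis
    direction, and the computed observation distance."""
    return tuple(e + distance * d for e, d in zip(eye_pos, direction))

# A spot 5 cm in front of a 10 cm lens forms a virtual image 10 cm away;
# combining such distances with the recorded axis direction locates the point.
x = virtual_image_distance(0.05, 0.10)
point = gazing_point((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 2.0)
```

Combining the recovered observation distance with the previously recorded optical-axis direction, as in `gazing_point`, yields the position used for the subsequent eye-related interaction.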
  • Fig. 8 is an embodiment where the position detecting module 800 is applied to the glasses G in a possible implementation of the embodiment of this application, which comprises the recorded contents of the implementation shown in Fig. 7b, specifically: seen from the Fig. 8, in this implementation, the module 800 of this implementation is integrated on the right side (not limited thereto) of the glasses G, comprising:
  • a micro camera 810 whose function is the same as that of the image collection sub-module for eye fundus recorded in the implementation of Fig. 7b, and which is located on the right outer position of the glasses G in order not to influence the sight when the user views the object normally;
  • a first spectroscope 820 whose function is the same as that of the first spectroscopic unit recorded in the implementation of Fig. 7b, and which is located at the intersection point of the gazing direction of the eye A and the incidence direction of the camera 810 at a certain angle of inclination, so as to transmit the light of the observation object entering the eye A and reflect the light from the eye to the camera 810;
  • a lens with adjustable focal length 830 whose function is the same as that of the lens with adjustable focal length recorded in the implementation of Fig. 7b, and which is located between the first spectroscope 820 and the camera 810 to adjust the focal length value in real time, such that the camera 810 can take the clearest eye fundus image at a certain focal length value.
  • since the eye fundus brightness is usually insufficient, it is preferable to illuminate the eye fundus; in this implementation, the eye fundus is illuminated by a light emitting source 840.
  • the light emitting source 840 herein may be a light emitting source of light invisible to human eyes, such as a near-infrared light emitting source, which has only a slight impact on the eye A and to which the camera 810 is relatively sensitive.
  • the light emitting source 840 is located outside of the right eyeglasses frame, therefore the transmission of the light emitted by the light emitting source 840 to the eye fundus requires a second spectroscope 850 along with the first spectroscope 820.
  • the second spectroscope 850 is located in front of the incident surface of the camera 810, and therefore needs to transmit the light travelling from the eye fundus to the camera 810.
  • the first spectroscope 820 may have the properties of high infrared reflectivity and high transmission to visible light.
  • the above properties can be achieved by arranging an infrared reflective film on the side of the first spectroscope 820 toward the eye A.
  • the position detection module 800 is located on the side of the glasses G away from the eye A, therefore the lens may be regarded as a part of the eye A when the optical parameters of the eye are calculated, without needing to know the optical property of the lens.
  • the position detection module 800 may be located on the side of glasses G near the eye A; in this case, it is required to obtain the optical property parameters of the lens in advance and take the influencing factors of the lens into consideration when the distance of the gazing point is calculated.
  • the light emitted from the light emitting source 840 passes through the lens of the glasses G after the reflection of the second spectroscope 850, the projection of the lens with adjustable focal length 830 and the reflection of the first spectroscope 820, enters the user's eyes and finally arrives at the retina of eye fundus.
  • the eye fundus image is taken by the camera 810, through the pupil of eye A via the optical path composed of the first spectroscope 820, the lens with adjustable focal length 830 and the second spectroscope 850.
  • the other parts of the device of the embodiment of this application are also embodied on the glasses G, and because the position detection module and the projection module may simultaneously comprise a device having projection function (the information projection sub-module of the projection module and the projection sub-module of the position detection module, as described above) and an imaging device with adjustable imaging parameters (the parameter adjustment sub-module of the projection module and the adjustable imaging sub-module of the position detection module, as described above), accordingly, in a possible implementation of the embodiment of this application, the functions of the position detection module and the projection module are achieved by the same device.
  • the light emitting source 840 may be used for aiding the projection of the input information as the information projection sub-module of the projection module in addition to the illumination of the position detection module.
  • the light emitting source 840 may simultaneously project the invisible light for illuminating the position detection module and the visible light for aiding the projection of the input information, respectively; in another possible implementation, the light emitting source 840 may also switch between the projection of the invisible light and the visible light asynchronously; and in still another possible implementation, the position detection module may use the input information to achieve the function of illuminating the eye fundus.
  • the first spectroscope 820, the second spectroscope 850 and the lens with adjustable focal length 830 may be used as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detection module.
  • its focal length may be adjusted region by region, with different regions corresponding respectively to the position detection module and the projection module, and the focal lengths of the regions may also be different.
  • the focal length of the lens with adjustable focal length 830 is adjusted as a whole, however other optical devices are arranged on the front end of a light sensing unit (such as CCD, etc.) of the micro camera 810 of the position detection module, to achieve the auxiliary adjustment of the imaging parameters of the position detection module.
  • it is configured such that the optical path from the light emitting plane of the light emitting source 840 (where the input information is projected out) to the eyes is the same as the optical path from the eyes to the micro camera 810; thus, when the lens with adjustable focal length 830 is adjusted such that the micro camera 810 receives the clearest eye fundus image, the input information projected by the light emitting source 840 is likewise imaged clearly on the eye fundus.
  • the functions of the position detection module and the projection module of the first device for information interaction of the embodiment of this application may be achieved by a set of means, such that the overall system has simple structure, small volume, and improved portability.
  • the structural diagram of the position detection module 900 of another implementation of the embodiment of this application is shown in Fig. 9. It can be seen from Fig. 9 that this implementation is similar to the implementation shown in Fig. 8, comprising the micro camera 910, the second spectroscope 920, and the lens with adjustable focal length 930, except that the projection sub-module 940 of this implementation is a projection sub-module 940 for projecting a light spot pattern, and the first spectroscope of the implementation shown in Fig. 8 is replaced by a curved spectroscope 950 as the curved spectroscopic device.
  • the curved spectroscope 950 corresponds respectively to the positions of the pupil in different directions of the eyes' optical axes, and the image presented in the eye fundus is transmitted to the eye fundus image collection sub-module.
  • the camera may capture images mixed and superimposed from all angles of the eyeball, but only the eye fundus part passing through the pupil can be imaged clearly in the camera, while the other parts will be out of focus and unable to be imaged clearly; therefore, the imaging of the eye fundus part will not be seriously interfered with, and the features of the eye fundus part may still be detected.
  • the eye fundus image may be acquired well when the eyes gaze in different directions, such that the position detection module of this implementation has a wider range of application and higher detection accuracy.
  • the other parts of the first device for information interaction of the embodiment of this application are embodied on the glasses G.
  • the position detection module and the projection module may also be reused.
  • the projection sub-module 940 may switch between the projection of the light spot pattern and the input information synchronously or asynchronously; alternatively, the projected input information is used by the position detection module as the light spot pattern for detection.
  • the first spectroscope 920, the second spectroscope 950 and the lens with adjustable focal length 930 may be used as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detection module.
  • the second spectroscope 950 is also used, respectively corresponding to the positions of the pupil in different directions of the eyes' optical axes, to transmit light in the optical path between the projection module and the eye fundus. Because the input information projected by the projection sub-module 940 is deformed after passing through the curved second spectroscope 950, in this implementation, the projection module comprises:
  • a reversed deformation processing module (not shown in Fig. 9) used for performing the reversed deformation processing corresponding to the curved spectroscopic device on the input information so that the input information to be presented is received by the eye fundus.
  • the projection module is used for projecting the input information to the user eye fundus in a three-dimensional way.
  • the input information comprises the three-dimensional information respectively corresponding to the two eyes of the user, and the projection module projects respectively the corresponding input information to the two eyes of the user.
  • the first device for information interaction 1000 needs to provide two sets of projection modules respectively corresponding to the two eyes of the user, and comprises:
  • the structure of the second projection module is similar to the structure, integrating the function of the position detection module, recorded in the embodiment shown in Fig. 10; it may simultaneously achieve the function of the position detection module and the function of the projection module, and comprises the micro camera 1021, the second spectroscope 1022, the second lens with adjustable focal length 1023, and the first spectroscope 1024 (the position detection sub-module is not shown in Fig. 10), with their functions being the same as those in the embodiment shown in Fig.
  • the projection sub-module of this implementation is the second projection sub-module 1025, which projects the input information corresponding to the right eye. The module may thus be used both for detecting the position of the gazing point of the user's eye and for clearly projecting the input information corresponding to the right eye onto the right eye fundus.
  • the structure of the first projection module is similar to that of the second projection module 1020, except that it neither has the micro camera nor integrates the function of the position detection module.
  • the first projection module comprises:
  • a first projection sub-module 1011 used for projecting the input information corresponding to the left eye to the left eye fundus
  • a first lens with adjustable focal length 1013 used for adjusting the imaging parameters between the first projection sub-module 1011 and the eye fundus, such that the corresponding input information may be clearly presented on the left eye fundus and that the user can see the input information presented in the image;
  • a third spectroscope 1012 used for transmitting in the optical path between the first projection sub-module 1011 and the first lens with adjustable focal length 1013;
  • a fourth spectroscope 1014 used for transmitting in the optical path between the first lens with adjustable focal length 1013 and the left eye fundus.
  • the input information seen by the user has an appropriate three-dimensional display effect, bringing a better user experience. Furthermore, when the input information inputted to the user contains three-dimensional space information, the user may see that spatial information by means of the three-dimensional projection.
  • the input information can only be inputted correctly when the user makes a specific hand gesture in a specific position in the three-dimensional space
  • the user sees the three-dimensional input information and thus knows the specific position and the specific hand gesture, so that the user can make the hand gesture prompted by the input information in the specific position, while other people are unable to know the spatial information even if they see the hand gesture made by the user, thereby improving the secrecy effect of the input information.
  • Fig. 11 is the structural diagram of still another first device for information interaction 1100 provided by an embodiment of this application; the specific embodiments of this application do not restrict the specific implementation of the first device for information interaction 1100.
  • the first device for information interaction 1100 may comprise:
  • a processor 1110, a communications interface 1120, a memory 1130, and a communication bus 1140.
  • the processor 1110, the communications interface 1120, and the memory 1130 communicate with each other via the communication bus 1140.
  • the communications interface 1120 is used for communicating with network elements such as a client.
  • the processor 1110 is used for executing a program 1132, specifically executing the relevant steps of the above-mentioned method embodiment.
  • the program 1132 may comprise program codes which comprise computer operating instructions.
  • the processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of this application.
  • the memory 1130 is used for storing the program 1132.
  • the memory 1130 may comprise a high-speed RAM and may also comprise a non-volatile memory, such as at least one disk memory.
  • the program 1132 may be used to make the first device for information interaction 1100 perform the following steps:
  • a computer readable medium comprising computer readable instructions which, when executed, perform the following operations: executing the steps S120, S140 and S160 of the method in the above-mentioned embodiment.
  • the embodiment of this application also provides a wearable device 1200 containing the first device for information interaction 1210 recorded by the above-mentioned embodiment.
  • the wearable device may be a pair of glasses.
  • this pair of glasses may have a structure as shown in Figs 8 to 10.
  • the embodiment of this application provides a method for information interaction, comprising:
  • S1310 a watermark embedding step: embedding at least one digital watermark into an image related to a device, the digital watermark containing the input information corresponding to the device;
  • S1320 an image providing step: providing the image to the external;
  • S1330 an information input step: receiving the at least one piece of input information provided from the external;
  • S1340 an execution step: executing the operation corresponding to the at least one piece of input information.
  • the image is provided to the external after the digital watermark is embedded, such that an external device may acquire the corresponding input information from the image and then return it to the method of this application; the method then automatically carries out the corresponding operation after the input information provided from the external is received, without manual operation by the user, which is convenient for the user.
  • a digital watermark may be classified, according to its symmetry, into symmetrical and asymmetrical watermarks. The embedding key and detection key of a conventional symmetrical watermark are identical, such that the watermark can easily be removed from a digital carrier once the detection method and key are disclosed.
  • the asymmetrical watermarking technique uses a private key to embed a watermark and a public key to extract and verify it, such that it is difficult for an attacker to use the public key to destroy or remove the watermark embedded with the private key. Therefore, in the embodiments of this application, an asymmetrical digital watermark may be used.
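As a concrete (and deliberately simplified) illustration of carrying input information inside an image, the sketch below hides a length-prefixed payload in the least significant bits of a pixel buffer. This is a fragile, symmetric toy scheme, not the robust asymmetric watermark described above, and all function names are hypothetical:

```python
# Toy LSB watermark: hide a length-prefixed payload in the least
# significant bit of each byte of a flat pixel buffer. Real watermarks
# (including the asymmetric, key-based kind) are far more robust.

def embed_watermark(pixels: bytearray, payload: bytes) -> bytearray:
    """Embed a 4-byte length prefix plus the payload, one bit per pixel byte."""
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the LSB
    return out

def extract_watermark(pixels: bytes) -> bytes:
    """Recover the length-prefixed payload from the LSBs."""
    def read(n_bytes: int, offset: int) -> bytes:
        bits = [pixels[offset + i] & 1 for i in range(n_bytes * 8)]
        return bytes(
            sum(b << (7 - j) for j, b in enumerate(bits[k:k + 8]))
            for k in range(0, len(bits), 8)
        )
    length = int.from_bytes(read(4, 0), "big")
    return read(length, 32)   # payload bits start after the 32 length bits

image = bytearray(range(256)) * 4          # stand-in for pixel data
marked = embed_watermark(image, b"unlock:1234")
assert extract_watermark(marked) == b"unlock:1234"
```

Because only least-significant bits change, the marked image is visually indistinguishable from the original, which is the property the text relies on for displaying a watermarked lock screen.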
  • the embedded input information to be contained in the digital watermark may be preset by the user according to his or her personalized requirement or actively configured for the user by the system.
  • the step S1320 may comprise:
  • in the method of the embodiment of this application, the step S1320 may also be implemented by sending the image to the corresponding device through interaction between the devices.
  • the image is a login interface of a user environment
  • the image is a login interface of a user's electronic bank account, the input information being the name and password of the electronic bank account; after the input information is received, the user's electronic bank account is logged in such that the user can enter the user environment of the electronic bank account and in turn use the corresponding function.
  • the image is a screen-locking interface
  • the operation corresponding to the input information is unlocking the corresponding screen according to the input information.
  • the input information is the unlock information corresponding to the screen-locking interface; after the input information is received, the cell phone screen is unlocked, and the user may use the corresponding function of the cell phone system in the user environment.
  • the method may also comprise:
  • an authorization determining step for determining whether a user is an authorized user, and conducting the execution step only when the user is an authorized user.
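The gating logic of this authorization determining step can be sketched as follows. All names here are hypothetical, and how a real device identifies an authorized user (device pairing, biometrics, etc.) is outside the sketch:

```python
# Sketch of an authorization determining step gating the execution step:
# the operation tied to the received input information runs only for an
# authorized user. The user-identification mechanism is a stand-in.

AUTHORIZED_USERS = {"alice"}  # hypothetical registry of authorized users

def execution_step(input_information: str) -> str:
    # e.g., unlock the screen or log in to the user environment
    return f"executed operation for {input_information!r}"

def handle_input(user: str, input_information: str) -> str:
    # authorization determining step: reject before any operation runs
    if user not in AUTHORIZED_USERS:
        return "rejected: not an authorized user"
    return execution_step(input_information)
```

Here `handle_input("alice", "1234")` reaches the execution step, while any other user is rejected before the operation is carried out.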
  • the embodiment of this application provides a second device for information interaction 1400, comprising:
  • a watermark embedding module 1410 used for embedding at least one digital watermark into an image related to the second device for information interaction 1400, the at least one digital watermark containing the input information corresponding to the second device for information interaction 1400;
  • an image providing module 1420 used for providing the image to the external;
  • an information input module 1430 used for receiving the input information provided from the external; and
  • an execution module 1440 used for executing the corresponding operation according to the received input information.
  • the device of the embodiment of this application provides the image to the external after the digital watermark is embedded, such that an external device may acquire the corresponding input information from the image and then return it; the execution module 1440 then automatically carries out the corresponding operation after the input information provided from the external is received, without manual operation by the user, which is convenient for the user.
  • the image providing module 1420 of the embodiment of this application comprises:
  • a display sub-module 1421 used for displaying the image.
  • the image providing module 1420 may also be, for example, an interaction interface, and the image is transferred to other devices (such as the above-mentioned first device for information interaction) by interaction.
  • the image may be a login interface of a user environment
  • the execution module 1440 is used for logging in to the user environment according to the input information.
  • the image may be a screen-locking interface
  • the execution module 1440 is used for unlocking the corresponding screen according to the input information.
  • the device 1400 may also comprise:
  • an authorization determination module 1450 used for determining whether a user is an authorized user, and triggering the corresponding operation by the execution module only when the user is an authorized user.
  • the embodiment of this application also provides an electronic terminal 1500 comprising the above-mentioned device for information interaction 1510.
  • the electronic terminal 1500 is an electronic device such as a cell phone, a tablet computer, a computer, an electronic entrance guard, or an on-board electronic device.
  • Fig. 16 is the structural diagram of still another second device for information interaction 1600 provided by an embodiment of this application; the specific embodiments of this application do not restrict the specific implementation of the second device for information interaction 1600.
  • the second device for information interaction 1600 may comprise:
  • a processor 1610, a communications interface 1620, a memory 1630, and a communication bus 1640.
  • the processor 1610, the communications interface 1620, and the memory 1630 communicate with each other via the communication bus 1640.
  • the communications interface 1620 is used for communicating with network elements such as a client.
  • the processor 1610 is used for executing a program 1632, specifically executing the relevant steps of the method embodiment shown in Fig. 13.
  • the program 1632 may comprise program codes which comprise computer operating instructions.
  • the processor 1610 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of this application.
  • the memory 1630 is used for storing the program 1632.
  • the memory 1630 may contain a high-speed RAM memory and may also comprise a non-volatile memory, such as at least one disk storage.
  • the program 1632 may be used to make the second device for information interaction 1600 perform the following steps:
  • a watermark embedding step: embedding at least one digital watermark into an image related to a device, the digital watermark containing the at least one piece of input information corresponding to the device;
  • an image providing step: providing the image to the external;
  • an information input step: receiving the at least one piece of input information provided from the external and executing the operation corresponding to the input information.
  • a computer readable medium comprising computer readable instructions which, when executed, perform the following operations: executing the steps S1310, S1320, S1330 and S1340 of the method in the above-mentioned embodiment.
  • Fig. 17 is an application example diagram of a first and a second device for information interaction of an embodiment of this application.
  • this application example comprises the electronic terminal recorded in the embodiment shown in Fig. 15 (the cell phone device 1710) and the wearable device recorded in the embodiment shown in Fig. 12 (the smart glasses 1720).
  • the smart glasses 1720 comprise the first device for information interaction described in the embodiments shown in Figs. 5 to 11. The function of the image acquisition module (mainly the image collection sub-module) of the first device for information interaction is achieved by the camera 1721 on the smart glasses 1720; the information acquisition module (not shown in Fig. 17) and the information providing module (not shown in Fig. 17) of the device for information interaction may be integrated into the original processing module of the smart glasses 1720, or arranged on the frame (for example, on the legs of the glasses, or as a part of the frame) of the smart glasses 1720, for realizing their functions.
  • the cell phone device 1710 comprises the second device for information interaction shown in Fig. 14.
  • the function of the display sub-module of the second device for information interaction is achieved by the display module of the cell phone device 1710; and the watermark embedding module, the information input module, and the execution module may be integrated in the existing processing module and communication module of the cell phone device 1710 or arranged in the cell phone device 1710 as a separate module, for realizing their functions.
  • the image is the screen-locking interface 1711 (for example, the image shown in Fig. 2a) of the cell phone device 1710, and the input information is the corresponding unlock information.
  • the watermark embedding module embeds the digital watermark with the unlock information into the screen-locking interface 1711 of the cell phone device 1710 in advance; when a user needs to use the cell phone device 1710, the screen-locking interface with the embedded watermark is displayed by the display module of the cell phone device 1710 upon a specific operation (for example, pressing the power button of the cell phone device 1710).
  • the user looks at the display screen of the cell phone device 1710, so that the camera 1721 of the smart glasses 1720 can acquire the image displayed on the screen-locking interface 1711; the information acquisition module of the first device for information interaction automatically acquires the unlock information from the image, and the information providing module then sends the unlock information to the cell phone device 1710 (for example, via a wireless communication interface between the devices).
  • the corresponding unlock operation is then carried out by the execution module such that the cell phone device 1710 is released from the locked status without any other action, so as to enter the user environment of the cell phone system.
  • the device and method for information interaction of the embodiments of this application make the corresponding operations natural and convenient for the user (the cell phone is unlocked automatically when the user merely glances at its screen-locking interface), providing a better user experience.
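The Fig. 17 unlock flow can be simulated end to end in a few lines. Everything below is an assumed sketch: class and method names are invented, and the camera capture is reduced to reading a dictionary, but it traces the same steps of embed, display, capture, extract, return, and unlock:

```python
# Minimal simulation of the Fig. 17 scenario: the phone embeds unlock
# information in its lock-screen image; the glasses capture the image,
# extract the information, and send it back; the phone then unlocks.
# All names are hypothetical.

class Phone:
    def __init__(self, unlock_code: str):
        self._unlock_code = unlock_code
        self.locked = True

    def lock_screen_image(self) -> dict:
        # watermark embedding + image providing steps: the unlock code
        # rides along with the displayed image
        return {"pixels": "<lock screen>", "watermark": self._unlock_code}

    def receive_input_information(self, info: str) -> None:
        # information input step + execution step
        if info == self._unlock_code:
            self.locked = False

class Glasses:
    def capture_and_reply(self, phone: Phone) -> None:
        image = phone.lock_screen_image()        # image acquisition step
        info = image["watermark"]                # information acquisition step
        phone.receive_input_information(info)    # information providing step

phone = Phone("7F3a")
Glasses().capture_and_reply(phone)
assert phone.locked is False
```

The user-facing effect is the one the text describes: the phone transitions from locked to unlocked with no manual input, purely through the image-mediated exchange.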
  • the functions may be stored in a computer-readable storage medium.
  • the computer software product is stored in a readable storage medium and comprises several instructions for causing a computer apparatus (which may be a personal computer, a server, a network apparatus, or the like) to execute all or some of the steps of the method in the individual embodiments of the present invention.
  • the aforementioned storage medium comprises any medium that may store program codes, such as a USB-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a method and device for information interaction. The method on the one hand comprises: acquiring an image related to a device, the image containing at least one digital watermark; acquiring at least one piece of input information corresponding to the device and included in the at least one digital watermark; and providing the input information to the device. On the other hand, the method comprises: embedding at least one digital watermark into an image related to a device, the digital watermark containing the input information corresponding to the device; providing the image to the external; and receiving the input information provided from the external and executing the operation corresponding to the input information. In the embodiments of this application, a device may be operated as needed by a user without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.

Description

INFORMATION INTERACTION
Related Application
[0001] The present application claims the priority of Chinese Patent Application
No. 201310573092.8, entitled "Method and Device for Information Interaction", filed on Nov. 15, 2013, which is hereby incorporated by reference herein in its entirety.
Technical Field
[0002] This application relates to the technical field of device interaction, and, more particularly, to interaction with information of a device.
Background
[0003] A screen lock is usually provided on a mobile device or wearable device for energy saving and prevention of misoperation; the screen may be unlocked in an encrypted or unencrypted way. When unlocking an encrypted screen, a user usually needs to remember some special passwords, patterns, actions, etc. Although safety can be ensured thereby, these are easily forgotten, bringing inconvenience to the user. In addition to the screen unlocking mentioned above, such problems also exist in other situations in which information, such as a password, is required to be inputted for further operation.
[0004] Some identifying information (e.g., a digital watermark) can be embedded directly into a digital carrier by the digital watermarking technique, without influencing the use of the original carrier or being easily detected and modified. The digital watermarking technique is applicable in many areas, such as copyright protection, anti-counterfeiting, authentication, information hiding, etc. If the digital watermarking technique can be used to help users enter a password and acquire the corresponding authorization safely and secretly, the above-mentioned problem that authentication cannot be carried out because the user forgets the password can be solved, thereby enhancing user experience.
Summary
[0005] The following presents a simplified summary in order to provide a basic understanding of some example embodiments disclosed herein. This summary is not an extensive overview. It is intended to neither identify key or critical elements nor delineate the scope of the example embodiments disclosed. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0006] An example aim of this application is to provide a method for information interaction.
[0007] To this or related ends, in a first example embodiment, this application provides a method, comprising:
acquiring, by a system comprising a processor, an image related to a device, the image comprising at least one digital watermark;
acquiring at least one piece of input information corresponding to the device and included in the at least one digital watermark; and
initiating providing the at least one piece of input information to the device.
[0008] In a second example embodiment, this application provides a method, comprising:
embedding at least one digital watermark into an image related to a device, the at least one digital watermark comprising at least one piece of input information corresponding to the device;
providing the image to an external device;
receiving the at least one piece of input information from the external device; and
executing an operation corresponding to the at least one piece of input information.
[0009] In a third example embodiment, this application provides a device, comprising:
a memory that stores executable modules; and
a processor, coupled to the memory, that executes the executable modules to perform operations of the device, the executable modules comprising:
an image acquisition module configured to acquire an image related to a device, the image comprising at least one digital watermark;
an information acquisition module configured to acquire at least one piece of input information corresponding to the device included in the at least one digital watermark; and
an information providing module configured to send the at least one piece of input information to the device.
[0010] In a fourth example embodiment, this application provides a wearable device; the wearable device contains the device for information interaction in the above-mentioned third example embodiment.
[0011] In a fifth example embodiment, this application provides a device, comprising:
a processor that executes executable modules to perform operations of the device, the executable modules comprising:
a watermark embedding module configured to embed at least one digital watermark into an image related to the device for information interaction, wherein the at least one digital watermark comprises at least one piece of input information corresponding to the device for the information interaction;
an image providing module configured to provide the image to an external device;
an information input module configured to receive the at least one piece of input information provided from the external device; and
an execution module configured to execute a corresponding operation according to the at least one piece of input information received by the information input module.
[0012] In a sixth example embodiment, this application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
acquiring an image related to a device, the image comprising a digital watermark;
acquiring input information corresponding to the device included in the digital watermark; and
facilitating providing the input information.
[0013] In a seventh example embodiment, this application provides a device for information interaction, comprising a processing device and a memory, the memory storing executable instructions, and the processing device being connected with the memory through a communication bus, and when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, and the device for information interaction executes operations comprising:
acquiring an image related to a device, the image comprising at least one digital watermark;
acquiring at least one piece of input information corresponding to the device included in the at least one digital watermark; and
providing the at least one piece of input information.
[0014] In an eighth example embodiment, this application provides a computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
embedding a digital watermark into an image related to a device, the digital watermark comprising input information corresponding to the device;
providing the image to an external device;
receiving the input information provided from the external device; and executing an operation corresponding to the input information.
[0015] In a ninth example embodiment, this application provides a device for information interaction, comprising a processing device and a memory, the memory storing executable instructions, the processing device being connected with the memory through a communication bus, and when the device for information interaction operates, the processing device executing the executable instructions stored in the memory, the device for information interaction executes operations, comprising:
embedding at least one digital watermark into an image related to a device, the at least one digital watermark comprising at least one piece of input information corresponding to the device;
providing the image to an external interface;
acquiring the at least one piece of input information provided from the external interface; and
executing an operation corresponding to the at least one piece of input information.
[0016] At least one technical solution of the embodiments of this application acquires an image related to a device, obtains the input information contained in the image, and then automatically provides the input information to the device. Therefore, the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.
Brief Description of the Drawings
[0017] Fig. 1 is an example flow diagram of a method for information interaction of an embodiment of this application;
[0018] Fig. 2a is an example diagram of the corresponding image in a method for information interaction of an embodiment of this application;
[0019] Fig. 2b is an example diagram of the corresponding image in a method for information interaction of an embodiment of this application;
[0020] Fig. 3a is an example flow diagram of another method for information interaction of an embodiment of this application;
[0021] Fig. 3b is an example flow diagram of another method for information interaction of an embodiment of this application;
[0022] Fig. 4a is an example diagram of a light spot pattern used in a method for information interaction of an embodiment of this application;
[0023] Fig. 4b is an example diagram of an eye fundus pattern acquired by a method for information interaction of an embodiment of this application;
[0024] Fig. 5 is an example structural diagram of a first device for information interaction of an embodiment of this application;
[0025] Fig. 6a is an example structural diagram of another first device for information interaction of an embodiment of this application;
[0026] Fig. 6b is an example structural diagram of still another first device for information interaction of an embodiment of this application;
[0027] Fig. 7a is an example structural diagram of a position detection module in a first device for information interaction of an embodiment of this application;
[0028] Fig. 7b is an example structural diagram of a position detection module in another first device for information interaction of an embodiment of this application;
[0029] Fig. 7c and 7d are example diagrams of the corresponding optical paths of the position detection modules during the position detection of the embodiments of this application;
[0030] Fig. 8 is an example diagram showing that a first device for information interaction of an embodiment of this application is applied to the glasses;
[0031] Fig. 9 is an example diagram showing that another first device for information interaction of an embodiment of this application is applied to the glasses;
[0032] Fig. 10 is an example diagram that another first device for information interaction of an embodiment of this application is applied to the glasses;
[0033] Fig. 11 is an example structural diagram of another device for information interaction of an embodiment of this application;
[0034] Fig. 12 is an example diagram of a wearable device of an embodiment of this application;
[0035] Fig. 13 is an example flow diagram of a method for information interaction of an embodiment of this application;
[0036] Fig. 14 is an example structural diagram of a second device for information interaction of an embodiment of this application;
[0037] Fig. 15 is an example structural diagram of an electronic terminal of an embodiment of this application;
[0038] Fig. 16 is an example structural diagram of another second device for information interaction of an embodiment of this application;
[0039] Fig. 17 is an example schematic application scenario of a device for information interaction of an embodiment of this application.
Detailed Description
[0040] The methods and devices of this application are described in detail as follows in combination with the drawings and embodiments.
[0041] A user often needs to use various input information in daily life, where the input information is the information required to be inputted to a device to complete a certain operation: various user authentication information, such as a password or a specific hand gesture required by the screen-locking interfaces of various electronic devices, a user password required when logging in to accounts of some websites or applications, password information required by some access control devices, etc. The user is required to remember all of this input information, and forgetting it causes great inconvenience. The technical solution provided in the following embodiments of this application can help the user acquire the input information without remembering it, and complete the corresponding operations automatically.
[0042] In the following descriptions of the embodiments of this application, the "user environment" is an operational environment related to the user, for example, the operational environment of an electronic terminal system (such as a cell phone or a computer) that the user enters after logging in through the terminal's user login interface. The operational environment of the electronic terminal system generally comprises multiple applications; for example, after entering the operational environment of the cell phone system through the screen-locking interface, the user can start the applications (such as phone, e-mail, message, and camera applications) corresponding to various functional modules in the system. Alternatively, the "user environment" can be the operational environment of a certain application that the user enters after logging in through the login interface of that application, and the operational environment of the application may also comprise multiple applications of the next level (such as the cell phone applications in the above cell phone system), which in turn, after starting, may comprise further applications of the next level such as phone calling, contacts, and call records.
[0043] As shown in Fig. 1, the embodiment of this application provides a method for information interaction, comprising:
S120 Image acquisition step: acquiring an image related to a device, the image containing at least one digital watermark;
S140 Information acquisition step: acquiring at least one piece of input information contained in the at least one digital watermark and corresponding to the device; and
S160 Information providing step: providing the at least one piece of input information to the device.
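The three steps above can be sketched end to end as a toy pipeline. This is only an illustration: the LSB-style watermark and the `MockDevice` class are made-up stand-ins, not the watermarking scheme or device interface of this application.

```python
# Toy end-to-end sketch of steps S120/S140/S160. The LSB "watermark" and
# MockDevice are illustrative stand-ins, not the patent's actual scheme.

def embed_watermark(pixels, payload):
    """Hide the payload bits in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    stamped = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stamped + pixels[len(bits):]

def extract_watermark(pixels, n_bytes):
    """S140: recover n_bytes of input information from the pixel LSBs."""
    out = []
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)

class MockDevice:
    """Stand-in device that unlocks when given the right input information."""
    def __init__(self, secret):
        self._secret, self.unlocked = secret, False
    def provide(self, info):              # S160: provide input information
        self.unlocked = (info == self._secret)

secret = b"1234"
image = embed_watermark(list(range(64)), secret)       # S120: acquired image
device = MockDevice(secret)
device.provide(extract_watermark(image, len(secret)))  # S140 + S160
assert device.unlocked
```

The point of the sketch is only the division of labor between the three steps; real watermark extraction (paragraph [0053] below) relies on key-based watermarking rather than plain LSB embedding.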
[0044] The embodiment of this application acquires an image related to a device, obtains the input information contained in the image, and provides the input information automatically to the device. Therefore, the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.
[0045] The embodiment of this application further describes the steps through the following implementations:
S120 Image acquisition step: acquiring an image related to a device.
[0046] In the embodiment of this application, the image related to a device can be, for example, an image displayed on the device, such as an image displayed on the screen of an electronic terminal (a cell phone, a computer, etc.). In an implementation, the image is a login interface of a user environment displayed on the device. As shown in Figs. 2a and 2b, in a further implementation, the image is a screen-locking interface 110 displayed on the device.
[0047] In other implementations of the embodiment of this application, the image can also be, for example, an image displayed on another device, or a static image printed on an object (such as paper or a wall), as long as the image is related to the device mentioned above. For example, the image may be displayed on a picture posted near a door, the device is an electronic access control device for the door, and the digital watermark of the image contains the input information for the electronic access control device (such as the password information for opening the door).
[0048] There are various ways to acquire the image in the embodiment of this application, for example:
1) Acquiring the image by photographing.
[0049] In the embodiment of this application, the object seen by the user can be photographed through, for example, a smart glasses device; that is, when the user sees the image, the smart glasses device photographs the image.
2) Acquiring the image by receiving the same.
[0050] In a possible implementation of the embodiment of this application, the image can be acquired through other devices, or through the interaction with a device displaying the image.
[0051] S140 Information acquisition step: acquiring at least one piece of input information which is contained in the at least one digital watermark and corresponds to the device.
[0052] In the embodiment of this application, there are various methods for acquiring the input information, such as one or more of the following:
1) Extracting the input information from the image.
[0053] In this implementation, the digital watermark in the image can be analyzed, for example, by a watermark extraction method using a personal private key and a public or private watermark, so as to extract the input information.
2) Sending the image to an external party, and receiving the input information in the image from the external party.
[0054] In this implementation, the image can be sent to an external party, for example, a cloud server and/or a third-party authority, and the input information in the at least one digital watermark can be extracted by the cloud server or the third-party authority.
[0055] In this implementation, when the image is the login interface of a user environment displayed on the device, the input information is the login information for the user environment. For example, the image is the login interface of a website displayed on an electronic device, and the input information is the login information, such as the user name and password, corresponding to the website.
[0056] In this implementation, when the image is the screen-locking interface of the device shown in Fig. 2a or 2b, the input information is the unlock information for the screen-locking interface. Fig. 2a shows the screen-locking interface of a touch-screen cell phone device. In the prior art, the user needs to draw a corresponding track on the cell phone screen to unlock the cell phone, so that the user can enter the user environment of the cell phone system for further operations. In this implementation, the screen-locking interface is embedded with the digital watermark; this implementation acquires the corresponding unlock information through the digital watermark and sends it back to the device, so that the device unlocks automatically after receiving the unlock information.
[0057] S160 Information providing step: providing the input information to the device.
[0058] In the embodiment of this application, the input information can be sent directly to the device through local interaction between devices. In other implementations, the input information can also be sent to a remote server, and the remote server provides the input information to the device.
[0059] Through the method of the embodiment of this application, the input information for a device can be acquired naturally and conveniently and provided to the device, so that the corresponding function of the device can be started conveniently without user operations.
[0060] As shown in Fig. 3a, in a possible implementation of the embodiment of this application, in addition to the steps of the embodiment shown in Fig. 1, in order to improve the security of operation, before the information acquisition step S140, the method also comprises:
S130 Authorization determining step: determining whether the user is an authorized user, and conducting the information acquisition step only when the user is an authorized user.
[0061] In this implementation, the user can be a user who is using the device.
[0062] After the user acquires the image that is related to a device and contains digital watermarks, in order to guarantee the security of user information, the user needs to be authorized, so that only authorized users can acquire the corresponding input information in the digital watermark, while unauthorized users can only see the image. The authorization determining step S130 can be conducted at a remote end (such as a cloud server); that is, the corresponding user information is sent to the remote end and the determined result is sent back locally after being determined by the remote end. Alternatively, it can be conducted directly and locally.
[0063] In other possible implementations of the embodiment of this application, the authorization determining step can also be set before the information providing step. As shown in Fig. 3b, in this implementation, before the information providing step, the method also comprises:
S150 Authorization determining step: determining whether the user is an authorized user, and conducting the information providing step only when the user is an authorized user.
[0064] In the embodiment of this application, after the input information for the device is extracted from the image, the user needs to be authorized, so that the input information is provided to the device only when the user is an authorized user. Similar to the embodiment shown in Fig. 3a, in this embodiment the authorization determining step can be conducted either at the remote end or locally.
[0065] In the descriptions of the above embodiments, the first device for information interaction described in the embodiment of this application is provided at the user's side; a person skilled in the art would know that the first device for information interaction can also be a cloud device, such as a server. After receiving the image sent from the user's side, the input information in the image can be extracted locally or at another server side, the user authorization determining step is conducted, and the input information is then sent to the device related to the image after confirming that the user is an authorized user.
[0066] In addition to providing the input information to the device so as to implement corresponding operations, in order to ensure that the user can see the input information secretly, as shown in Fig. 3b, in some embodiments, after the authorization determining step S150, the method also comprises:
S180 Projecting step: projecting the input information to the eye fundus of the user.
[0067] In this way, on one hand, the user can know the corresponding information; on the other hand, in situations where a device is unable to receive the input information (for example, due to communication problems), the user can input the information into the device manually according to the acquired input information. Certainly, when the input information is projected to the user's eye fundus, it needs to be converted into corresponding display content.
[0068] In the embodiment of this application, in order to ensure that the user can obtain the input information confidentially, the corresponding input information can be provided to the user by projecting the input information to the user's eye fundus.
[0069] In a possible implementation, the projection can be performed by directly projecting the input information to the user's eye fundus through a projection module.
[0070] In this implementation, the input information can be projected directly to the user's eye fundus without an intermediate display; therefore, only the user can acquire the input information, while other people are unable to see it, thereby guaranteeing the information security of the user.
[0071] In another possible implementation, the projection can also be performed by displaying the input information in a position only visible to the user (for example, on the display surface of smart glasses), and projecting the input information to the user's eye fundus through the display surface.
[0072] When near-to-eye devices (such as smart glasses) are used to display the input information in a near-to-eye position, it is difficult for other users to see the input information; therefore this implementation can also effectively guarantee the information security of the user.
[0073] Since the first way can send the input information directly to the user's eye fundus without intermediate display, it has higher privacy.
[0074] The implementation is further illustrated below, the projecting step comprising:
an information projecting step: projecting the input information; and
a parameter adjusting step: adjusting at least one projection imaging parameter of the optical path between the projection position and the user's eyes, until the input information is imaged clearly on the user's eye fundus.
[0075] In a possible implementation of the embodiment of this application, the parameter adjusting step comprises:
adjusting at least one imaging parameter and/or the position of at least one optical device on the optical path between the projection position and the user's eyes.
[0076] Here the imaging parameters comprise the focal length, the direction of the optical axis, etc. of the optical device. The input information can be properly projected to the eye fundus of the user through this adjustment; for example, the input information is imaged clearly on the user's eye fundus by adjusting the focal length of the optical device. Alternatively, in the implementations mentioned below, when three-dimensional display is needed, in addition to directly generating left- and right-eye images with parallax when generating the input information, the three-dimensional display effect of the input information can also be achieved by projecting the same input information to the two eyes respectively with some deviation; in that case the effect can be achieved, for example, by adjusting the optical-axis parameter of the optical device.
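As a hedged illustration of the focal-length adjustment, the thin-lens equation 1/f = 1/d_o + 1/d_i relates the required focal length to the object and image distances. The numbers below are arbitrary examples, not parameters of any actual projection module.

```python
# Thin-lens model of the focal-length adjustment (illustrative only).
# Choosing f so that 1/f = 1/d_object + 1/d_image makes an object at
# d_object focus exactly at the fixed image distance d_image.

def focal_length_for(d_object, d_image):
    """Focal length needed to focus d_object onto d_image."""
    return 1.0 / (1.0 / d_object + 1.0 / d_image)

def image_distance(f, d_object):
    """Where an object at d_object focuses for a lens of focal length f."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

f = focal_length_for(d_object=500.0, d_image=17.0)     # example values in mm
assert abs(image_distance(f, 500.0) - 17.0) < 1e-9
```

A real adjustable-imaging module would run such a relation in a feedback loop, changing f (or lens positions) until the fundus image is sharp.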
[0077] Since the line-of-sight direction of the user's eyes may change when viewing the input information, the input information needs to be projected well to the user's eye fundus in different line-of-sight directions. Therefore, in one possible implementation of the embodiment of this application, the projecting step S180 also comprises:
transmitting the input information to the user's eye fundus respectively corresponding to positions of pupil in different directions of optical axis of the eye.
[0078] In a possible implementation of the embodiment of this application, achieving the functions of the above steps may require a curved optical device, such as a curved spectroscope; however, the content to be displayed generally becomes deformed after being transmitted through the curved optical device. Therefore, in a possible implementation of the embodiment of this application, the projecting step S180 also comprises:
conducting an anti-deformation processing corresponding to the positions of pupil in different directions of optical axis of the eye, so that the eye fundus receives the input information to be presented.
[0079] For example, the projected input information is pre-processed so that it carries a reverse deformation opposite to the deformation introduced by the curved optical device; this reverse deformation offsets the deformation effect when the information passes through the curved optical device, so the input information received at the user's eye fundus shows the intended effect.
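The anti-deformation idea can be illustrated numerically: if the curved optic applies a known distortion D to each coordinate, pre-warping the content with the inverse D⁻¹ makes the composition D(D⁻¹(x)) = x at the eye fundus. The cubic distortion below is a made-up example, not a model of any actual spectroscope.

```python
# Illustration of anti-deformation pre-processing: pre-warp with the
# inverse of a known distortion so the optic's distortion cancels out.

def distort(x, k=0.1):
    """Example radial-style distortion applied by the curved optic."""
    return x * (1 + k * x * x)

def predistort(x, k=0.1, iters=20):
    """Invert the distortion numerically by fixed-point iteration."""
    y = x
    for _ in range(iters):
        y = x / (1 + k * y * y)
    return y

# Pre-warped content passed through the optic lands where intended.
for x in [0.0, 0.3, 0.7, 1.0]:
    assert abs(distort(predistort(x)) - x) < 1e-6
```

In practice the pre-warp would be applied per pixel (or via a precomputed mesh) to the rendered input information before projection.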
[0080] In a possible implementation, the input information projected to the user's eye need not be aligned with the image. For example, when the user needs to input a set of password information, such as "1234", in a certain order into an input box displayed in the image, the set of information only needs to be projected to the user's eye fundus to be seen by the user. But in some cases, for example, when the input information is generated by completing a specific action in a specific position, such as drawing a specific track at a specific position on the screen displaying the image, the input information needs to be displayed in alignment with the image. Therefore, in a possible implementation of the embodiment of this application, in the projecting step S180, the input information can be projected to the user's eye fundus after being aligned with the image seen by the user.
[0081] In order to achieve the above alignment function, in a possible implementation, the method also comprises:
a position detecting step for detecting the position of the user's gazing point relative to the user;
[0082] the projecting step S180 aligns the projected input information with the image seen by the user on the eye fundus of the user, according to the position of the user's gazing point relative to the user.
[0083] Here, since the user is watching the image at this time (for example, the screen-locking interface of the user's cell phone), the position corresponding to the user's gazing point is the position of the image.
[0084] In this implementation, there are various methods for detecting the position of the user's gazing point, for example, comprising one or more of the following:
[0085] i) Using a pupil-direction detector to detect the direction of the optical axis of one eye, and obtaining the depth of the scene gazed at by the eye through a depth sensor (such as infrared distance measurement), to obtain the position of the gazing point on the line of sight of the eye. This technology is prior art and is not explained further in this implementation.
[0086] ii) Detecting the directions of the optical axes of the two eyes respectively, obtaining the line-of-sight directions of the user's two eyes according to those directions, and obtaining the position of the gazing point as the intersection of the line-of-sight directions of the two eyes. This technology is prior art and is not explained here.
[0087] iii) Obtaining the position of the gazing point on the line of sight of the eye according to the optical parameters of the optical path between the eye and the image collection position at which the clearest image presented on the imaging surface of the eye is collected, together with the optical parameters of the eye. The embodiment of this application gives the detailed process of this method below, so it is not explained here.
[0088] Certainly, a person skilled in the art would know that, in addition to the above methods for detecting the gazing point, other methods capable of detecting the gazing point of the user's eyes may also be used in the method of the embodiment of this application.
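Method ii) above reduces, in the plane, to intersecting the two eyes' line-of-sight rays. The sketch below is a minimal 2-D illustration with invented eye positions; a real implementation would work in 3-D with measured optical-axis directions.

```python
# 2-D toy version of gazing-point method ii): intersect the two eyes'
# line-of-sight rays. Eye positions and directions are example values.

def intersect(p1, d1, p2, d2):
    """Intersect rays p1 + t*d1 and p2 + s*d2 in 2-D (returns the point)."""
    # Solve p1 + t*d1 = p2 + s*d2 for t using the 2-D cross product.
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / cross
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

left_eye, right_eye = (-0.03, 0.0), (0.03, 0.0)   # ~6 cm interpupillary distance
target = (0.1, 0.5)                                # true gazing point (metres)
d_left = (target[0] - left_eye[0], target[1] - left_eye[1])
d_right = (target[0] - right_eye[0], target[1] - right_eye[1])
gaze = intersect(left_eye, d_left, right_eye, d_right)
assert abs(gaze[0] - 0.1) < 1e-9 and abs(gaze[1] - 0.5) < 1e-9
```

With noisy axis measurements the two 3-D rays rarely meet exactly, so practical systems take the midpoint of the shortest segment between them instead of a true intersection.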
[0089] The detecting the current gazing point of the user through the method iii) comprises:
an eye fundus image collection step for collecting an image of the user's eye fundus;
an adjustable imaging step for adjusting at least one imaging parameter of the optical path between the image collection position of the eye fundus and the user's eye until the clearest image is collected;
an image processing step for analyzing the collected image of the eye fundus, obtaining the imaging parameters of the optical path between the image collection position of the eye fundus corresponding to the clearest image and the eye as well as at least one optical parameter of the eye, and calculating the position of the user's current gazing point relative to the user.
[0090] By analyzing and processing the image of the eye fundus, the optical parameters of the eyes at the moment the clearest image is collected are obtained, so that the position of the current focusing point on the line of sight can be calculated; this provides a basis for further detecting the observation behavior of the observer based on the precise position of the focusing point.
[0091] Here an image presented on the "eye fundus" is primarily an image presented on the retina, which can be the image of the eye fundus per se, or the image of another object projected to the eye fundus, such as the light spot pattern mentioned below.
[0092] In the adjustable imaging step, the clearest image of the eye fundus can be obtained when the optical device is in a certain position or state by adjusting the focal length of an optical device on the optical path between the eye and the collection position and/or its position in the optical path. The adjustment can be continuous and in real time.
[0093] In a possible implementation of the embodiment of this application, the optical device can be a lens with adjustable focal length, which completes the adjustment of its focal length by adjusting the refractive index and/or shape of the optical device itself. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one surface of the lens with adjustable focal length, for example, by increasing or decreasing the liquid medium in a cavity formed by two transparent layers; 2) the focal length is adjusted by changing the refractive index of the lens with adjustable focal length, for example, where the lens is filled with a specific liquid crystal medium, by adjusting the voltage of the corresponding electrode of the liquid crystal medium so as to change the arrangement of the liquid crystal medium, thereby changing the refractive index of the lens.
[0094] In another possible implementation of the embodiment of this application, the optical device can be a set of lenses, which completes the adjustment of the focal length of the set by adjusting the relative positions of the lenses in the set. Alternatively, one or more lenses in the set are the lens with adjustable focal length mentioned above.
[0095] In addition to the above two methods of changing the optical path parameters of the system through the characteristics of the optical device itself, the optical path parameters of the system can also be changed by adjusting the position of the optical device on the optical path.
[0096] In addition, in the method of the embodiment of this application, the image processing step further comprises:
analyzing the image collected in the eye fundus image collection step, to find the clearest image; and
calculating the optical parameters of the eyes according to the clearest image and known imaging parameters when obtaining the clearest image.
[0097] The clearest image can be collected through adjustment in the adjustable imaging step, but the clearest image needs to be found out through the image processing step, so that the optical parameters of the eyes can be calculated according to the clearest image and the known optical path parameters.
[0098] In the embodiment of this application, the image processing step may also comprise:
projecting a light spot to the eye fundus. The projected light spot may have no specific pattern and serve only to illuminate the eye fundus, or it may comprise a feature-rich pattern; rich features in the pattern are convenient to detect and increase the detection accuracy. Fig. 4a shows an example of a light spot pattern P, which can be formed by a light spot pattern generator, such as frosted glass; Fig. 4b shows the image of the eye fundus collected while the light spot pattern P is projected.
[0099] In order not to influence the normal vision of the eyes, the light spot is an infrared light spot invisible to the eyes. In this case, to reduce interference from other spectra, light in the projected light spot other than the eye-invisible infrared light can be filtered out.
[00100] Accordingly, the method of the embodiment of this application may also comprise the following steps:
controlling the brightness of the projected light spot according to the result obtained by the analysis in the above steps. The analysis result comprises, for example, features of the collected image, including the contrast of the image features, texture features, etc.
[00101] It is important to note that a special case of controlling the brightness of the projected light spot is starting and stopping the projection. For example, the projection can be stopped periodically when the observer gazes at one point continuously; or the projection can be stopped when the eye fundus of the observer is bright enough, in which case the distance from the current focusing point of the eyes to the eyes can be detected from the eye fundus information alone.
[00102] In addition, the brightness of projected light spot can also be controlled according to the ambient light.
[00103] In the method of the embodiment of this application, the image processing step may also comprise:
conducting calibration of the eye fundus image to acquire at least one reference image corresponding to the image presented on the eye fundus. Specifically, the collected images are compared with the reference image to obtain the clearest image; here, the clearest image can be the obtained image having the minimum difference from the reference image. In this implementation, the difference between the currently obtained image and the reference image can be calculated using an existing image processing algorithm, for example, a classical phase-difference automatic focusing algorithm.
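The comparison against the calibrated reference image can be illustrated with a toy focus search: among candidate fundus images collected at different optical settings, keep the one with the minimum difference from the reference. Images are short 1-D lists here purely for illustration; a real system would compare 2-D images, e.g. with a phase-difference focus metric.

```python
# Toy sketch of picking the clearest eye-fundus image as the candidate
# with minimum difference from a calibrated reference image.

def difference(img, ref):
    """Sum of absolute differences between two same-length images."""
    return sum(abs(a - b) for a, b in zip(img, ref))

def clearest(candidates, ref):
    """Return the candidate image closest to the reference."""
    return min(candidates, key=lambda img: difference(img, ref))

reference = [0, 10, 0, 10, 0, 10]     # calibrated reference pattern
blurred   = [3, 7, 3, 7, 3, 7]        # defocused: contrast washed out
sharp     = [0, 10, 1, 9, 0, 10]      # nearly matches the reference
assert clearest([blurred, sharp], reference) == sharp
```

The optical setting (focal length, lens position) recorded alongside the winning candidate supplies the real-time imaging parameters mentioned in paragraph [00110].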
[00104] The optical parameters of the eyes may also comprise the direction of the optical axis of the eye, obtained according to the characteristics of the eye when the clearest image is collected. Here the characteristics of the eye can be acquired from the clearest image or by other means. The gazing direction of the user's eyes can be obtained according to the directions of the optical axes of the eyes. Specifically, the direction of the optical axis of the eye can be obtained according to the characteristics of the eye fundus when the clearest image is obtained, and determining the direction of the optical axis through the characteristics of the eye fundus gives higher accuracy.
[00105] When the light spot pattern is projected to the eye fundus, the size of the light spot pattern may be larger or smaller than the visible area of the eye fundus, wherein:
[00106] When the area of the light spot pattern is smaller than or equal to the visible area of the eye fundus, a classical feature-point matching algorithm (such as the Scale Invariant Feature Transform (SIFT) algorithm) can be used to determine the direction of the optical axis of the eye by detecting the position of the light spot pattern in the image relative to the eye fundus.
[00107] When the area of the light spot pattern is larger than or equal to the visible area of the eye fundus, the direction of the optical axis of the eye, and hence the observer's line-of-sight direction, can be determined through the position of the light spot pattern in the obtained image relative to the original light spot pattern (obtained through image calibration).
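Locating the light spot pattern within the collected image can be illustrated in 1-D: the offset minimizing the sum of squared differences tells where the pattern landed, from which the optical-axis direction could be inferred. A real system would use a 2-D feature matcher such as SIFT; the arrays below are invented example data.

```python
# 1-D toy analogue of locating the light spot pattern in the fundus image
# by sum-of-squared-differences template matching.

def best_offset(image, pattern):
    """Offset at which pattern best matches image (minimum SSD)."""
    def ssd(off):
        return sum((image[off + i] - p) ** 2 for i, p in enumerate(pattern))
    return min(range(len(image) - len(pattern) + 1), key=ssd)

fundus = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]   # collected image with the spot at 3
spot   = [5, 9, 5]                         # known (calibrated) spot pattern
assert best_offset(fundus, spot) == 3
```

The displacement of the found offset from the calibrated position is the quantity that maps to the eye's optical-axis rotation.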
[00108] In another implementation of the embodiment of this application, the direction of optical axis of eye can be obtained through the characteristics of pupil when the clearest image is obtained. Here the characteristics of pupil can be acquired from the clearest image or by other means. Obtaining the direction of optical axis of eye through characteristics of pupil is prior art, therefore it is not explained here.
[00109] In addition, the method of the embodiment of this application also comprises the calibration step of the direction of optical axis of eye, in order to determine the direction of optical axis of eye more precisely.
[00110] In the method of the embodiment of this application, the known imaging parameters comprise fixed imaging parameters and real-time imaging parameters, where the real-time imaging parameters are the parameter information of the optical device at the moment the clearest image is acquired; this parameter information can be obtained by recording in real time when the clearest image is acquired.
[00111] After the current optical parameters of the eyes have been obtained, the position of the eyes' gazing point can be obtained by combining these parameters with the calculated distance from the focusing point of the eye to the eye.
[00112] In order to present the input information to the user with a more realistic, three-dimensional display effect, in another possible implementation of the embodiment of this application, the input information can be projected to the user's eye fundus three-dimensionally in the projecting step S180.
[00113] As described above, in a possible implementation, the three-dimensional projection can be realized by adjusting the projection position in such a manner that the user can see the information with parallax, thereby forming the three-dimensional display effect of the same projection information.
[00114] In another possible implementation, the input information comprises three-dimensional information corresponding to the user's two eyes respectively, and in the projecting step, the corresponding input information is projected to each of the user's two eyes. That is, the input information comprises left-eye information corresponding to the user's left eye and right-eye information corresponding to the user's right eye; the left-eye information is projected to the user's left eye and the right-eye information to the user's right eye, so that the input information seen by the user has a suitable three-dimensional display effect and brings a better user experience.
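The left/right-eye deviation mentioned above can be illustrated with the standard screen-parallax relation: for a virtual screen at distance D, a point at depth z has horizontal parallax ipd·(z − D)/z, negative (crossed) for points nearer than the screen. The numbers below are example values only, not parameters of the described device.

```python
# Illustrative computation of the per-eye horizontal offsets used to give
# projected input information a depth impression.

def horizontal_parallax(ipd, depth, screen_depth):
    """On-screen offset between the two eyes' projections of a point at
    `depth`, for a virtual screen at `screen_depth` (same units as ipd)."""
    return ipd * (depth - screen_depth) / depth

near = horizontal_parallax(ipd=0.06, depth=0.5, screen_depth=1.0)
far  = horizontal_parallax(ipd=0.06, depth=4.0, screen_depth=1.0)
assert near < 0 < far   # nearer than the screen -> crossed (negative) parallax
```

Shifting the left- and right-eye copies of the same content by ±parallax/2 is the simple-deviation variant; fully separate left/right renderings generalize this to arbitrary 3-D content such as the spatial hand-gesture prompt described next.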
[00115] In addition, when the input information inputted to the user contains three-dimensional space information, the user can see the three-dimensional space information through the three-dimensional projection. For example, in the case where the input information can only be inputted correctly when the user makes a specific hand gesture in a specific position in the three-dimensional space, with the above method of the embodiment of this application, the user sees the three-dimensional input information and thus knows the specific position and the specific hand gesture, so that the user can make the hand gesture prompted by the input information in the specific position, while other people are unable to know the spatial information even if they see the hand gesture made by the user, thereby improving the secrecy effect of the input information.
[00116] It should be understood that in various embodiments of this application, the numbers of order in above various processes do not present the execution sequence, and the execution sequence of various processes shall be determined according to their functions and internal logic, which shall not impose any restriction to the implementation of the embodiment of this application.
[00117] As shown in Fig. 5, the embodiment of this application provides a first device for information interaction 500, comprising:
an image acquisition module 510 used for acquiring an image related to a device, the image containing at least one digital watermark;
an information acquisition module 520 used for acquiring the at least one piece of input information which is contained in the at least one digital watermark and corresponds to the device; and
an information providing module 530 used for providing the at least one piece of input information to the device.
[00118] The device of the embodiment of this application acquires an image related to a device, obtains the input information contained in the image, and provides the input information automatically to the device. Therefore, the device can be operated correspondingly as needed without requiring the user to remember the input information, which greatly facilitates the user and improves user experience.
[00119] The embodiment of this application further describes the modules of the first device for information interaction 500 through the following implementation:
[00120] In the implementation of the embodiment of this application, there are various forms of the image acquisition module 510, for example:
[00121] As shown in Fig. 6a, the image acquisition module 510 comprises an image collection sub-module 511 used for acquiring the image by photographing.
[00122] The image collection sub-module 511 can be, for example, a camera of the smart glasses, used for photographing the image seen by the user.
[00123] As shown in Fig. 6b, in another implementation of the embodiment of this application, the image acquisition module 510 comprises:
a first communication sub-module 512 used for obtaining the image by receiving the same.
[00124] In this implementation, the image can be acquired by another device and then sent to the device of the embodiment of this application; alternatively, the image can be acquired through interaction with a device displaying the image (that is, that device transmits the displayed image information to the device of the embodiment of this application).
[00125] In the embodiment of this application, there are various forms of the information acquisition module 520, for example:
as shown in Fig. 6a, the information acquisition module 520 comprises: an information extraction sub-module 521 used for extracting the input information from the image.
[00126] In this implementation, the information extraction sub-module 521 can, for example, analyze the digital watermark in the image by means of a personal private key and a public or private watermark extraction method, and extract the input information.
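The application does not fix a particular watermarking scheme. As one hedged illustration (not the patented method), the sketch below hides an ASCII payload in the least-significant bits of pixels whose positions are derived from a personal key, so that only a key holder can recover the input information:

```python
import random


def extract_watermark(pixels, key, n_chars):
    """Illustrative only: read a hidden ASCII payload from the
    least-significant bits of pixels chosen by a key-seeded PRNG."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), n_chars * 8)
    bits = [pixels[p] & 1 for p in positions]
    chars = [
        chr(int("".join(str(b) for b in bits[i:i + 8]), 2))
        for i in range(0, len(bits), 8)
    ]
    return "".join(chars)


def embed_watermark(pixels, key, text):
    """Counterpart embedder, shown only to demonstrate round-tripping."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), len(text) * 8)
    out = list(pixels)
    bits = [int(b) for ch in text for b in format(ord(ch), "08b")]
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit  # overwrite the lowest bit
    return out
```

Because both functions seed the same PRNG with the key, the embedder and the extractor visit the same pixel positions in the same order; without the key, the bit positions are unknown.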
[00127] As shown in Fig. 6b, in another implementation of the embodiment of this application, the information acquisition module 520 comprises: a third communication sub-module 522 used for:
sending the image to external; and receiving the input information in the image from the external.
[00128] In this implementation, the image can be sent to the external, for example, to a cloud server and/or a third party authority; after the cloud server or the third party authority extracts the input information in the at least one digital watermark, the input information is sent back to the third communication sub-module 522 of the embodiment of this application.
[00129] Here, the functions of the first communication sub-module 512 and the third communication sub-module 522 can be achieved through the same communication module.
[00130] As shown in Fig. 6a, in a possible implementation of the embodiment of this application, the device 500 also comprises:
an authorization determination module 550 used for determining whether a user is an authorized user and starting corresponding operations when the user is an authorized user; specifically, the information acquisition module 520 can acquire the input information only when the user is an authorized user.
[00131] In this implementation, after the user acquires an image which is related to a device and carries digital watermarks, the user needs to be authorized in order to guarantee the security of user information, so that only an authorized user can acquire the corresponding input information in the digital watermarks, while an unauthorized user can only see the image.
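The gating performed by the authorization determination module 550 amounts to a guard around the extraction step; a minimal sketch, in which the predicate and extractor are hypothetical stand-ins:

```python
def acquire_input_information(image, user, is_authorized, extract):
    """Return the watermarked input information only for an authorized user;
    an unauthorized user gets nothing beyond the visible image itself."""
    if not is_authorized(user):
        return None  # unauthorized: only the image is seen
    return extract(image)
```

The predicate could equally be evaluated at a remote end (the cloud server case of sub-module 551) without changing the call site.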
[00132] As shown in Fig. 6b, in some other implementations of the embodiment of this application, the authorization determination module 550 determines whether the user is an authorized user, and starts corresponding operations when the user is an authorized user.
[00133] In the embodiment of this application, after the input information is extracted from the image, the user needs to be authorized, so that the input information can be provided to the device only when the user is an authorized user.
[00134] In this embodiment, the authorization determination by the authorization determination module 550 can also be conducted at a remote end (such as a cloud server), that is, the corresponding user information is sent to the remote end, and the result is sent back to the local end after the determination. In this case, as shown in Fig. 6b, the authorization determination module 550 comprises:
a second communication sub-module 551 used for:
sending corresponding information about a user to the external; and receiving, from the external, the result of whether the user is an authorized user.
[00135] The second communication sub-module 551 can be a separate communication interface device, or can be the same module as the first communication sub-module 512 and/or the third communication sub-module 522.
[00136] Certainly, a person skilled in the art could know that when the user is not required to be authorized, the device may comprise no authorization determination module.
[00137] In the embodiment of this application, in addition to providing the input information to the device to perform corresponding operations, in order to allow the user to view the input information privately, as shown in Fig. 6b, the device 500 also comprises:
a projection module 560 used for projecting the input information to the user's eye fundus.
[00138] In this way, on one hand, the user can learn the corresponding information; on the other hand, in occasions where a device is unable to receive the input information (for example, owing to communication problems), the user can input the information into the device manually according to the acquired input information.
[00139] As shown in Fig. 6b, in this implementation, the projection module 560 comprises:
an information projecting sub-module 561 used for projecting the input information; and
a parameter adjustment sub-module 562 used for adjusting at least one projection imaging parameter of the optical path between the projection position and the user's eyes, until the input information is imaged clearly on the user's eye fundus.
[00140] In an implementation, the parameter adjustment sub-module 562 comprises:
at least one adjustable lens device, with the focal length thereof being adjustable and/or the position thereof on the optical path between the projection position and the user's eyes being adjustable.
[00141] As shown in Fig. 6b, in an implementation, the projection module 560 comprises:
a curved spectroscopic device 563 used for transmitting the input information to the user's eye fundus correspondingly to the positions of the pupil when the optical axis of the eye is in different directions.
[00142] In an implementation, the projection module 560 comprises:
a reversed deformation processing sub-module 564 used for conducting reversed deformation processing on the input information correspondingly to the positions of the pupil when the optical axis of the eye is in different directions, so that the eye fundus receives the input information to be presented.
[00143] In an implementation, the projection module 560 comprises:
an alignment and adjustment sub-module 565 used for aligning the projected input information with the image seen by the user on the user's eye fundus.
[00144] In an implementation, the device 500 also comprises:
a position detection module 540 used for detecting the position of the user's gazing point relative to the user;
the alignment and adjustment sub-module 565 used for aligning the projected input information and the image seen by the user on the user's eye fundus according to the position of the user's gazing point relative to the user.
[00145] For the functions of the sub-modules of the above projection module, refer to the description of the corresponding steps in the above method embodiments; examples are given in the embodiments of Figs. 7a-7d, 8 and 9 below.
[00146] In the embodiment of this application, there are various implementations of the position detection module 540, such as devices corresponding to methods i) to iii) in the method embodiments. The embodiment of this application further illustrates the position detection module corresponding to method iii) through the implementations corresponding to Figs. 7a-7d, 8 and 9:
[00147] As shown in Fig. 7a, in a possible implementation of the embodiment of this application, the position detection module 700 comprises:
an image collection sub-module for eye fundus 710 used for collecting an image on the user's eye fundus;
an adjustable imaging sub-module 720 used for adjusting at least one imaging parameter of the optical path between the image collection position of the eye fundus and the user's eye until the clearest image is collected;
an image processing sub-module 730 used for analyzing the collected image of the eye fundus, obtaining the imaging parameters of the optical path between the eye fundus image collection position corresponding to the clearest image and the eye as well as at least one optical parameter of the eye, and calculating the position of the user's current gazing point relative to the user.
[00148] By analyzing and processing the image on the eye fundus, the position detection module 700 obtains the optical parameters of the eye at the moment when the image collection sub-module for eye fundus obtains the clearest image, and thus the current position of the gazing point of the eyes can be obtained by calculation.
[00149] Here the image presented on the "eye fundus" is primarily the image presented on the retina, which can be the image of the eye fundus itself or the image of another object projected to the eye fundus. Here the eye can be human eyes or the eye of other animals.
[00150] As shown in Fig. 7b, in a possible implementation of the embodiment of this application, the image collection sub-module for eye fundus 710 is a micro camera; in another possible implementation of the embodiment of this application, the image collection sub-module for eye fundus 710 can also directly use a light-sensitive imaging device, such as a CCD or CMOS device.
[00151] In a possible implementation of the embodiment of this application, the adjustable imaging sub-module 720 comprises: an adjustable lens device 721 located on the optical path between the eyes and the image collection sub-module for eye fundus 710, with the focal length thereof being adjustable and/or the position thereof on the optical path being adjustable. Through the adjustable lens device 721, the equivalent focal length of the system from the eyes to the image collection sub-module for eye fundus 710 can be adjusted, so that under a certain position or state of the adjustable lens device 721, the image collection sub-module for eye fundus 710 can acquire the clearest image on the eye fundus. In this implementation, the adjustable lens device 721 can be adjusted continuously and in real time during the detection process.
[00152] In a possible implementation of the embodiment of this application, the adjustable lens device 721 can be a lens with adjustable focal length, used for completing the adjustment of its focal length by adjusting its refractive index and/or shape. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one surface of the lens with adjustable focal length, for example, by increasing or decreasing the liquid medium in a cavity composed of two transparent layers; 2) the focal length is adjusted by changing the refractive index of the lens with adjustable focal length, for example, the lens with adjustable focal length is filled with a specific liquid crystal medium, and the arrangement of the liquid crystal medium is adjusted by adjusting the voltage of the corresponding electrodes, thereby changing the refractive index of the lens with adjustable focal length.
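Both adjustment routes above enter the thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2): changing a surface curvature changes R1 or R2 (method 1), while changing the liquid crystal state changes the refractive index n (method 2). A small numeric sketch with illustrative values:

```python
def focal_length(n, r1, r2):
    """Thin-lens lensmaker's equation: 1/f = (n - 1) * (1/R1 - 1/R2).
    n is the refractive index; R1, R2 are the signed surface radii
    (R1 > 0, R2 < 0 for a biconvex lens)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))


# Biconvex lens with 100 mm radii: raising n from 1.5 to 1.6 shortens f
# (more optical power), as does tightening the curvature to 50 mm radii.
f_base = focal_length(1.5, 100.0, -100.0)
f_higher_index = focal_length(1.6, 100.0, -100.0)
f_tighter_curve = focal_length(1.5, 50.0, -50.0)
```

The numbers are assumptions for illustration; a real adjustable lens would be characterized by calibration rather than by this idealized thin-lens model.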
[00153] In a possible implementation of the embodiment of this application, the adjustable lens device 721 comprises: a set of lenses composed of multiple lenses, used for completing the adjustment of the focal length of the set of lenses by adjusting the relative positions between the lenses in the set. The set of lenses can also comprise lenses with adjustable imaging parameters, such as focal length.
[00154] In addition to the above-mentioned two methods in which the optical path parameters of the system are changed by adjusting the characteristics of the adjustable lens device 721 itself, the optical path parameters of the system can also be changed by adjusting the position of the adjustable lens device 721 on the optical path.
[00155] In a possible implementation of the embodiment of this application, in order not to influence the user's viewing experience of an observation object, and in order to make the system portably applicable to a wearable device, the adjustable imaging sub-module 720 can also comprise: a spectroscopic unit 722 used for forming the light transmission paths between the eye and the observation object and between the eye and the image collection sub-module for eye fundus 710. This folds the optical path, reducing the system volume while influencing other viewing experiences of the user as little as possible.
[00156] In this implementation, the spectroscopic unit can comprise: a first spectroscopic unit located between the eye and the observation object, used for transmitting the light from the observation object to the eye and transferring the light from the eye to the image collection sub-module for eye fundus.
[00157] The first spectroscopic unit can be a spectroscope, a spectroscopic optical waveguide (comprising optical fibers) or other suitable spectroscopic devices.
[00158] In a possible implementation of the embodiment of this application, the image processing sub-module 730 of the system comprises an optical path calibrating unit, used for calibrating the optical path of the system; for example, an alignment calibration for the optical axis of the optical path is performed to guarantee the accuracy of the measurement.
[00159] In a possible implementation of the embodiment of this application, the image processing sub-module 730 comprises:
an image analysis unit 731 used for analyzing the image acquired by the image acquisition sub-module for eye fundus to find the clearest image; and
a parameter calculation unit 732 used for calculating the optical parameters of the eyes based on the clearest image as well as the known imaging parameters of the system when the clearest image is acquired.
[00160] In this implementation, the image collection sub-module for eye fundus 710 can acquire the clearest image through the adjustable imaging sub-module 720, but the clearest image needs to be found by the image analysis unit 731. At this point, the optical parameters of the eyes can be obtained by calculation based on the clearest image and the known optical path parameters of the system. The optical parameters of the eye herein can comprise the direction of the optical axis of the eye.
[00161] In a possible implementation of the embodiment of this application, the system can also comprise: a projection sub-module 740 used for projecting the light spot to the eye fundus. In a possible implementation, the functions of the projection sub-module can be implemented through a micro-projector.
[00162] The light spot projected herein may have no specific pattern and is only used for illuminating the eye fundus.
[00163] In an implementation of the embodiment of this application, the projected light spot can comprise a pattern with rich features. The rich features of the pattern are easy to detect, increasing the accuracy of detection. Fig. 4a is a diagram of a light spot pattern P, and the pattern can be formed by a light spot pattern generator, such as a frosted glass; Fig. 4b shows the eye fundus image taken when the light spot pattern P is projected.
[00164] In order not to influence the normal view of the eyes, the light spot can be an infrared light spot invisible to the eyes.
[00165] At this moment, in order to reduce the interference from other spectra:
[00166] a transmission filter for light invisible to eyes can be arranged on the emergent surface of the projection sub-module.
[00167] A transmission filter for light invisible to eyes can be arranged on the incident surface of the image acquisition sub-module for eye fundus.
[00168] In a possible implementation of the embodiment of this application, the image processing sub-module 730 can also comprise:
a projection control unit 734 used for controlling the brightness of the projected light spot of the projection sub-module 740 based on the obtained results of the image analysis unit 731.
[00169] For example, the projection control unit 734 can adaptively adjust the brightness based on the features of the image obtained by the image collection sub-module for eye fundus 710. The features of the image herein comprise the contrast of the image features, the texture features, etc.
[00170] Here, a special case of controlling the brightness of the projected light spot of the projection sub-module 740 is turning the projection sub-module 740 on or off; for example, the projection sub-module 740 can be turned off periodically when the user keeps gazing at one point; or, when the user's eye fundus is bright enough, the light emitting source can be turned off, and the distance from the eyes' current gazing point to the eyes can be detected using the eye fundus information alone.
[00171] In addition, the projection control unit 734 can further control the brightness of the projected light spot of the projection sub-module 740 based on the ambient light.
[00172] In a possible implementation of the embodiment of this application, the image processing sub-module 730 can also comprise: an image calibration unit 733 used for performing calibration for the eye fundus image to obtain at least one reference image corresponding to the image presented on the eye fundus.
[00173] The image analysis unit 731 compares the image acquired by the image collection sub-module for eye fundus 710 with the reference image and performs calculation, thereby obtaining the clearest image. Here, the clearest image can be the obtained image having the minimum difference from the reference image. In this implementation, the difference between the currently obtained image and the reference image can be calculated by an existing image processing algorithm, such as a classical phase-difference automatic focusing algorithm.
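The selection rule just described, under which the captured image with the minimum difference from the calibration reference is deemed clearest, reduces to a small search. A sketch with flat intensity lists standing in for images (the real unit would use a metric such as phase difference):

```python
def clearest_image(candidates, reference):
    """Return the candidate fundus image with the minimum total absolute
    pixel difference from the calibration reference image."""
    def difference(img):
        return sum(abs(a - b) for a, b in zip(img, reference))
    return min(candidates, key=difference)
```

Whatever distance metric is substituted, the structure stays the same: score every captured frame against the reference and keep the minimizer.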
[00174] In a possible implementation of the embodiment of this application, the parameter calculation unit 732 can comprise:
a determination subunit 7321 for the direction of optical axis of eye used for obtaining the direction of optical axis of eye according to the characteristics of eye when the clearest image is acquired.
[00175] The characteristics of the eye herein can be obtained from the clearest image or by other means. The gazing direction of the line of sight of the user's eye can be obtained according to the direction of the optical axis of the eye.
[00176] In a possible implementation of the embodiment of this application, the determination subunit 7321 for the direction of optical axis of eye can comprise: a first determination subunit used for obtaining the direction of the optical axis of the eye according to the characteristics of the light spot when the clearest image is acquired. Compared with obtaining the direction of the optical axis of the eye through the characteristics of the pupil and the surface of the eyeball, determining the direction of the optical axis of the eye by the characteristics of the eye fundus provides higher accuracy.
[00177] When projecting the light spot pattern to the eye fundus, the size of the light spot pattern may be larger than or smaller than the visible area of the eye fundus, wherein:
[00178] when the area of the light spot pattern is less than or equal to the visible area of the eye fundus, the classical matching algorithm of feature points (such as Scale Invariant Feature Transform (SIFT) algorithm) can be used to determine the direction of optical axis of eyes by detecting the position of the light spot pattern in the image relative to the eye fundus; and
[00179] when the area of the light spot pattern is greater than or equal to the visible area of the eye fundus, the direction of optical axis of eyes can be determined through the position of the light spot pattern in the obtained image relative to the original light spot pattern (obtained by the image calibration unit), thereby determining the line-of-sight direction of the user.
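In either case, locating the light spot pattern inside the captured image yields the offset from which the direction of the optical axis is inferred. A toy exhaustive sum-of-absolute-differences matcher, a hedged stand-in for a feature matcher such as SIFT, with images as 2-D intensity lists:

```python
def locate_pattern(image, pattern):
    """Find the (row, col) offset of a small light-spot pattern inside a
    fundus image by exhaustive sum-of-absolute-differences matching."""
    ih, iw = len(image), len(image[0])
    ph, pw = len(pattern), len(pattern[0])
    best, best_cost = (0, 0), float("inf")
    for y in range(ih - ph + 1):
        for x in range(iw - pw + 1):
            cost = sum(
                abs(image[y + dy][x + dx] - pattern[dy][dx])
                for dy in range(ph)
                for dx in range(pw)
            )
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best  # offset of the pattern, which maps to an axis direction
```

A production system would use a robust feature matcher; the exhaustive search is shown only to make the geometric idea concrete.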
[00180] In a possible implementation of another embodiment of this application, the determination subunit 7321 for the direction of optical axis of eye comprises: a second determination subunit used for obtaining the direction of the optical axis of the eye according to the characteristics of the pupil when the clearest image is acquired. The characteristics of the pupil herein can be obtained from the clearest image or by other means. Obtaining the direction of the optical axis of the eye through the characteristics of the pupil belongs to the prior art and is therefore not detailed here.
[00181] In a possible implementation of the embodiment of this application, the image processing sub-module 730 can also comprise: a calibration unit 735 for the direction of optical axis of eye used for calibrating the direction of optical axis of eye to more precisely determine the direction of optical axis of eye mentioned above.
[00182] In this implementation, the known imaging parameters of the system comprise the fixed imaging parameters and the real-time imaging parameters, wherein the real-time imaging parameters are the information about the parameters of the adjustable lens device when the clearest image is obtained, and the information about the parameters can be obtained by real-time recording when the clearest image is acquired.
[00183] The distance from eyes' gazing point to the eye can be obtained by calculation as follows, particularly:
[00184] Fig. 7c is a diagram of the eye imaging, and by combining a lens imaging formula from classical optics theory, formula (1) can be obtained from Fig. 7c:
[00185]    1/do + 1/de = 1/fe    (1)
[00186] wherein do and de are respectively the distances from the current observation object 7010 of the eyes and from the real image 7020 on the retina to the eye-equivalent lens 7030; fe is the equivalent focal length of the eye-equivalent lens 7030; and X is the line-of-sight direction of the eye (which may be obtained from the direction of the optical axis of the eye).
[00187] Fig. 7d is a diagram of the distance from the eyes' gazing point to the eye obtained based on the known optical parameters of the system and the optical parameters of the eyes. In Fig. 7d, the light spot 7040 is converted into a virtual image (not shown in Fig. 7d) through the adjustable lens device 721; assuming that the distance from the virtual image to the lens is x (not shown in Fig. 7d), the following system of equations can be obtained by combining formula (1):
[00188]    1/dp - 1/x = 1/fp;    1/(di + x) + 1/de = 1/fe    (2)
[00189] wherein dp is the optical equivalence distance from the light spot 7040 to the adjustable lens device 721; di is the optical equivalence distance from the adjustable lens device 721 to the eye-equivalent lens 7030; and fp is the focal length value of the adjustable lens device 721.
[00190] It can be seen from (1) and (2) that the distance do from the current observation object 7010 (the eyes' gazing point) to the eye-equivalent lens 7030 is given by formula (3):
do = di + dp·fp / (fp - dp)    (3)
[00191] The position of the eyes' gazing point can then be easily obtained based on the distance from the observation object 7010 to the eye calculated as above, together with the direction of the optical axis of the eye recorded previously, thereby providing a basis for subsequent further eye-related interaction.
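The calculation described above combines into a short computation: evaluate formula (3) for the distance do, then step that distance away from the eye along the unit line-of-sight direction. A numeric sketch with assumed, uncalibrated example values in millimetres:

```python
def gaze_distance(dp, di, fp):
    """Formula (3): do = di + dp*fp / (fp - dp), the distance from the
    gazing point to the eye-equivalent lens. All lengths share one unit."""
    return di + (dp * fp) / (fp - dp)


def gaze_point(eye_position, direction, distance):
    """Place the gazing point `distance` away from the eye along the unit
    line-of-sight direction derived from the eye's optical axis."""
    norm = sum(c * c for c in direction) ** 0.5
    return tuple(p + distance * c / norm for p, c in zip(eye_position, direction))


# Assumed example: spot 30 mm from the adjustable lens (dp), lens 20 mm
# from the eye-equivalent lens (di), focal length settled at 40 mm (fp).
d_o = gaze_distance(dp=30.0, di=20.0, fp=40.0)
point = gaze_point((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), d_o)
```

With dp = 30, fp = 40, the virtual-image distance x = dp·fp/(fp - dp) = 120, so do = 140; the example values are illustrative only.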
[00192] Fig. 8 shows an embodiment in which the position detection module 800 is applied to the glasses G in a possible implementation of the embodiment of this application, and which comprises the contents recorded for the implementation shown in Fig. 7b. Specifically, as seen from Fig. 8, in this implementation the module 800 is integrated on the right side of the glasses G (without limitation thereto), comprising:
a micro camera 810, whose function is the same as that of the image collection sub-module for eye fundus recorded in the implementation of Fig. 7b, and which is located on the right outer position of the glasses G in order not to influence the sight when the user views the object normally;
a first spectroscope 820, whose function is the same as that of the first spectroscopic unit recorded in the implementation of Fig. 7b, and which is located at the intersection point of the gazing direction of the eye A and the incidence direction of the camera 810 at a certain angle of inclination, so as to transmit the light of the observation object entering the eye A and reflect the light from the eye to the camera 810; and
a lens with adjustable focal length 830, whose function is the same as that of the lens with adjustable focal length recorded in the implementation of Fig. 7b, and which is located between the first spectroscope 820 and the camera 810 to adjust the focal length value in real time, such that the camera 810 can take the clearest eye fundus image at a certain focal length value.
[00193] In this implementation, the image processing sub-module is not shown in Fig. 8, and its function is the same as that of the image processing sub-module shown in Fig. 7b.
[00194] Generally, the brightness of the eye fundus is insufficient, thus it is preferable to illuminate the eye fundus; in this implementation, the eye fundus is illuminated by a light emitting source 840. In order not to influence the user's experience, the light emitting source 840 herein may be a source of light invisible to the human eye, such as a near-infrared light emitting source, which has little impact on the eye A and to which the camera 810 is relatively sensitive.
[00195] In this implementation, the light emitting source 840 is located at the outer side of the right frame of the glasses, therefore transmitting the light emitted by the light emitting source 840 to the eye fundus requires a second spectroscope 850 together with the first spectroscope 820. In this implementation, the second spectroscope 850 is located in front of the incident surface of the camera 810, and therefore also needs to transmit the light traveling from the eye fundus to the camera 810.
[00196] It can be seen that in this implementation, in order to improve user's experience and enhance the clarity of collection by the camera 810, the first spectroscope 820 may have the properties of high infrared reflectivity and high transmission to visible light. For example, the above properties can be achieved by arranging an infrared reflective film on the side of the first spectroscope 820 toward the eye A.
[00197] It can be seen from Fig. 8 that in this implementation, the position detection module 800 is located on the side of the glasses G away from the eye A; therefore, when the optical parameters of the eye are calculated, the lens of the glasses may be regarded as a part of the eye A, without the need to know the optical properties of the lens.
[00198] In other implementations of the embodiment of this application, the position detection module 800 may be located on the side of glasses G near the eye A; in this case, it is required to obtain the optical property parameters of the lens in advance and take the influencing factors of the lens into consideration when the distance of the gazing point is calculated.
[00199] In this embodiment, the light emitted from the light emitting source 840 is reflected by the second spectroscope 850, transmitted through the lens with adjustable focal length 830 and reflected by the first spectroscope 820, then passes through the lens of the glasses G, enters the user's eye, and finally arrives at the retina of the eye fundus. The camera 810 takes the eye fundus image through the pupil of the eye A via the optical path composed of the first spectroscope 820, the lens with adjustable focal length 830 and the second spectroscope 850.
[00200] In a possible implementation, the other parts of the device of the embodiment of this application are also embodied on the glasses G. Because the position detection module and the projection module may each comprise a device having a projection function (the information projection sub-module of the projection module and the projection sub-module of the position detection module, as described above) and an imaging device with adjustable imaging parameters (the parameter adjustment sub-module of the projection module and the adjustable imaging sub-module of the position detection module, as described above), in a possible implementation of the embodiment of this application, the functions of the position detection module and the projection module are achieved by the same device.
[00201] As shown in Fig. 8, in a possible implementation of the embodiment of this application, in addition to illuminating for the position detection module, the light emitting source 840 may be used, as the information projection sub-module of the projection module, for aiding the projection of the input information. In a possible implementation, the light emitting source 840 may simultaneously project the invisible light for the illumination of the position detection module and the visible light for aiding the projection of the input information; in another possible implementation, the light emitting source 840 may also switch between projecting the invisible light and the visible light at different times; and in still another possible implementation, the position detection module may use the projected input information itself to achieve the function of illuminating the eye fundus.
[00202] In a possible implementation of the embodiment of this application, the first spectroscope 820, the second spectroscope 850 and the lens with adjustable focal length 830 may be used both as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detection module. Herein, in a possible implementation, the focal length of the lens with adjustable focal length 830 may be adjusted region by region, with different regions corresponding respectively to the position detection module and the projection module, and the focal lengths of the regions may also differ. Alternatively, the focal length of the lens with adjustable focal length 830 is adjusted as a whole, but other optical devices are arranged at the front end of the light sensing unit (such as a CCD) of the micro camera 810 of the position detection module to achieve auxiliary adjustment of the imaging parameters of the position detection module. Besides, in another possible implementation, the system may be configured such that the optical path from the light emitting plane (from which the input information is projected) of the light emitting source 840 to the eyes is the same as the optical path from the eyes to the micro camera 810; in this case, when the lens with adjustable focal length 830 is adjusted such that the micro camera 810 receives the clearest eye fundus image, the input information projected by the light emitting source 840 is exactly imaged clearly on the eye fundus.
[00203] It can be seen that the functions of the position detection module and the projection module of the first device for information interaction of the embodiment of this application may be achieved by the same set of devices, such that the overall system has a simple structure, a small volume, and improved portability.
[00204] The structural diagram of the position detection module 900 of another implementation of the embodiment of this application is shown in Fig. 9. It can be seen from Fig. 9 that this implementation is similar to the implementation shown in Fig. 8, comprising a micro camera 910, a second spectroscope 920 and a lens with adjustable focal length 930, except that the projection sub-module 940 of this implementation projects a light spot pattern, and the first spectroscope of the implementation shown in Fig. 8 is replaced by a curved spectroscope 950 serving as the curved spectroscopic device.
[00205] Herein, the curved spectroscope 950 corresponds respectively to the positions of the pupil when the optical axis of the eye is in different directions, and transmits the image presented on the eye fundus to the image collection sub-module for eye fundus. The camera may take images mixed and superimposed from all angles of the eyeball, but only the eye fundus part seen through the pupil can be imaged clearly in the camera, while the other parts will be out of focus and unable to image clearly; therefore, the imaging of the eye fundus part will not be seriously interfered with, and the features of the eye fundus part can still be detected. Hence, compared with the implementation shown in Fig. 8, in this implementation the eye fundus image may be acquired well when the eyes gaze in different directions, such that the position detection module of this implementation has a wider range of application and higher detection accuracy.
[00206] In a possible implementation of the embodiment of this application, the other parts of the first device for information interaction of the embodiment of this application are also embodied on the glasses G. In this implementation, the position detection module and the projection module may likewise be reused. Similarly to the embodiment shown in Fig. 8, the projection sub-module 940 may switch between projecting the light spot pattern and the input information synchronously or at different times; alternatively, the projected input information is used by the position detection module as the light spot pattern for detection. Similarly to the embodiment shown in Fig. 8, in a possible implementation of the embodiment of this application, the second spectroscope 920, the curved spectroscope 950 and the lens with adjustable focal length 930 may be used both as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detection module.
[00207] In this case, the curved spectroscope 950 also transmits, respectively corresponding to the positions of the pupil when the optical axis of the eye points in different directions, in the optical path between the projection module and the eye fundus. Because the input information projected by the projection sub-module 940 is deformed after passing through the curved spectroscope 950, in this implementation, the projection module comprises:
[00208] a reversed deformation processing module (not shown in Fig. 9) used for performing, on the input information, the reversed deformation processing corresponding to the curved spectroscopic device, so that the input information is presented undistorted at the eye fundus.
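The reversed deformation processing can be illustrated with a small resampling sketch. Assuming, purely for illustration, that the curved spectroscopic device is modelled as a fixed pixel remapping, pre-sampling the input information with the inverse of that mapping makes the optical deformation cancel out; the function names and the mapping model below are hypothetical and not part of the disclosure.

```python
import numpy as np

def prewarp(image, inverse_map):
    """Pre-distort `image` so that a subsequent optical distortion
    (modelled here as a fixed pixel remapping) cancels out.

    inverse_map(row, col) -> (src_row, src_col) gives, for each output
    pixel, the source pixel that the optics will move onto it.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for r in range(h):
        for c in range(w):
            sr, sc = inverse_map(r, c)
            if 0 <= sr < h and 0 <= sc < w:
                out[r, c] = image[int(sr), int(sc)]
    return out
```

Applying the modelled optical mapping to a pre-warped image should then reproduce the original input information pixel for pixel.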
[00209] In an implementation, the projection module is used for projecting the input information to the user eye fundus in a three-dimensional way.
[00210] The input information comprises the three-dimensional information respectively corresponding to the two eyes of the user, and the projection module projects respectively the corresponding input information to the two eyes of the user.
[00211] As shown in Fig. 10, when a three-dimensional display is needed, the first device for information interaction 1000 needs to provide two sets of projection modules respectively corresponding to the two eyes of the user, and comprises:
the first projection module corresponding to the user's left eye; and the second projection module corresponding to the user's right eye.

[00212] The structure of the second projection module 1020 is similar to the structure, integrating the function of the position detection module, recorded in the embodiment shown in Fig. 8. It likewise has a structure which may simultaneously achieve the function of the position detection module and the function of the projection module, and comprises the micro camera 1021, the second spectroscope 1022, the second lens with adjustable focal length 1023, and the first spectroscope 1024 (the position detection sub-module is not shown in Fig. 10), with their functions being the same as in the embodiment shown in Fig. 8, except that the projection sub-module of this implementation is the second projection sub-module 1025, which may project the input information corresponding to the right eye. The second projection module may be used for detecting the position of the gazing point of the user's eye and for clearly projecting the input information corresponding to the right eye to the right eye fundus.
[00213] The structure of the first projection module is similar to that of the second projection module 1020, except that it neither has the micro camera nor integrates the function of the position detection module. As shown in Fig. 10, the first projection module comprises:
a first projection sub-module 1011 used for projecting the input information corresponding to the left eye to the left eye fundus;
a first lens with adjustable focal length 1013 used for adjusting the imaging parameters between the first projection sub-module 1011 and the eye fundus, such that the corresponding input information may be clearly presented on the left eye fundus and that the user can see the input information presented in the image;
a third spectroscope 1012 used for transmitting in the optical path between the first projection sub-module 1011 and the first lens with adjustable focal length 1013; and
a fourth spectroscope 1014 used for transmitting in the optical path between the first lens with adjustable focal length 1013 and the left eye fundus.
[00214] With this embodiment, the input information seen by the user has an appropriate three-dimensional display effect, bringing a better user experience. Furthermore, when the input information inputted to the user contains three-dimensional space information, the user may see that information by means of the three-dimensional projection. For example, suppose the input information can only be entered correctly when the user makes a specific hand gesture at a specific position in three-dimensional space. With the above method of the embodiment of this application, the user sees the three-dimensional input information and thus knows the specific position and the specific hand gesture, and can make the prompted hand gesture at that position; other people, even if they see the gesture made by the user, are unable to know the spatial information, thereby improving the secrecy of the input information.
[00215] Fig. 11 is the structural diagram of still another first device for information interaction 1100 provided by the embodiment of this application; and the specific embodiments of this application have no restriction on the specific realization of the first device for information interaction 1100. As shown in Fig. 11, the first device for information interaction 1100 may comprise:
a processor 1110, a communications interface 1120, a memory 1130, and a communication bus 1140. In the device:
the processor 1110, the communications interface 1120, and the memory 1130 communicate with each other via the communication bus 1140.
[00216] The communications interface 1120 is used for communicating with network elements such as a client.
[00217] The processor 1110 is used for executing a program 1132, specifically executing the relevant steps of the above-mentioned method embodiment.
[00218] Specifically, the program 1132 may comprise program codes which comprise computer operating instructions.
[00219] The processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiment of this application.
[00220] The memory 1130 is used for storing the program 1132. The memory 1130 may contain a high-speed RAM memory and may also comprise a non-volatile memory, such as at least one disk storage. Specifically, the program 1132 may be used to make the first device for information interaction 1100 perform the following steps:
acquiring an image related to a device, the image containing at least one digital watermark;
acquiring the at least one piece of input information corresponding to the device contained in the at least one digital watermark; and
providing the at least one piece of input information to the device. [00221] For the specific realization of each step of the program 1132, see the corresponding step in the above-mentioned embodiment and the corresponding description in each section, and the details are not described here. It may be clearly understood by persons skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing device and module, reference may be made to the corresponding process in the above-mentioned method embodiment, and the details are not described here again.
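As a concrete, non-normative illustration of the three steps performed by the program 1132, the sketch below hides a short ASCII string in the least-significant bits of an image array, recovers it, and hands it to a sending callback. A real digital watermark would use a far more robust embedding scheme; all function names here are invented for this example.

```python
import numpy as np

def extract_lsb_watermark(pixels, n_chars):
    """Recover n_chars ASCII characters hidden in the least-significant
    bits of a flat uint8 pixel array (8 bits per character)."""
    bits = (pixels.flatten()[: n_chars * 8] & 1).astype(int)
    return "".join(
        chr(int("".join(map(str, bits[i * 8:(i + 1) * 8])), 2))
        for i in range(n_chars)
    )

def provide_input_information(image_pixels, n_chars, send):
    """Mirror of the three program steps: the image is assumed already
    acquired; extract the embedded input information, then hand it to
    the `send` callback standing in for the information providing step."""
    info = extract_lsb_watermark(image_pixels, n_chars)
    send(info)
    return info
```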
[00222] Furthermore, a computer readable medium is also provided, comprising computer readable instructions which perform the following operations when executed: executing the steps S120, S140 and S160 of the method in the above-mentioned embodiment.
[00223] As shown in Fig. 12, the embodiment of this application also provides a wearable device 1200 containing the first device for information interaction 1210 recorded by the above-mentioned embodiment.
[00224] The wearable device may be a pair of glasses. In some implementations, this pair of glasses may have a structure as shown in Figs. 8 to 10.
[00225] As shown in Fig.13, the embodiment of this application provides a method for information interaction, comprising:
S1310 a watermark embedding step: embedding at least one digital watermark into an image related to a device, the digital watermark containing the input information corresponding to the device;
S1320 an image providing step: providing the image to the external;
S1330 an information input step: receiving the at least one piece of input information provided from the external; and
S1340 an execution step: executing the operation corresponding to the at least one piece of input information.
[00226] In the method of the embodiment of this application, the image is provided to the external after the digital watermark is embedded, such that the external device may acquire the corresponding input information according to the image and then return the same; the information input step then automatically triggers the corresponding operation once the input information provided from the external is received, without manual operation of the user, which is convenient for the user.

[00227] In the embodiment of this application, digital watermarks may be classified according to their symmetry into symmetrical watermarks and asymmetrical watermarks. The embedding key and detection key of a conventional symmetrical watermark are identical, such that the watermark can easily be removed from a digital carrier once the detection method and key are disclosed. Asymmetrical watermarking technology uses a private key to embed a watermark and a public key to extract and verify it, such that it is difficult for an attacker to use the public key to destroy or remove the watermark embedded with the private key. Therefore, in the embodiment of this application, an asymmetrical digital watermark may be used.
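The asymmetry described above can be sketched with textbook RSA on deliberately tiny numbers: the embedding side signs the watermark payload with the private exponent, while any party holding only the public exponent can verify, but not forge, the signature. This is a pedagogical sketch only; practical asymmetric watermarking schemes, padding, and key sizes differ substantially.

```python
# Textbook RSA with toy parameters (never use such small keys in practice).
p, q = 61, 53
n = p * q                 # 3233, public modulus
e = 17                    # public exponent
d = 2753                  # private exponent: e*d ≡ 1 (mod (p-1)*(q-1))

def sign_payload(m):
    """Embedding side: produce a signature with the private key (d, n)."""
    return pow(m, d, n)

def verify_payload(m, sig):
    """Detection side: anyone with the public key (e, n) can check the
    signature, but cannot derive d in order to forge a new one."""
    return pow(sig, e, n) == m
```

A tampered payload fails verification, which is the property the description relies on: disclosing the detection (public) key does not let an attacker re-embed or strip the watermark.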
[00228] In the embodiment of this application, the embedded input information to be contained in the digital watermark may be preset by the user according to his or her personalized requirement or actively configured for the user by the system.
[00229] In a possible implementation of the embodiment of this application, the step S1320 may comprise:
displaying the image.
[00230] Certainly, in other implementations of the embodiment of this application, the step S1320 may also be implemented as follows: sending the image to the corresponding device through interaction between devices.
[00231] In a possible implementation of the embodiment of this application, the image is a login interface of a user environment;
[00232] and the operation corresponding to the input information is logging in to the user environment according to the input information.
[00233] For example, the image is a login interface of a user's electronic bank account, the input information being the name and password of the electronic bank account; after the input information is received, the user's electronic bank account is logged in such that the user can enter the user environment of the electronic bank account and in turn use the corresponding function.
[00234] Further, in a possible implementation of the embodiment of this application, the image is a screen-locking interface;
and the operation corresponding to the input information is unlocking the corresponding screen according to the input information.
[00235] Taking the image of the screen-locking interface of a cell phone shown in
Fig. 2a as an example, the input information is the unlock information corresponding to the screen-locking interface; after the input information is received, the cell phone screen is unlocked, and the user may use the corresponding function of the cell phone system in the user environment.
[00236] In a possible implementation of the embodiment of this application, prior to the execution step, the method may also comprise:
an authorization determining step for determining whether a user is an authorized user, and conducting the execution step only when the user is an authorized user.
[00237] That is to say, not all input information received by the device triggers execution of the corresponding operation; the corresponding operation is executed only when the user is an authorized user. As a specific case, the device may currently be set never to execute the corresponding operation for any received input information; in this case, all users are unauthorized.
[00238] It should be understood that in various embodiments of this application, the numbers of order in the above processes are not the execution sequence, and the execution sequence of processes shall be determined according to their functions and internal logic, which shall not impose any restriction to the implementation of the embodiment of this application.
[00239] As shown in Fig. 14, the embodiment of this application provides a second device for information interaction 1400, comprising:
a watermark embedding module 1410 used for embedding at least one digital watermark into an image related to the second device for information interaction 1400, the at least one digital watermark containing the input information corresponding to the second device for information interaction 1400;
an image providing module 1420 used for providing the image to external; an information input module 1430 used for receiving the input information provided from the external; and
an execution module 1440 used for executing the corresponding operation according to the received input information.
[00240] The device of the embodiment of this application provides the image to the external after the digital watermark is embedded, such that the external device may acquire the corresponding input information according to the image and then return the same to the device; the execution module 1440 then automatically carries out the corresponding operation once the input information provided from the external is received, without manual operation of the user, which is convenient for the user.
[00241] Corresponding to the description of the method embodiment shown in Fig.
13, the image providing module 1420 of the embodiment of this application comprises:
a display sub-module 1421 used for displaying the image.
[00242] Corresponding to the method shown in Fig. 13, the image providing module 1420 may also be, for example, an interaction interface, and the image is transferred to other devices (such as the above-mentioned first device for information interaction) by interaction.
[00243] In a possible implementation of the embodiment of this application, the image may be a login interface of a user environment; and
the execution module 1440 is used for logging in to the user environment according to the input information.
[00244] In a possible implementation of the embodiment of this application, the image may be a screen-locking interface; and
the execution module 1440 is used for unlocking the corresponding screen according to the input information.
[00245] In a possible implementation of the embodiment of this application, the device 1400 may also comprise:
an authorization determination module 1450 used for determining whether a user is an authorized user, and triggering the corresponding operation by the execution module only when the user is an authorized user.
[00246] For the realization of the functions of the above-mentioned modules, see the corresponding description of the method embodiment shown in Fig. 13, and the details are not described here again.
[00247] As shown in Fig. 15, the embodiment of this application also provides an electronic terminal 1500 comprising the above-mentioned device for information interaction 1510.
[00248] In a possible implementation of the embodiment of this application, the electronic terminal 1500 is an electronic device such as a cell phone, a tablet computer, a computer, an electronic entrance guard, or an on-board electronic device.
[00249] Fig. 16 is the structural diagram of still another second device for information interaction 1600 provided by the embodiment of this application; and the specific embodiments of this application have no restriction on the specific realization of the second device for information interaction 1600. As shown in Fig. 16, the second device for information interaction 1600 may comprise:
a processor 1610, a communications interface 1620, a memory 1630, and a communication bus 1640. In the device:
the processor 1610, the communications interface 1620, and the memory 1630 communicate with each other via the communication bus 1640.
[00250] The communications interface 1620 is used for communicating with network elements such as a client.
[00251] The processor 1610 is used for executing a program 1632, specifically executing the relevant steps of the method embodiment shown in Fig. 13.
[00252] Specifically, the program 1632 may comprise program codes which comprise computer operating instructions.
[00253] The processor 1610 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiment of this application.
[00254] The memory 1630 is used for storing the program 1632. The memory 1630 may contain a high-speed RAM memory and may also comprise a non-volatile memory, such as at least one disk storage. Specifically, the program 1632 may be used to make the second device for information interaction 1600 perform the following steps:
a watermark embedding step: embedding at least one digital watermark into an image related to a device, the digital watermark containing the at least one piece of input information corresponding to the device;
an image providing step: providing the image to the external; and
an information input step: receiving the at least one piece of input information provided from the external and executing the operation corresponding to the input information.
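The embedding side of these steps can likewise be sketched in miniature: a toy least-significant-bit embedder places the input information into the image before it is provided to the external, and a guard function stands in for the execution step. The scheme and names are illustrative assumptions, not the actual watermarking method of this application.

```python
import numpy as np

def embed_lsb_watermark(pixels, text):
    """Hide an ASCII string in the least-significant bits of a flat
    uint8 pixel array, one bit per pixel (toy scheme for illustration)."""
    bits = [int(b) for ch in text for b in format(ord(ch), "08b")]
    out = pixels.copy()
    out[: len(bits)] = (out[: len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    return out

def execute_if_match(received, expected, operation):
    """Execution step: run the operation only when the returned input
    information matches what was embedded."""
    if received == expected:
        operation()
        return True
    return False
```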
[00255] For the specific realization of each step of the program 1632, see the corresponding step in the above-mentioned embodiment and the corresponding description in each section, and the details are not described here. It may be clearly understood by persons skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing device and module, reference may be made to the corresponding process in the above-mentioned method embodiment, and the details are not described here again.
[00256] Furthermore, a computer readable medium is also provided, comprising computer readable instructions which implement the following operations when executed: the operations of executing the steps S1310, S1320, S1330 and S1340 of the method in the above-mentioned embodiment.
[00257] Fig. 17 is an application example diagram of a first and a second device for information interaction of an embodiment of this application. The embodiment comprises the electronic device recorded in the embodiment shown in Fig. 15, namely the cell phone device 1710, and the wearable device recorded in the embodiment shown in Fig. 12, namely the smart glasses 1720.
[00258] In the embodiment of this application, the smart glasses 1720 comprise the first device for information interaction described in the embodiments shown in Figs. 5 to 11; the function of the image acquisition module (mainly the image collection sub-module) of the first device for information interaction is achieved by the camera 1721 on the smart glasses 1720; and the information acquisition module (not shown in Fig. 17) and the information providing module (not shown in Fig. 17) of the device for information interaction may be integrated into the original processing module of the smart glasses 1720 or arranged on the frame (for example, on the legs of the glasses, or as a part of the frame) of the smart glasses 1720, for realizing their functions.
[00259] In the embodiment of this application, the cell phone device 1710 comprises the second device for information interaction shown in Fig. 14. The function of the display sub-module of the second device for information interaction is achieved by the display module of the cell phone device 1710; and the watermark embedding module, the information input module, and the execution module may be integrated in the existing processing module and communication module of the cell phone device 1710 or arranged in the cell phone device 1710 as a separate module, for realizing their functions. In the embodiment of this application, the image is the screen-locking interface 1711 (for example, the image shown in Fig. 2a) of the cell phone device 1710, and the input information is the corresponding unlock information.
[00260] In this embodiment, the watermark embedding module embeds the digital watermark containing the unlock information into the screen-locking interface 1711 of the cell phone device 1710 in advance, and when a user needs to use the cell phone device 1710, the screen-locking interface is displayed by the display module of the cell phone device 1710 upon a specific operation (for example, pressing the power button of the cell phone device 1710). Generally, the user will then look at the display screen of the cell phone device 1710, so that the camera 1721 of the smart glasses 1720 can acquire the image displayed on the screen-locking interface 1711; the information acquisition module of the first device for information interaction automatically acquires the unlock information according to the image, and the information providing module (for example, a wireless communication interface between devices) then sends the unlock information to the cell phone device 1710. After the unlock information is received by the cell phone device 1710, the corresponding unlock operation is carried out by the execution module, such that the cell phone device 1710 is released from the locked status without any other action, and the user enters the user environment of the cell phone system.
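The interaction just described can be condensed into a toy end-to-end simulation: the phone embeds an unlock token in its lock-screen image, the glasses "photograph" the image, recover the token, and return it, and the phone unlocks on a match. The classes and the dictionary-based "watermark" are invented stand-ins for the modules described above, not the devices' real data formats.

```python
class Phone:
    """Stands in for the cell phone device 1710."""
    def __init__(self, token):
        self._token = token
        self.locked = True
        # Toy "watermark": the token travels inside the lock-screen image.
        self.lock_screen = {"background": "clock.png", "watermark": token}

    def receive_input_information(self, info):
        # Execution module: unlock only on a matching token.
        if info == self._token:
            self.locked = False

class Glasses:
    """Stands in for the smart glasses 1720."""
    def glance_at(self, phone):
        image = phone.lock_screen               # camera acquires the image
        token = image["watermark"]              # information acquisition module
        phone.receive_input_information(token)  # information providing module
```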
[00261] It can be seen from the above description that the device and method for information interaction of the embodiments of this application make the corresponding operations natural and convenient for the user (the cell phone is automatically unlocked merely by the user glancing at its screen-locking interface), providing a better user experience.
[00262] Common persons skilled in the art should appreciate that, in combination with the examples described in the embodiments here, units and method steps can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether the functions are executed by hardware or software depends on the particular applications and design constraint conditions of technical solutions. Professional persons skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.
[00263] When being implemented in the form of a software functional unit and sold or used as a separate product, the functions may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solutions of this application which contributes to the present invention over the prior art may be embodied in a form of a computer software product which is stored in a readable storage medium and comprises various instructions for causing a computer apparatus (which may be a personal computer, a server, a network apparatus, or the like) to execute all or some of the steps in the method in the individual embodiments of the present invention. The aforementioned storage medium comprises any medium that may store program codes, such as a USB-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
[00264] The foregoing implementations are only used for illustrating this application, rather than limiting it, and various modifications and improvements may be made by those skilled in the art within the spirit and scope of this application; accordingly, all equivalent technical solutions fall within the scope of this application, and the scope of patent protection of this application shall be defined by the Claims.

Claims

What is claimed is:
1. A method, comprising:
acquiring, by a system comprising a processor, an image related to a device, the image comprising at least one digital watermark;
acquiring at least one piece of input information corresponding to the device and included in the at least one digital watermark; and
initiating providing the at least one piece of input information to the device.
2. The method of Claim 1, wherein the image is a login interface of a user environment displayed by the device.
3. The method of Claim 2, wherein the image is a screen-locking interface displayed by the device.
4. The method of Claim 1, wherein the acquiring the image related to the device comprises:
acquiring the image by photographing.
5. The method of Claim 1, wherein the acquiring an image related to the device comprises:
acquiring the image by receiving the image from an external device.
6. The method of Claim 1, further comprising:
prior to the acquiring the at least one piece of input information corresponding to the device included in the at least one digital watermark, determining whether a user is an authorized user,
wherein the acquiring the at least one piece of input information corresponding to the device included in the at least one digital watermark comprises:
corresponding to a determination of the user being the authorized user, acquiring the at least one piece of input information corresponding to the device included in the at least one digital watermark.
7. The method of Claim 1, further comprising:
prior to the providing the at least one piece of input information to the device, determining whether a user is an authorized user, wherein the providing the at least one piece of input information to the device comprises:
corresponding to a determination of the user being the authorized user, initiating providing the at least one piece of input information to the device.
8. The method of Claim 6 or 7, further comprising:
determining, by the system, whether the user is the authorized user.
9. The method of Claim 6 or 7, further comprising:
determining whether the user is the authorized user based on an authorization determination performed by a remote device.
10. The method of Claim 1, wherein the acquiring the at least one piece of input information corresponding to the device included in the at least one digital watermark comprises:
extracting the at least one piece of input information included in the at least one digital watermark.
11. The method of Claim 1, wherein the acquiring the at least one piece of input information corresponding to the device included in the at least one digital watermark comprises:
sending the image to an external device; and
receiving the at least one piece of input information included in the at least one digital watermark from the external device.
12. The method of Claim 2, wherein the at least one piece of input information is login information about the user environment.
13. The method of Claim 3, wherein the at least one piece of input information is unlock information about the screen-locking interface.
14. The method of Claim 1, further comprising:
after the acquiring the at least one piece of input information corresponding to the device included in the at least one digital watermark, projecting the at least one piece of input information to an eye fundus of a user.
15. The method of Claim 14, wherein the projecting the at least one piece of input information to the eye fundus comprises:
projecting the at least one piece of input information to the eye fundus after the at least one piece of input information has been aligned, at the eye fundus, with another image determined to have been seen by the user.
16. A method, comprising:
embedding at least one digital watermark into an image related to a device, the at least one digital watermark comprising at least one piece of input information corresponding to the device;
providing the image to an external device;
receiving the at least one piece of input information from the external device; and executing an operation corresponding to the at least one piece of input information.
17. The method of Claim 16, wherein the providing the image to the external device comprises:
displaying the image.
18. The method of Claim 17, wherein the image is a login interface of a user environment;
and the operation corresponding to the at least one piece of input information is logging in to the user environment according to the at least one piece of input information.
19. The method of Claim 18, wherein the image is a screen-locking interface, and the operation corresponding to the at least one piece of input information is unlocking the corresponding screen according to the at least one piece of input information.
20. The method of Claim 16, further comprising: prior to the executing the operation corresponding to the at least one piece of input information, determining whether a user is an authorized user, wherein the executing the operation corresponding to the at least one piece of input information comprises:
corresponding to a determination that the user is the authorized user, executing the operation corresponding to the at least one piece of input information.
21. A device, comprising:
a memory that stores executable modules; and
a processor, coupled to the memory, that executes the executable modules to perform operations of the device, the executable modules comprising:
an image acquisition module configured to acquire an image related to a device, the image comprising at least one digital watermark;
an information acquisition module configured to acquire at least one piece of input information corresponding to the device included in the at least one digital watermark; and an information providing module configured to send the at least one piece of input information to the device.
22. The device of Claim 21, wherein the image acquisition module comprises:
an image collection sub-module configured to acquire the image by photographing.
23. The device of Claim 21, wherein the image acquisition module comprises:
a first communication sub-module configured to acquire the image by receiving the image from an external device.
24. The device of Claim 21, wherein the executable modules further comprise: an authorization determination module configured to determine whether a user is an authorized user,
wherein the information acquisition module is further configured to, corresponding to a determination of the user being the authorized user, acquire the at least one piece of input information corresponding to the device included in the at least one digital watermark.
25. The device of Claim 21, wherein the executable modules further comprise: an authorization determination module configured to determine whether a user is an authorized user,
wherein the information providing module is further configured to, corresponding to a determination of the user being the authorized user, provide the at least one piece of input information to the device.
26. The device of Claim 24 or 25, wherein the authorization determination module comprises:
a second communication sub-module configured to:
send the corresponding information about the user to an external device; and receive a result of whether the user is the authorized user from the external device.
27. The device of Claim 21, wherein the information acquisition module comprises: an information extraction sub-module configured to extract the at least one piece of input information from the image.
28. The device of Claim 21, wherein the information acquisition module comprises: a third communication sub-module configured to:
send the image to an external device; and
receive the at least one piece of input information included in the at least one digital watermark from the external device.
29. The device of Claim 21, wherein the executable modules further comprise:
a projection module configured to project the at least one piece of input information to an eye fundus of a user.
30. The device of Claim 29, wherein the projection module comprises:
an alignment and adjustment module configured to align the at least one piece of input information with another image seen by the user at the eye fundus.
31. A wearable device, wherein the wearable device comprises the device for information interaction according to Claim 20.
32. The device of Claim 21, wherein the device is included in a wearable device that is a pair of glasses.
33. A device, comprising:
a processor that executes executable modules to perform operations of the device, the executable modules comprising:
a watermark embedding module configured to embed at least one digital watermark into an image related to the device for information interaction, wherein the at least one digital watermark comprises at least one piece of input information corresponding to the device for the information interaction;
an image providing module configured to provide the image to an external device;
an information input module configured to receive the at least one piece of input information provided from the external device; and
an execution module configured to execute a corresponding operation according to the at least one piece of input information received by the information input module.
34. The device of Claim 33, wherein the image providing module comprises:
a display sub-module configured to display the image.
35. The device of Claim 34, wherein the image is a login interface of a user environment and the execution module is configured to log in to the user environment according to the at least one piece of input information.
36. The device of Claim 35, wherein the image is a screen-locking interface, and the execution module is configured to unlock a corresponding screen according to the at least one piece of input information.
37. The device of Claim 33, wherein the executable modules further comprise:
an authorization determination module configured to determine whether a user is an authorized user,
wherein the execution module is further configured to, corresponding to a determination of the user being the authorized user, execute the corresponding operation according to the at least one piece of input information received by the information input module.
38. The device of Claim 33, wherein the device is an electronic terminal.
39. A computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
acquiring an image related to a device, the image comprising a digital watermark;
acquiring input information corresponding to the device included in the digital watermark; and
facilitating providing the input information.
40. A device for information interaction, characterized by comprising a processing device and a memory, the memory storing executable instructions, and the processing device being connected with the memory through a communication bus, wherein, when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, and the device for information interaction executes operations comprising:
acquiring an image related to a device, the image comprising at least one digital watermark;
acquiring at least one piece of input information corresponding to the device included in the at least one digital watermark; and
providing the at least one piece of input information.
41. A computer readable storage device, comprising at least one executable instruction, which, in response to execution, causes a system comprising a processor to perform operations, comprising:
embedding a digital watermark into an image related to a device, the digital watermark comprising input information corresponding to the device;
providing the image to an external device;
receiving the input information provided from the external device; and
executing an operation corresponding to the input information.
42. A device for information interaction, comprising a processing device and a memory, the memory storing executable instructions, the processing device being connected with the memory through a communication bus, wherein, when the device for information interaction operates, the processing device executes the executable instructions stored in the memory, and the device for information interaction executes operations comprising:
embedding at least one digital watermark into an image related to a device, the at least one digital watermark comprising at least one piece of input information corresponding to the device;
providing the image to an external interface;
acquiring the at least one piece of input information provided from the external interface; and
executing an operation corresponding to the at least one piece of input information.
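The watermark-based exchange recited in Claims 39–42 can be sketched in a few lines of code. The sketch below is purely illustrative and not taken from the patent: a simple least-significant-bit (LSB) scheme stands in for whatever watermarking algorithm an implementation would actually use, the image is modeled as a flat list of 8-bit pixel values, and the function names (`embed_watermark`, `extract_watermark`) and sample unlock code are assumptions.

```python
# Illustrative sketch of the claimed flow: one device embeds input
# information into the image it displays as a digital watermark; the
# capturing device extracts that information and provides it back.

def embed_watermark(pixels, info):
    """Embed `info` (str) into the LSBs of `pixels`, preceded by a 16-bit length."""
    data = info.encode("utf-8")
    # 16-bit big-endian length header, then each payload byte MSB-first.
    bits = [(len(data) >> (15 - i)) & 1 for i in range(16)]
    for byte in data:
        bits.extend((byte >> (7 - i)) & 1 for i in range(8))
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract_watermark(pixels):
    """Recover the embedded string from the pixel LSBs."""
    length = 0
    for i in range(16):
        length = (length << 1) | (pixels[i] & 1)
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[16 + b * 8 + i] & 1)
        data.append(byte)
    return data.decode("utf-8")

# E.g. a screen-locking interface embeds its unlock code; the wearable
# device photographs the screen, extracts the code, and provides it back.
image = [128] * 1024                      # stand-in for captured pixel data
shown = embed_watermark(image, "unlock:4721")
recovered = extract_watermark(shown)
print(recovered)                          # -> unlock:4721
```

One device would run `embed_watermark` on the frame it displays (the login or screen-locking interface of Claims 35–36); the capturing device would run `extract_watermark` on its photograph of that frame and send the recovered input information back, upon which the execution module performs the corresponding operation.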
PCT/CN2014/081494 2013-11-15 2014-07-02 Information interaction WO2015070623A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310573092.8 2013-11-15
CN201310573092.8A CN103631503B (en) 2013-11-15 2013-11-15 Information interacting method and information interactive device

Publications (1)

Publication Number Publication Date
WO2015070623A1 true WO2015070623A1 (en) 2015-05-21

Family

ID=50212630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/081494 WO2015070623A1 (en) 2013-11-15 2014-07-02 Information interaction

Country Status (2)

Country Link
CN (1) CN103631503B (en)
WO (1) WO2015070623A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677631A (en) * 2013-11-15 2014-03-26 北京智谷睿拓技术服务有限公司 Information interaction method and information interaction device
CN103631503B (en) * 2013-11-15 2017-12-22 北京智谷睿拓技术服务有限公司 Information interacting method and information interactive device
KR20170011617A (en) * 2015-07-23 2017-02-02 엘지전자 주식회사 Mobile terminal and control method for the mobile terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970307A (en) * 2012-12-21 2013-03-13 网秦无限(北京)科技有限公司 Password safety system and password safety method
CN103116717A (en) * 2013-01-25 2013-05-22 东莞宇龙通信科技有限公司 User login method and system
CN103616998A (en) * 2013-11-15 2014-03-05 北京智谷睿拓技术服务有限公司 User information acquiring method and user information acquiring device
CN103631503A (en) * 2013-11-15 2014-03-12 北京智谷睿拓技术服务有限公司 Information interaction method and information interaction device
CN103678971A (en) * 2013-11-15 2014-03-26 北京智谷睿拓技术服务有限公司 User information extracting method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6978376B2 (en) * 2000-12-15 2005-12-20 Authentica, Inc. Information security architecture for encrypting documents for remote access while maintaining access control
CN101449265A (en) * 2006-03-15 2009-06-03 杰里·M·惠特克 Mobile global virtual browser with heads-up display for browsing and interacting with the World Wide Web
JP5158007B2 (en) * 2009-04-28 2013-03-06 ソニー株式会社 Information processing apparatus, information processing method, and program
CN103368617A (en) * 2013-06-28 2013-10-23 东莞宇龙通信科技有限公司 Intelligent equipment interactive system and intelligent equipment interactive method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3445077A1 (en) * 2017-08-16 2019-02-20 Beijing Xiaomi Mobile Software Co., Ltd. Unlocking mobile terminal in augmented reality
US11051170B2 (en) 2017-08-16 2021-06-29 Beijing Xiaomi Mobile Software Co., Ltd. Unlocking mobile terminal in augmented reality
CN113116358A (en) * 2019-12-30 2021-07-16 华为技术有限公司 Display method and device of electrocardiogram, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN103631503B (en) 2017-12-22
CN103631503A (en) 2014-03-12

Similar Documents

Publication Publication Date Title
EP0922271B1 (en) Personal identification
WO2018040307A1 (en) Vivo detection method and device based on infrared visible binocular image
US20170323167A1 (en) Systems And Methods Of Biometric Analysis With A Specularity Characteristic
WO2015070623A1 (en) Information interaction
CN106682540A (en) Intelligent peep-proof method and device
CN106503680B (en) Guidance for mobile terminal iris recognition indicates man-machine interface system and method
KR101645084B1 (en) Hand attached -type wearable device for iris recognition in outdoors and/or indoors
CN103678971B (en) User information extracting method and user information extraction element
CN109726694B (en) Iris image acquisition method and device
JP2008241822A (en) Image display device
JP2007135149A (en) Mobile portable terminal
KR101231068B1 (en) An apparatus for collecting biometrics and a method thereof
CN103616998B (en) User information acquiring method and user profile acquisition device
CN108140114A (en) Iris recognition
WO2017113286A1 (en) Authentication method and apparatus
KR20180134280A (en) Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
KR20150139183A (en) Wrist-type wearable device for vein recognition
US20160155000A1 (en) Anti-counterfeiting for determination of authenticity
KR20090132839A (en) System and method for issuing photo-id card
CN108135468A (en) Use the ophthalmologic operation of light field microscope inspection
WO2015070624A1 (en) Information interaction
CN103761653B (en) Method for anti-counterfeit and false proof device
JP2001215109A (en) Iris image input apparatus
TWM463878U (en) Living body identification system and identity authentication device
JP2004233425A (en) Image display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14862935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14862935

Country of ref document: EP

Kind code of ref document: A1