EP3009918A1 - Method for controlling the display of text for aiding reading on a display device, and apparatus for carrying out the method, and computer-readable storage medium - Google Patents

Method for controlling the display of text for aiding reading on a display device, and apparatus for carrying out the method, and computer-readable storage medium Download PDF

Info

Publication number
EP3009918A1
Authority
EP
European Patent Office
Prior art keywords
user
text
archived
display screen
reading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP14306616.5A
Other languages
English (en)
French (fr)
Inventor
Volker Schäferjohann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to EP14306616.5A priority Critical patent/EP3009918A1/de
Priority to US15/518,476 priority patent/US10452136B2/en
Priority to PCT/EP2015/072849 priority patent/WO2016058847A1/en
Publication of EP3009918A1 publication Critical patent/EP3009918A1/de
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Definitions

  • The invention relates to a method for controlling the display of text on a display device and to an apparatus adapted for carrying out the method according to the invention.
  • The invention also concerns a computer-readable storage medium for storing software which is used to implement the method according to the invention.
  • The invention may also be used for controlling a computer mouse with the help of gaze direction control.
  • This invention aims to support the reader in the task of reading text on a display device. Particularly when reading on large display screens, such as 21", 24", 27" and bigger, the disclosed invention can improve the reading experience considerably.
  • In each case, an area in the text to which the gaze is directed is highlighted.
  • The gaze direction is determined in the reading device, and the highlighted region is controlled by gaze direction control.
  • A skilled reader captures the focused words within about 1/3 s; usually only one to three words are perceived during an eye fixation. If this area is highlighted, the eye gains better orientation in the text, so the text can be read more quickly or at least with less effort. With this technique the highlighted area moves along with the gaze. The reader's eye and brain are thus relieved of the complicated task of line tracking and can concentrate better on word recognition.
  • The text section in which the focus progresses can be highlighted for the reader by a frame, underlining, character color, background color, a different font, bigger line spacing, a color gradient, a text effect such as a 3D effect, or the like, in any combination.
  • The highlighting of text during reading may take the form of starting to highlight when the user's gaze is directed to the beginning of a sentence. As the user's gaze progresses, more and more sections of the sentence are highlighted successively, until the whole sentence is highlighted. This has the advantage that the user's gaze may quickly jump back to the beginning of the sentence, without needing to search for it, if the user feels that he has not understood the sentence and needs to read it once more.
  • In a variant, the highlighting region precedes the actual user's gaze by one or more screen segments. This supports the reader because the gaze can follow the advancing highlighting region, which provides even better orientation in the text.
  • A known eye-tracking technique may be used; such a technique is described, e.g., in US 6,553,281 B1. Although these eye-tracking techniques deliver accurate results, they work with IR light transmitted to the user's eyes and use IR cameras for detecting the reflected IR light, and they are therefore quite expensive to implement in consumer electronic devices such as personal computers, notebook computers, tablet computers, smartphones, e-book readers or TV sets. A further disadvantage may be that customers do not like being exposed to the additional IR radiation directed at the eyes.
  • Instead, it is proposed to use a normal camera, e.g. the webcam with which the above-mentioned devices are equipped anyway, and to determine the gaze direction with the help of image processing techniques.
  • By head position processing it is determined to which segment of the screen the user's gaze is directed.
  • One technique is based on contour extraction in the captured image and a step of comparing the extracted contour with archived contours which have been extracted and archived during said training phase; the best matching archived contour image then determines to which segment the user's gaze is directed.
  • The second technique is based on "T"-symbol extraction, which reduces the amount of data to be handled even further.
  • The so-called "T" symbol is formed by the line on which the eyes lie, in combination with a line along the bridge of the nose.
  • The extraction of the "T" symbol also helps to eliminate disturbing effects from other sections of the viewer's face, e.g. when he is speaking or making other mouth movements such as chewing gum.
  • Correction coordinates are determined as a result of comparing the extracted eye position with the archived eye positions; these correction coordinates specify by how much the gaze direction determined from head position detection is to be corrected, resulting in a more accurate gaze direction determination.
  • The technique is thus a combination of a coarse determination of the gaze direction by head position detection and a correction of that result by eye position detection, for a refinement of the gaze direction determination.
  • An interpolation technique is also proposed. It is carried out between the best matching archived eye position and the straight-looking eye position from the additional training phase, in order to determine the correction coordinates for the currently extracted eye position. Using this technique reduces the burden on the user in the additional training phase.
  • The vertical viewing direction is accurately detected. Starting from the identified vertical viewing direction, the corresponding line of text to which the gaze is directed is highlighted for the reader.
  • As a text section to be highlighted, simply the sentence in which the focused area of the eye is positioned may be highlighted.
  • Alternatively, a predetermined number of lines of text or a whole paragraph is highlighted according to the determined gaze direction.
  • In summary, gaze-controlled text highlighting is proposed. When the user reads text on screen, his gaze is directed to the text passage he is actually reading. According to the invention, this gaze direction is detected by the computer and the corresponding focused text passage is highlighted.
  • Fig. 1 shows the principal arrangement when a user is reading text on a display device in front of him, e.g. a computer monitor. The gaze is directed to a focus point 18 on screen 20.
  • The webcam 10 observes the user while reading and captures a picture.
  • The connected computer determines the eye focus point on screen and highlights the corresponding text passage (this may be a single word or a plurality of words) to support the user in reading. Because the text passage being read is highlighted, the reader can easily find the place where to continue reading in cases where his gaze temporarily loses the focus point.
  • The reader is thus relieved of the operation of line tracking and can concentrate better on word recognition.
  • Fig. 2 shows an arrangement with a plurality of cameras 10, 11, 12, 13 positioned in the center of each side of the housing of display 21. All cameras observe the reader and capture pictures during text reading. The reader is therefore observed from four perspectives. This helps to increase the accuracy of gaze direction detection: the position of the points at which the gaze strikes the screen can be calculated more accurately.
  • The calculation method is known in principle and was, for example, already used in the goal-line technology ("Torlinientechnik") at the 2014 FIFA World Cup.
  • Fig. 3 shows another arrangement where a stereo camera 14 is positioned in the center of the upper border of the display device 21.
  • The stereo camera is able to take a 3D image. Based in particular on a depth map, the tilt of the head in the horizontal and vertical directions can then be calculated accurately. If the eyes look straight ahead, the eye focus point can be estimated with high accuracy.
  • Fig. 4 shows a first embodiment of a segmentation of the display screen 20.
  • The screen is divided into rectangular segments S1 to S9.
  • The screen has a 16:9 aspect ratio.
  • A cross is depicted in the center of segment S1.
  • This segmentation will be shown on screen during a training phase for gaze direction detection.
  • The segmentation into 9 equally sized segments illustrates the principal solution according to the invention.
  • This segmentation, however, may be regarded as insufficient for the purpose of aiding the user in text reading, particularly if the user is not reading across the full width of the screen.
  • The principle, however, is correct and can also be applied when a finer segmentation of the screen is used.
  • An example of a finer segmentation of the display screen is shown in Fig. 5. A sketch of mapping a gaze point to such a segment grid follows below.
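As an illustration of the segmentation just described, the following is a minimal sketch, assuming Python, of how an estimated gaze point could be mapped to one of the rectangular segments; the function name and the grid parameters are illustrative assumptions, not taken from the patent.

```python
def gaze_point_to_segment(x, y, screen_w, screen_h, cols, rows):
    """Map a gaze point (x, y) in pixels to a 1-based segment index (row-wise: S1, S2, ...)."""
    col = min(int(x * cols / screen_w), cols - 1)
    row = min(int(y * rows / screen_h), rows - 1)
    return row * cols + col + 1

# Example: a 1920x1080 screen divided 3x3 as in Fig. 4
print(gaze_point_to_segment(1900, 1000, 1920, 1080, cols=3, rows=3))  # -> 9 (segment S9)
```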
  • Fig. 6 shows a block diagram of the computer 5.
  • The computer monitor 21 is connected to the computer 5.
  • Also connected to the computer are the keyboard 16 and the mouse 15.
  • The image processing for gaze direction determination is preferably performed in the graphics processing unit GPU.
  • The archived images are stored on the hard disk drive HDD. Instead of a hard disk drive, a solid state drive SSD can also be used.
  • IF is the interface section and MEM is the computer's memory.
  • Reference number 17 denotes a headset or microphone allowing voice control and a specialized training method which will be explained later on.
  • The computer depicted in Fig. 6 is an example, and it is expressly mentioned that the invention may be used in similar devices such as personal computers, notebook computers, tablet computers, smartphones, e-book readers or TV sets.
  • Such devices are equipped with display screens in the form of LCD display panels, LED display panels, OLED display panels, E-ink display panels, plasma display panels and the like.
  • Fig. 7 illustrates the pictures captured during the training phase while the user stares at the cross depicted in the center of each segment S1 to S9 of Fig. 4. It can be seen that the camera 10 sees the user's face with different tilts, corresponding to the head movement required for looking straight at the cross in each segment. Ideally the user does not move the eyes when looking at the cross but turns the head in the right direction to look at the cross straight on. Since the gaze direction determination is based on image comparison, it is proposed to apply image processing before archiving a picture during the training phase. Any image processing that reduces the amount of data in the picture while still delivering good comparison results is suitable. As a first example of such data-reducing image processing, contour processing is mentioned.
  • Fig. 8 shows the extracted contours of the user's face for the images of Fig. 7.
  • One such technique for contour extraction is, for example, described in US 7,176,939 B2.
  • In this technique, the regions of the image in which high gradients occur are determined by means of a gradient filter; there the contours stand out very prominently.
  • Other image processing methods for emphasizing the contours may also be considered.
  • For example, the program MS Word offers an image processing function with which the contours can be emphasized.
  • Many image editing programs employ techniques to emphasize contours. Related techniques are referred to as edge detection or edge processing and are equally effective for the problem under consideration. If only the contours of the image are used for comparison, further disturbing influences on the comparison are eliminated.
  • The contour images can be divided into two parts, a left and a right half. This helps to eliminate ambiguous comparison results, as will be explained below; a sketch of the extraction and split follows.
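The following is a minimal sketch of the contour extraction and left/right split, assuming Python with OpenCV. Canny edge detection is used here as one possible data-reducing contour processing; the patent does not prescribe a specific algorithm.

```python
import cv2

def extract_contour_halves(image_bgr):
    """Reduce a captured camera frame to a binary contour image and split it into halves."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # keep only prominent contours
    w = edges.shape[1]
    return edges[:, : w // 2], edges[:, w // 2 :]  # left half, right half

# During training the two halves are archived per segment; in the working
# phase they are compared against the archived halves (see below).
```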
  • The training phase is now described in more detail.
  • The user is prompted in the training phase to direct his attention to a particular position 18 displayed on screen.
  • This is illustrated in Fig. 4.
  • The user shall look straight at the displayed cross in the center of segment S1.
  • The camera 10 takes a picture, which is saved. This is done for each of the areas S1 to S9 in the case of Fig. 4, or S1 to S180 in the case of Fig. 5.
  • An example of the images captured during training is shown in Fig. 7.
  • The same pictures are shown in Fig. 8 after extraction of the contours.
  • The operation of the computer in the training phase, with a corresponding computer program, is illustrated in Fig. 9.
  • In step 91 an index i is set to the value 1.
  • In step 92 a cross is displayed on screen 20 in the center of the segment with index i. The user looks at the cross, and the camera takes a picture in step 93. Contour processing of the captured image follows in step 94. The resulting image with extracted contours contains less data and is archived in step 95.
  • In step 96 the index i is incremented.
  • In step 97 it is checked whether the index i now exceeds the value 9. If not, the program branches back to step 92 and the next segment is processed. If yes, the training phase is ended in step 98. The loop is sketched below.
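A minimal sketch of this training loop (steps 91 to 98 of Fig. 9), assuming Python; show_cross(), capture_image() and the archive structure are hypothetical helpers, and extract_contour_halves() is the sketch given above.

```python
NUM_SEGMENTS = 9  # S1..S9 for the segmentation of Fig. 4

def training_phase(archive):
    i = 1                                            # step 91: index i = 1
    while i <= NUM_SEGMENTS:                         # step 97: check the bound
        show_cross(segment=i)                        # step 92: cross in center of segment i
        image = capture_image()                      # step 93: camera 10 takes a picture
        archive[i] = extract_contour_halves(image)   # steps 94/95: contour processing, archive
        i += 1                                       # step 96: increment index
    # step 98: training phase ended
```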
  • In the working phase, a currently captured image is compared with the archived images from the training phase.
  • The training image with the closest match then specifies the segment of the screen to which the gaze is directed.
  • The operation of the computer 5 in the working phase, with a corresponding computer program, is illustrated in Fig. 10.
  • The program starts at step 100.
  • In step 101 the camera 10 takes a picture. Contour processing follows in step 102.
  • The resulting image with extracted contours is divided into a left and a right half, as depicted in Fig. 8.
  • In step 104 the computer 5 compares the left half of the contour image with the left halves of the archived contour images from the training phase.
  • The best matching image from the training phase is then selected.
  • The simplest comparison operation considers the sum of the pixel values of the two images to be compared.
  • Another form is to take the difference of the pixel values for each pair of corresponding pixels and to sum the squares of these differences. The image with the smallest sum of squared differences is the best match and is selected, as sketched below.
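A minimal sketch of this comparison, assuming Python with NumPy: the sum of squared differences (SSD) is computed against every archived image, and the archived image with the smallest SSD wins.

```python
import numpy as np

def best_match(current, archived):
    """Return the segment index whose archived image has the smallest SSD to `current`."""
    def ssd(a, b):
        d = a.astype(np.int64) - b.astype(np.int64)  # avoid uint8 overflow
        return int((d * d).sum())
    return min(archived, key=lambda idx: ssd(current, archived[idx]))
```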
  • In steps 106 and 107 the same operations are performed for the right half of the captured image.
  • In step 108 it is checked whether the picture index determined as the best match for the left half equals the picture index determined as the best match for the right half.
  • Splitting the contour images into left and right halves has the advantage that the wrong side of the screen is not mistakenly selected in cases where the pixel differences over a whole picture would sum to a similar value regardless of whether the reader is looking at the left or the right side of the screen.
  • If the two determined picture indices match in query 108, the text in the corresponding segment of the screen is highlighted in step 109.
  • The segments are made small enough that only 1 to 5 words in a text line are highlighted.
  • An embodiment that highlights more text in a segment during reading is also contemplated by the invention. The decisive point is that the highlighted region follows the user's gaze during reading.
  • The process then ends in step 110. If the two picture indices do not match in query 108, the highlighting step 109 is skipped and the process ends directly in step 110. The whole working phase is sketched below.
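A minimal sketch of this working phase (Fig. 10), assuming Python and reusing the hypothetical helpers from the previous sketches; highlight_segment() stands in for the actual highlighting step.

```python
def working_phase(archive):
    image = capture_image()                                               # step 101
    left, right = extract_contour_halves(image)                           # steps 102/103
    left_idx = best_match(left, {i: h[0] for i, h in archive.items()})    # steps 104/105
    right_idx = best_match(right, {i: h[1] for i, h in archive.items()})  # steps 106/107
    if left_idx == right_idx:                                             # query 108
        highlight_segment(left_idx)                                       # step 109
    # step 110: end (or repeat, since reading is continuous)
```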
  • The picture contour extraction described above corresponds to the image processing used for gaze direction determination.
  • Alternatively, an evaluation of the eye position, the nose position, the positions of prominent points of spectacles and/or other distinctive points of the head or face of the user may be performed.
  • Instead of the webcams with wide-angle lenses often used today, it is more advantageous to use a webcam with a telephoto or zoom lens.
  • Such lenses capture the user's face at a larger size and with less perspective distortion, so that head movements are easier to detect.
  • To reduce the training effort, an interpolation technique may be used. The training would then not be performed for all screen segments S1 to S180 but only for a reduced number of segments distributed over the screen. For the intermediate segments positioned between two trained segments, the head positions would be calculated by interpolation and archived accordingly. This interpolation technique works particularly well in the embodiment with "T"-symbol extraction, since the position of the "T" symbol is a linear function of the head rotation in the up/down direction and in the left/right direction, starting from the screen center.
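A minimal sketch of the proposed interpolation, assuming Python: only some segments are trained, and the "T"-symbol positions of intermediate segments are interpolated linearly, exploiting the stated linear relation. One axis is shown; a real implementation would interpolate over both axes.

```python
def interpolate_positions(pos_a, pos_b, num_between):
    """Linearly interpolate (x, y) "T"-symbol positions between two trained segments."""
    out = []
    for k in range(1, num_between + 1):
        t = k / (num_between + 1)
        out.append((pos_a[0] + t * (pos_b[0] - pos_a[0]),
                    pos_a[1] + t * (pos_b[1] - pos_a[1])))
    return out

# Two intermediate segments between trained positions (100, 40) and (160, 40):
print(interpolate_positions((100, 40), (160, 40), 2))  # [(120.0, 40.0), (140.0, 40.0)]
```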
  • A problem with the embodiments explained above, in which the gaze direction is determined from head position detection, is that when reading text the user does not look straight at each text passage; usually the user also uses eye movements for focusing the text. In that case, gaze direction determination based on head position tracking alone may not be accurate enough.
  • An improved embodiment is therefore explained in the following. This solution starts with head position detection as explained above; the corresponding algorithm delivers an estimate of the viewpoint on screen. Then, by eye position detection and a corresponding translation into a corrected position of the estimated viewpoint, the final viewpoint is determined.
  • Fig. 12 shows different eye positions and how they appear in the image captured by the camera 10. Since the camera 10 captures the whole face of the reader, determining the eye position requires zooming into the image and extracting the eye region of the face.
  • The extreme eye positions, when the user looks without head movement at the upper left and right corners, the lower left and right corners, and the centers of the upper, lower, left and right borders, are depicted in Fig. 12.
  • The centers of the pupils are determined relative to the borders of the eyes, e.g. the borders of the eyelids, which can normally be extracted easily with image processing such as edge detection. A sketch of one possible pupil extraction follows below.
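A hypothetical sketch of such an eye position extraction, assuming Python with OpenCV and an already cropped grayscale eye region: the dark pupil is segmented by thresholding and its centroid is expressed relative to the borders of the eye region. The threshold value and the cropping step are assumptions.

```python
import cv2

def pupil_center_relative(eye_region_gray):
    """Return the pupil center as fractions (0..1) of the eye region, or None."""
    _, mask = cv2.threshold(eye_region_gray, 40, 255, cv2.THRESH_BINARY_INV)  # dark pupil blob
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # no dark blob found
    h, w = eye_region_gray.shape
    return (m["m10"] / m["m00"]) / w, (m["m01"] / m["m00"]) / h  # relative to eye borders
```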
  • The operation of the computer 5 in the additional training phase, with a corresponding computer program, is illustrated in Fig. 13.
  • The additional training phase starts in step 1300.
  • A cross is shown on screen in the center of segment S5 of Fig. 4.
  • An image is captured in step 1302.
  • In step 1303 a message "Look at the upper left corner without head movement!" is displayed.
  • A cross might be shown in the upper left corner, too.
  • The user looks at the upper left corner, and an image is captured in step 1304.
  • In step 1305 the eye position is extracted by image processing as mentioned above.
  • The extracted eye position is archived in step 1306.
  • In the following steps the user is prompted to look at the upper right corner, the lower left corner, the lower right corner, and the centers of the upper, lower, left and right borders. In each case an image is captured, and the eye position is extracted and archived as for the upper left corner.
  • The additional training phase is then ended.
  • The working phase starts with step 1400.
  • The user looks at the screen while reading, and an image is captured with camera 10 in step 1401.
  • By image processing (e.g. contour extraction or "T"-symbol extraction) the head position is determined in step 1402.
  • The best matching archived image is determined in step 1403.
  • In step 1404 this is translated into the screen segment to which the best matching archived image belongs.
  • The eye position is extracted from the current image in step 1405. Since the extracted eye position usually lies between the extreme positions archived in the additional training phase and the center position, the correction coordinates are determined by interpolation.
  • The correction coordinates indicate how many segments to the left or right, and how many segments up or down, the focus point of the user's gaze lies from the viewpoint determined in step 1402. They are calculated in step 1406.
  • In step 1407 the correction coordinates are applied to the screen segment determined by head position detection, and the final viewpoint is determined in this way.
  • The text is then highlighted in the finally determined screen segment in step 1408.
  • The process ends in step 1409. Since text reading is a continuous task, instead of ending at step 1409 the whole process may restart from step 1400 until a predetermined ending criterion is fulfilled. A sketch of this loop follows below.
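A minimal sketch of this working phase (Fig. 14), assuming Python; best_match_head(), extract_eye_position(), correction_from_eye_position(), highlight_segment() and ending_criterion() are hypothetical helpers, and a 20x9 grid is assumed for the S1 to S180 segmentation.

```python
def working_phase_with_correction(archive, eye_calibration, cols=20, rows=9):
    while not ending_criterion():                      # restart instead of ending at step 1409
        image = capture_image()                        # step 1401
        head_idx = best_match_head(image, archive)     # steps 1402-1404: coarse segment
        eye_pos = extract_eye_position(image)          # step 1405
        dx, dy = correction_from_eye_position(eye_pos, eye_calibration)  # step 1406
        row, col = divmod(head_idx - 1, cols)
        col = max(0, min(cols - 1, col + dx))          # step 1407: apply correction
        row = max(0, min(rows - 1, row + dy))
        highlight_segment(row * cols + col + 1)        # step 1408
```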
  • The way the text is highlighted during reading is illustrated in principle in Fig. 15.
  • The user's gaze is tracked, and the highlighted region 22 moves over the screen line by line, from left to right and from top to bottom. This makes reading very convenient for the user's eye, since the task of line tracking is done by the computer. Because the focus area is highlighted, even word recognition is eased for the user.
  • Fig. 16 shows the start of reading the second paragraph of a page. Just the first expression "Fig" is highlighted. In Fig. 17 the next text section is highlighted as the reader carries on reading. It should be noted that in this embodiment the first expression remains highlighted as well. Fig. 18 shows that the whole first sentence is highlighted when the user fixates the end of the first sentence. This way the user may quickly jump back to the beginning of the sentence if he thinks he did not fully understand it, which happens quite often when reading complicated texts. If the user's gaze jumps back to the beginning of the sentence, the highlighting of the sentence starts anew from the beginning.
  • Fig. 20 shows an intermediate state, when the user has read the first four words of the second sentence.
  • Fig. 21 shows the highlighting of the complete second sentence. In all of Figs. 16 to 21, to improve legibility, the font of the highlighted text section has been changed from the original Calibri (Body) 11 pt to Microsoft Sans Serif 11 pt.
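A minimal sketch of this progressive sentence highlighting, assuming Python; the word-index representation of sentences is an illustrative assumption.

```python
def highlighted_range(sentence_start, fixated_word, current_end):
    """Return the word range [start, end) of the current sentence to highlight.

    The highlight grows from the sentence start up to the fixated word
    (Figs. 16 to 21) and restarts when the gaze jumps back to the start.
    """
    if fixated_word <= sentence_start:                  # gaze jumped back: start anew
        return sentence_start, sentence_start + 1
    return sentence_start, max(current_end, fixated_word + 1)
```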
  • The highlighted text region can also precede the actual gaze position by one or more segments of the screen. This can be useful since it provides better orientation to the user's eye, so that the gaze may follow the preceding highlighting region.
  • When the user starts reading with an unusual head position and the best matching archived image does not correctly reflect the starting position of reading, the user is allowed to override the proposed starting position by reading aloud the number of the on-screen paragraph from which he wants to start. So, by saying the words "second paragraph", the computer starts highlighting text from the beginning of the second text paragraph on screen. As an alternative, he may say the page and line, or the column and line, where he wants to start reading. Further alternatively, the user may select the starting position with the help of the computer mouse or touchpad.
  • The invention may also be used for controlling the mouse pointer by gaze direction control. If the gaze direction is not detected accurately enough for pointing at a menu option on screen, the pointer is first set to the appropriate segment S1 to S180 of the screen image. The exact positioning of the mouse cursor is then done with the conventional mouse or touchpad. By pre-selecting the screen area using gaze control, the mouse movements still to be executed are reduced. When the focus of the eyes can be calculated accurately enough, it becomes possible to dispense with the mouse or the touchpad; only the mouse buttons then remain, but these can be provided on the keyboard. A sketch of the pre-positioning follows below.
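A minimal sketch of this pre-positioning, assuming Python; set_cursor() stands in for a platform-specific API, and the 20x9 grid again corresponds to segments S1 to S180.

```python
def preposition_cursor(segment_idx, screen_w, screen_h, cols=20, rows=9):
    """Jump the cursor to the center of the gazed-at segment; fine positioning stays manual."""
    row, col = divmod(segment_idx - 1, cols)
    x = int((col + 0.5) * screen_w / cols)
    y = int((row + 0.5) * screen_h / rows)
    set_cursor(x, y)  # hypothetical platform call
```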
  • For pausing the gaze control, a pause button is sufficient.
  • Such pause buttons are usually provided on today's keyboards, typically for controlling the DVD drive of the computer, and can be used equally for the new purpose.
  • The cursor then remains at the same position on the screen during the pause. The user can let his gaze wander freely without the mouse pointer changing its position.
  • Such a pause button may also be used for text reading with gaze determination.
  • In the specialized training method mentioned above, the user is prompted during training to read a displayed text aloud.
  • During reading, the gaze migrates from eye fixation to eye fixation, and a sequence of images is recorded. This can be done in the form of a conventional video recording; the number of images taken then corresponds to the usual rate of 25 frames/s. Since the sound is recorded at the same time via the microphone 17, the computer can determine by sound analysis the exact time at which the reader fixated specific text passages, and can then archive the associated picture or the correspondingly image-processed picture. During operation, these archived images are accessible and are compared with the image for the current gaze direction.
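A minimal sketch of this read-aloud training step, assuming Python: given word timestamps obtained from sound analysis (e.g. by speech recognition, which is assumed here), the video frame captured at that moment is archived for the word's text position.

```python
FPS = 25  # the usual video recording rate mentioned above

def archive_frames(video_frames, word_times, archive):
    """word_times: list of (text_position, time_in_seconds) pairs from sound analysis."""
    for text_pos, t in word_times:
        frame = video_frames[min(int(t * FPS), len(video_frames) - 1)]
        archive[text_pos] = frame  # optionally after contour or "T"-symbol processing
```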

EP14306616.5A 2014-10-13 2014-10-13 Method for controlling the display of text for aiding reading on a display device, and apparatus for carrying out the method, and computer-readable storage medium Withdrawn EP3009918A1 (de)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP14306616.5A EP3009918A1 (de) 2014-10-13 2014-10-13 Method for controlling the display of text for aiding reading on a display device, and apparatus for carrying out the method, and computer-readable storage medium
US15/518,476 US10452136B2 (en) 2014-10-13 2015-10-02 Method for controlling the displaying of text for aiding reading on a display device, and apparatus adapted for carrying out the method, computer program, and computer readable storage medium
PCT/EP2015/072849 WO2016058847A1 (en) 2014-10-13 2015-10-02 Method for controlling the displaying of text for aiding reading on a display device, and apparatus adapted for carrying out the method, computer program, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP14306616.5A EP3009918A1 (de) 2014-10-13 2014-10-13 Method for controlling the display of text for aiding reading on a display device, and apparatus for carrying out the method, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
EP3009918A1 true EP3009918A1 (de) 2016-04-20

Family

ID=51862236

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14306616.5A 2014-10-13 2014-10-13 Method for controlling the display of text for aiding reading on a display device, and apparatus for carrying out the method, and computer-readable storage medium Withdrawn EP3009918A1 (de)

Country Status (1)

Country Link
EP (1) EP3009918A1 (de)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6553281B1 (en) 1997-08-26 2003-04-22 Heinrich-Hertz-Institut Fuer Nachrichtentechnik Berlin Gmbh Device for determining a fixation point
GB2369673A (en) * 2000-06-09 2002-06-05 Canon Kk Image processing apparatus calibration
US7176939B2 (en) 2003-10-07 2007-02-13 Thomson Licensing Method for processing video pictures for false contours and dithering noise compensation
US20060281969A1 (en) * 2005-06-02 2006-12-14 Vimicro Corporation System and method for operation without touch by operators
US20060256083A1 (en) * 2005-11-05 2006-11-16 Outland Research Gaze-responsive interface to enhance on-screen user reading tasks
WO2010142455A2 (en) * 2009-06-12 2010-12-16 Star Nav Method for determining the position of an object in an image, for determining an attitude of a persons face and method for controlling an input device based on the detection of attitude or eye gaze
EP2302615A2 (de) * 2009-08-14 2011-03-30 LG Electronics Portable electronic device and illumination control method therefor
US20110298702A1 (en) * 2009-12-14 2011-12-08 Kotaro Sakata User interface device and input method
EP2573650A1 (de) * 2010-05-20 2013-03-27 Nec Corporation Portable information processing terminal
US20120189160A1 (en) * 2010-08-03 2012-07-26 Canon Kabushiki Kaisha Line-of-sight detection apparatus and method thereof
US20120256967A1 (en) * 2011-04-08 2012-10-11 Baldwin Leo B Gaze-based content display
US20120293528A1 (en) * 2011-05-18 2012-11-22 Larsen Eric J Method and apparatus for rendering a paper representation on an electronic display
US20130021373A1 (en) * 2011-07-22 2013-01-24 Vaught Benjamin I Automatic Text Scrolling On A Head-Mounted Display
WO2014155133A1 (en) * 2013-03-28 2014-10-02 Eye Tracking Analysts Ltd Eye tracking calibration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PHI BANG NGUYEN; JULIEN FLEUREAU; CHRISTEL CHAMARET; PHILIPPE GUILLOTEL: "Calibration-free Gaze Tracking Using Particle Filter", MULTIMEDIA AND EXPO (ICME), 2013 IEEE INTERNATIONAL CONFERENCE ON, 15 July 2013 (2013-07-15)

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20161021