US20100321482A1 - Eye/head controls for camera pointing - Google Patents
- Publication number
- US20100321482A1 (application US12/817,604)
- Authority
- US
- United States
- Prior art keywords
- eye
- camera
- video
- user
- video camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/0325—Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/57—Control of contrast or brightness
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
Definitions
- Embodiments of the present invention relate to systems and methods for controlling the orientation of a camera. More particularly, embodiments of the present invention relate to systems and methods for controlling the orientation of a camera using an eye tracking system to monitor a person's gazepoint.
- In many tele-operation applications where a human operator is controlling a remote robotic tool, the operator has two tasks. Firstly, and most obviously, he operates the robot itself, including its primary tool or tools, such as the robot hand(s) or arm(s). Secondly, if the robot is equipped with a vision system that provides the operator a close-up view of the robot's work area from the robot's perspective, the operator may be able, or required, to control the robot's vision system.
- FIG. 1 is a schematic diagram of a system for remotely controlling a setting of a video camera, in accordance with various embodiments.
- FIG. 2 shows a matrix of possible eye or head activity variables that an eyetracker can measure from its user, versus a set of alternative camera control settings that can be used to control a remote video camera, in accordance with various embodiments.
- FIG. 3 is a flowchart showing a method for remotely controlling a setting of a video camera, in accordance with various embodiments.
- FIG. 4 is a schematic diagram of a system of distinct software modules that performs a method for remotely controlling a setting of a video camera, in accordance with various embodiments.
- FIG. 5 is a schematic diagram of a system for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- FIG. 6 is a flowchart showing a method for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- FIG. 7 is a schematic diagram of a system of distinct software modules that performs a method for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- systems and methods provide a robot operator the means to control remote vision systems without using his hands, which are typically occupied in controlling the robot itself.
- In remote vision applications, it is often desirable for the operator to be able to control the robot vision system as if he were controlling his own eyes at the scene.
- a camera's pan and tilt angles are manipulated in direct response to the operator's own eye orientations. In this manner, the camera automatically rotates to point directly toward the object the operator is looking at on his display screen. Additionally, eye and/or head movements may be used to direct other camera activity such as camera roll; zoom; x, y and z position with respect to the host platform; camera separation and convergence angle in stereoscopic cameras; lens iris or aperture; and scene illumination. Minimum or no hand control of the camera is required.
- methods for controlling remote video cameras are based on the natural eye or head activities of the system operator and include velocity modes of control, combinations of eye and head movements, and the control of stereoscopic cameras.
- Eye or head activity variables are defined as the set of all the dynamic head and eye activities that a user exercises in the course of looking at things. Eye or head activity variables include, but are not limited to, eye rotation (pan and tilt angles), gaze convergence, pupil constriction and dilation, head rotation (pan, tilt and roll), and head translation (horizontal, vertical, and longitudinal).
- “camera control settings” refer to the set of the video camera's controllable parameters, including, but not limited to, pan, tilt, zoom, focus range, iris, parallax or convergence angle (for stereoscopic camera pairs), and camera-body separation (also for stereoscopic camera pairs).
- One approach for controlling the camera viewing angle is to provide the camera with position actuators that allow the camera to move (translate) right-left, up-down and forward-back. As the operator moves his head, the camera moves proportionately with it. This method, however, requires that, to maintain a given positional perspective, the operator keep his head at a given location. That position may not be comfortable, or it might not be an optimum position from which to view the video screen, despite the camera's perspective on the work scene.
- an improved method allows the operator to keep his head within a small comfortable range while allowing the camera to move through a large range, and allowing the camera to remain fixed at any desired point throughout that range.
- the operator's head deviations from a reference point are translated into velocity commands to the camera's position actuators. If the operator positions his head to the right of the nominal “reference”, or “resting” point, for example, the camera control system provides a velocity command to the camera position actuator that causes the camera to move at a velocity in proportion to the distance that the operator's head is positioned from the reference point. (Again, a dead zone, or a low-gain zone, allows the user some freedom of head movement without camera movement.)
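The velocity-mode mapping just described, with its dead zone and proportional gain, can be sketched as follows. This is an illustrative sketch, not code from the patent; the function name, gain, dead-zone width, and velocity limit are all assumed values.

```python
def head_velocity_command(head_offset_m, dead_zone_m=0.02, gain=0.5, max_vel=0.2):
    """Map the operator's head offset from the resting point (meters)
    to a camera translation velocity (meters/second).

    Inside the dead zone the camera holds still; outside it, velocity
    grows in proportion to the remaining offset, up to a limit.
    All parameter values are illustrative, not from the patent.
    """
    if abs(head_offset_m) <= dead_zone_m:
        return 0.0
    # Measure offset from the edge of the dead zone so velocity ramps smoothly.
    excess = abs(head_offset_m) - dead_zone_m
    vel = min(gain * excess, max_vel)
    return vel if head_offset_m > 0 else -vel
```

Because the command is a velocity rather than a position, the operator can return his head to the comfortable resting zone and the camera simply holds its new position.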
- a gaze-controlled camera allows a robot operator to control the robot's vision system.
- the robot operator does not control the pan-tilt of the robot camera manually. Rather, a gaze-based control system commands the cameras automatically. The commands are based on the operator's natural eye activity as he observes the display while performing his task. No manual action, other than normal eye activity, is required to control the remote camera.
- a GCC includes an eyetracker.
- The eyetracker, typically mounted below the operator display, uses one or more video cameras to observe the operator's eyes, and it continually calculates the coordinates of the operator's gazepoint within the display.
- an automatic control system uses the operator's gazepoint activity to generate camera pan-tilt commands. When the user fixates on an object, the controller rotates the camera to center that object in the display.
- GCC exploits two facts: 1) people naturally point their eyes at what they are interested in, and 2) their gaze is measurable by an unobtrusive instrument.
- By slaving the robot camera's pointing direction to the operator's eye as he observes the camera scene, the camera automatically rotates toward what the user wants to see—without the user having to take manual action.
- the operator drives the camera right, left, up, or down simply by doing what he does anyway—looking at the object he is interested in.
- a camera control algorithm can maintain a balance between two potentially conflicting goals.
- On one hand, the cameras must move slowly to maintain highly stable images.
- On the other hand, when the user wants to scan a wide area, i.e., shift his gaze by large angles, the camera must move rapidly.
- the algorithm balances these objectives by commanding the camera with angular velocities proportional to the angular offset of the operator's gaze with respect to the center of the display.
- When the gaze is near the center of the display, the camera rotates slowly.
- When the gaze is far from the center of the display, the camera rotates quickly.
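The proportional-velocity law above might be sketched as follows; the normalized display coordinates, gain, and dead-zone size are illustrative assumptions, not values given in the patent.

```python
def gaze_to_pan_tilt_rates(gaze_x, gaze_y, gain_deg_per_s=2.0, dead_zone=0.05):
    """Convert the gazepoint's offset from the display center
    (normalized coordinates, -1..1 per axis) into camera pan/tilt
    angular velocities (degrees/second).

    Near the center the camera barely moves, keeping the image stable;
    large offsets command fast rotation for wide-area scanning.
    The gain and dead-zone values are illustrative.
    """
    def rate(offset):
        if abs(offset) <= dead_zone:
            return 0.0
        return gain_deg_per_s * (abs(offset) - dead_zone) * (1 if offset > 0 else -1)
    return rate(gaze_x), rate(gaze_y)
```

As the camera rotates toward the fixated object, the object's image drifts toward the display center, the gaze offset shrinks, and the commanded velocity falls to zero: the loop centers the object automatically.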
- When a person uses a robot to perform a task, rather than using his hands to do the work directly, the robot adds two key elements of workload to the task. First, the indirect tele-operation of the robot makes it more difficult for the operator to maneuver within the environment and to manipulate objects. Second, if the robot has a camera providing remote vision, the operator has the additional task of controlling the direction the camera is pointed.
- GCCs can reduce the manual and cognitive workload of a robot operator.
- a GCC can eliminate the manual workload of controlling the robot camera, leaving the operator's hands free to operate the robot's body and manipulators. With his hands completely dedicated to controlling the robot body and manipulator, the operator can drive the robot continuously, without interruption from camera control.
- a GCC can minimize the operator's cognitive workload.
- his conscious attention is (ideally) focused fully on the physical work at hand. While his visual activity during the task is absolutely essential to the success of his effort, the operator's actions of controlling his eye rotations do not add to the cognitive workload required for him to execute the task.
- a separate (but very powerful) portion of his brain handles the ocular control functions in parallel, without adding workload to the conscious part of the brain handling the central cognitive task.
- the camera control does add cognitive workload, because the operator orchestrates the manual camera control functions using the same part of his brain that performs the robot control functions. In complex tasks, the additional vision-control workload can seriously interfere with the task's main cognitive workload, i.e. operating the robot and/or its manipulator.
- a GCC eliminates the cognitive workload associated with manual camera control. It directly harnesses the camera's pan-tilt control to the operator's own eye activity, exploiting the brain's existing ocular control functions to implement the remote camera control automatically. Rather than requiring the operator to use the conscious resources of his brain to execute remote vision control, GCC utilizes the brain's separate, unconscious ocular control function to do the job. Thus GCC not only relieves his hands from the camera control task, but also eliminates the cognitive workload associated with manual camera control. Without the distraction of manual camera control, the operator can concentrate his full attention on his ultimate task and, consequently, perform the task with fewer errors.
- gaze-controlled pan-tilt improves robot operation two ways: 1) eliminating manual camera-control workload improves task execution speed, and 2) eliminating cognitive camera-control workload reduces operator performance errors.
- Camera pan/tilt orientation is controlled by gaze direction, for example.
- camera control settings include camera zoom, focus range, parallax, camera-body separation, and iris diameter, in addition to pan and tilt. Due to the constraints of generating three-dimensional (3-D) images that are easily and properly perceived by the human visual system, controls for these parameters are highly inter-dependent. Options for operator control of these parameters include gaze pan/tilt, gaze parallax (indicative of gazepoint range), head position, and head pan/tilt.
- zoom control is accomplished by commanding image magnification based on the operator's longitudinal head position.
- An eyetracker measures the operator's head position with respect to a set point in the middle of the stereo display's eye box. If the user moves his head forward of the set point (or, more likely, forward of a dead zone around the set point), the lenses are given velocity commands to zoom in, and vice versa. This concept is based on a person's natural tendency to move his head forward when he wants a closer look at something and to move his head back when he wants a wider perspective. Zoom factors for the two cameras, for example, must be programmed to match each other, so both eyes experience equal image magnification.
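A hedged sketch of this head-position zoom control: the set point, dead-zone width, and gain below are invented for illustration; the patent specifies only the behavior, not these values.

```python
def zoom_velocity(head_z_m, set_point_m=0.60, dead_zone_m=0.03, gain=4.0):
    """Zoom rate (fraction of the zoom range per second) from the head's
    longitudinal distance to the display (meters).

    Moving the head forward of the set point zooms in; moving it back
    zooms out; a dead zone around the set point leaves zoom unchanged.
    The same command would be sent to both cameras of a stereo pair so
    image magnification stays matched. Parameter values are illustrative.
    """
    offset = set_point_m - head_z_m  # positive when the head is forward of the set point
    if abs(offset) <= dead_zone_m:
        return 0.0
    excess = abs(offset) - dead_zone_m
    return gain * excess * (1 if offset > 0 else -1)
```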
- camera parallax is made to match the operator's eye parallax to optimize the human's 3-D perception of the scene being viewed. That is, the toe-in angle between the two camera axes is controlled to match the toe-in angle between the visual axes of the operator's two eyes.
- Eye parallax is measured with a binocular version of the eyetracker, for example, and the camera controller computes camera parallax commands to follow the eyes' parallax.
- Camera parallax control is fully automatic—no conscious operator control, e.g. through head position or orientation, is required.
- matching the camera and eye parallax angles is geometrically equivalent to matching relative object and image ranges. That is, if the camera and eye parallax angles are matched, the range of an object within the camera frame of reference is matched to the range of its image within the stereoscopic display frame.
- an algorithm for controlling camera parallax based on eye parallax is implemented as follows: If the operator's 3-D gazepoint remains focused on an object or objects beyond the current camera convergence range, for example, the cameras are directed to converge further out. Bringing the camera convergence range in the real world into alignment with the eye convergence range within the display frame brings the camera parallax into alignment with eye parallax. Using eye convergence range to control camera convergence range extends the concept of gazepoint control from 2-D (pan/tilt) to 3-D (pan/tilt/range).
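One plausible form of such a convergence controller, expressed in the same velocity-command style the document uses elsewhere; the gain and rate limit are assumed values, not from the patent.

```python
def convergence_velocity(eye_range_m, camera_range_m, gain=0.8, max_rate=0.5):
    """Velocity command (meters/second of convergence-range change) that
    drives the cameras' convergence range toward the range of the
    operator's 3-D gazepoint in the display frame.

    If the operator converges his eyes beyond the current camera
    convergence range, the cameras are commanded to converge further
    out, and vice versa. Gain and rate limit are illustrative.
    """
    error = eye_range_m - camera_range_m
    rate = gain * error
    return max(-max_rate, min(max_rate, rate))  # clamp for stability
```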
- the focus ranges of the camera lenses are adjusted to match the equivalent range of the operator's gazepoint within the 3-D display. Based on the assumption that a human focuses his eyes at the same range where the two eyes' gaze lines converge, the control algorithm for the camera focus ranges makes the lens focus ranges follow the camera convergence range.
- camera focus control is fully automatic—no conscious operator control, e.g. through head position or orientation, is required.
- Precise focus range control is required only with low-light, high-speed lenses that have a short depth of field. With a large depth of field, lens focus control need only be approximate to obtain adequate stereo images.
- the robot could change the lateral distance between the two camera bodies, thereby changing the operator's apparent viewing distance.
- the stereo display makes it appear to the operator that he is moving in and viewing the scene from a position closer than the cameras really are.
- the camera's iris and/or the camera's illumination level on the scene is/are controlled by the user's pupil activity.
- a large eye pupil often indicates that the scene is under-illuminated, and the eye dilates its pupil to accommodate the low available light. Conversely, if the scene is very bright, the eye's pupil constricts to allow the retina to work with the high level of incident light.
- An eyetracker that measures the user's pupil diameter may direct the camera to adjust its iris and/or scene illuminator to provide optimum lighting conditions that permit comfortable pupil diameters.
- a large eye pupil indicating restricted light, directs the camera iris diameter to increase and/or causes the camera's illuminator to intensify.
- a small pupil directs decreased iris diameter and/or decreased scene illumination.
- pupil size may be used to control the display brightness—as well as the camera's iris or illumination control settings.
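The pupil-driven iris/illumination rule in the preceding paragraphs can be sketched minimally as below; the "comfortable" pupil-diameter band and step size are invented for illustration, and a real controller would also filter pupil noise and accommodate individual differences.

```python
def iris_adjustment(pupil_mm, comfort_min=3.0, comfort_max=5.0, step=0.1):
    """Incremental iris/illumination command from the measured pupil
    diameter (millimeters).

    A dilated pupil suggests the scene is under-lit: open the iris or
    brighten the illuminator (+step). A constricted pupil suggests the
    scene is too bright: close the iris or dim the illuminator (-step).
    Within the comfortable band, make no change. Thresholds are
    illustrative assumptions.
    """
    if pupil_mm > comfort_max:
        return +step
    if pupil_mm < comfort_min:
        return -step
    return 0.0
```

The same signed adjustment could drive display brightness instead of, or in addition to, the camera iris and scene illuminator.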
- a scene illuminator is, for example, a light source that is part of a video camera.
- a scene illuminator can be a device that is separate from the video camera.
- camera-body separation is programmatically tied directly to zoom for apparent distance viewing.
- the camera bodies are simultaneously controlled to move apart.
- Increased zoom provides image enlargement, and the corresponding increased camera-body separation provides apparent range reduction (with respect to the fixed human interocular distance).
- control of the camera body separation is fully automatic—no conscious operator control, e.g. through head position or orientation, is required.
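The zoom-to-separation tie can be sketched as a simple proportional coupling. The linear form and the 65 mm interocular baseline are illustrative assumptions; the patent requires only that separation be programmatically tied to zoom, not this particular form.

```python
INTEROCULAR_M = 0.065  # nominal human interocular distance (assumed)

def camera_separation(zoom_factor, base_separation_m=INTEROCULAR_M):
    """Camera-body separation programmatically slaved to zoom.

    Separation scales with the zoom factor, so image enlargement is
    accompanied by a proportional apparent-range reduction relative to
    the fixed human interocular baseline.
    """
    return base_separation_m * zoom_factor

def apparent_range(real_range_m, zoom_factor):
    """Rough apparent viewing range: widening the stereo baseline by the
    zoom factor makes the scene appear closer by roughly the same factor."""
    return real_range_m / zoom_factor
```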
- all camera controls from the computer to the pan, tilt, zoom, focus, parallax and camera-body separation actuators take the form of velocity commands.
- the sensor feedback required from the camera-control subsystem back to the computer consists of position signals indicating the current values of the individual control-variable states.
- FIG. 1 is a schematic diagram of a system 100 for remotely controlling a setting of a video camera 110 , in accordance with various embodiments.
- System 100 includes actuator 120 , video display 130 , eyetracker 140 , and processor 150 .
- Actuator 120 is physically connected to video camera 110 and controls at least one setting of video camera 110 .
- Actuator 120 can be a mechanical device and/or an electronic device.
- Actuator 120 can be a separate device from video camera 110 or it can be integrated as part of video camera 110 .
- Actuator 120 can include, for example, a gimbal mechanism and/or a slide mechanism to orient and position camera 110 .
- Actuator 120 can also include a mechanism to control the zoom and focus of video camera 110 .
- Actuator 120 can also include a mechanism to control the camera iris and/or the camera's scene illuminator.
- Video display 130 displays video from video camera 110 to a user or operator of video camera 110 .
- Video display 130 can be a computer screen, a television, a stereoscopic viewer if multiple cameras are used, or any other device capable of displaying a video signal from video camera 110 .
- Eyetracker 140 can include one or more video cameras, an asymmetric aperture, a light source, a gimbal, and a processor, none of which is shown in FIG. 1. Eyetracker 140 can also include video display 130.
- Eyetracker 140 may measure a number of eye or head activity variables. At a minimum, eyetracker 140 images at least one of the user's eyes 160 over time as the user is observing video display 130 , and measures the point of gaze 170 of eye 160 on or within video display 130 . Additionally, eyetracker 140 may explicitly measure one, two or three coordinates of the position of one or both eyes 160 in space as the user moves his head around. Additionally, eyetracker 140 may explicitly measure the orientation of the eye(s) 160 , or more specifically the eye's gaze vector orientation(s), as the user rotates his eyes as he looks around.
- Eyetracker 140 may calculate the spatial position and/or orientation of the user's head (not shown) from the position(s) of the user's eye(s) 160 .
- Spatial positions of the eye and/or head may include the horizontal, vertical and/or longitudinal locations with respect to the video display 130 .
- Spatial orientations of the eye(s) 160 may be expressed in alternative ways, including vector directions or pan and tilt angles.
- Spatial orientation of the head may include pan, tilt and roll angles.
- Eyetracker 140 may calculate gazepoint 170 as the extrapolation of the gaze line(s) 180 from the spatial location of the eye(s) 160 .
- the gazepoint 170 may be computed as the intersection of the gaze line with the display surface.
- a 3-dimensional gazepoint may be computed as the intersection of the gaze lines 180 from both eyes. (See gaze lines 596 and 597 converging on gazepoint 598 in FIG. 5 ).
- the gaze convergence or parallax may be computed by calculating the angle between the eye's two gaze lines.
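The 3-D gazepoint and convergence-angle computations described in the preceding paragraphs can be sketched as follows. Since measured gaze lines rarely intersect exactly, the usual estimate (assumed here, not specified in the patent) is the midpoint of the shortest segment between the two lines.

```python
import math

def gaze_intersection(p_left, d_left, p_right, d_right):
    """Estimate the 3-D gazepoint from two gaze lines, each given by an
    eye position p and a gaze direction d (3-tuples).

    Returns the midpoint of the closest-approach segment between the two
    lines, plus the convergence (parallax) angle between the two gaze
    directions, in radians. Pure-Python vector math; assumes the lines
    are not parallel.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    w0 = sub(p_left, p_right)
    a, b, c = dot(d_left, d_left), dot(d_left, d_right), dot(d_right, d_right)
    d, e = dot(d_left, w0), dot(d_right, w0)
    denom = a * c - b * b          # zero when the gaze lines are parallel
    s = (b * e - c * d) / denom    # parameter of closest point on left line
    t = (a * e - b * d) / denom    # parameter of closest point on right line
    closest_left = add(p_left, scale(d_left, s))
    closest_right = add(p_right, scale(d_right, t))
    midpoint = scale(add(closest_left, closest_right), 0.5)
    cos_angle = b / math.sqrt(a * c)
    return midpoint, math.acos(max(-1.0, min(1.0, cos_angle)))
```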
- eyetracker 140 may measure the pupil diameter of eye(s) 160 .
- eyetracker 140 measurements may include, but are not limited to, eyeball location, eye gaze direction, gaze convergence or parallax, and eye pupil diameter.
- Head variables that can be calculated by eyetracker 140 include, but are not limited to, head pan or tilt angle, head roll angle, head horizontal or vertical position, and head longitudinal position. Eyetracker 140 measures these eye or head variables whether the user changes them voluntarily or involuntarily.
- Processor 150 is in communication with actuator 120 , video display 130 , eyetracker 140 , and video camera 110 . This communication can include, but is not limited to, wired or wireless data or control communication.
- Processor 150 can include, but is not limited to, a computer, a microcontroller, a microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or any device capable of executing a series of instructions.
- Processor 150 can be the same processor used by eyetracker 140 or it can be a separate device.
- Processor 150 also performs a number of steps. Processor 150 translates the eye and/or head activity variable(s) calculated by eyetracker 140 into the camera control setting(s) that drive video camera 110. Processor 150 then instructs actuator 120 to respond to the control setting(s) for video camera 110.
- the control settings that processor 150 may provide to video camera 110 include, but are not limited to, the pan or tilt angle of video camera 110 , the roll angle of video camera 110 , the horizontal or vertical position of video camera 110 , the longitudinal position of video camera 110 , the zoom percentage of video camera 110 , the focus of video camera 110 , the iris of video camera 110 , and illumination or light intensity produced by video camera 110 .
- FIG. 2 shows a matrix 200 of possible eye or head activity variables that an eyetracker can measure from its user, versus a set of alternative camera control settings that can be used to control a remote video camera, in accordance with various embodiments.
- Unbolded checkmarks 210 identify eye/head variables that could reasonably be used to control various camera variables.
- Bolded checkmarks 220 identify the preferred eye/head variables used to drive the various camera control-variable settings.
- FIG. 3 is a flowchart showing a method 300 for remotely controlling a setting of a video camera, in accordance with various embodiments.
- In step 310 of method 300, video from a video camera is displayed to a user using a video display.
- In step 320, at least one eye of the user is imaged as the user is observing the video display, a change in an image of the at least one eye of the user is measured over time, and an eye/head activity variable is calculated from the measured change in the image using an eyetracker.
- In step 330, the eye/head activity variable is translated into a camera control setting, and an actuator connected to the video camera is instructed to apply the camera control setting to the video camera using a processor.
- instructions configured to be executed by a processor to perform a method are stored on a computer-readable storage medium.
- the computer-readable storage medium can be a device that stores digital information.
- a computer-readable storage medium includes a compact disc read-only memory (CD-ROM) as is known in the art for storing software.
- the computer-readable storage medium is accessed by a processor suitable for executing instructions configured to be executed.
- a computer program product includes a tangible computer-readable storage medium whose contents include a program with instructions being executed on a processor so as to perform a method for remotely controlling a setting of a video camera. This method is performed by a system of distinct software modules.
- FIG. 4 is a schematic diagram of a system 400 of distinct software modules that performs a method for remotely controlling a setting of a video camera, in accordance with various embodiments.
- System 400 includes video display module 410 , eye tracking module 420 , and camera control module 430 .
- Video display module 410 displays video from a video camera to a user on a video display.
- Eye tracking module 420 performs a number of steps. Eye tracking module 420 images at least one eye of the user with an eyetracker as the user is observing the video display. Eye tracking module 420 measures a change in an image of the at least one eye of the user over time. Finally, eye tracking module 420 calculates an eye/head activity variable from the measured change in the image.
- Camera control module 430 also performs a number of steps. Camera control module 430 translates the eye/head activity variable into a camera control setting. Then, camera control module 430 instructs an actuator connected to the video camera to apply the camera control setting to the video camera.
- Remote video sensors allow observers to see and detect targets without personally being on the scene.
- a key limitation of current remote display systems is that the observer often cannot locate a target in three-dimensional (3-D) space. 3-D information is critical for determining the range to a target.
- systems and methods are described for determining the range or 3-D location of a target simply by looking at it within a remote 3-D display.
- a stereo imager generates a 3-D image from a pair of cameras viewing the real scene.
- the 3-D location of his equivalent gazepoint within the real scene is computed quantitatively, automatically and continuously using an eyetracker. If the user wishes to designate a target, he fixes his gaze on its image and activates a switch or speaks a keyword.
- the location data for the actual target in real space is then recorded and passed to a client application, for example.
- Animals use binocular vision to determine the 3-D locations of objects within their environments. Loosely speaking, the horizontal and vertical coordinates of the object within the viewer's space are determined from the orientation of the head, the orientation of the eyes within the head, and the position of the object within the eyes' two-dimensional (2-D) images.
- the range coordinate is determined using stereopsis: viewing the scene from two different locations allows the inference of range by triangulation.
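The triangulation step described above can be sketched numerically. The following is a minimal illustration, assuming a rectified camera pair in which range follows from the camera baseline, the focal length, and the image disparity; the function name and the example constants are hypothetical, not values from the patent:

```python
def range_from_disparity(baseline_m, focal_px, disparity_px):
    """Estimate target range by triangulation for a rectified stereo pair.

    baseline_m: separation between the two cameras (meters)
    focal_px: focal length expressed in pixels
    disparity_px: horizontal shift of the target between the two images
    """
    if disparity_px <= 0:
        raise ValueError("target at infinity or behind the cameras")
    return baseline_m * focal_px / disparity_px

# A 0.5 m baseline, 800 px focal length, and 20 px disparity give a 20 m range.
print(range_from_disparity(0.5, 800, 20))  # prints 20.0
```

Note the reciprocal relationship: for a fixed baseline, range accuracy degrades as disparity shrinks, which is why the wider baselines discussed later in this document are preferred at long range.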
- 3-D target location information is extracted from a person based on the observable behavior of his eyes. A human's natural behavior of looking at targets of interest is exploited. Rapid target designation is obtained with a single, simple command, and the need for manual manipulation or pointing of equipment in the computation of the target location is eliminated.
- an eye-operated 3-D targeting system includes an eyetracker and a stereoscopic display or viewer.
- Two video cameras view the real scene from two different locations.
- a stereoscopic viewer converts the two camera video signals into a scaled 3-dimensional image of the real scene.
- the operator views the 3-D image space with both eyes.
- a binocular eyetracker monitors both the user's eyes as he views the stereoscopic or holographic 3-D display, and it continuously computes the gaze lines of the two eyes within the 3-D image space. The intersection of the two gaze lines is computed to be the user's 3-D gazepoint within the image space.
- the 3-D gazepoint within the image scene is mathematically transformed (using formulas well known in the art) to the equivalent 3-D location of the target being observed in real space.
- the system continuously computes the 3-D location of the user's gazepoint.
- the two measured gaze lines do not precisely intersect with each other.
- the 3-D intersection point may be taken to be the point in space where the two measured gaze lines come closest to one another.
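The closest-approach computation described above can be sketched as follows. This is the standard skew-lines construction, not code from the patent, and all names are illustrative:

```python
def closest_gazepoint(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between two gaze lines.

    Each gaze line is given by an eye position p and a direction d.
    Measured gaze lines rarely intersect exactly, so the point where
    they pass closest to one another is taken as the 3-D gazepoint.
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    sub = lambda u, v: [ui - vi for ui, vi in zip(u, v)]
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("gaze lines are parallel")
    t1 = (b * e - c * d) / denom   # parameter along the first line
    t2 = (a * e - b * d) / denom   # parameter along the second line
    q1 = [pi + t1 * di for pi, di in zip(p1, d1)]
    q2 = [pi + t2 * di for pi, di in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two gaze lines that happen to meet exactly at (1, 1, 1):
print(closest_gazepoint([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1]))
```

When the lines truly intersect, the midpoint coincides with the intersection; when they are skew, it is the natural compromise point described in the text.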
- a 3-D target range finding system allows accurate measurement over a wide range of distances by using variable camera separations. Long ranges are measured with widely separated cameras, and short ranges are measured with closely separated cameras. In aerial targeting applications, for example, long ranges can be measured by placing the two cameras on different flight vehicles. The vehicles may be separated as needed to provide accurate range information. In small-scale applications, such as surgery, miniature cameras mounted close to the surgical instrument allow accurate 3-D manipulation of the instrument.
- the user may designate the target by fixing his gaze on it and activating a switch or verbalizing a keyword.
- a 3-D target range finding system samples the 3-D gazepoint location for use by the client application.
- Velocities, directions, and accelerations of moving targets may also be measured if the user keeps his gaze fixed on the target as it moves.
- the 3-dimensional target location system stores the time history of the user's equivalent gazepoint location in real space.
- the target velocity, direction, and/or acceleration may be computed by appropriate (well known) mathematical calculations on the point motion history. At least two successive time points are needed to calculate target velocity and direction, and three points are required to calculate acceleration.
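The finite-difference calculations implied above might look like the following minimal sketch, which assumes a fixed sampling interval; all names are illustrative:

```python
def motion_from_history(points, dt):
    """Estimate target velocity and acceleration from a gazepoint history.

    points: list of (x, y, z) samples at a fixed interval of dt seconds.
    As noted in the text, velocity needs at least two samples and
    acceleration needs at least three.
    """
    if len(points) < 2:
        raise ValueError("need at least two samples for velocity")
    # First difference of the two most recent samples gives velocity.
    velocity = [(b - a) / dt for a, b in zip(points[-2], points[-1])]
    acceleration = None
    if len(points) >= 3:
        # Second central difference of the last three samples.
        p0, p1, p2 = points[-3], points[-2], points[-1]
        acceleration = [(c - 2 * b + a) / dt ** 2
                        for a, b, c in zip(p0, p1, p2)]
    return velocity, acceleration

# Target moving at a constant 1 m/s along x, sampled every 0.5 s:
v, a = motion_from_history([(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0)], 0.5)
print(v, a)   # velocity about [1, 0, 0], acceleration about [0, 0, 0]
```

The direction of motion is simply the normalized velocity vector, so no separate calculation is needed for it.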
- a 3-D target range finding system is passive. There is no active range-finding sensor such as a laser or radar that may be detected by the enemy. The operator does not have to be at the scene or near the cameras. He may operate at a remote workstation. Cameras can protect the operator's eyes from exposure to dangerous lighting conditions.
- FIG. 5 is a schematic diagram of a system 500 for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- System 500 includes two or more video cameras 510 , stereoscopic display 530 , binocular eyetracker 540 , and processor 550 .
- Two or more video cameras 510 image target 580 in a three-dimensional real space.
- Stereoscopic display 530 is, for example, a video display as described above that can display three-dimensional images. Stereoscopic display 530 renders the video signals from the two cameras 510 to present the user with what appears to him as a three-dimensional image. The view to the user appears as if each of his two eyes were located at the real locations of the two cameras in the real environment.
- Binocular eyetracker 540 is an eyetracker as described above that includes at least two video cameras that are used to track both eyes of the user. Binocular eyetracker 540 performs a number of steps. Binocular eyetracker 540 images a right eye 560 and a left eye 570 of the user as the user is observing target image 590 in stereoscopic display 530 . Binocular eyetracker 540 calculates right gaze line 596 of right eye 560 and left gaze line 597 of left eye 570 in the three-dimensional image space. Finally, binocular eyetracker 540 calculates gazepoint 598 in the three-dimensional image space as the intersection of right gaze line 596 and left gaze line 597 .
- Processor 550 is in communication with two or more video cameras 510 , stereoscopic display 530 , and binocular eyetracker 540 .
- Processor 550 is a processor as described above.
- Processor 550 also performs a number of steps. Processor 550 calculates the image target location in the three-dimensional image space from gazepoint 598 . Processor 550 then determines the real target location by translating the image target location to the real target location in the three-dimensional real space from the locations and positions of two video cameras 510 .
- system 500 can include an actuator (not shown) in communication with processor 550 and connected to at least one of the two video cameras 510 .
- the actuator can change the relative distance between the two video cameras 510 at the request of processor 550 .
- processor 550 can instruct the actuator to increase the relative distance to determine the real target location at longer ranges.
- processor 550 can instruct the actuator to decrease the relative distance to determine the real target location at shorter ranges.
- processor 550 selects two video cameras 510 from the two or more video cameras based on the relative distance between two video cameras 510 . For example, processor 550 can select two video cameras with a larger relative distance to determine the real target location at longer ranges. Alternatively, processor 550 can select two video cameras with a smaller relative distance to determine the real target location at shorter ranges.
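The camera-pair selection just described might be sketched as below. The rule of thumb that the preferred baseline is a fixed fraction of the estimated range is an assumption made for illustration, not a figure from the patent:

```python
def select_camera_pair(camera_positions, estimated_range_m, ratio=0.1):
    """Pick the camera pair whose baseline best suits an estimated range.

    A wider baseline improves triangulation accuracy at long range,
    while a narrow one suits short range. Here the preferred baseline
    is taken to be a fixed fraction (ratio) of the estimated range,
    which is an assumed heuristic.
    """
    from itertools import combinations
    from math import dist
    target_baseline = ratio * estimated_range_m
    # Choose the pair whose actual separation is closest to the target.
    return min(combinations(camera_positions, 2),
               key=lambda pair: abs(dist(*pair) - target_baseline))

# Three cameras along a line; a 50 m estimated range prefers a ~5 m baseline.
print(select_camera_pair([(0, 0, 0), (1, 0, 0), (5, 0, 0)], 50.0))
```

In practice the estimated range could itself come from the previous gazepoint fix, so the selection can be refined iteratively as the fix improves.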
- processor 550 can calculate a velocity, acceleration, or direction of target 580 from two or more real target positions determined over time.
- FIG. 6 is a flowchart showing a method 600 for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- a target is imaged in a three-dimensional real space using two or more video cameras.
- in step 620 , a three-dimensional image space combined from two video cameras of the two or more video cameras is displayed to a user using a stereoscopic display.
- a right eye and a left eye of the user are imaged as the user is observing the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker.
- a real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and the positions of the two video cameras using a processor.
- a computer program product includes a tangible computer-readable storage medium whose contents include a program with instructions that are executed on a processor to perform a method for remotely determining the location of a target in a three-dimensional real space. This method is performed by a system of distinct software modules.
- FIG. 7 is a schematic diagram of a system 700 of distinct software modules that performs a method for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- System 700 includes imaging/display module 710 , eye tracking module 720 , and target location module 730 .
- Imaging/display module 710 images a target in a three-dimensional real space with two or more video cameras. Imaging/display module 710 also displays a three-dimensional image space combined from two video cameras of the two or more video cameras to a user on a stereoscopic display.
- Eye tracking module 720 performs a number of steps. Eye tracking module 720 images a right eye and a left eye of the user with a binocular eyetracker as the user is observing the target in the stereoscopic video display. Eye tracking module 720 calculates a right gaze line of the right eye and a left gaze line of the left eye in the three-dimensional image space. Finally, eye tracking module 720 calculates a gazepoint in the three-dimensional image space as an intersection of the right gaze line and the left gaze line.
- Target location module 730 determines a real target location by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from locations and positions of the two video cameras.
- the specification may have presented a method and/or process as a particular sequence of steps.
- the method or process should not be limited to the particular sequence of steps described.
- other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims.
- the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the various embodiments.
Abstract
A setting of a video camera is remotely controlled. Video from a video camera is displayed to a user using a video display. At least one eye of the user is imaged as the user is observing the video display, a change in an image of at least one eye of the user is measured over time, and an eye/head activity variable is calculated from the measured change in the image using an eyetracker. The eye/head activity variable is translated into a camera control setting, and an actuator connected to the video camera is instructed to apply the camera control setting to the video camera using a processor.
Description
- This application claims the benefit of U.S. Provisional Patent Application No. 61/187,864 filed Jun. 17, 2009.
- 1. Field of the Invention
- Embodiments of the present invention relate to systems and methods for controlling the orientation of a camera. More particularly, embodiments of the present invention relate to systems and methods for controlling the orientation of a camera using an eye tracking system to monitor a person's gazepoint.
- 2. Background Information
- In many tele-operation applications where a human operator is controlling a remote robotic tool, the operator has two tasks. Firstly, and most obviously, he operates the robot itself, including its primary tool or tools, such as the robot hand(s) or arm(s). Secondly, if the robot is equipped with a vision system that provides the operator a close up view of the robot's work area from the robot's perspective, the operator may be able, or required, to control the robot's vision system.
- The skilled artisan will understand that the drawings, described below, are for illustration purposes only. The drawings are not intended to limit the scope of the present teachings in any way.
- FIG. 1 is a schematic diagram of a system for remotely controlling a setting of a video camera, in accordance with various embodiments.
- FIG. 2 shows a matrix of possible eye or head activity variables that an eyetracker can measure from its user, versus a set of alternative camera control settings that can be used to control a remote video camera, in accordance with various embodiments.
- FIG. 3 is a flowchart showing a method for remotely controlling a setting of a video camera, in accordance with various embodiments.
- FIG. 4 is a schematic diagram of a system of distinct software modules that performs a method for remotely controlling a setting of a video camera, in accordance with various embodiments.
- FIG. 5 is a schematic diagram of a system for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- FIG. 6 is a flowchart showing a method for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- FIG. 7 is a schematic diagram of a system of distinct software modules that performs a method for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments.
- Before one or more embodiments of the present teachings are described in detail, one skilled in the art will appreciate that the present teachings are not limited in their application to the details of construction, the arrangements of components, and the arrangement of steps set forth in the following detailed description or illustrated in the drawings. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
- As described above, a robot operator must control multiple robotic systems simultaneously. Traditionally, a robot operator has used his hands to control all of these systems. In various embodiments, systems and methods provide a robot operator the means to control remote vision systems without using his hands, which are typically occupied in controlling the robot itself. When using remote vision in robotic applications, it is often desirable for the operator to be able to control the robot vision system as if he were controlling his own eyes at the scene.
- In various embodiments, a camera's pan and tilt angles are manipulated in direct response to the operator's own eye orientations. In this manner, the camera automatically rotates to point directly toward the object the operator is looking at on his display screen. Additionally, eye and/or head movements may be used to direct other camera activity such as camera roll; zoom; x, y and z position with respect to the host platform; camera separation and convergence angle in stereoscopic cameras; lens iris or aperture; and scene illumination. Minimum or no hand control of the camera is required.
- Based on the operation of our own eyes and head, it is fully natural to control the activity of a remote camera system based on our eye or head activity. When we want to look right, we naturally rotate our eyes and head to the right. When we want to examine something closely, we move our head in to get a more detailed visual image. When we want a more expansive view, we often move our head back. When we need more light, we open our pupils, although this is an unconscious activity. In various embodiments, methods for controlling remote video cameras are based on the natural eye or head activities of the system operator and include velocity modes of control, combinations of eye and head movements, and the control of stereoscopic cameras.
- In this discussion, eye or head “activity variables” are defined as the set of all the dynamic head and eye activities that a user exercises in the course of looking at things. Eye or head activity variables include, but are not limited to, eye rotation (pan and tilt angles), gaze convergence, pupil constriction and dilation, head rotation (pan, tilt and roll), and head translation (horizontal, vertical, and longitudinal).
- In this discussion, “camera control settings” refer to the set of the video camera's controllable parameters, including, but not limited to, pan, tilt, zoom, focus range, iris, parallax or convergence angle (for stereoscopic camera pairs), and camera-body separation (also for stereoscopic camera pairs).
- Various embodiments assume that there are a broad variety of methods for measuring the operator's eye or head activity variables, and it is understood that the camera control methods and apparatus presented here may be implemented with any appropriate eye and/or head tracking equipment. Advanced video eyetrackers, for example, can measure activity variables of both the eyes and the head.
- It is desired to develop methods that allow the operator to keep his eyes focused on the subject matter while he is performing camera control operations. Further, it is an objective to make use of natural eye or head motions that people routinely perform when looking at objects with their own eyes. This rules out visually activating special eye-operated keys drawn at certain locations on the screen, because this would attract the eyes away from the real visual attention task.
- When a person manipulates an object with his own hands, he typically moves his head to obtain an optimum view of his hands and the object he is manipulating. Similarly, when he manipulates an object remotely via a robot, he would like to be able to move the camera side to side, up and down, and/or forward and back, to obtain optimum viewing angles of the robot end effector and work space. Since people naturally move their heads to control their own viewing angles, it is natural for them to move their heads to control a remote camera's viewing angle.
- One approach for controlling the camera viewing angle is to provide the camera with position actuators that allow the camera to move (translate) right-left, up-down and forward-back. As the operator moves his head, the camera moves proportionately with it. This method, however, requires that the operator keep his head at a given location to maintain a given positional perspective. That position may not be comfortable, or it might not be optimum for him to view the video screen, despite the camera's perspective on the work scene.
- In various embodiments, an improved method allows the operator to keep his head within a small comfortable range while allowing the camera to move through a large range, and allowing the camera to remain fixed at any desired point throughout that range. In this method, the operator's head deviations from a reference point are translated into velocity commands to the camera's position actuators. If the operator positions his head to the right of the nominal “reference”, or “resting” point, for example, the camera control system provides a velocity command to the camera position actuator that causes the camera to move at a velocity in proportion to the distance that the operator's head is positioned from the reference point. (Again, a dead zone, or a low-gain zone, allows the user some freedom of head movement without camera movement.)
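The velocity law just described, including the dead zone, might be sketched as follows; all constants are illustrative assumptions, not values from the patent:

```python
def head_velocity_command(head_offset_m, dead_zone_m=0.02, gain=0.5,
                          max_speed=0.25):
    """Map head displacement from a reference point to a camera velocity.

    Inside the dead zone the camera holds still, giving the operator
    some freedom of head movement; outside it, camera speed grows in
    proportion to how far the head sits from the reference point, up
    to an assumed safety limit.
    """
    if abs(head_offset_m) <= dead_zone_m:
        return 0.0
    excess = abs(head_offset_m) - dead_zone_m
    speed = min(gain * excess, max_speed)
    return speed if head_offset_m > 0 else -speed

print(head_velocity_command(0.01))   # inside the dead zone: camera holds still
print(head_velocity_command(0.12))   # about 0.05 m/s toward the right
```

Because the command is a velocity rather than a position, the operator can return his head to the comfortable reference point at any time and the camera simply stays wherever it has traveled to.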
- In various embodiments, a gaze-controlled camera (GCC) allows a robot operator to control the robot's vision system. In a robot equipped with a GCC, the robot operator does not control the pan-tilt of the robot camera manually. Rather, a gaze-based control system commands the cameras automatically. The commands are based on the operator's natural eye activity as he observes the display while performing his task. No manual action, other than normal eye activity, is required to control the remote camera.
- In various embodiments, a GCC includes an eyetracker. First, the eyetracker, typically mounted below the operator display, uses one or more video cameras to observe the operator's eyes, and it continually calculates the coordinates of the operator's gazepoint within the display. Second, an automatic control system uses the operator's gazepoint activity to generate camera pan-tilt commands. When the user fixates on an object, the controller rotates the camera to center that object in the display.
- GCC exploits two facts: 1) people naturally point their eyes at what they are interested in, and 2) their gaze is measurable by an unobtrusive instrument. By slaving the robot camera's pointing direction to the operator's eye as he observes the camera scene, the camera automatically rotates toward what the user wants to see—without the user having to take manual action. The operator drives the camera right, left, up, or down simply by doing what he does anyway—looking at the object he is interested in.
- In various embodiments, a camera control algorithm can maintain a balance between two potentially conflicting goals. On the one hand, when the user is performing high precision work, the cameras must move slowly to maintain highly stable images. On the other hand, when the user wants to scan a wide area, i.e. shift his gaze by large angles, the camera must move rapidly. The algorithm balances these objectives by commanding the camera with angular velocities proportional to the angular offset of the operator's gaze with respect to the center of the display. When the user makes small eye rotations, the camera rotates slowly. When the user makes large eye rotations, the camera rotates quickly.
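The proportional-velocity balance described above might be sketched as below; the gain and rate limit are assumed values, not figures from the patent:

```python
def gaze_to_pan_tilt_rates(gaze_x_deg, gaze_y_deg, gain=0.8,
                           max_rate_deg_s=30.0):
    """Command camera pan/tilt rates from the gaze offset relative to
    the display center.

    Small gaze offsets yield slow, stable camera motion for precision
    work; large offsets yield fast slewing for wide-area scanning.
    """
    clamp = lambda v: max(-max_rate_deg_s, min(max_rate_deg_s, v))
    return clamp(gain * gaze_x_deg), clamp(gain * gaze_y_deg)

print(gaze_to_pan_tilt_rates(2.0, -1.0))   # small offset: slow (1.6, -0.8)
print(gaze_to_pan_tilt_rates(50.0, 0.0))   # large offset: clamped to (30.0, 0.0)
```

A dead zone like the one used for head-position control could be added around the display center so that normal reading-scale eye movements do not nudge the camera.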
- When a person uses a robot to perform a task, rather than using his hands to do the work directly, the robot adds two key elements of workload to the task. First, the indirect tele-operation of the robot makes it more difficult for the operator to maneuver within the environment and to manipulate objects. Second, if the robot has a camera providing remote vision, the operator has the additional task of controlling the direction the camera is pointed.
- With a conventional operation of a remote camera on a robot, the operator typically controls the robot and the camera separately, alternately transferring his hands between robot and camera control panels. This “serial” operation of the two control tasks seriously slows task execution. In complex environments and/or while performing complex tasks, the camera control task can often generate as much physical and cognitive workload as operating the robot itself.
- In various embodiments, GCCs can reduce the manual and cognitive workload of a robot operator. A GCC can eliminate the manual workload of controlling the robot camera, leaving the operator's hands free to operate the robot's body and manipulators. With his hands completely dedicated to controlling the robot body and manipulator, the operator can drive the robot continuously, without interruption from camera control.
- A GCC can minimize the operator's cognitive workload. When a person performs a physical task without a robot, his conscious attention is (ideally) focused fully on the physical work at hand. While his visual activity during the task is absolutely essential to the success of his effort, the operator's actions of controlling his eye rotations do not add to the cognitive workload required for him to execute the task. A separate (but very powerful) portion of his brain handles the ocular control functions in parallel, without adding workload to the conscious part of the brain handling the central cognitive task.
- On the other hand, when a person performs the same task with a robot, he has the added subtask of controlling the camera, i.e. controlling his remote eyes. In addition to using his built-in ocular control system to control his own eyes, he must use his hands to control the remote camera. In this case, the camera control does add cognitive workload, because the operator orchestrates the manual camera control functions using the same part of his brain that performs the robot control functions. In complex tasks, the additional vision-control workload can seriously interfere with the task's main cognitive workload, i.e. operating the robot and/or its manipulator.
- In various embodiments, a GCC eliminates the cognitive workload associated with manual camera control. It directly harnesses the camera's pan-tilt control to the operator's own eye activity, exploiting the brain's existing ocular control functions to implement the remote camera control automatically. Rather than requiring the operator to use the conscious resources of his brain to execute remote vision control, GCC utilizes the brain's separate, unconscious ocular control function to do the job. Thus GCC not only relieves his hands from the camera control task, but also eliminates the cognitive workload associated with manual camera control. Without the distraction of manual camera control, the operator can concentrate his full attention on his ultimate task and, consequently, perform the task with fewer errors.
- In various embodiments, gaze-controlled pan-tilt improves robot operation two ways: 1) eliminating manual camera-control workload improves task execution speed, and 2) eliminating cognitive camera-control workload reduces operator performance errors. Camera pan/tilt orientation is controlled by gaze direction, for example.
- In various embodiments, camera control settings include camera zoom, focus range, parallax, camera-body separation, and iris diameter, in addition to pan and tilt. Due to the constraints of generating three-dimensional (3-D) images that are easily and properly perceived by the human visual system, controls for these parameters are highly inter-dependent. Options for operator control of these parameters include gaze pan/tilt, gaze parallax (indicative of gazepoint range), head position, and head pan/tilt.
- In various embodiments, zoom control is accomplished by commanding image magnification based on the operator's longitudinal head position. An eyetracker measures the operator's head position with respect to a set point in the middle of the stereo display's eye box. If the user moves his head forward of the set point (or more likely forward of a dead zone around the set point), the lenses are given velocity commands to zoom in, and vice versa. This concept is based on a person's natural tendency to move his head forward when he wants a closer look at something and to move his head back when he wants a wider perspective. Zoom factors for the two cameras, for example, must be programmed to match each other, so both eyes experience equal image magnification.
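A sketch of this head-position zoom law, including the dead zone and the matched zoom commands for the two lenses, might look like the following; the constants are illustrative assumptions:

```python
def zoom_commands(head_z_offset_m, dead_zone_m=0.03, gain=2.0):
    """Derive matched zoom-velocity commands for a stereo camera pair
    from the operator's longitudinal head position.

    Moving the head forward of the set point zooms in; moving it back
    zooms out. Both lenses receive the same command so that both eyes
    experience equal image magnification.
    """
    if abs(head_z_offset_m) <= dead_zone_m:
        rate = 0.0
    else:
        # Positive offset means head forward of the set point: zoom in.
        sign = 1.0 if head_z_offset_m > 0 else -1.0
        rate = sign * gain * (abs(head_z_offset_m) - dead_zone_m)
    return {"left_lens": rate, "right_lens": rate}

print(zoom_commands(0.08))   # head 8 cm forward: both lenses zoom in together
```

Returning the head to the eye-box set point leaves the zoom wherever it was commanded to, mirroring the velocity-mode position control described earlier.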
- In various embodiments, camera parallax is made to match the operator's eye parallax to optimize the human's 3-D perception of the scene being viewed. That is, the toe-in angle between the two camera axes is controlled to match the toe-in angle between the visual axes of the operator's two eyes. Eye parallax is measured with a binocular eyetracker, for example, and the camera controller computes camera parallax commands to follow the eyes' parallax. Camera parallax control is fully automatic—no conscious operator control, e.g. through head position or orientation, is required.
- Given that the interocular distance between a person's eyes is fixed, matching the camera and eye parallax angles is geometrically equivalent to matching relative object and image ranges. That is, if the camera and eye parallax angles are matched, the range of an object within the camera frame of reference is matched to the range of its image within the stereoscopic display frame.
- In various embodiments, an algorithm for controlling camera parallax based on eye parallax is implemented as follows: If the operator's 3-D gaze point remains focused on an object or objects beyond the current camera convergence range, for example, the cameras are directed to converge further out. Bringing the camera convergence range in the real world into alignment with the eye convergence range within the display frame brings the camera parallax into alignment with eye parallax. Using eye convergence range to control camera convergence range extends the concept of gazepoint control from 2-D (pan/tilt) to 3-D (pan/tilt/range).
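The convergence-following behavior just described might be sketched as a simple proportional law; the gain is an assumption made for illustration:

```python
def convergence_rate(eye_range_m, camera_range_m, gain=0.6):
    """Drive the camera convergence range toward the operator's eye
    convergence range.

    If the 3-D gazepoint sits beyond the current camera convergence
    range, the command is positive and the cameras converge further
    out; if it sits nearer, the command is negative and the cameras
    converge back in.
    """
    return gain * (eye_range_m - camera_range_m)

print(convergence_rate(12.0, 10.0))   # gaze beyond cameras: +1.2 m/s outward
print(convergence_rate(8.0, 10.0))    # gaze nearer: -1.2 m/s back inward
```

Because the command vanishes when the two ranges agree, the cameras settle wherever the operator's eyes converge, with no separate "hold" command needed.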
- In various embodiments, to mimic human eye operation, the focus ranges of the camera lenses are adjusted to match the equivalent range of the operator's gazepoint within the 3-D display. Based on the assumption that a human focuses his eyes at the same range where the two eyes' gaze lines converge, the control algorithm for the camera focus ranges makes the lens focus ranges follow the camera convergence range. With this embodiment, camera focus control is fully automatic—no conscious operator control, e.g. through head position or orientation, is required. Precise focus range control is required only with low-light, high-speed lenses that have short depth of field. With large depth of field, lens focus control only need be approximate to obtain adequate stereo images.
- While a human cannot change the distance between his eyes, the robot could change the lateral distance between the two camera bodies, thereby changing the operator's apparent viewing distance. By moving the camera bodies further apart, for example, the stereo display makes it appear to the operator that he is moving in and viewing the scene from a position closer than the cameras really are.
- In various embodiments, the camera's iris and/or the camera's illumination level on the scene is/are controlled by the user's pupil activity. A large eye pupil often indicates that the scene is under-illuminated, and the eye dilates its pupil to accommodate low available light. Conversely, if the scene is very bright, the eye's pupil constricts to allow the retina to work with the high level of incident light. An eyetracker that measures the user's pupil diameter may direct the camera to adjust its iris and/or scene illuminator to provide optimum lighting conditions that permit comfortable pupil diameters. With this embodiment, a large eye pupil, indicating restricted light, directs the camera iris diameter to increase and/or causes the camera's illuminator to intensify. Conversely, a small pupil directs decreased iris diameter and/or decreased scene illumination. Similarly, pupil size may be used to control the display brightness—as well as the camera's iris or illumination control settings. A scene illuminator is, for example, a light source that is part of a video camera. In various embodiments, a scene illuminator can be a device that is separate from the video camera.
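The pupil-driven iris and illumination logic can be sketched as below; the comfort band is an assumed illustrative range, not a value from the patent:

```python
def iris_illumination_commands(pupil_mm, comfort_min=2.5, comfort_max=5.5):
    """Adjust camera iris and scene illumination from measured pupil size.

    A dilated pupil suggests the scene is under-illuminated, so the
    iris opens and the illuminator intensifies. A constricted pupil
    suggests excess light, so the iris closes down and the illuminator
    dims. Within the comfort band, both settings hold.
    """
    if pupil_mm > comfort_max:
        return {"iris": "open", "illuminator": "brighter"}
    if pupil_mm < comfort_min:
        return {"iris": "close", "illuminator": "dimmer"}
    return {"iris": "hold", "illuminator": "hold"}

print(iris_illumination_commands(6.2))   # dilated pupil: add light to the scene
```

The same thresholding could drive display brightness, as the text notes, since the pupil responds to the display as well as to the imaged scene.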
- In various embodiments, camera-body separation is programmatically tied directly to zoom for apparent distance viewing. As the user commands the camera lenses to zoom in, the camera bodies are simultaneously controlled to move apart. Increased zoom provides image enlargement, and the corresponding increased camera-body separation provides apparent range reduction (with respect to the fixed human interocular distance). In this embodiment, control of the camera body separation is fully automatic—no conscious operator control, e.g. through head position or orientation, is required.
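The zoom-to-separation coupling just described might be sketched as follows; the linear coupling and the base separation are assumptions for illustration, not values from the patent:

```python
def separation_for_zoom(zoom_factor, base_separation_m=0.1):
    """Tie stereo camera-body separation directly to the zoom factor.

    Zooming in enlarges the image; widening the baseline in the same
    proportion reduces the apparent viewing range relative to the
    fixed human interocular distance, so the operator seems to move
    closer to the scene.
    """
    if zoom_factor < 1.0:
        raise ValueError("zoom factor below 1x not supported here")
    return base_separation_m * zoom_factor

print(separation_for_zoom(3.0))   # 3x zoom: roughly 0.3 m separation
```

Because both quantities are driven from one command, no conscious operator action is needed to keep magnification and apparent range consistent.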
- It may prove useful, however, to allow the robot operator to control image magnification and apparent range separately rather than together. In this case, it may be more natural to use longitudinal head position to control apparent range (camera-body separation) and to use some other control, such as head tilt angle to control image amplification (zoom).
- In various embodiments, all camera controls from the computer to the pan, tilt, zoom, focus, parallax and camera-body separation actuators take the form of velocity commands. The sensor feedback required from the camera-control subsystem back to the computer consists of position signals indicating the current values of the individual control-variable states.
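A minimal sketch of this velocity-command, position-feedback scheme follows. The axis limits and the fixed 50 Hz control period are assumptions for illustration; they are not specified in this disclosure.

```python
# One actuated degree of freedom driven by velocity commands, reporting its
# position state back as feedback. Limits and tick rate are assumed values.

DT = 0.02  # assumed control period, seconds (50 Hz)

class CameraAxis:
    """A single control variable (pan, tilt, zoom, focus, parallax, or
    camera-body separation) commanded in velocity, sensed in position."""

    def __init__(self, position=0.0, lo=-1.0, hi=1.0):
        self.position = position
        self.lo, self.hi = lo, hi

    def command_velocity(self, velocity):
        # The computer sends a velocity; the actuator integrates it over one
        # tick, limited to the axis travel range.
        self.position = min(self.hi, max(self.lo, self.position + velocity * DT))

    def feedback(self):
        # Position signal returned from the camera-control subsystem.
        return self.position
```

A pan axis, for example, would receive a stream of `command_velocity` calls derived from the eye/head variable and report its angle back through `feedback` on every tick.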
-
FIG. 1 is a schematic diagram of a system 100 for remotely controlling a setting of a video camera 110, in accordance with various embodiments. System 100 includes actuator 120, video display 130, eyetracker 140, and processor 150. Actuator 120 is physically connected to video camera 110 and controls at least one setting of video camera 110. Actuator 120 can be a mechanical device and/or an electronic device. Actuator 120 can be a separate device from video camera 110 or it can be integrated as part of video camera 110. Actuator 120 can include, for example, a gimbal mechanism and/or a slide mechanism to orient and position camera 110. Actuator 120 can also include a mechanism to control the zoom and focus of video camera 110. Actuator 120 can also include a mechanism to control the camera iris and/or the camera's scene illuminator. -
Video display 130 displays video from video camera 110 to a user or operator of video camera 110. Video display 130 can be a computer screen, a television, a stereoscopic viewer if multiple cameras are used, or any other device capable of displaying a video signal from video camera 110. -
Eyetracker 140, for example, can include one or more video cameras, an asymmetric aperture, a light source, a gimbal, and a processor, all of which are not shown in FIG. 1. Eyetracker 140 can also include video display 130. -
Eyetracker 140 may measure a number of eye or head activity variables. At a minimum, eyetracker 140 images at least one of the user's eyes 160 over time as the user is observing video display 130, and measures the point of gaze 170 of eye 160 on or within video display 130. Additionally, eyetracker 140 may explicitly measure one, two or three coordinates of the position of one or both eyes 160 in space as the user moves his head around. Additionally, eyetracker 140 may explicitly measure the orientation of the eye(s) 160, or more specifically the eye's gaze vector orientation(s), as the user rotates his eyes as he looks around. -
Eyetracker 140 may calculate the spatial position and/or orientation of the user's head (not shown) from the position(s) of the user's eye(s) 160. Spatial positions of the eye and/or head may include the horizontal, vertical and/or longitudinal locations with respect to the video display 130. Spatial orientations of the eye(s) 160 may be expressed in alternative ways, including vector directions or pan and tilt angles. Spatial orientation of the head may include pan, tilt and roll angles. -
Eyetracker 140 may calculate gazepoint 170 as the extrapolation of the gaze line(s) 180 from the spatial location of the eye(s) 160. On a 2-dimensional display 130, the gazepoint 170 may be computed as the intersection of the gaze line with the display surface. In stereoscopic, holographic, or 3-dimensional displays 130, a 3-dimensional gazepoint may be computed as the intersection of the gaze lines 180 from both eyes. (See gaze lines 596 and 597 and gazepoint 598 in FIG. 5.) The gaze convergence or parallax may be computed by calculating the angle between the eye's two gaze lines. Finally, eyetracker 140 may measure the pupil diameter of eye(s) 160. - In summary, eyetracker 140 measurements may include, but are not limited to, eyeball location, eye gaze direction, gaze convergence or parallax, and eye pupil diameter. Head variables that can be calculated by
eyetracker 140 include, but are not limited to, head pan or tilt angle, head roll angle, head horizontal or vertical position, and head longitudinal position. Eyetracker 140 measures these eye or head variables whether the user changes them voluntarily or involuntarily. -
Processor 150 is in communication with actuator 120, video display 130, eyetracker 140, and video camera 110. This communication can include, but is not limited to, wired or wireless data or control communication. Processor 150 can include, but is not limited to, a computer, a microcontroller, a microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or any device capable of executing a series of instructions. Processor 150 can be the same processor used by eyetracker 140 or it can be a separate device. -
Processor 150 also performs a number of steps. Processor 150 translates the eye and/or head activity variable(s) calculated by eyetracker 140 into the camera control setting(s) that drive video camera 110. Processor 150 then instructs actuator 120 to respond to the control setting(s) for video camera 110. - The control settings that
processor 150 may provide to video camera 110 include, but are not limited to, the pan or tilt angle of video camera 110, the roll angle of video camera 110, the horizontal or vertical position of video camera 110, the longitudinal position of video camera 110, the zoom percentage of video camera 110, the focus of video camera 110, the iris of video camera 110, and illumination or light intensity produced by video camera 110. -
FIG. 2 shows a matrix 200 of possible eye or head activity variables that an eyetracker can measure from its user, versus a set of alternative camera control settings that can be used to control a remote video camera, in accordance with various embodiments. Unbolded checkmarks 210 identify eye/head variables that could reasonably be used to control various camera variables. Bolded checkmarks 220 identify the preferred eye/head variables used to drive the various camera control-variable settings. -
FIG. 3 is a flowchart showing a method 300 for remotely controlling a setting of a video camera, in accordance with various embodiments. - In
step 310 of method 300, video from a video camera is displayed to a user using a video display. - In step 320, at least one eye of the user is imaged as the user is observing the video display, a change in an image of the at least one eye of the user is measured over time, and an eye/head activity variable is calculated from the measured change in the image using an eyetracker.
- In
step 330, the eye/head activity variable is translated into a camera control setting, and an actuator connected to the video camera is instructed to apply the camera control setting to the video camera using a processor. - In accordance with various embodiments, instructions configured to be executed by a processor to perform a method are stored on a computer-readable storage medium. The computer-readable storage medium can be a device that stores digital information. For example, a computer-readable storage medium includes a compact disc read-only memory (CD-ROM) as is known in the art for storing software. The computer-readable storage medium is accessed by a processor suitable for executing instructions configured to be executed.
- In various embodiments, a computer program product includes a tangible computer-readable storage medium whose contents include a program with instructions being executed on a processor so as to perform a method for remotely controlling a setting of a video camera. This method is performed by a system of distinct software modules.
-
FIG. 4 is a schematic diagram of a system 400 of distinct software modules that performs a method for remotely controlling a setting of a video camera, in accordance with various embodiments. System 400 includes video display module 410, eye tracking module 420, and camera control module 430. Video display module 410 displays video from a video camera to a user on a video display. -
Eye tracking module 420 performs a number of steps. Eye tracking module 420 images at least one eye of the user with an eyetracker as the user is observing the video display. Eye tracking module 420 measures a change in an image of the at least one eye of the user over time. Finally, eye tracking module 420 calculates an eye/head activity variable from the measured change in the image. -
Camera control module 430 also performs a number of steps. Camera control module 430 translates the eye/head activity variable into a camera control setting. Then, camera control module 430 instructs an actuator connected to the video camera to apply the camera control setting to the video camera. - Remote video sensors allow observers to see and detect targets without personally being on the scene. A key limitation of current remote display systems is that the observer often cannot see a target in a three-dimensional (3-D) space. 3-D information is critical for determining the range to a target.
- In various embodiments, systems and methods are described for determining the range or 3-D location of a target simply by looking at it within a remote 3-D display. A stereo imager generates a 3-D image from a pair of cameras viewing the real scene. As the user scans the 3-D image of the scene, the 3-D location of his equivalent gazepoint within the real scene is computed quantitatively, automatically and continuously using an eyetracker. If the user wishes to designate a target, he fixes his gaze on its image and activates a switch or speaks a keyword. The location data for the actual target in real space is then recorded and passed to a client application, for example.
- Animals use binocular vision to determine the 3-D locations of objects within their environments. Loosely speaking, the horizontal and vertical coordinates of the object within the viewer's space are determined from the orientation of the head, the orientation of the eyes within the head, and the position of the object within the eyes' two-dimensional (2-D) images. The range coordinate is determined using stereopsis: viewing the scene from two different locations allows the inference of range by triangulation.
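The triangulation step can be made concrete. Assuming symmetric vergence, i.e. the fixated point lies on the perpendicular bisector of the baseline between the two viewpoints, range follows from the baseline and the convergence angle (an illustrative formula, not one recited in this disclosure):

```python
import math

def range_by_triangulation(baseline_m, convergence_rad):
    """Distance to a fixated point from two viewpoints a known baseline apart,
    assuming symmetric vergence (the point lies on the baseline's
    perpendicular bisector): range = (b / 2) / tan(theta / 2)."""
    return (baseline_m / 2.0) / math.tan(convergence_rad / 2.0)
```

The tangent term makes this estimate ill-conditioned as the convergence angle approaches zero, which is exactly the long-range limitation the specification discusses for the fixed human interocular baseline.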
- Though humans implicitly use 3-D target location information to guide the execution of their own physical activities, they have no natural means for exporting this information to their outside world. In various embodiments, quantitative 3-D target-location information is extracted from a person based on the observable behavior of his eyes. A human's natural behavior of looking at targets of interest is exploited. Rapid target designation is obtained with a single, simple command, and the need for manual manipulation or pointing of equipment in the computation of the target location is eliminated.
- In various embodiments, an eye-operated 3-D targeting system includes an eyetracker and a stereoscopic display or viewer. Two video cameras view the real scene from two different locations. A stereoscopic viewer converts the two camera video signals into a scaled 3-dimensional image of the real scene. The operator views the 3-D image space with both eyes. A binocular eyetracker monitors both the user's eyes as he views the stereoscopic or holographic 3-D display, and it continuously computes the gaze lines of the two eyes within the 3-D image space. The intersection of the two gaze lines is computed to be the user's 3-D gazepoint within the image space. Based on the known locations and orientations of the two cameras, the 3-D gazepoint within the image scene is mathematically transformed (using formulas well known in the art) to the equivalent 3-D location of the target being observed in real space. As the user looks around the scene, the system continuously computes the 3-D location of the user's gazepoint.
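The final transformation from the 3-D gazepoint in image space to the target location in real space depends on the camera geometry. As one simple illustration (not the formulas of this disclosure), if the stereoscopic viewer presents a uniformly scaled, translated copy of the real scene with no relative rotation, the mapping reduces to a similarity transform:

```python
import numpy as np

def image_to_real(gazepoint_img, eye_midpoint_img, eye_baseline_img,
                  cam_midpoint_real, cam_baseline_real):
    """Map a gazepoint from 3-D image space to real space under a uniform
    scale-and-translate model. The scale is the ratio of the real camera-body
    separation to the viewer's interocular baseline in image space."""
    scale = cam_baseline_real / eye_baseline_img
    offset = np.asarray(gazepoint_img, float) - np.asarray(eye_midpoint_img, float)
    return np.asarray(cam_midpoint_real, float) + scale * offset
```

With rotated or converged cameras, the scalar `scale` would become a full rotation-plus-scale matrix, but the structure of the mapping is the same.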
- Generally, due to noise inherent in the eye tracking system, the two measured gaze lines do not precisely intersect with each other. For computational purposes, the 3-D intersection point may be taken to be the point in space where the two measured gaze lines come closest to one another.
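The closest-approach computation described above is a standard calculation for two (possibly skew) 3-D lines. A sketch, with each gaze line given by an eye position and a gaze direction:

```python
import numpy as np

def gazepoint_3d(p_r, d_r, p_l, d_l):
    """Midpoint of the shortest segment between two measured gaze lines.

    Each line is p + t*d for an eye position p and gaze direction d. Solves
    for the parameters (t, s) minimizing |(p_r + t*d_r) - (p_l + s*d_l)|,
    and returns None when the gaze lines are (near-)parallel.
    """
    p_r, d_r = np.asarray(p_r, float), np.asarray(d_r, float)
    p_l, d_l = np.asarray(p_l, float), np.asarray(d_l, float)
    w = p_r - p_l
    a, b, c = d_r @ d_r, d_r @ d_l, d_l @ d_l
    d, e = d_r @ w, d_l @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:      # parallel gaze lines: no usable fixation point
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    closest_r = p_r + t * d_r   # point on the right gaze line
    closest_l = p_l + s * d_l   # point on the left gaze line
    return (closest_r + closest_l) / 2.0
```

When the two measured lines happen to intersect exactly, the shortest segment has zero length and the midpoint is the true intersection.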
- Due to the fixed distance between his eyes, two key limitations arise in the human's ability to measure range. At long ranges beyond about 20 feet, the gaze lines of both eyes become virtually parallel, and triangulation methods become inaccurate. (Animals infer longer ranges from environmental context cues.) Conversely, at short ranges below about six inches, it is difficult for the eyes to converge.
- In various embodiments, a 3-D target range finding system allows accurate measurement over a wide range of distances by using variable camera separations. Long ranges are measured with widely separated cameras, and short ranges are measured with closely separated cameras. In aerial targeting applications, for example, long ranges can be measured by placing the two cameras on different flight vehicles. The vehicles may be separated as needed to provide accurate range information. In small-scale applications, such as surgery, miniature cameras mounted close to the surgical instrument allow accurate 3-D manipulation of the instrument.
- In various embodiments, where it is desired to determine the location of a specific target, the user may designate the target by fixing his gaze on it and activating a switch or verbalizing a keyword. At the time of the designation, a 3-D target range finding system samples the 3-D gazepoint location for use by the client application.
- Velocities, directions, and accelerations of moving targets may also be measured if the user keeps his gaze fixed on the target as it moves. To implement measurement of target velocity, direction, and/or accelerations, the 3-dimensional target location system stores the time history of the user's equivalent gazepoint location in real space. The target velocity, direction, and/or acceleration may be computed by appropriate (well known) mathematical calculations on the point motion history. At least two successive time points are needed to calculate target velocity and direction, and three points are required to calculate acceleration.
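The finite-difference estimates described above can be sketched directly. This is one simple backward-difference scheme over the stored gazepoint history; a fielded system might filter the samples first.

```python
import numpy as np

def target_kinematics(history):
    """Estimate target motion from a time history of the user's equivalent
    gazepoint location in real space.

    history: list of (t, (x, y, z)) samples, oldest first.
    Returns (velocity, speed, acceleration); acceleration is None with fewer
    than three samples, and everything is None with fewer than two.
    """
    if len(history) < 2:
        return None, None, None
    (t1, p1), (t2, p2) = history[-2], history[-1]
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    v = (p2 - p1) / (t2 - t1)                 # two points give velocity
    accel = None
    if len(history) >= 3:
        t0, p0 = history[-3]
        p0 = np.asarray(p0, float)
        v_prev = (p1 - p0) / (t1 - t0)
        accel = (v - v_prev) / (t2 - t1)      # three points give acceleration
    return v, float(np.linalg.norm(v)), accel
```

The direction of motion is the velocity vector normalized by the returned speed.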
- A 3-D target range finding system is passive. There is no active range-finding sensor such as a laser or radar that may be detected by the enemy. The operator does not have to be at the scene or near the cameras. He may operate at a remote workstation. Cameras can protect the operator's eyes from exposure to dangerous lighting conditions.
-
FIG. 5 is a schematic diagram of a system 500 for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments. System 500 includes two or more video cameras 510, stereoscopic display 530, binocular eyetracker 540, and processor 550. Two or more video cameras 510 image target 580 in a three-dimensional real space. -
Stereoscopic display 530 is, for example, a video display as described above that can display three-dimensional images. Stereoscopic display 530 renders the video signals from the two cameras 510 to present the user with what appears to him as a three-dimensional image. The view to the user appears as if each of his two eyes were located at the real locations of the two cameras in the real environment. -
Binocular eyetracker 540 is an eyetracker as described above that includes at least two video cameras that are used to track both eyes of the user. Binocular eyetracker 540 performs a number of steps. Binocular eyetracker 540 images right eye 560 and left eye 570 of the user as the user is observing target image 590 in stereoscopic display 530. Binocular eyetracker 540 calculates right gaze line 596 of right eye 560 and left gaze line 597 of left eye 570 in the three-dimensional image space. Finally, binocular eyetracker 540 calculates gazepoint 598 in the three-dimensional image space as the intersection of right gaze line 596 and left gaze line 597. -
Processor 550 is in communication with two or more video cameras 510, stereoscopic display 530, and binocular eyetracker 540. Processor 550 is a processor as described above. -
Processor 550 also performs a number of steps. Processor 550 calculates the image target location in the three-dimensional image space from gazepoint 598. Processor 550 then determines the real target location by translating the image target location to the real target location in the three-dimensional real space from the locations and positions of two video cameras 510. - In various embodiments,
system 500 can include an actuator (not shown) in communication with processor 550 and connected to at least one of the two video cameras 510. The actuator can change the relative distance between the two video cameras 510 at the request of processor 550. For example, processor 550 can instruct the actuator to increase the relative distance to determine the real target location at longer ranges. Alternatively, processor 550 can instruct the actuator to decrease the relative distance to determine the real target location at shorter ranges. - In various embodiments,
processor 550 selects two video cameras 510 from the two or more video cameras based on the relative distance between the two video cameras 510. For example, processor 550 can select two video cameras with a larger relative distance to determine the real target location at longer ranges. Alternatively, processor 550 can select two video cameras with a smaller relative distance to determine the real target location at shorter ranges. - In various embodiments,
processor 550 can calculate a velocity, acceleration, or direction of target 580 from two or more real target positions determined over time. -
FIG. 6 is a flowchart showing a method 600 for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments. - In
step 610 of method 600, a target is imaged in a three-dimensional real space using two or more video cameras. - In
step 620, a three-dimensional image space combined from two video cameras of the two or more video cameras is displayed to a user using a stereoscopic display. - In
step 630, a right eye and a left eye of the user are imaged as the user is observing the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker. - In
step 640, a real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and the positions of the two video cameras using a processor. - In various embodiments, a computer program product includes a tangible computer-readable storage medium whose contents include a program with instructions being executed on a processor so as to perform a method for remotely determining the location of a target in a three-dimensional real space. This method is performed by a system of distinct software modules.
-
FIG. 7 is a schematic diagram of a system 700 of distinct software modules that performs a method for remotely determining the location of a target in a three-dimensional real space, in accordance with various embodiments. System 700 includes imaging/display module 710, eye tracking module 720, and target location module 730. -
Imaging/display module 710 images a target in a three-dimensional real space with two or more video cameras. Imaging/display module 710 also displays a three-dimensional image space combined from two video cameras of the two or more video cameras to a user on a stereoscopic display. -
Eye tracking module 720 performs a number of steps. Eye tracking module 720 images a right eye and a left eye of the user with a binocular eyetracker as the user is observing the target in the stereoscopic video display. Eye tracking module 720 calculates a right gaze line of the right eye and a left gaze line of the left eye in the three-dimensional image space. Finally, eye tracking module 720 calculates a gazepoint in the three-dimensional image space as an intersection of the right gaze line and the left gaze line. -
Target location module 730 determines a real target location by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from locations and positions of the two video cameras. - While the present teachings are described in conjunction with various embodiments, it is not intended that the present teachings be limited to such embodiments. On the contrary, the present teachings encompass various alternatives, modifications, and equivalents, as will be appreciated by those of skill in the art.
- Further, in describing various embodiments, the specification may have presented a method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the various embodiments.
Claims (25)
1. A system for remotely controlling a setting of a video camera, comprising:
an actuator that is connected to the video camera and controls at least one setting of the video camera;
a video display that displays video from the video camera to a user;
an eyetracker that images at least one eye of the user as the user is observing the video display, measures a change in an image of the at least one eye of the user over time, and calculates an eye/head activity variable from the measured change in the image; and
a processor that is in communication with the actuator, the video display, the eyetracker, and the video camera, that translates the eye/head activity variable into a camera control setting of the video camera, and that instructs the actuator to apply the camera control setting to the video camera.
2. The system of claim 1 , wherein the eye/head activity variable comprises an eye gaze direction.
3. The system of claim 1 , wherein the eye/head activity variable comprises an eye gaze convergence/parallax.
4. The system of claim 1 , wherein the eye/head activity variable comprises an eye pupil diameter.
5. The system of claim 1 , wherein the eye/head activity variable comprises positions of two eyes with respect to the video display and the eye positions are used to calculate the pan, tilt and/or roll angle of the user's head.
6. The system of claim 1 , wherein the eye/head activity variable comprises the at least one eye's horizontal, vertical or longitudinal position with respect to the video display and the at least one eye's positions are used to represent the horizontal, vertical or longitudinal position of the user's head.
7. The system of claim 1 , wherein the camera control setting comprises a pan or tilt angle of the video camera.
8. The system of claim 1 , wherein the camera control setting comprises a horizontal, vertical, or longitudinal position of the video camera.
9. The system of claim 1 , wherein the camera control setting comprises a zoom of the video camera.
10. The system of claim 1 , wherein the camera control setting comprises a focus of the video camera.
11. The system of claim 1 , wherein the camera control setting comprises an illumination level for a scene being viewed by the video camera.
12. The system of claim 1 , wherein the camera control setting comprises an iris diameter of the video camera.
13. A method for remotely controlling a setting of a video camera, comprising:
displaying video from a video camera to a user using a video display;
imaging at least one eye of the user as the user is observing the video display, measuring a change in an image of the at least one eye of the user over time, and calculating an eye/head activity variable from the measured change in the image using an eyetracker; and
translating the eye/head activity variable into a camera control setting, and instructing an actuator connected to the video camera to apply the camera control setting to the video camera using a processor.
14. The method of claim 13 , wherein the eye/head activity variable comprises an eye gaze direction.
15. The method of claim 13 , wherein the eye/head activity variable comprises an eye gaze convergence/parallax.
16. The method of claim 13 , wherein the eye/head activity variable comprises an eye pupil diameter.
17. The method of claim 13 , wherein the eye/head activity variable comprises positions of two eyes with respect to the video display and the eye positions are used to calculate the pan, tilt and/or roll angle of the user's head.
18. The method of claim 13 , wherein the eye/head activity variable comprises the at least one eye's horizontal, vertical or longitudinal position with respect to the video display and the at least one eye's positions are used to represent the horizontal, vertical or longitudinal position of the user's head.
19. The method of claim 13 , wherein the camera control setting comprises a pan or tilt angle of the video camera.
20. The method of claim 13 , wherein the camera control setting comprises a horizontal, vertical, or longitudinal position of the video camera.
21. The method of claim 13 , wherein the camera control setting comprises a zoom of the video camera.
22. The method of claim 13 , wherein the camera control setting comprises a focus of the video camera.
23. The method of claim 13 , wherein the camera control setting comprises an illumination level for a scene being viewed by the video camera.
24. The method of claim 13 , wherein the camera control setting comprises an iris diameter of the video camera.
25. A computer program product, comprising a tangible computer-readable storage medium whose contents include a program with instructions being executed on a processor so as to perform a method for remotely controlling a setting of a video camera, the method comprising:
providing a system, wherein the system comprises distinct software modules, and wherein the distinct software modules comprise a video display module, an eye tracking module, and a camera control module;
displaying video from a video camera to a user on a video display using the video display module;
imaging at least one eye of the user with an eyetracker as the user is observing the video display, measuring a change in an image of the at least one eye of the user over time, and calculating an eye/head activity variable from the measured change in the image using the eye tracking module; and
translating the eye/head activity variable into a camera control setting, and instructing an actuator connected to the video camera to apply the camera control setting to the video camera using the camera control module.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/817,604 US20100321482A1 (en) | 2009-06-17 | 2010-06-17 | Eye/head controls for camera pointing |
US14/102,813 US20140092268A1 (en) | 2009-06-17 | 2013-12-11 | Eye/Head Controls for Camera Pointing |
US15/045,566 US20160165130A1 (en) | 2009-06-17 | 2016-02-17 | Eye/Head Controls for Camera Pointing |
US15/383,071 US20170099433A1 (en) | 2009-06-17 | 2016-12-19 | Eye/Head Controls for Camera Pointing |
US15/888,229 US20180160035A1 (en) | 2009-06-17 | 2018-02-05 | Robot System for Controlling a Robot in a Tele-Operation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18786409P | 2009-06-17 | 2009-06-17 | |
US12/817,604 US20100321482A1 (en) | 2009-06-17 | 2010-06-17 | Eye/head controls for camera pointing |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/102,813 Continuation US20140092268A1 (en) | 2009-06-17 | 2013-12-11 | Eye/Head Controls for Camera Pointing |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100321482A1 true US20100321482A1 (en) | 2010-12-23 |
Family
ID=43353975
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/817,569 Active 2031-05-12 US8320623B2 (en) | 2009-06-17 | 2010-06-17 | Systems and methods for 3-D target location |
US12/817,604 Abandoned US20100321482A1 (en) | 2009-06-17 | 2010-06-17 | Eye/head controls for camera pointing |
US14/102,813 Abandoned US20140092268A1 (en) | 2009-06-17 | 2013-12-11 | Eye/Head Controls for Camera Pointing |
US15/045,566 Abandoned US20160165130A1 (en) | 2009-06-17 | 2016-02-17 | Eye/Head Controls for Camera Pointing |
US15/383,071 Abandoned US20170099433A1 (en) | 2009-06-17 | 2016-12-19 | Eye/Head Controls for Camera Pointing |
US15/888,229 Abandoned US20180160035A1 (en) | 2009-06-17 | 2018-02-05 | Robot System for Controlling a Robot in a Tele-Operation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/817,569 Active 2031-05-12 US8320623B2 (en) | 2009-06-17 | 2010-06-17 | Systems and methods for 3-D target location |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/102,813 Abandoned US20140092268A1 (en) | 2009-06-17 | 2013-12-11 | Eye/Head Controls for Camera Pointing |
US15/045,566 Abandoned US20160165130A1 (en) | 2009-06-17 | 2016-02-17 | Eye/Head Controls for Camera Pointing |
US15/383,071 Abandoned US20170099433A1 (en) | 2009-06-17 | 2016-12-19 | Eye/Head Controls for Camera Pointing |
US15/888,229 Abandoned US20180160035A1 (en) | 2009-06-17 | 2018-02-05 | Robot System for Controlling a Robot in a Tele-Operation |
Country Status (1)
Country | Link |
---|---|
US (6) | US8320623B2 (en) |
Cited By (85)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090143912A1 (en) * | 2007-12-04 | 2009-06-04 | Industrial Technology Research Institute | System and method for graphically allocating robot's working space |
US20100322479A1 (en) * | 2009-06-17 | 2010-12-23 | Lc Technologies Inc. | Systems and methods for 3-d target location |
US20120002063A1 (en) * | 2010-06-30 | 2012-01-05 | Hon Hai Precision Industry Co., Ltd. | Camera adjusting system and method |
US20120109923A1 (en) * | 2010-11-03 | 2012-05-03 | Research In Motion Limited | System and method for displaying search results on electronic devices |
WO2012105909A1 (en) * | 2011-02-01 | 2012-08-09 | National University Of Singapore | An imaging system and method |
US20130021448A1 (en) * | 2011-02-24 | 2013-01-24 | Multiple Interocular 3-D, L.L.C. | Stereoscopic three-dimensional camera rigs |
US20130162675A1 (en) * | 2011-12-22 | 2013-06-27 | Canon Kabushiki Kaisha | Information processing apparatus |
US20140063198A1 (en) * | 2012-08-30 | 2014-03-06 | Microsoft Corporation | Changing perspectives of a microscopic-image device based on a viewer' s perspective |
EP2720464A1 (en) * | 2012-10-11 | 2014-04-16 | Sony Mobile Communications AB | Generating image information |
US20140187322A1 (en) * | 2010-06-18 | 2014-07-03 | Alexander Luchinskiy | Method of Interaction with a Computer, Smartphone or Computer Game |
US20140232648A1 (en) * | 2011-10-17 | 2014-08-21 | Korea Institute Of Science And Technology | Display apparatus and contents display method |
US20140327754A1 (en) * | 2013-05-06 | 2014-11-06 | Delta ID Inc. | Method and apparatus for compensating for sub-optimal orientation of an iris imaging apparatus |
CN104811609A (en) * | 2015-03-03 | 2015-07-29 | 小米科技有限责任公司 | Photographing parameter adjustment method and device |
US9256089B2 (en) | 2012-06-15 | 2016-02-09 | Microsoft Technology Licensing, Llc | Object-detecting backlight unit |
US20160048964A1 (en) * | 2014-08-13 | 2016-02-18 | Empire Technology Development Llc | Scene analysis for improved eye tracking |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9269330B2 (en) * | 2014-01-02 | 2016-02-23 | Quanta Computer Inc. | Head mounted display apparatus and backlight adjustment method thereof |
US20160063347A1 (en) * | 2014-08-27 | 2016-03-03 | Hyundai Motor Company | System for capturing pupil and method thereof |
US9304949B2 (en) | 2012-03-02 | 2016-04-05 | Microsoft Technology Licensing, Llc | Sensing user input at display area edge |
US9354748B2 (en) | 2012-02-13 | 2016-05-31 | Microsoft Technology Licensing, Llc | Optical stylus interaction |
CN105635582A (en) * | 2016-01-27 | 2016-06-01 | 惠州Tcl移动通信有限公司 | Photographing control method and photographing control terminal based on eye feature recognition |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US20160297362A1 (en) * | 2015-04-09 | 2016-10-13 | Ford Global Technologies, Llc | Vehicle exterior side-camera systems and methods |
US9497501B2 (en) | 2011-12-06 | 2016-11-15 | Microsoft Technology Licensing, Llc | Augmented reality virtual monitor |
WO2016201015A1 (en) * | 2015-06-12 | 2016-12-15 | Microsoft Technology Licensing, Llc | Display for stereoscopic augmented reality |
US9563270B2 (en) * | 2014-12-26 | 2017-02-07 | Microsoft Technology Licensing, Llc | Head-based targeting with pitch amplification |
CN106484124A (en) * | 2016-11-14 | 2017-03-08 | 北京英贝思科技有限公司 | A kind of sight control method |
US20170127011A1 (en) * | 2014-06-10 | 2017-05-04 | Socionext Inc. | Semiconductor integrated circuit, display device provided with same, and control method |
US9678542B2 (en) | 2012-03-02 | 2017-06-13 | Microsoft Technology Licensing, Llc | Multiple position input device cover |
US20170211931A1 (en) * | 2014-08-06 | 2017-07-27 | Hand Held Products, Inc. | Dimensioning system with guided alignment |
US9752864B2 (en) | 2014-10-21 | 2017-09-05 | Hand Held Products, Inc. | Handheld dimensioning system with feedback |
CN107147883A (en) * | 2017-06-09 | 2017-09-08 | 中国科学院心理研究所 | A kind of remote shooting system based on the dynamic control of head |
US9762793B2 (en) | 2014-10-21 | 2017-09-12 | Hand Held Products, Inc. | System and method for dimensioning |
US9779546B2 (en) | 2012-05-04 | 2017-10-03 | Intermec Ip Corp. | Volume dimensioning systems and methods |
US9779276B2 (en) | 2014-10-10 | 2017-10-03 | Hand Held Products, Inc. | Depth sensor based auto-focus system for an indicia scanner |
US9786101B2 (en) | 2015-05-19 | 2017-10-10 | Hand Held Products, Inc. | Evaluating image values |
US9784566B2 (en) | 2013-03-13 | 2017-10-10 | Intermec Ip Corp. | Systems and methods for enhancing dimensioning |
US9835486B2 (en) | 2015-07-07 | 2017-12-05 | Hand Held Products, Inc. | Mobile dimensioner apparatus for use in commerce |
US9841311B2 (en) | 2012-10-16 | 2017-12-12 | Hand Held Products, Inc. | Dimensioning system |
US9857167B2 (en) | 2015-06-23 | 2018-01-02 | Hand Held Products, Inc. | Dual-projector three-dimensional scanner |
US9897434B2 (en) | 2014-10-21 | 2018-02-20 | Hand Held Products, Inc. | Handheld dimensioning system with measurement-conformance feedback |
US20180074581A1 (en) * | 2015-03-23 | 2018-03-15 | Haim Melman | Eye Tracking System |
US9939259B2 (en) | 2012-10-04 | 2018-04-10 | Hand Held Products, Inc. | Measuring object dimensions using mobile computer |
US9940721B2 (en) | 2016-06-10 | 2018-04-10 | Hand Held Products, Inc. | Scene change detection in a dimensioner |
US10007858B2 (en) | 2012-05-15 | 2018-06-26 | Honeywell International Inc. | Terminals and methods for dimensioning objects |
US10025314B2 (en) | 2016-01-27 | 2018-07-17 | Hand Held Products, Inc. | Vehicle positioning and object avoidance |
US10060729B2 (en) | 2014-10-21 | 2018-08-28 | Hand Held Products, Inc. | Handheld dimensioner with data-quality indication |
US10066982B2 (en) | 2015-06-16 | 2018-09-04 | Hand Held Products, Inc. | Calibrating a volume dimensioner |
CN108632563A (en) * | 2017-03-18 | 2018-10-09 | Jerry L. Conway | Dynamic visual telephone system and its application method |
US10094650B2 (en) | 2015-07-16 | 2018-10-09 | Hand Held Products, Inc. | Dimensioning and imaging items |
US10134120B2 (en) | 2014-10-10 | 2018-11-20 | Hand Held Products, Inc. | Image-stitching for dimensioning |
US10140724B2 (en) | 2009-01-12 | 2018-11-27 | Intermec Ip Corporation | Semi-automatic dimensioning with imager on a portable device |
US10157313B1 (en) * | 2014-09-19 | 2018-12-18 | Colorado School Of Mines | 3D gaze control of robot for navigation and object manipulation |
US10163216B2 (en) | 2016-06-15 | 2018-12-25 | Hand Held Products, Inc. | Automatic mode switching in a volume dimensioner |
US10203402B2 (en) | 2013-06-07 | 2019-02-12 | Hand Held Products, Inc. | Method of error correction for 3D imaging device |
US10225544B2 (en) | 2015-11-19 | 2019-03-05 | Hand Held Products, Inc. | High resolution dot pattern |
CN109475387A (en) * | 2016-06-03 | 2019-03-15 | 柯惠Lp公司 | For controlling system, method and the computer-readable storage medium of the aspect of robotic surgical device and viewer's adaptive three-dimensional display |
US10249030B2 (en) | 2015-10-30 | 2019-04-02 | Hand Held Products, Inc. | Image transformation for indicia reading |
US10247547B2 (en) | 2015-06-23 | 2019-04-02 | Hand Held Products, Inc. | Optical pattern projector |
US10281979B2 (en) * | 2014-08-21 | 2019-05-07 | Canon Kabushiki Kaisha | Information processing system, information processing method, and storage medium |
US10321127B2 (en) | 2012-08-20 | 2019-06-11 | Intermec Ip Corp. | Volume dimensioning system calibration systems and methods |
US10339352B2 (en) | 2016-06-03 | 2019-07-02 | Hand Held Products, Inc. | Wearable metrological apparatus |
US20190258880A1 (en) * | 2014-06-13 | 2019-08-22 | B/E Aerospace, Inc. | Apparatus and Method for Providing Attitude Reference for Vehicle Passengers |
US10393506B2 (en) | 2015-07-15 | 2019-08-27 | Hand Held Products, Inc. | Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard |
US10585485B1 (en) | 2014-11-10 | 2020-03-10 | Amazon Technologies, Inc. | Controlling content zoom level based on user head movement |
US10584962B2 (en) | 2018-05-01 | 2020-03-10 | Hand Held Products, Inc | System and method for validating physical-item security |
US10621398B2 (en) | 2018-03-14 | 2020-04-14 | Hand Held Products, Inc. | Methods and systems for operating an indicia scanner |
US10678743B2 (en) | 2012-05-14 | 2020-06-09 | Microsoft Technology Licensing, Llc | System and method for accessory device architecture that passes via intermediate processor a descriptor when processing in a low power state |
US10733748B2 (en) | 2017-07-24 | 2020-08-04 | Hand Held Products, Inc. | Dual-pattern optical 3D dimensioning |
US10775165B2 (en) | 2014-10-10 | 2020-09-15 | Hand Held Products, Inc. | Methods for improving the accuracy of dimensioning-system measurements |
US10783835B2 (en) * | 2016-03-11 | 2020-09-22 | Lenovo (Singapore) Pte. Ltd. | Automatic control of display brightness |
JP2020202499A (en) * | 2019-06-11 | 2020-12-17 | 国立大学法人静岡大学 | Image observation system |
US10909708B2 (en) | 2016-12-09 | 2021-02-02 | Hand Held Products, Inc. | Calibrating a dimensioner using ratios of measurable parameters of optically-perceptible geometric elements |
CN112423942A (en) * | 2018-09-03 | 2021-02-26 | 川崎重工业株式会社 | Robot system |
WO2021067044A1 (en) * | 2019-10-03 | 2021-04-08 | Facebook Technologies, Llc | Systems and methods for video communication using a virtual camera |
EP2995075B1 (en) * | 2013-05-10 | 2021-04-28 | Samsung Electronics Co., Ltd. | Display apparatus with a plurality of screens and method of controlling the same |
US11029762B2 (en) | 2015-07-16 | 2021-06-08 | Hand Held Products, Inc. | Adjusting dimensioning results using augmented reality |
US11047672B2 (en) | 2017-03-28 | 2021-06-29 | Hand Held Products, Inc. | System for optically dimensioning |
US11058504B2 (en) | 2016-06-03 | 2021-07-13 | Covidien Lp | Control arm assemblies for robotic surgical systems |
CN113347405A (en) * | 2015-01-28 | 2021-09-03 | 纳维曼德资本有限责任公司 | Scaling related method and apparatus |
CN114020156A (en) * | 2015-09-24 | 2022-02-08 | 托比股份公司 | Wearable device capable of eye tracking |
US20220201194A1 (en) * | 2020-12-23 | 2022-06-23 | Yokogawa Electric Corporation | Apparatus, system, method and storage medium |
DE102021122543A1 (en) | 2021-08-31 | 2023-03-02 | Bayerische Motoren Werke Aktiengesellschaft | Driving assistance system and driving assistance method for a vehicle |
US11639846B2 (en) | 2019-09-27 | 2023-05-02 | Honeywell International Inc. | Dual-pattern optical 3D dimensioning |
WO2024137749A1 (en) * | 2022-12-22 | 2024-06-27 | Apple Inc. | Focus adjustments based on attention |
Families Citing this family (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10039445B1 (en) | 2004-04-01 | 2018-08-07 | Google Llc | Biosensors, communicators, and controllers monitoring eye movement and methods for using them |
FR2906899B1 (en) * | 2006-10-05 | 2009-01-16 | Essilor Int | DISPLAY DEVICE FOR STEREOSCOPIC VISUALIZATION. |
US11273344B2 (en) | 2007-09-01 | 2022-03-15 | Engineering Acoustics Incorporated | Multimodal sensory feedback system and method for treatment and assessment of disequilibrium, balance and motion disorders |
US10258259B1 (en) * | 2008-08-29 | 2019-04-16 | Gary Zets | Multimodal sensory feedback system and method for treatment and assessment of disequilibrium, balance and motion disorders |
CN102088552A (en) * | 2009-12-03 | 2011-06-08 | 鸿富锦精密工业(深圳)有限公司 | Adjusting system and method for PTZ (Pan/Tilt/Zoom) camera |
CN102088551A (en) * | 2009-12-03 | 2011-06-08 | 鸿富锦精密工业(深圳)有限公司 | Camera adjustment system and method |
US8687070B2 (en) * | 2009-12-22 | 2014-04-01 | Apple Inc. | Image capture device having tilt and/or perspective correction |
US20120023161A1 (en) * | 2010-07-21 | 2012-01-26 | Sk Telecom Co., Ltd. | System and method for providing multimedia service in a communication system |
US20120093358A1 (en) * | 2010-10-15 | 2012-04-19 | Visteon Global Technologies, Inc. | Control of rear-view and side-view mirrors and camera-coordinated displays via eye gaze |
US8780179B2 (en) * | 2011-05-10 | 2014-07-15 | Southwest Research Institute | Robot vision with three dimensional thermal imaging |
US8885877B2 (en) | 2011-05-20 | 2014-11-11 | Eyefluence, Inc. | Systems and methods for identifying gaze tracking scene reference locations |
US8911087B2 (en) | 2011-05-20 | 2014-12-16 | Eyefluence, Inc. | Systems and methods for measuring reactions of head, eyes, eyelids and pupils |
US8929589B2 (en) | 2011-11-07 | 2015-01-06 | Eyefluence, Inc. | Systems and methods for high-resolution gaze tracking |
US20130271575A1 (en) * | 2012-04-11 | 2013-10-17 | Zspace, Inc. | Dynamically Controlling an Imaging Microscopy System |
US9423870B2 (en) * | 2012-05-08 | 2016-08-23 | Google Inc. | Input determination method |
US9674436B2 (en) * | 2012-06-18 | 2017-06-06 | Microsoft Technology Licensing, Llc | Selective imaging zones of an imaging sensor |
US9398229B2 (en) | 2012-06-18 | 2016-07-19 | Microsoft Technology Licensing, Llc | Selective illumination of a region within a field of view |
KR101970197B1 (en) * | 2012-10-29 | 2019-04-18 | 에스케이 텔레콤주식회사 | Method for Controlling Multiple Camera, Apparatus therefor |
US10116911B2 (en) * | 2012-12-18 | 2018-10-30 | Qualcomm Incorporated | Realistic point of view video method and apparatus |
US20150092064A1 (en) * | 2013-09-29 | 2015-04-02 | Carlo Antonio Sechi | Recording Device Positioner Based on Relative Head Rotation |
US9285872B1 (en) * | 2013-12-12 | 2016-03-15 | Google Inc. | Using head gesture and eye position to wake a head mounted device |
US20150228119A1 (en) | 2014-02-11 | 2015-08-13 | Osterhout Group, Inc. | Spatial location presentation in head worn computing |
US9753288B2 (en) | 2014-01-21 | 2017-09-05 | Osterhout Group, Inc. | See-through computer display systems |
US9766463B2 (en) | 2014-01-21 | 2017-09-19 | Osterhout Group, Inc. | See-through computer display systems |
KR20150088355A (en) * | 2014-01-23 | 2015-08-03 | 한국전자통신연구원 | Apparatus and method for stereo light-field input/output supporting eye-ball movement |
US9762895B1 (en) * | 2014-03-11 | 2017-09-12 | Rockwell Collins, Inc. | Dual simultaneous image presentation for a three-dimensional aviation display |
US10334150B2 (en) | 2014-05-14 | 2019-06-25 | Hanwha Aerospace Co., Ltd. | Camera system and method of tracking object using the same |
US9704036B2 (en) * | 2014-05-30 | 2017-07-11 | Lc Technologies, Inc. | Eyetracker mounts for use with handheld devices |
US9798383B2 (en) * | 2014-09-19 | 2017-10-24 | Intel Corporation | Facilitating dynamic eye torsion-based eye tracking on computing devices |
US9928422B2 (en) | 2014-10-15 | 2018-03-27 | Samsung Electronics Co., Ltd. | User terminal apparatus and IRIS recognition method thereof |
US10567641B1 (en) * | 2015-01-19 | 2020-02-18 | Devon Rueckner | Gaze-directed photography |
US20160239985A1 (en) | 2015-02-17 | 2016-08-18 | Osterhout Group, Inc. | See-through computer display systems |
US10878775B2 (en) | 2015-02-17 | 2020-12-29 | Mentor Acquisition One, Llc | See-through computer display systems |
KR20160133328A (en) * | 2015-05-12 | 2016-11-22 | 삼성전자주식회사 | Remote control method and device using wearable device |
US20170064209A1 (en) * | 2015-08-26 | 2017-03-02 | David Cohen | Wearable point of regard zoom camera |
US10523923B2 (en) | 2015-12-28 | 2019-12-31 | Microsoft Technology Licensing, Llc | Synchronizing active illumination cameras |
US20170195543A1 (en) * | 2015-12-31 | 2017-07-06 | Skytraq Technology, Inc. | Remote control between mobile communication devices for capturing images |
DE102016101967B9 (en) * | 2016-02-04 | 2022-06-30 | Carl Zeiss Microscopy Gmbh | Methods and devices for stereo imaging |
US10591728B2 (en) | 2016-03-02 | 2020-03-17 | Mentor Acquisition One, Llc | Optical systems for head-worn computers |
US10667981B2 (en) | 2016-02-29 | 2020-06-02 | Mentor Acquisition One, Llc | Reading assistance system for visually impaired |
US10082866B2 (en) * | 2016-04-12 | 2018-09-25 | International Business Machines Corporation | Gaze point detection using dynamic facial reference points under varying lighting conditions |
CN106488093A (en) * | 2016-09-23 | 2017-03-08 | 浙江宇视科技有限公司 | A kind of video signal processing method and binocular camera |
US10918445B2 (en) * | 2016-12-19 | 2021-02-16 | Ethicon Llc | Surgical system with augmented reality display |
CN111182847B (en) * | 2017-09-05 | 2023-09-26 | 柯惠Lp公司 | Robotic surgical system and method and computer readable medium for control thereof |
CN108592865A (en) * | 2018-04-28 | 2018-09-28 | 京东方科技集团股份有限公司 | Geometric measurement method and its device, AR equipment based on AR equipment |
CN110799921A (en) * | 2018-07-18 | 2020-02-14 | 深圳市大疆创新科技有限公司 | Shooting method and device and unmanned aerial vehicle |
KR102224157B1 (en) * | 2019-02-20 | 2021-03-08 | 엘지전자 주식회사 | Moving robot system comprising moving robot and charging station |
WO2020209491A1 (en) * | 2019-04-11 | 2020-10-15 | Samsung Electronics Co., Ltd. | Head-mounted display device and operating method of the same |
KR102245186B1 (en) * | 2019-05-27 | 2021-04-27 | 아주대학교산학협력단 | Endoscope control apparatus, control method, and control system using the same |
CN110286754B (en) * | 2019-06-11 | 2022-06-24 | Oppo广东移动通信有限公司 | Projection method based on eyeball tracking and related equipment |
US11376733B2 (en) * | 2019-06-11 | 2022-07-05 | Facebook Technologies, Llc | Mechanical eyeball for animatronic devices |
US11023095B2 (en) * | 2019-07-12 | 2021-06-01 | Cinemoi North America, LLC | Providing a first person view in a virtual world using a lens |
US11850730B2 (en) | 2019-07-17 | 2023-12-26 | Asensus Surgical Us, Inc. | Double eye tracker configuration for a robot-assisted surgical system |
WO2021133186A1 (en) * | 2019-12-23 | 2021-07-01 | Federal State Autonomous Educational Institution of Higher Education "Moscow Institute of Physics and Technology (National Research University)" | Method for controlling robotic manipulator |
EP3992689A1 (en) * | 2020-10-28 | 2022-05-04 | Leica Instruments (Singapore) Pte. Ltd. | Control system for adapting illumination intensity in microscopy, microscopy arrangement and corresponding method |
CN113359996A (en) * | 2021-08-09 | 2021-09-07 | 季华实验室 | Life auxiliary robot control system, method and device and electronic equipment |
US11798204B2 (en) * | 2022-03-02 | 2023-10-24 | Qualcomm Incorporated | Systems and methods of image processing based on gaze detection |
US20230298197A1 (en) * | 2022-03-17 | 2023-09-21 | Motorola Mobility Llc | Electronic device with gaze-based autofocus of camera during video rendition of scene |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4945367A (en) * | 1988-03-02 | 1990-07-31 | Blackshear David M | Surveillance camera system |
US20040105010A1 (en) * | 2000-06-30 | 2004-06-03 | Karl Osen | Computer aided capturing system |
US20080136916A1 (en) * | 2005-01-26 | 2008-06-12 | Robin Quincey Wolff | Eye tracker/head tracker/camera tracker controlled camera/weapon positioner control system |
US20080297590A1 (en) * | 2007-05-31 | 2008-12-04 | Barber Fred | 3-d robotic vision and vision control system |
US7686451B2 (en) * | 2005-04-04 | 2010-03-30 | Lc Technologies, Inc. | Explicit raytracing for gimbal-based gazepoint trackers |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61101883A (en) * | 1984-10-25 | 1986-05-20 | Canon Inc | Convergence angle matching device |
GB8430980D0 (en) * | 1984-12-07 | 1985-01-16 | Robinson M | Generation of apparently three-dimensional images |
US5047700A (en) * | 1988-03-23 | 1991-09-10 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Universal computer control system for motors |
US5142642A (en) * | 1988-08-24 | 1992-08-25 | Kabushiki Kaisha Toshiba | Stereoscopic television system |
US5175616A (en) * | 1989-08-04 | 1992-12-29 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of National Defence Of Canada | Stereoscopic video-graphic coordinate specification system |
WO1993014454A1 (en) * | 1992-01-10 | 1993-07-22 | Foster-Miller, Inc. | A sensory integrated data interface |
US6414709B1 (en) * | 1994-11-03 | 2002-07-02 | Synthonics Incorporated | Methods and apparatus for zooming during capture and reproduction of 3-dimensional images |
US5742264A (en) * | 1995-01-24 | 1998-04-21 | Matsushita Electric Industrial Co., Ltd. | Head-mounted display |
US5583795A (en) * | 1995-03-17 | 1996-12-10 | The United States Of America As Represented By The Secretary Of The Army | Apparatus for measuring eye gaze and fixation duration, and method therefor |
US5649061A (en) * | 1995-05-11 | 1997-07-15 | The United States Of America As Represented By The Secretary Of The Army | Device and method for estimating a mental decision |
JPH099124A (en) * | 1995-06-16 | 1997-01-10 | Noriaki Konuma | Automatic control type spot light |
US5905525A (en) * | 1995-07-13 | 1999-05-18 | Minolta Co., Ltd. | Image display apparatus having a display controlled by user's head movement |
US6369952B1 (en) * | 1995-07-14 | 2002-04-09 | I-O Display Systems Llc | Head-mounted personal visual display apparatus with image generator and holder |
US5726916A (en) * | 1996-06-27 | 1998-03-10 | The United States Of America As Represented By The Secretary Of The Army | Method and apparatus for determining ocular gaze point of regard and fixation duration |
EP1978418A2 (en) * | 1996-12-06 | 2008-10-08 | Nippon Telegraph And Telephone Corporation | Method and system for producing computer generated holograms realizing real time holographic video production and display |
CA2255382A1 (en) * | 1997-12-05 | 1999-06-05 | Mcgill University | Stereoscopic gaze controller |
JP2002525769A (en) * | 1998-09-22 | 2002-08-13 | ヴェガ ヴィスタ インコーポレイテッド | Direct control of portable data display |
US6857741B2 (en) * | 2002-01-16 | 2005-02-22 | E-Vision, Llc | Electro-active multi-focal spectacle lens |
US6120461A (en) * | 1999-08-09 | 2000-09-19 | The United States Of America As Represented By The Secretary Of The Army | Apparatus for tracking the human eye with a retinal scanning display, and method thereof |
US6864912B1 (en) * | 1999-12-16 | 2005-03-08 | International Business Machines Corp. | Computer system providing hands free user input via optical means for navigation or zooming |
US7428001B2 (en) * | 2002-03-15 | 2008-09-23 | University Of Washington | Materials and methods for simulating focal shifts in viewers using large depth of focus displays |
US9955551B2 (en) * | 2002-07-12 | 2018-04-24 | Yechezkal Evan Spero | Detector controlled illuminating system |
SE524003C2 (en) * | 2002-11-21 | 2004-06-15 | Tobii Technology Ab | Procedure and facility for detecting and following an eye and its angle of view |
JP3984907B2 (en) * | 2002-11-29 | 2007-10-03 | キヤノン株式会社 | Image observation system |
JP2004309930A (en) * | 2003-04-09 | 2004-11-04 | Olympus Corp | Stereoscopic observation system |
US20060210111A1 (en) * | 2005-03-16 | 2006-09-21 | Dixon Cleveland | Systems and methods for eye-operated three-dimensional object location |
US7609952B2 (en) * | 2005-08-01 | 2009-10-27 | Scott Jezierski | Apparatus and method for remote viewing system |
US7744216B1 (en) * | 2006-01-06 | 2010-06-29 | Lockheed Martin Corporation | Display system intensity adjustment based on pupil dilation |
US7747068B1 (en) * | 2006-01-20 | 2010-06-29 | Andrew Paul Smyth | Systems and methods for tracking the eye |
US7878652B2 (en) * | 2006-01-24 | 2011-02-01 | University Of Tennessee Research Foundation | Adaptive photoscreening system |
ES2605367T3 (en) * | 2006-01-26 | 2017-03-14 | Nokia Technologies Oy | Eye tracking device |
US9344612B2 (en) * | 2006-02-15 | 2016-05-17 | Kenneth Ira Ritchey | Non-interference field-of-view support apparatus for a panoramic facial sensor |
DE602007001600D1 (en) * | 2006-03-23 | 2009-08-27 | Koninkl Philips Electronics Nv | HOTSPOTS FOR THE FOCUSED CONTROL OF PICTURE MANIPULATIONS |
JP5228305B2 (en) * | 2006-09-08 | 2013-07-03 | ソニー株式会社 | Display device and display method |
US8884763B2 (en) * | 2006-10-02 | 2014-11-11 | iRobot Corporation | Threat detection sensor suite |
FR2912274B1 (en) * | 2007-02-02 | 2009-10-16 | Binocle Sarl | METHOD FOR CONTROLLING A VOLUNTARY OCULAR SIGNAL, IN PARTICULAR FOR SHOOTING |
US7903166B2 (en) * | 2007-02-21 | 2011-03-08 | Sharp Laboratories Of America, Inc. | Methods and systems for display viewer motion compensation based on user image data |
ES2463716T3 (en) * | 2007-05-22 | 2014-05-29 | Koninklijke Philips N.V. | Remote lighting control |
US20120185115A1 (en) * | 2007-10-05 | 2012-07-19 | Jason Dean | Laserbot: programmable robotic apparatus with laser |
US8155479B2 (en) * | 2008-03-28 | 2012-04-10 | Intuitive Surgical Operations Inc. | Automated panning and digital zooming for robotic surgical systems |
JP2010176170A (en) * | 2009-01-27 | 2010-08-12 | Sony Ericsson Mobile Communications Japan Inc | Display apparatus, display control method, and display control program |
US8320623B2 (en) * | 2009-06-17 | 2012-11-27 | Lc Technologies, Inc. | Systems and methods for 3-D target location |
2010
- 2010-06-17 US US12/817,569 patent/US8320623B2/en active Active
- 2010-06-17 US US12/817,604 patent/US20100321482A1/en not_active Abandoned

2013
- 2013-12-11 US US14/102,813 patent/US20140092268A1/en not_active Abandoned

2016
- 2016-02-17 US US15/045,566 patent/US20160165130A1/en not_active Abandoned
- 2016-12-19 US US15/383,071 patent/US20170099433A1/en not_active Abandoned

2018
- 2018-02-05 US US15/888,229 patent/US20180160035A1/en not_active Abandoned
Cited By (144)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8160746B2 (en) * | 2007-12-04 | 2012-04-17 | Industrial Technology Research Institute | System and method for graphically allocating robot's working space |
US20090143912A1 (en) * | 2007-12-04 | 2009-06-04 | Industrial Technology Research Institute | System and method for graphically allocating robot's working space |
US10845184B2 (en) | 2009-01-12 | 2020-11-24 | Intermec Ip Corporation | Semi-automatic dimensioning with imager on a portable device |
US10140724B2 (en) | 2009-01-12 | 2018-11-27 | Intermec Ip Corporation | Semi-automatic dimensioning with imager on a portable device |
US8320623B2 (en) * | 2009-06-17 | 2012-11-27 | Lc Technologies, Inc. | Systems and methods for 3-D target location |
US20100322479A1 (en) * | 2009-06-17 | 2010-12-23 | Lc Technologies Inc. | Systems and methods for 3-d target location |
US20140187322A1 (en) * | 2010-06-18 | 2014-07-03 | Alexander Luchinskiy | Method of Interaction with a Computer, Smartphone or Computer Game |
US20120002063A1 (en) * | 2010-06-30 | 2012-01-05 | Hon Hai Precision Industry Co., Ltd. | Camera adjusting system and method |
US8849845B2 (en) * | 2010-11-03 | 2014-09-30 | Blackberry Limited | System and method for displaying search results on electronic devices |
US20120109923A1 (en) * | 2010-11-03 | 2012-05-03 | Research In Motion Limited | System and method for displaying search results on electronic devices |
US9392258B2 (en) * | 2011-02-01 | 2016-07-12 | National University Of Singapore | Imaging system and method |
WO2012105909A1 (en) * | 2011-02-01 | 2012-08-09 | National University Of Singapore | An imaging system and method |
US20130307935A1 (en) * | 2011-02-01 | 2013-11-21 | National University Of Singapore | Imaging system and method |
US20130021448A1 (en) * | 2011-02-24 | 2013-01-24 | Multiple Interocular 3-D, L.L.C. | Stereoscopic three-dimensional camera rigs |
US20140232648A1 (en) * | 2011-10-17 | 2014-08-21 | Korea Institute Of Science And Technology | Display apparatus and contents display method |
US9594435B2 (en) * | 2011-10-17 | 2017-03-14 | Korea Institute Of Science And Technology | Display apparatus and contents display method |
US9497501B2 (en) | 2011-12-06 | 2016-11-15 | Microsoft Technology Licensing, Llc | Augmented reality virtual monitor |
US10497175B2 (en) | 2011-12-06 | 2019-12-03 | Microsoft Technology Licensing, Llc | Augmented reality virtual monitor |
US9558719B2 (en) * | 2011-12-22 | 2017-01-31 | Canon Kabushiki Kaisha | Information processing apparatus |
US20130162675A1 (en) * | 2011-12-22 | 2013-06-27 | Canon Kabushiki Kaisha | Information processing apparatus |
US9354748B2 (en) | 2012-02-13 | 2016-05-31 | Microsoft Technology Licensing, Llc | Optical stylus interaction |
US9904327B2 (en) | 2012-03-02 | 2018-02-27 | Microsoft Technology Licensing, Llc | Flexible hinge and removable attachment |
US9619071B2 (en) | 2012-03-02 | 2017-04-11 | Microsoft Technology Licensing, Llc | Computing device and an apparatus having sensors configured for measuring spatial information indicative of a position of the computing devices |
US9304949B2 (en) | 2012-03-02 | 2016-04-05 | Microsoft Technology Licensing, Llc | Sensing user input at display area edge |
US9678542B2 (en) | 2012-03-02 | 2017-06-13 | Microsoft Technology Licensing, Llc | Multiple position input device cover |
US10963087B2 (en) | 2012-03-02 | 2021-03-30 | Microsoft Technology Licensing, Llc | Pressure sensitive keys |
US10013030B2 (en) | 2012-03-02 | 2018-07-03 | Microsoft Technology Licensing, Llc | Multiple position input device cover |
US10467806B2 (en) | 2012-05-04 | 2019-11-05 | Intermec Ip Corp. | Volume dimensioning systems and methods |
US9779546B2 (en) | 2012-05-04 | 2017-10-03 | Intermec Ip Corp. | Volume dimensioning systems and methods |
US10678743B2 (en) | 2012-05-14 | 2020-06-09 | Microsoft Technology Licensing, Llc | System and method for accessory device architecture that passes via intermediate processor a descriptor when processing in a low power state |
US10007858B2 (en) | 2012-05-15 | 2018-06-26 | Honeywell International Inc. | Terminals and methods for dimensioning objects |
US10635922B2 (en) | 2012-05-15 | 2020-04-28 | Hand Held Products, Inc. | Terminals and methods for dimensioning objects |
US9256089B2 (en) | 2012-06-15 | 2016-02-09 | Microsoft Technology Licensing, Llc | Object-detecting backlight unit |
US10321127B2 (en) | 2012-08-20 | 2019-06-11 | Intermec Ip Corp. | Volume dimensioning system calibration systems and methods |
US10805603B2 (en) | 2012-08-20 | 2020-10-13 | Intermec Ip Corp. | Volume dimensioning system calibration systems and methods |
US20140063198A1 (en) * | 2012-08-30 | 2014-03-06 | Microsoft Corporation | Changing perspectives of a microscopic-image device based on a viewer's perspective |
US9939259B2 (en) | 2012-10-04 | 2018-04-10 | Hand Held Products, Inc. | Measuring object dimensions using mobile computer |
EP2720464A1 (en) * | 2012-10-11 | 2014-04-16 | Sony Mobile Communications AB | Generating image information |
US10908013B2 (en) | 2012-10-16 | 2021-02-02 | Hand Held Products, Inc. | Dimensioning system |
US9841311B2 (en) | 2012-10-16 | 2017-12-12 | Hand Held Products, Inc. | Dimensioning system |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9784566B2 (en) | 2013-03-13 | 2017-10-10 | Intermec Ip Corp. | Systems and methods for enhancing dimensioning |
US20140327754A1 (en) * | 2013-05-06 | 2014-11-06 | Delta ID Inc. | Method and apparatus for compensating for sub-optimal orientation of an iris imaging apparatus |
EP2995075B1 (en) * | 2013-05-10 | 2021-04-28 | Samsung Electronics Co., Ltd. | Display apparatus with a plurality of screens and method of controlling the same |
US10203402B2 (en) | 2013-06-07 | 2019-02-12 | Hand Held Products, Inc. | Method of error correction for 3D imaging device |
US10228452B2 (en) | 2013-06-07 | 2019-03-12 | Hand Held Products, Inc. | Method of error correction for 3D imaging device |
US9269330B2 (en) * | 2014-01-02 | 2016-02-23 | Quanta Computer Inc. | Head mounted display apparatus and backlight adjustment method thereof |
US10855946B2 (en) * | 2014-06-10 | 2020-12-01 | Socionext Inc. | Semiconductor integrated circuit, display device provided with same, and control method |
US20170127011A1 (en) * | 2014-06-10 | 2017-05-04 | Socionext Inc. | Semiconductor integrated circuit, display device provided with same, and control method |
US20190258880A1 (en) * | 2014-06-13 | 2019-08-22 | B/E Aerospace, Inc. | Apparatus and Method for Providing Attitude Reference for Vehicle Passengers |
US10949689B2 (en) * | 2014-06-13 | 2021-03-16 | B/E Aerospace, Inc. | Apparatus and method for providing attitude reference for vehicle passengers |
US9976848B2 (en) * | 2014-08-06 | 2018-05-22 | Hand Held Products, Inc. | Dimensioning system with guided alignment |
US9823059B2 (en) | 2014-08-06 | 2017-11-21 | Hand Held Products, Inc. | Dimensioning system with guided alignment |
US20170211931A1 (en) * | 2014-08-06 | 2017-07-27 | Hand Held Products, Inc. | Dimensioning system with guided alignment |
US10240914B2 (en) * | 2014-08-06 | 2019-03-26 | Hand Held Products, Inc. | Dimensioning system with guided alignment |
CN105373218A (en) * | 2014-08-13 | 2016-03-02 | 英派尔科技开发有限公司 | Scene analysis for improved eye tracking |
US10394318B2 (en) * | 2014-08-13 | 2019-08-27 | Empire Technology Development Llc | Scene analysis for improved eye tracking |
CN109062415A (en) * | 2014-08-13 | 2018-12-21 | 英派尔科技开发有限公司 | For improving the scene analysis of eyes tracking |
US9489739B2 (en) * | 2014-08-13 | 2016-11-08 | Empire Technology Development Llc | Scene analysis for improved eye tracking |
US20160048964A1 (en) * | 2014-08-13 | 2016-02-18 | Empire Technology Development Llc | Scene analysis for improved eye tracking |
US10281979B2 (en) * | 2014-08-21 | 2019-05-07 | Canon Kabushiki Kaisha | Information processing system, information processing method, and storage medium |
US20160063347A1 (en) * | 2014-08-27 | 2016-03-03 | Hyundai Motor Company | System for capturing pupil and method thereof |
US10108877B2 (en) * | 2014-08-27 | 2018-10-23 | Hyundai Motor Company | System for capturing pupil and method thereof |
US10682038B1 (en) | 2014-09-19 | 2020-06-16 | Colorado School Of Mines | Autonomous robotic laparoscope based on eye tracking |
US10157313B1 (en) * | 2014-09-19 | 2018-12-18 | Colorado School Of Mines | 3D gaze control of robot for navigation and object manipulation |
US10755096B2 (en) | 2014-09-19 | 2020-08-25 | Colorado School Of Mines | 3D gaze control of robot for navigation and object manipulation |
US10775165B2 (en) | 2014-10-10 | 2020-09-15 | Hand Held Products, Inc. | Methods for improving the accuracy of dimensioning-system measurements |
US10859375B2 (en) | 2014-10-10 | 2020-12-08 | Hand Held Products, Inc. | Methods for improving the accuracy of dimensioning-system measurements |
US9779276B2 (en) | 2014-10-10 | 2017-10-03 | Hand Held Products, Inc. | Depth sensor based auto-focus system for an indicia scanner |
US10121039B2 (en) | 2014-10-10 | 2018-11-06 | Hand Held Products, Inc. | Depth sensor based auto-focus system for an indicia scanner |
US10134120B2 (en) | 2014-10-10 | 2018-11-20 | Hand Held Products, Inc. | Image-stitching for dimensioning |
US10810715B2 (en) | 2014-10-10 | 2020-10-20 | Hand Held Products, Inc | System and method for picking validation |
US10402956B2 (en) | 2014-10-10 | 2019-09-03 | Hand Held Products, Inc. | Image-stitching for dimensioning |
US10218964B2 (en) | 2014-10-21 | 2019-02-26 | Hand Held Products, Inc. | Dimensioning system with feedback |
US9897434B2 (en) | 2014-10-21 | 2018-02-20 | Hand Held Products, Inc. | Handheld dimensioning system with measurement-conformance feedback |
US9752864B2 (en) | 2014-10-21 | 2017-09-05 | Hand Held Products, Inc. | Handheld dimensioning system with feedback |
US9762793B2 (en) | 2014-10-21 | 2017-09-12 | Hand Held Products, Inc. | System and method for dimensioning |
US9826220B2 (en) | 2014-10-21 | 2017-11-21 | Hand Held Products, Inc. | Dimensioning system with feedback |
US10060729B2 (en) | 2014-10-21 | 2018-08-28 | Hand Held Products, Inc. | Handheld dimensioner with data-quality indication |
US10393508B2 (en) | 2014-10-21 | 2019-08-27 | Hand Held Products, Inc. | Handheld dimensioning system with measurement-conformance feedback |
US10585485B1 (en) | 2014-11-10 | 2020-03-10 | Amazon Technologies, Inc. | Controlling content zoom level based on user head movement |
US9563270B2 (en) * | 2014-12-26 | 2017-02-07 | Microsoft Technology Licensing, Llc | Head-based targeting with pitch amplification |
CN113347405A (en) * | 2015-01-28 | 2021-09-03 | 纳维曼德资本有限责任公司 | Scaling related method and apparatus |
US20160261792A1 (en) * | 2015-03-03 | 2016-09-08 | Xiaomi Inc. | Method and apparatus for adjusting photography parameters |
US9843716B2 (en) * | 2015-03-03 | 2017-12-12 | Xiaomi Inc. | Method and apparatus for adjusting photography parameters |
CN104811609A (en) * | 2015-03-03 | 2015-07-29 | 小米科技有限责任公司 | Photographing parameter adjustment method and device |
EP3064993A1 (en) * | 2015-03-03 | 2016-09-07 | Xiaomi Inc. | Method and apparatus for adjusting photography parameters |
US10761601B2 (en) * | 2015-03-23 | 2020-09-01 | Controlrad Systems Inc. | Eye tracking system including an eye tracker camera and a positioning camera |
US20180074581A1 (en) * | 2015-03-23 | 2018-03-15 | Haim Melman | Eye Tracking System |
US20160297362A1 (en) * | 2015-04-09 | 2016-10-13 | Ford Global Technologies, Llc | Vehicle exterior side-camera systems and methods |
US10593130B2 (en) | 2015-05-19 | 2020-03-17 | Hand Held Products, Inc. | Evaluating image values |
US11403887B2 (en) | 2015-05-19 | 2022-08-02 | Hand Held Products, Inc. | Evaluating image values |
US11906280B2 (en) | 2015-05-19 | 2024-02-20 | Hand Held Products, Inc. | Evaluating image values |
US9786101B2 (en) | 2015-05-19 | 2017-10-10 | Hand Held Products, Inc. | Evaluating image values |
CN107810634A (en) * | 2015-06-12 | 2018-03-16 | 微软技术许可有限责任公司 | Display for three-dimensional augmented reality |
WO2016201015A1 (en) * | 2015-06-12 | 2016-12-15 | Microsoft Technology Licensing, Llc | Display for stereoscopic augmented reality |
US10066982B2 (en) | 2015-06-16 | 2018-09-04 | Hand Held Products, Inc. | Calibrating a volume dimensioner |
US9857167B2 (en) | 2015-06-23 | 2018-01-02 | Hand Held Products, Inc. | Dual-projector three-dimensional scanner |
US10247547B2 (en) | 2015-06-23 | 2019-04-02 | Hand Held Products, Inc. | Optical pattern projector |
US10612958B2 (en) | 2015-07-07 | 2020-04-07 | Hand Held Products, Inc. | Mobile dimensioner apparatus to mitigate unfair charging practices in commerce |
US9835486B2 (en) | 2015-07-07 | 2017-12-05 | Hand Held Products, Inc. | Mobile dimensioner apparatus for use in commerce |
US11353319B2 (en) | 2015-07-15 | 2022-06-07 | Hand Held Products, Inc. | Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard |
US10393506B2 (en) | 2015-07-15 | 2019-08-27 | Hand Held Products, Inc. | Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard |
US11029762B2 (en) | 2015-07-16 | 2021-06-08 | Hand Held Products, Inc. | Adjusting dimensioning results using augmented reality |
US10094650B2 (en) | 2015-07-16 | 2018-10-09 | Hand Held Products, Inc. | Dimensioning and imaging items |
CN114020156A (en) * | 2015-09-24 | 2022-02-08 | 托比股份公司 | Wearable device capable of eye tracking |
US10249030B2 (en) | 2015-10-30 | 2019-04-02 | Hand Held Products, Inc. | Image transformation for indicia reading |
US10225544B2 (en) | 2015-11-19 | 2019-03-05 | Hand Held Products, Inc. | High resolution dot pattern |
US10747227B2 (en) | 2016-01-27 | 2020-08-18 | Hand Held Products, Inc. | Vehicle positioning and object avoidance |
CN105635582A (en) * | 2016-01-27 | 2016-06-01 | 惠州Tcl移动通信有限公司 | Photographing control method and photographing control terminal based on eye feature recognition |
US10025314B2 (en) | 2016-01-27 | 2018-07-17 | Hand Held Products, Inc. | Vehicle positioning and object avoidance |
US10783835B2 (en) * | 2016-03-11 | 2020-09-22 | Lenovo (Singapore) Pte. Ltd. | Automatic control of display brightness |
US11547520B2 (en) | 2016-06-03 | 2023-01-10 | Covidien Lp | Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display |
JP2019523663A (en) * | 2016-06-03 | 2019-08-29 | コヴィディエン リミテッド パートナーシップ | System, method and computer readable storage medium for controlling aspects of a robotic surgical apparatus and a viewer adapted stereoscopic display |
US10339352B2 (en) | 2016-06-03 | 2019-07-02 | Hand Held Products, Inc. | Wearable metrological apparatus |
US10872214B2 (en) | 2016-06-03 | 2020-12-22 | Hand Held Products, Inc. | Wearable metrological apparatus |
US11058504B2 (en) | 2016-06-03 | 2021-07-13 | Covidien Lp | Control arm assemblies for robotic surgical systems |
JP2022036255A (en) * | 2016-06-03 | 2022-03-04 | コヴィディエン リミテッド パートナーシップ | Systems, methods and computer-readable storage media for controlling aspects of robotic surgical device and viewer adaptive stereoscopic display |
US10980610B2 (en) * | 2016-06-03 | 2021-04-20 | Covidien Lp | Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display |
CN109475387A (en) * | 2016-06-03 | 2019-03-15 | 柯惠Lp公司 | Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display |
EP3463161A4 (en) * | 2016-06-03 | 2020-05-20 | Covidien LP | Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display |
US11653991B2 (en) | 2016-06-03 | 2023-05-23 | Covidien Lp | Control arm assemblies for robotic surgical systems |
US9940721B2 (en) | 2016-06-10 | 2018-04-10 | Hand Held Products, Inc. | Scene change detection in a dimensioner |
US10163216B2 (en) | 2016-06-15 | 2018-12-25 | Hand Held Products, Inc. | Automatic mode switching in a volume dimensioner |
US10417769B2 (en) | 2016-06-15 | 2019-09-17 | Hand Held Products, Inc. | Automatic mode switching in a volume dimensioner |
CN106484124A (en) * | 2016-11-14 | 2017-03-08 | 北京英贝思科技有限公司 | A gaze control method |
US10909708B2 (en) | 2016-12-09 | 2021-02-02 | Hand Held Products, Inc. | Calibrating a dimensioner using ratios of measurable parameters of optically-perceptible geometric elements |
CN108632563A (en) * | 2017-03-18 | 2018-10-09 | 杰瑞·L·康威 | Dynamic videophone system and method of use |
US11047672B2 (en) | 2017-03-28 | 2021-06-29 | Hand Held Products, Inc. | System for optically dimensioning |
CN107147883A (en) * | 2017-06-09 | 2017-09-08 | 中国科学院心理研究所 | A remote camera system based on head-movement control |
US10733748B2 (en) | 2017-07-24 | 2020-08-04 | Hand Held Products, Inc. | Dual-pattern optical 3D dimensioning |
US10621398B2 (en) | 2018-03-14 | 2020-04-14 | Hand Held Products, Inc. | Methods and systems for operating an indicia scanner |
US10584962B2 (en) | 2018-05-01 | 2020-03-10 | Hand Held Products, Inc | System and method for validating physical-item security |
CN112423942A (en) * | 2018-09-03 | 2021-02-26 | 川崎重工业株式会社 | Robot system |
JP2020202499A (en) * | 2019-06-11 | 2020-12-17 | 国立大学法人静岡大学 | Image observation system |
JP7356697B2 (en) | 2019-06-11 | 2023-10-05 | 国立大学法人静岡大学 | Image observation system |
US11639846B2 (en) | 2019-09-27 | 2023-05-02 | Honeywell International Inc. | Dual-pattern optical 3D dimensioning |
US11410331B2 (en) | 2019-10-03 | 2022-08-09 | Facebook Technologies, Llc | Systems and methods for video communication using a virtual camera |
WO2021067044A1 (en) * | 2019-10-03 | 2021-04-08 | Facebook Technologies, Llc | Systems and methods for video communication using a virtual camera |
EP4020970A1 (en) * | 2020-12-23 | 2022-06-29 | Yokogawa Electric Corporation | Apparatus, system, method and program |
US20220201194A1 (en) * | 2020-12-23 | 2022-06-23 | Yokogawa Electric Corporation | Apparatus, system, method and storage medium |
DE102021122543A1 (en) | 2021-08-31 | 2023-03-02 | Bayerische Motoren Werke Aktiengesellschaft | Driving assistance system and driving assistance method for a vehicle |
WO2024137749A1 (en) * | 2022-12-22 | 2024-06-27 | Apple Inc. | Focus adjustments based on attention |
Also Published As
Publication number | Publication date |
---|---|
US20100322479A1 (en) | 2010-12-23 |
US20170099433A1 (en) | 2017-04-06 |
US20160165130A1 (en) | 2016-06-09 |
US20140092268A1 (en) | 2014-04-03 |
US20180160035A1 (en) | 2018-06-07 |
US8320623B2 (en) | 2012-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180160035A1 (en) | Robot System for Controlling a Robot in a Tele-Operation | |
US11806101B2 (en) | Hand controller for robotic surgery system | |
US10591735B2 (en) | Head-mounted display device and image display system | |
US7686451B2 (en) | Explicit raytracing for gimbal-based gazepoint trackers | |
US20060210111A1 (en) | Systems and methods for eye-operated three-dimensional object location | |
KR20170136413A (en) | Medical devices, systems, and methods integrating eye gaze tracking for stereo viewer | |
WO2004029786A1 (en) | Control of robotic manipulation | |
US20220272272A1 (en) | System and method for autofocusing of a camera assembly of a surgical robotic system | |
Iovene et al. | Towards exoscope automation in neurosurgery: A markerless visual-servoing approach | |
JP3482228B2 (en) | Manipulator control system by gaze detection | |
US11576736B2 (en) | Hand controller for robotic surgery system | |
Schneider et al. | Vision system for wearable and robotic uses | |
CN117279576A (en) | System and method for auto-focusing a camera assembly of a surgical robotic system | |
CN118043765A (en) | Controlling a repositionable structural system based on a geometric relationship between an operator and a computer-aided device | |
Pérez Mejías | Design of a telepresence interface for direct teleoperation of robots: The synergy between Virtual Reality and FreeLook Control |
Villgrattner et al. | Vision system for wearable and robotic uses |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LC TECHNOLOGIES, INC., VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CLEVELAND, DIXON;REEL/FRAME:029124/0278 Effective date: 20121003 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |