US20070285554A1 - Apparatus method and system for imaging - Google Patents

Apparatus method and system for imaging

Info

Publication number
US20070285554A1
US20070285554A1 (application US11/277,578)
Authority
US
United States
Prior art keywords
image
images
optical
color
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/277,578
Other languages
English (en)
Inventor
Dor Givon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Extreme Reality Ltd
Original Assignee
Extreme Reality Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Extreme Reality Ltd filed Critical Extreme Reality Ltd
Priority to US11/277,578 priority Critical patent/US20070285554A1/en
Priority to PCT/IL2006/001254 priority patent/WO2007052262A2/fr
Priority to US12/092,220 priority patent/US8462199B2/en
Publication of US20070285554A1 publication Critical patent/US20070285554A1/en
Assigned to EXTREME REALITY LTD. reassignment EXTREME REALITY LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GIVON, DOR
Priority to US12/897,390 priority patent/US8878896B2/en
Priority to US13/734,987 priority patent/US9131220B2/en
Priority to US13/737,345 priority patent/US9046962B2/en
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/257 Colour aspects
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H1/26 Processes or apparatus specially adapted to produce multiple sub-holograms or to obtain images from them, e.g. multicolour technique
    • G03H1/268 Holographic stereogram
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/207 Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/218 Image signal generators using stereoscopic image cameras using a single 2D image sensor using spatial multiplexing
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H2210/00 Object characteristics
    • G03H2210/40 Synthetic representation, i.e. digital or optical object decomposition
    • G03H2210/42 Synthetic representation, i.e. digital or optical object decomposition from real object, e.g. using 3D scanner
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H2210/00 Object characteristics
    • G03H2210/40 Synthetic representation, i.e. digital or optical object decomposition
    • G03H2210/44 Digital representation
    • G03H2210/441 Numerical processing applied to the object data other than numerical propagation

Definitions

  • the present invention relates generally to the field of imaging. More specifically, the present invention relates to an apparatus, method and system having multiple optical paths for acquiring one or more images from a scene and producing one or more data sets representative of various aspects of the scene, including depth (i.e. Three Dimensional—3D) information.
  • Conventional 3D-stereoscopic photography typically employs twin cameras having parallel optical axes and a fixed distance between their aligned lenses. These twin cameras generally produce a pair of images which images can be displayed by any of the known in the art techniques for stereoscopic displaying and viewing. These techniques are based, in general, on the principle that the image taken by a right lens is displayed to the right eye of a viewer and the image taken by the left lens is displayed to the left eye of the viewer.
  • U.S. Pat. No. 6,906,687 assigned to Texas Instruments Incorporated, entitled “Digital formatter for 3-dimensional display applications” discloses a 3D digital projection display that uses a quadruple memory buffer to store and read processed video data for both right-eye and left-eye display.
  • video data is processed at a 48-frame/sec rate and read out twice (repeated) to provide a flash rate of 96 (up to 120) frames/sec, which is above the display flicker threshold.
  • the data is then synchronized with a headset or goggles with the right-eye and left-eye frames being precisely out-of-phase to produce a perceived 3-D image.
  • Spherical or panoramic photographing is traditionally done either by a very wide-angle lens, such as a “fish-eye” lens, or by “stitching” together overlapping adjacent images to cover a wide field of vision, up to fully spherical fields of vision.
  • the panoramic or spherical images obtained by using such techniques can be two dimensional images or stereoscopic images, giving to the viewer a perception of depth.
  • These images can also be computed as three dimensional (3D-depth) images, in terms of computing the distance of every pixel in the image from the camera, using passive methods known in the art such as triangulation, semi-active methods or active methods.
  • U.S. Pat. No. 6,833,843 assigned to Tempest Microsystems Incorporated, teaches an image acquisition and viewing system that employs a fish-eye lens and an imager such as, a charge coupled device (CCD), to obtain a wide angle image, e.g., an image of a hemispherical field of view.
  • the application teaches an imaging system for obtaining full stereoscopic spherical images of the visual environment surrounding a viewer, 360 degrees both horizontally and vertically. Displaying the images by means suitable for stereoscopic displaying, gives the viewers the ability to look everywhere around them, as well as up and down, while having stereoscopic depth perception of the displayed images.
  • the disclosure teaches an array of cameras, wherein the lenses of the cameras are situated on a curved surface, pointing outward from common centers of said curved surface.
  • the captured images are arranged and processed to create stereoscopic image pairs, wherein one image of each pair is designated for the observer's right eye and the second image for his left eye, thus creating a three dimensional perception.
  • Active methods may intentionally project high-frequency illumination into the scene in order to construct 3D measurement of the image.
  • 3DV Systems Incorporated (http://www.3dvsystems.com/) provides one example of an active system.
  • the ZCam™ camera is a uniquely designed camera which employs a light wall having a proper width.
  • the light wall may be generated, for example, as a square laser pulse. As the light wall hits objects in a photographed scene it is reflected towards the ZCam™ camera carrying an imprint of the objects. The imprint carries all the information required for the reconstruction of the depth maps.
  • Passive methods for depth construction may use triangulation techniques that make use of at least two known scene viewpoints. Corresponding features are identified, and rays are intersected to find the 3D position of each feature.
  • Space-time stereo adds a temporal dimension to the neighborhoods used in the spatial matching function. By adding the temporal dimension, i.e. using multiple frames across time, a single pixel from the first image is matched against the second image. This can also be done by matching space-time trajectories of moving objects, in contrast to matching interest points (corners), as done in regular feature-based image-to-image matching techniques. The sequences are matched in space and time by enforcing consistent matching of all points along corresponding space-time trajectories, also obtaining sub-frame temporal correspondence (synchronization) between two video sequences.
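  • As a minimal sketch (not from the disclosure; parameter names such as focal_length_px and baseline_m are illustrative assumptions), the passive triangulation described above reduces, for a calibrated and rectified stereo pair, to converting a matched feature's horizontal disparity d into depth via Z = f·B/d:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic two-view triangulation for a rectified stereo pair.

    Z = f * B / d, where d is the horizontal disparity in pixels.
    Zero or negative disparity is treated as a point at infinity / no match.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        depth_m = focal_length_px * baseline_m / disparity_px
    depth_m[disparity_px <= 0] = np.inf
    return depth_m

# Example: 1000 px focal length, 6 cm baseline, 25 px disparity -> 2.4 m
print(depth_from_disparity(np.array([25.0]), 1000.0, 0.06))
```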
  • Tidex extracts 3D depth information from a 2D video sequence of a rigid structure and models it as a rigid 3D model.
  • SOS (Stereo Omni-directional System) technology: www.hoip.jp/ENG/sosENG.htm, www.viewplusco.jp/products/sos/astro-e.html
  • the illumination means can comprise one or more laser sources, such as a small diode laser, or other small radiation sources for generating beams of visible or invisible light at a set of points in the area of the lenses to form a set of imprinted markers in the captured images. Said set of imprinted markers is identified and enables image processing in accordance with passive processing methods known in the art.
  • structured light scanning where a known set of temporal patterns (the structured light patterns) are used for matching. These patterns induce a temporal matching vector.
  • Structured light is a special case of space-time stereo, with matching in the temporal domain.
  • laser scanning, where a single camera and a laser scanner sweep across a scene. A plane of laser light is generated from a single point of projection and is moved across the scene. At any given time, the camera can see the intersection of this plane with the object. Both spatial and temporal domain laser scanners have been built for that purpose.
  • holographic stereograms consist of information recorded from a number of discrete viewpoints.
  • Laser-illuminated display holography, developed in 1964 by Leith and Upatnieks, was the first truly high quality three-dimensional display medium.
  • Holography is burdened by the fact that the hologram is not only a display but also a recording medium.
  • Holographic recording must be done in monochromatic, coherent light, and requires that the objects being imaged remain stable to within a fraction of a wavelength of light. These requirements have hindered holography from gaining widespread use.
  • the amount of optical information stored in a hologram makes the computation of holographic patterns very difficult.
  • a holographic stereogram records a relatively large number of viewpoints of an object and may use a hologram to record those viewpoints and present them to a viewer.
  • the information content of the stereogram is greatly reduced from that of a true hologram because only a finite number of different views of the scene are stored.
  • the number of views captured can be chosen based on human perception rather than on the storage capacity of the medium.
  • the capturing of the viewpoints for the stereogram is detached from the recording process; image capture is photographic and optically incoherent, so that images of natural scenes with natural lighting can be displayed in a stereogram.
  • the input views for traditional stereograms are taken with ordinary photographic cameras and can be synthesized using computer graphic techniques.
  • HPO Horizontal Parallax Only
  • the holographic stereogram is a means of approximating a continuous optical phenomenon in a discrete form.
  • display holo-stereography the continuous three-dimensional information of an object's appearance can be approximated by a relatively small number of two-dimensional images of that object. While these images can be taken with a photographic camera or synthesized using a computer, both capture processes can be modeled as if a physical camera was used to acquire them.
  • the photographic capture, the holographic recording, and the final viewing geometries all determine how accurately a particular holographic stereogram approximates a continuous scene.
  • There are a number of stereogram capturing methods. For example, some may require a holographic exposure setup using a single holographic plate comprised of a series of thin vertical slit holograms exposed one next to the other across the plate's horizontal extent. Each slit is individually exposed to an image projected onto a rear-projection screen some distance away from the plate. Once the hologram is developed, each slit forms an aperture through which the image of the projection screen at the time of that slit's exposure can be seen.
  • the images projected onto the screen are usually views of an object captured from many different viewpoints. A viewer looking at the stereogram will see two different projection views through two slit apertures, one through each eye. The brain interprets the differences between the two views as three-dimensional information. If the viewer moves side to side, different pairs of images are presented, and so the scene appears to gradually and accurately change from one viewpoint to the next to faithfully mimic the appearance of an actual three-dimensional scene.
  • Some methods may use multiple recentered camera arrays.
  • the actual projected image whose extent is defined by the projection frame, is the visible sub region of the projection screen in any particular view.
  • the projection screen itself is a sub region of a plane of infinite extent called the projection plane.
  • the projection screen directly faces the holographic plate and slit mechanism or the camera.
  • the viewer interprets the two images stereoscopically. This binocular depth cue is very strong; horizontal image parallax provides most of the viewer's depth sense.
  • Using a stationary point that appears to be at infinity as a landmark, the correct camera geometry needed to accurately capture a three-dimensional scene can be inferred. To appear at infinity, an object point must remain at the same position in every camera view. This constraint implies that the camera should face the same direction, straight ahead, as each frame is captured.
  • the camera moves along a track whose position and length correspond to the final stereogram plate.
  • the camera takes pictures of a scene from viewpoints that correspond to the locations of the stereogram's slits.
  • the plate is planar, so the camera track must be straight, not curved.
  • the camera must be able to image the area corresponding to the projection frame onto its film; thus, the frame defines the cross section of the viewing pyramid with its apex located at the camera's position. Because the projection frame bounds the camera's image, the size of the projection frame and its distance from the slit determine the angle of view of the image and thus the maximum (and optimal) focal length of the camera's lens.
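  • A small worked sketch of the constraint just stated, assuming an illustrative frame geometry and film width (none of these numbers are from the disclosure): the projection frame's width and its distance from the slit fix the capture angle of view, and that angle in turn bounds the focal length usable with a given film width.

```python
import math

def max_focal_length(frame_width, frame_distance, film_width):
    """Angle of view subtended by the projection frame at the slit:
    theta = 2 * atan(frame_width / (2 * frame_distance)).
    The camera lens must cover the same angle on its film, so
    f_max = film_width / (2 * tan(theta / 2))."""
    theta = 2.0 * math.atan(frame_width / (2.0 * frame_distance))
    return film_width / (2.0 * math.tan(theta / 2.0))

# E.g. a 300 mm wide projection frame, 600 mm from the slit, 35 mm film width:
print(max_focal_length(300.0, 600.0, 35.0))  # -> 70.0 (mm)
```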
  • the film plane of the stereogram capture camera is always parallel to the plane of the scene that corresponds to the projection plane (the capture projection plane) in order to image it without geometric distortions onto the focal plane of the lens.
  • a stereogram exposure geometry is well suited for objects far from the camera when the image of the object wanders little from frame to frame, always remaining in the camera's field of view and thus always visible to the stereogram viewer.
  • distant objects are seldom the center of interest in three-dimensional images because the different perspectives captured over the view zone have little disparity and, as a result, convey little sense of depth.
  • Objects at more interesting locations, closer to the camera, wander across the frame from one camera view to the next and tend to be vignetted in the camera's image at either or both extremes of the camera's travel.
  • the solution to the problem is to alter the capture camera to always frame the object of interest as it records the photographic sequence. Effectively, this change centers the object plane in every camera frame so that it remains stationary on the film from view to view.
  • Object points in front of or behind the stationary plane will translate horizontally from view to view, but at a slower rate than they would in a simple camera stereogram.
  • Altering the camera geometry requires changes in the holographic exposure geometry needed to produce undistorted images.
  • the projection screen is no longer centered in front of the slit aperture during all exposures. Instead, the holographic plate holder is stationary and the slit in front of it moves from exposure to exposure. Thus, the projection frame is fixed in space relative to the plate for all exposures, rather than being centered in front of each slit during each exposure. In this geometry, called the “recentered camera” geometry, only one projection frame position exists for all slits.
  • the projection frame no longer seems to follow the viewer but instead appears stationary in space. If an image of the object plane of the original scene remains stationary on the projection screen, then, the object plane of the original scene and the projection plane of the final hologram will lie at the same depth.
  • One type of camera which may take pictures for this type of stereogram is called a recentering camera. Recall that in the simple camera image capture, the image of a nearby object point translated across the camera's film plane as the camera moved down its track taking pictures. In a recentering camera, the lens and the film back of the camera can move independently from each other, so the film plane can be translated at the same rate as the image of the object of interest. The film and image move together through all frames, so just as desired the image appears stationary in all the resulting images. A view camera with a "shifting" or "shearing" lens provides this type of recentering. The lens of the camera must be wide enough to always capture the full horizontal extent of the object plane without vignetting the image at extreme camera positions.
  • a correspondence must exist between the camera capture and the holographic exposure geometries.
  • the necessary translation of the camera's lens adds another constraint that must be maintained.
  • a point in the middle of the object plane must always be imaged into the middle of the film plane, and must always be projected onto the middle of the projection frame.
  • the angle subtended by the object frame as seen from the camera must equal the angle subtended by the projection frame as seen from the slit. If, for example, the focal length of the lens of the taking camera is changed, the amount of lens translation required and the size of the holographic projection frame would also have to be adjusted.
  • The first, in which the projection frame is located directly in front of the slit during each exposure and the plate translates with respect to it, is the "simple camera" geometry.
  • The second, in which the screen is centered in front of the plate throughout all the exposures and the slit moves from one exposure to the next, is the "recentering camera" geometry.
  • the first method has the advantage that the camera needed to acquire the projectional images is easier to build, but the input frames tend to vignette objects that are close to the camera.
  • the second method requires a more complicated camera, but moves the plane of the image where no vignetting occurs from infinity to the object plane.
  • the camera complexity of this method is less of an issue if a computer graphics camera rather than a physical camera is used.
  • the projection frame in a recentered camera stereogram forms a “window” of information in space, fixed with respect to the stereogram and located at the depth of the projection plane.
  • the usefulness of this fixed window becomes important when the slit hologram is optically transferred in a second holographic step in order to simplify viewing.
  • the viewer's eyes must be located at the plane of the slit hologram.
  • the stereogram is a physical object, the viewer's face must be immediately next to a piece of glass or film.
  • a holographic transfer image can be made so as to project a real image of the slit master hologram out into space, allowing the viewer to be conveniently positioned in the image of the slits.
  • a holographic stereogram can of course be cylindrical, for all-round view.
  • the transparencies can be made by photographing a rotating subject from a fixed position. If the subject articulates as well, each frame is a record of a particular aspect at a particular time.
  • a rotating cylindrical holographic stereogram made from successive frames of movie film can then show an apparently three-dimensional display of a moving subject.
  • While wavefront analysis is useful when determining the small changes in the direction of light that prove significant in the stereogram's wavefront approximation, ray tracing's strength is in illustrating the general paths of light from large areas, overlooking small differences in direction and completely omitting phase variations. Ray tracing can be used to determine the image that each camera along a track sees, and thus what each projection screen should look like when each slit is exposed. It also shows what part of the projection screen of each slit is visible to a viewer at any one position. Distortion-free viewing requires that the rays from the photographic capture step and the viewing step correspond to each other.
  • a digital camera uses a sensor array (e.g. Charge Coupled Devices—CCD) comprised of millions of tiny electro-optical receptors that enable an optical image to be digitized or digitally printed.
  • the basic operation of the sensor is to convert light into electrons.
  • each of these receptors is a “photosite” which collects and uses photons to produce electrons.
  • the camera closes each of these photosites, and then tries to assess how many photons fell into each photosite by measuring the number of electrons.
  • the relative quantity of photons which fell onto each photosite is then sorted into various intensity levels, whose precision is determined by the bit depth (e.g. 8 bits, i.e. 256 levels, per channel).
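  • A minimal sketch of that quantization step, assuming an illustrative full-well capacity and a 12-bit readout (both numbers are assumptions, not from the disclosure):

```python
import numpy as np

def quantize_photosite(electron_counts, full_well_electrons=40000, bit_depth=12):
    """Map per-photosite electron counts onto discrete intensity levels.

    The precision of the result is set by the bit depth: a 12-bit readout
    yields 2**12 = 4096 levels, an 8-bit pipeline only 256.
    """
    levels = 2 ** bit_depth
    scaled = np.clip(electron_counts / full_well_electrons, 0.0, 1.0)
    return np.round(scaled * (levels - 1)).astype(np.uint16)

print(quantize_photosite(np.array([0, 10000, 40000])))  # -> [   0 1024 4095]
```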
  • the ability to generate a serial data stream from a large number of photosites enables the light incident on the sensor to be sampled with high spatial resolution in a controlled and convenient manner.
  • the simplest architecture is a linear sensor, consisting of a line of photodiodes adjacent to a single CCD readout register.
  • a common clocking technique is the 4-phase clocking system which uses 4 gates per pixel. At any given time, two gates act as barriers (no charge storage) and two provide charge storage.
  • Digital cameras may contain “microlenses” above each photosite to enhance their light-gathering ability. These lenses are analogous to funnels which direct photons into the photosite where the photons would have otherwise been unused. Well-designed microlenses can improve the photon signal at each photosite, and subsequently create images which have less noise for the same exposure time.
  • each photosite is unable to distinguish how much of each color has fallen in, so the above illustration would only be able to create grayscale images.
  • each photosite has to have a filter placed over it which only allows penetration of a particular color of light.
  • Virtually all current digital cameras can only capture one of the three primary colors in each photosite, and so they discard roughly two thirds of the incoming light. As a result, the camera has to approximate the other two primary colors in order to have information about all three colors at every pixel in the image.
  • the most common type of color filter array is called a “Bayer array”, or Bayer Filter.
  • a Bayer filter mosaic is a color filter array (CFA) for arranging RGB color filters, as shown in FIG. 3 , on a square grid of photo sensors.
  • CFA color filter array
  • the term derives from the name of its inventor, Bryce Bayer of Eastman Kodak, and refers to a particular arrangement of color filters used in most single-chip digital cameras (mostly CCD, as opposed to CMOS).
  • a Bayer array consists of alternating rows of red-green and green-blue filters.
  • the Bayer array contains twice as many green as red or blue sensors. These elements are referred to as samples and after interpolation become pixels.
  • Each primary color does not receive an equal fraction of the total area because the human eye is more sensitive to green light than both red and blue light. Redundancy with green pixels produces an image which appears less noisy and has finer detail than could be accomplished if each color were treated equally. This also explains why noise in the green channel is much less than for the other two primary colors.
  • the RAW output of Bayer-filter cameras is referred to as a Bayer pattern image.
  • a Demosaicing algorithm is used to interpolate a set of complete red, green, and blue values for each point, to make an RGB image. Many different algorithms exist.
  • Demosaicing is the process of translating an array of primary colors (such as Bayer array) into a final image which contains full color information (RGB) at each point in the image which may be referred to as a pixel.
  • a Demosaicing algorithm may be used to interpolate a complete image from the partial raw data that one typically receives from the color-filtered CCD image sensor internal to a digital camera. The most basic idea is to independently interpolate the R, G and B planes. In other words, to find the missing green values, neighboring green values may be used; to find the missing blue values, neighboring blue pixel values may be used; and so on for red pixel values. For example, for linear interpolation, to obtain the missing green pixels, calculate the average of the four known neighboring green pixels.
  • To calculate the missing blue pixels, proceed in two steps. First, calculate the missing blue pixels at the red locations by averaging the four neighboring blue pixels. Second, calculate the missing blue pixels at the green locations by averaging the four neighboring blue pixels. The second step is equivalent to taking 3/8 of each of the closest pixels and 1/16 of the four next closest pixels. This example of interpolation introduces aliasing artifacts; improved methods exist to obtain better interpolation.
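  • The linear interpolation just described can be sketched as simple bilinear demosaicing; the RGGB layout, the kernel weights and the helper names below are standard practice assumed for illustration, not taken from the disclosure:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, pattern="RGGB"):
    """Minimal bilinear demosaicing of a single-channel Bayer RAW frame.

    Known samples pass through unchanged; missing green values are averaged
    from the four green neighbours, missing red/blue values from their
    nearest known red/blue sites.
    """
    assert pattern == "RGGB", "only the RGGB layout is sketched here"
    raw = np.asarray(raw, dtype=np.float64)
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    def interp(mask, kernel):
        return convolve(raw * mask, kernel, mode="mirror")

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_green), interp(b_mask, k_rb)])
```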
  • the RAW file format is digital photography's equivalent of a negative in film photography: it contains untouched, “raw” photosite information straight from the digital camera's sensor.
  • the RAW file format has yet to undergo Demosaicing, and so it contains just one red, green, or blue value at each photosite.
  • the image must be processed and converted to an RGB format such as TIFF, JPEG or any other known in the art compatible format, before it can be manipulated.
  • Digital cameras have to make several interpretive decisions when they develop a RAW file, and so the RAW file format offers you more control over how the final image is generated.
  • a RAW file is developed into a final image in several steps, each of which may contain several irreversible image adjustments.
  • One key advantage of RAW is that it allows the photographer to postpone applying these adjustments—giving more flexibility to the photographer to later control the conversion process, in a way which best suits each image.
  • Demosaicing and white balance involve interpreting and converting the Bayer array into an image with all three colors at each pixel, and occur in the same step.
  • the RAW image is then converted into 8 bits per channel, and may be compressed into a JPEG based on the compression setting within the camera.
  • RAW image data permits much greater control of the image.
  • White balance and color casts can be difficult to correct after the conversion to RGB is done.
  • RAW files give you the ability to set the white balance of a photo *after* the picture has been taken—without unnecessarily destroying bits.
  • Digital cameras actually record each color channel with more precision than the 8-bits (256 levels) per channel used for JPEG images.
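  • A minimal sketch of why white balance is best set on the RAW data: on linear, high-bit-depth values it is a per-channel multiplication, whereas after gamma correction and 8-bit conversion the operation becomes lossy. The channel gains and function names below are illustrative assumptions:

```python
import numpy as np

def apply_white_balance(raw_rgb_linear, gains=(2.0, 1.0, 1.5)):
    """Apply white balance as per-channel gains on linear RAW data.

    raw_rgb_linear: float array of shape (H, W, 3), values in [0, 1],
    still linear (no gamma), e.g. demosaiced 12-bit data scaled to [0, 1].
    """
    balanced = raw_rgb_linear * np.asarray(gains, dtype=np.float64)
    return np.clip(balanced, 0.0, 1.0)

def to_8bit(linear_rgb):
    """Quantize to 8 bits after a simple gamma; bits discarded here cannot
    be recovered, which is why white balance is set on the RAW data first."""
    return np.round(255 * np.power(np.clip(linear_rgb, 0, 1), 1 / 2.2)).astype(np.uint8)
```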
  • RAW file formats are proprietary, and differ greatly from one manufacturer to another, and sometimes between cameras made by one manufacturer.
  • Adobe Systems published the Digital Negative Specification (DNG), which is intended to be a unified raw format.
  • An optical assembly including a set of optical paths.
  • An optical path may include a lens and a diaphragm structure, where a given optical path's lens and diaphragm structure may be adapted to receive and/or collect optical image information corresponding to one or more features of a given projection plane (e.g. visible features within the given projection plane).
  • Two or more optical paths from the set of optical paths may receive and/or collect optical image information from a common projection plane.
  • the projection plane may be flat. According to further embodiments of the present invention, the projection plane may be any other shape, including spherical, cylindrical or any other projection surface which may be defined using optical elements such as lenses.
  • each of the two or more optical paths may direct their respective received/collected optical image information onto an image sensor, which image sensor may be adapted to convert the optical image information into an image data set correlated to the optical image information (e.g. a digital image frame representing the collected optical image).
  • an image sensor may be adapted to produce a series of image data sets (i.e. series of digital image frames), wherein each image data set is representative of optical information received/collected over a given period of time (e.g. 30 milliseconds).
  • each of the two or more optical paths may direct its respective received/collected optical image information onto a separate image sensor, while according to further embodiments of the present invention, the two or more optical paths may direct their received/collected optical image information onto a common image sensor.
  • each optical path may either direct its respective collected image onto a separate portion of the image sensor, or two or more optical paths may direct their respective collected images onto overlapping segments on the image sensor.
  • two or more optical paths may direct their respective collected images onto a common segment of the image sensor, thereby optically encoding the images.
  • image data produced from each separate optical sensor's segment may be considered a separate image data set (e.g. frame).
  • two or more collected optical images are directed to a common segment on an image sensor (e.g. substantially the entire active/sensing area of the sensor)
  • several methods may be used to produce a separate image data set associated with each of the directed optical images, said methods including: (1) time domain multiplexing, and (2) encoding/decoding function.
  • an optical shutter e.g. Liquid Crystal Display shutter
  • Time domain multiplexing may be achieved by opening only one given optical path's shutters during a given period, which given period is within an acquisition period during which the image sensor is to produce image data associated with the given optical path.
  • image data sets i.e. image frames
  • the optical sensor may produce a composite image data set that may include information relating to some encoded composite of the two or more collected optical images.
  • An encoding/decoding function method or algorithm in accordance with some embodiments of the present invention may be used to encode the two or more collected optical images and decode the composite image data set into two or more separate image data sets, wherein each of the separate image data sets may be associated with and may represent collected optical image information from a single optical path.
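  • As a purely digital stand-in for such an encoding/decoding function (not the patent's optical encoding; the complementary line masks and the vertical interpolation below are assumptions for illustration), two collected images can be interleaved onto one composite frame and later separated:

```python
import numpy as np
from scipy.ndimage import convolve

def encode_line_multiplex(img_a, img_b):
    """Toy stand-in for optical encoding: pass each image through a
    complementary line mask so both land on a single composite frame."""
    mask_a = np.zeros_like(img_a, dtype=np.float64)
    mask_a[0::2, :] = 1.0                       # image A keeps the even rows
    composite = img_a * mask_a + img_b * (1.0 - mask_a)
    return composite, mask_a

def decode_line_multiplex(composite, mask_a):
    """Recover approximations of both images by re-filling each image's
    missing rows from its own vertical neighbours."""
    kernel = np.array([[0.5], [1.0], [0.5]])

    def fill(values, mask):
        num = convolve(values * mask, kernel, mode="mirror")
        den = convolve(mask, kernel, mode="mirror")
        return num / np.maximum(den, 1e-9)

    return fill(composite, mask_a), fill(composite, 1.0 - mask_a)
```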
  • an image data processing block implemented either on a dedicated data processor or on a programmable general purpose processor.
  • One or multiple image processing algorithms may be implemented or executed via the processing block.
  • the data processing block may be adapted to perform a decoding function of a composite image data set
  • the processing block may be adapted to combine two or more collected images into a multidimensional (e.g. four dimensional) data set, wherein the multidimensional data set may include image data representing various features of the common projection plane.
  • an image extrapolation block implemented either on a dedicated data processor or on a programmable general purpose processor.
  • the image extrapolation block may extrapolate, either from the multidimensional data set, from the originally acquired image data sets (i.e. frames) or from the encoded data set of the encoding/decoding function, one or more types of derived data sets, wherein each extrapolated data set type may be associated with one or more features of the common projection plane from which the two or more optical paths collected optical image information.
  • Extrapolated data set types may include: (1) a depth map (i.e. z-channel or depth information of every pixel point in the common projection plane), (2) a holographic stereogram image of the common projection plane, (3) a hologram image of the common surface, (4) a stereo image of the common surface, and (5) one or more two-dimensional images, where each image may also be an approximated virtual view point of the common projection plane.
  • each of the optical paths may include (1) a fixed mirror, (2) a fixed lens and (3) a fixed diaphragm structure.
  • the lenses and mirrors of two or more optical paths may be functionally associated (e.g. synchronized).
  • the diaphragms on the two or more given optical paths having functionally associated lenses and mirrors may be adapted to adjust their configuration (e.g. aperture size and shape) so as to maintain a common projection plane between the given optical paths when the focus on the synchronized lenses is changed.
  • FIG. 1A shows a series of graphs depicting the basic principles of space time stereo
  • FIG. 1B shows a geometric diagram illustrating the basic principles and formulas relating to generating a disparity map using two sensors
  • FIG. 1C shows a geometric diagram illustrating the basic principles and formulas relating to generating a disparity map using multiple sensors
  • FIG. 2 shows a video camera array that may be used to extrapolate 360 degree depth information using a passive method
  • FIG. 3 shows an example of a Bayer filter
  • FIG. 4A shows an optical assembly according to some embodiments of the present invention
  • FIG. 4B shows a diagrammatic representation of an optical assembly according to some embodiments of the present invention where optical image information from two separate optical paths are projected onto separate areas of a common image sensor;
  • FIG. 4C shows a diagrammatic representation of an optical assembly according to some embodiments of the present invention where optical image information from two separate optical paths are projected onto a common area of a common image sensor;
  • FIG. 5A shows a symbolic diagram of an optical assembly and image processing system according to some embodiments of the present invention producing a series of digital image data sets (e.g. frames);
  • FIG. 5B shows a symbolic diagram depicting multiple images being generated from a multidimensional image data set according to some embodiments of the present invention
  • FIG. 5C shows a symbolic diagram depicting various image related data sets being derived from the multidimensional image data set according to some embodiments of the present invention
  • FIG. 6A shows a block diagram representing image processing element/modules according to some embodiments of the present invention.
  • FIG. 6B shows a flow diagram including the steps of producing one or more multidimensional image data sets from two or more acquired image frames according to some embodiments of the present invention
  • FIG. 6C shows a flow diagram including the steps of producing one or more multidimensional image data sets from an optically encoded image frame according to some embodiments of the present invention.
  • FIGS. 7A, 7B and 7C show an exemplary filter according to some embodiments of the present invention.
  • FIG. 8 shows an exemplary filter according to some embodiments of the present invention.
  • FIG. 9A shows two images which are used as an input to an algorithm according to some embodiments of the present invention.
  • FIGS. 9B and 9C show the outcome of image processing which is performed in accordance with some embodiments of the present invention.
  • FIG. 10 shows a flow diagram including the steps of an algorithm in accordance with some embodiments of the present invention.
  • Embodiments of the present invention may include apparatuses for performing the operations herein.
  • This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.
  • An optical assembly including a set of optical paths.
  • An optical path may include a lens and a diaphragm structure, where a given optical path's lens and diaphragm structure may be adapted to receive and/or collect optical image information corresponding to one or more features of a given projection plane (e.g. visible features within the given projection plane).
  • Two or more optical paths from the set of optical paths may receive and/or collect optical image information from a common projection plane.
  • the projection plane may be flat. According to further embodiments of the present invention, the projection plane may be any other shape, including spherical, cylindrical or any other projection surface which may be defined using optical elements such as lenses or mirrors.
  • each of the two or more optical paths may direct their respective received/collected optical image information onto an image sensor, which image sensor may be adapted to convert the optical image information into an image data set correlated to the optical image information (e.g. a digital image frame representing the collected optical image).
  • an image sensor may be adapted to produce a series of image data sets (i.e. series of digital image frames), wherein each image data set is representative of optical information received/collected over a given period of time (e.g. 30 milliseconds).
  • each of the two or more optical paths may direct its respective received/collected optical image information onto a separate image sensor, while according to further embodiments of the present invention, the two or more optical paths may direct their received/collected optical image information onto a common image sensor.
  • each optical path may either direct its respective collected image onto a separate portion of the image sensor, or two or more optical paths may direct their respective collected images onto overlapping segments on the image sensor.
  • two or more optical paths may direct their respective collected images onto a common segment of the image sensor, thereby optically encoding the images.
  • image data produced from each separate optical sensor's segment may be considered a separate image data set (e.g. frame).
  • two or more collected optical images are directed to a common segment on an image sensor (e.g. substantially the entire active/sensing area of the sensor)
  • several methods may be used to produce a separate image data set associated with each of the directed optical images, said methods including: (1) time domain multiplexing, and (2) encoding/decoding function.
  • an optical shutter e.g. Liquid Crystal Display shutter
  • Time domain multiplexing may be achieved by opening only one given optical path's shutters during a given period, which given period is within an acquisition period during which the image sensor is to produce image data associated with the given optical path.
  • image data sets i.e. image frames
  • the optical sensor may produce a composite image data set that may include information relating to some encoded composite of the two or more collected optical images.
  • An optical encoding/decoding apparatus/method or algorithm in accordance with some embodiments of the present invention may be used to encode the two or more collected optical images and decode the composite image data set into two or more separate image data sets, wherein each of the separate image data sets may be associated with and may represent collected optical image information from a single optical path.
  • an image data processing block implemented either on optical means, on a dedicated data processor or on a programmable general purpose processor.
  • One or multiple image processing algorithms may be implemented or executed via the processing block.
  • an optical data processing block may be adapted to generate a complex multidimensional data set to be printed on the camera image sensor.
  • a data processing block may be adapted to extrapolate each of the subset of optical paths printed on the camera image sensor.
  • the processing block may be adapted to combine two or more collected images into a multidimensional (e.g. four dimensional) data set, wherein the multidimensional data set may include image data representing various features of the common projection plane (i.e. common surface).
  • an image extrapolation block implemented either on optical means, on a dedicated data processor or on a programmable general purpose processor.
  • the image extrapolation block may extrapolate, either from the optically encoded complex multidimensional data set, from the extrapolated subsets of optical paths printed on the camera image sensor (i.e. the originally acquired image data sets), or from the reconstructed multidimensional data set, one or more types of derived data sets, wherein each extrapolated data set type may be associated with one or more features of the common projection plane (i.e. common surface) from which the two or more optical paths collected optical image information.
  • extrapolated data set types may include: (1) a depth map (i.e. z-channel or depth information of every pixel point in the common projection plane), (2) a holographic stereogram image of the common projection plane, (3) a hologram image of the common surface, (4) a stereo image of the common surface, and (5) one or more two-dimensional images, where each image may also be an approximated virtual view point of the common surface.
  • each of the optical paths may include a fixed mirror, a fixed lens and a fixed diaphragm structure.
  • the lenses and mirrors of two or more optical paths may be functionally associated (e.g. synchronized).
  • the diaphragms on the two or more given optical paths having functionally associated lenses and mirrors may be adapted to adjust their configuration (e.g. aperture size and shape) so as to maintain a common projection plane between the given optical paths when the focus on the synchronized lenses is changed.
  • the optical assembly has multiple optical paths, which paths may include lenses, mirrors and may also include a diaphragm structure behind each lens/mirror.
  • the configuration of each optical path's diaphragm, lens, mirror and positioning of the optical sensor may define the projection plane from which the optical path may receive optical image information.
  • two or more of the optical paths on the optical assembly may be configured to acquire optical image information from a substantially common surface.
  • the projection plane (i.e. surface) for each of the two or more optical paths may partially, substantially or totally overlap.
  • the shape of each optical path's projection plane may be flat, spherical, cylindrical or any other shape which may be defined using optical elements such as lenses and mirrors.
  • FIG. 4B shows a diagrammatic representation of an embodiment of the optical assembly where optical image information from two separate optical paths are projected onto separate areas of a common image sensor.
  • each area of the optical sensor may correspond with an image frame.
  • an image processing algorithm may parse the image into separate frames.
  • FIG. 4C shows a diagrammatic representation of an optical assembly according to some embodiments of the present invention where optical image information from two separate optical paths are projected onto a common area of a common image sensor.
  • a dedicated lens, an optical filter and/or a grid associated with each optical path may provide for the optical encoding of the optical image information received by each of the optical paths, such that although two or more optical images may be simultaneously printed onto the same area of an image sensor, the two images may later be extrapolated from the sensor's data set (which may also be RAW data) using an image processing algorithm, according to some embodiments of the present invention.
  • FIG. 5A shows a symbolic diagram of an optical assembly and image processing system which, according to some embodiments of the present invention, produces a series of digital image data sets (e.g. 2D image frames).
  • the frames may be produced by one or more image sensors. If two or more optical paths direct their respective received/collected optical image information onto a separate area of a common image sensor or onto separate sensors, separate image data sets (i.e. image frames) may be produced directly by the image sensors.
  • the separate image frames may be used to generate a multidimensional image data set, for example a three dimensional (3D) depth map representing the projection plane. Examples of calculations associated with calculating depth information on a pixel by pixel basis (disparity map) are shown in FIGS. 5B and 5C, and a detailed explanation is given herein below.
  • a color may also be associated with each of the pixels in the 3D depth map, using one of several methods according to the present invention.
  • the multidimensional image data set may be extended to yet a further dimension by, for example, using multiple sets of images taken at different times to add a time variable to a colorized or non-colorized 3D depth map, thereby producing a 4D depth map.
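  • A minimal sketch of what such a colorized, time-extended depth map might look like as a data structure (the field names and array layout are illustrative assumptions, not the patent's format):

```python
import numpy as np

def make_4d_depth_map(depth_frames, color_frames):
    """Stack per-time-step depth maps and their per-pixel colors into one
    multidimensional data set: axes (t, y, x), plus an RGB channel."""
    depth = np.stack(depth_frames, axis=0)   # shape (T, H, W)
    color = np.stack(color_frames, axis=0)   # shape (T, H, W, 3)
    return {"depth": depth, "color": color}

def slice_at_time(matrix_4d, t):
    """Extract the colorized 3D depth map for a single time step."""
    return matrix_4d["depth"][t], matrix_4d["color"][t]
```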
  • a 3D depth map may be generated from a single optically encoded image data set.
  • FIGS. 5B and 5C there are shown symbolic diagrams depicting multiple images and image related data sets being derived and/or extrapolated from a multidimensional image data set according to some embodiments of the present invention.
  • ray tracing, pixel color interpolation and other image processing methods may be used to convert a multidimensional image data set into a variety of image data types.
  • certain image data types may be generated directly from acquired image frames.
  • In FIG. 6A there is shown a block diagram showing image processing elements/modules according to some embodiments of the present invention.
  • the operation of the various elements of FIG. 6A shall be described in conjunction with the steps of the methods illustrated in FIGS. 6B and 6C , where FIG. 6B shows a flow diagram consisting of the steps of producing one or more multidimensional image data sets from two or more acquired image frames according to some embodiments of the present invention, and FIG. 6C shows a flow diagram consisting of the steps of producing one or more multidimensional image data sets from an optically encoded image frame according to some embodiments of the present invention.
  • the received image information may either represent separate image frames (step 1000 B) or RAW data, where each frame is generated from optical image information coming from a separate optical path, or the received image information may be an optically encoded image data (step 1000 C) including optical image information from multiple optical paths mixed onto a common area of an optical sensor.
  • the data received by element 100 may be directed towards a secondary extrapolation module, denoted as element 150; the extrapolation unit may be adapted to extrapolate each of the subset of optical paths printed on the camera image sensor (step 1500C), or may be adapted to directly extrapolate (complex) 2D image(s) (step 5000B), as explained herein below.
  • a Depth Map Generation Module 200 may generate a (3D) depth map (step 2000 B) using one of various depth extraction algorithms, including the point by point disparity map calculations as illustrated in FIGS. 1B and 1C .
  • the received image information may be optically encoded (step 1000 C) such that a (3D) depth map may be derived (step 2000 C) without having to perform considerable calculations, in which case the Interface to Image Sensor (optical processing block) 100 may be adapted to generate the depth map.
  • a Color Estimation Module 300 may interpolate a color for each of the points in the depth map (steps 3000 B and 3000 C). Various color interpolation or estimation methods, including the one described below, may be used.
  • a 4D matrix generation module 400 may use data associated with multiple depth maps, produced based on images acquired at different times, to generate a multidimensional image data set (steps 4000 B and 4000 C) which includes time as one of the dimensions.
  • An image extrapolation module 500 may generate, either from the extrapolated subset of optical paths, in conjunction with the secondary extrapolation module 150 , or from a 4D data set, one or more image data types (steps 5000 B & 5000 C) including simple 2D images from various view points, complex 2D images with encoded depth information, and various others which are described below.
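  • The module chain described above (elements 100, 150, 200, 300, 400 and 500, steps 1000B-5000B) can be sketched as a processing pipeline; the per-step bodies below are trivial placeholders, and only the data flow is meant to mirror the description:

```python
import numpy as np

def process_acquired_frames(frame_sets, times):
    """Skeleton of the module chain 100 -> 200 -> 300 -> 400 -> 500 (FIG. 6A).

    frame_sets: one list of 2D images (one per optical path) per acquisition
    time; times: the acquisition times. Placeholder implementations only.
    """
    matrix_4d = []                                       # element 400 accumulator
    for frames, t in zip(frame_sets, times):
        # Element 200 (step 2000B): depth map generation (placeholder zeros).
        depth = np.zeros(frames[0].shape, dtype=np.float64)
        # Element 300 (step 3000B): per-point color (placeholder: first view).
        color = np.asarray(frames[0], dtype=np.float64)
        # Element 400 (step 4000B): extend the data set with a time dimension.
        matrix_4d.append({"t": t, "depth": depth, "color": color})
    # Element 500 (step 5000B): extrapolate an output image data type, here
    # simply the 2D color slice at the last time step.
    return matrix_4d, matrix_4d[-1]["color"]
```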
  • the following algorithm enables simultaneously optically encoding together, on the same single capturing sensor (e.g. CCD), multiple different images (up to full sensor resolution for each image), and decoding said multiple images without losing resolution for each of said images.
  • the algorithm may take as input optically acquired images, or images from multiple different sources (e.g. cameras), in which case the encoding may also be digital.
  • at a high level, the algorithm may comprise:
  • the Encoding step may comprise:
  • the Printing images on digital sensor step may comprise:
  • the Input complex data step may comprise:
  • the Decoding complex data step may comprise:
  • the following text describes an example of an encoding, decoding, reconstruction and compression algorithm.
  • the encoding, decoding, reconstruction and compression process may be done on images acquired from multiple optical sources, where the encoding process is done optically, or from multiple sources such as cameras having no direct link to the holo-stereo capturing device, where the encoding process is done using image processing tools.
  • the decoding and reconstruction process occurs in two stages.
  • the first stage is decoding, performed as the quantized complex 2D function image is loaded into memory, holding a 3D structure or being filtered into its original encoded particles (e.g. two images).
  • the second stage, de-quantization, reconstructs the 4D matrix.
  • the final part of the system is a real time image creator given the requested data and geometry (e.g. a 2D slice of the 4D matrix). This is a real time rendering model.
  • image processing algorithms may also extrapolate data directly from the decoded data (e.g. 3D extraction of a single point in the image or a depth map) and to view one of the decoded images, with no reconstruction of a 4D matrix.
  • In stereogram photography we record several pictures of a projection frame (e.g. a surface) from different points of view.
  • the encoded complex 2D function may be reconstructed into a computer generated holographic stereogram—our 4D matrix—reconstructing the original three dimensional projection frame. This means that the image is 3D reconstructed in the vicinity of our 4D matrix, along the horizontal (X), vertical (Y), depth (Z) and time (t) image axes. Consequently, the desired output may be extrapolated to any suitable dedicated means.
  • a real world 3-D projection frame is captured by two or more 2D images from several angles. These images are encoded to a complex 2D function as a single frame.
  • the decoded projections of the scene may be computed to a 4D computer generated matrix also extrapolating each of said projection angles, holographic stereogram, depth map, depth perception (stereoscopy) and multiple new 2-D matrix projection angles along X, Y, Z.
  • a 2D image is a 2D slice of a 4D light field over time. Reconstructing a light field over time from a set of images corresponds to inserting each of the 2D light field slices into a 4D light field representation over time—a 4D matrix. Similarly, generating new views, depth values and so on corresponds to extracting and resampling, in real time, a slice of different data and different views. This requires that the 4D matrix properly resample each slice's ray-intersection representation to avoid distortions in the final image data. This process is the 4D data reconstruction.
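  • A hedged sketch of the "insert 2D slices, then resample new slices" idea for a horizontal-parallax light field; the one-dimensional camera-position parameterization and the linear blending between the two nearest recorded views are simplifying assumptions, not the patent's reconstruction:

```python
import numpy as np

def build_light_field(images, camera_positions):
    """Insert each 2D image as a slice of a (camera position s, y, x) light field."""
    order = np.argsort(camera_positions)
    return np.stack([images[i] for i in order], axis=0), np.asarray(camera_positions)[order]

def render_view(light_field, positions, new_position):
    """Extract (resample) a view at an intermediate camera position by
    linear interpolation between the two nearest recorded slices."""
    s = np.clip(new_position, positions[0], positions[-1])
    i = np.searchsorted(positions, s)
    i0, i1 = max(i - 1, 0), min(i, len(positions) - 1)
    if i0 == i1:
        return light_field[i0]
    w = (s - positions[i0]) / (positions[i1] - positions[i0])
    return (1 - w) * light_field[i0] + w * light_field[i1]
```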
  • each point in the real world 3D projection frame holds a certain color, which we can define as our absolute color, represented in the multiple images on different 2D projections; under the given conditions this absolute color will be overlaid with distortions creating "color disparity" between the relative points' color and the absolute color.
  • In the 4D matrix we will strive to reconstruct these points' absolute color. Different angles exposed to the same point will preserve this color with given distortions such as angle, different horizontal positioning in the image, different lighting, optical distortions and so on.
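  • As a minimal sketch of recovering a point's absolute color from its distorted observations in several views, a robust per-channel median can stand in for whatever estimator the disclosure intends (the estimator choice is an assumption for illustration):

```python
import numpy as np

def estimate_absolute_color(observations):
    """Estimate a scene point's absolute color from its color in N views.

    observations: array of shape (N, 3), the point's RGB as seen from each
    angle, each overlaid with its own distortions ("color disparity").
    A per-channel median suppresses outlier views.
    """
    observations = np.asarray(observations, dtype=np.float64)
    return np.median(observations, axis=0)

# Example: three views of the same point, one badly lit.
print(estimate_absolute_color([[200, 120, 40], [205, 118, 42], [120, 60, 10]]))
```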
  • Data redundancy: a good compression technique removes redundancy from a signal without affecting its content.
  • Our method enables the 3D interpolation of a point from multiple views enabling a complex 2D function to avoid multiplication of the same data from multiple images.
  • Random access: most compression techniques place some constraint on random access to data. Predictive coding schemes further complicate random access because pixels depend on previously decoded pixels, scanlines, or frames. Our method enables direct random access to any given point in space and time.
  • the following example algorithm assumes the case of 3 images taken from 3 lenses (FIG. 5A, elements 150A, 150B and 150C) of the holo-stereo capturing device; a single CCD (some cameras have a 3-CCD array, one per color, or a CMOS sensor) and a Bayer filter (FIG. 3).
  • the encoding methods may vary depending on the overall settings (input images, optical configurations, 3-CCD, CMOS, fps and so on), but it should be realized by those skilled in the art that this does not change the process of the encoding, decoding, reconstruction and compression algorithm.
  • an encoding/decoding reconstruction and compression algorithm may comprise the following steps and may be described in conjunction with FIG. 10 and FIG. 5A :
  • the encoding algorithm, i.e. the optical process:
  • the encoding algorithm itself may vary (e.g. encoding using color, Fourier, diffraction, Fresnel, or spatial methods), and it is not limited to a specific configuration; the following example is not a limiting one.
  • Image 1 I 1 , its color: R1G1B1
  • Image 2 I 2 , its color: R2G2B2
  • Image 3 I 3 , its color: R3G3B3
  • the encoding in this example may be the following:
  • the encoding may be in the form of projecting said input images through optical filters such that said images are projected on said CCD as shown in FIG. 7A, Line 1 (7100), Line 2 (7140) and Line 3 (7170), wherein red is denoted by "R", blue is denoted by "B", and green is denoted by "G".
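  • The snippet below sketches such line-wise color multiplexing digitally: three input images are combined into one frame by drawing each row's R, G and B samples from different images in a cyclic pattern. The cyclic assignment is an assumption for illustration and is not necessarily the exact filter layout of FIG. 7A.

```python
import numpy as np

def encode_line_interleaved(img1, img2, img3):
    """Build a single sensor frame whose lines carry color samples drawn from
    the three input images: on row k, the R sample comes from image (k % 3),
    the G sample from the next image, and the B sample from the one after."""
    sources = (img1, img2, img3)
    frame = np.zeros_like(img1)
    for row in range(img1.shape[0]):
        for c in range(3):  # c = 0 (R), 1 (G), 2 (B)
            frame[row, :, c] = sources[(row + c) % 3][row, :, c]
    return frame

# Example with three random 8-bit RGB images of the same size.
imgs = [np.random.randint(0, 256, (6, 8, 3), dtype=np.uint8) for _ in range(3)]
encoded = encode_line_interleaved(*imgs)
```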
  • Enhancing said images' optical signature on the digital sensor using the reaction of each CCD receptor to light, the amount of data each pixel may hold in a single frame, or any other means of data transfer.
  • the number of photons in each photo sensor on the CCD may be interpreted as digital values from 0 to 2^12, using 12 bits per sensor in some RAW files, and even higher bit depths per sensor.
  • Method 2 exploits the potentially larger bandwidth of each photo sensor in terms of digital light printing, so as to exploit more of the total CCD information than the usual output value of the pixel (mostly 8 bits per pixel per color).
  • using filters to optically vary the values of image points during the projection step on the CCD sensor, larger values may be collected and later interpreted in digital terms in a 12-bit (or larger) array. The underlying idea is that the image sensor can be looked at as an optical memory unit, whose values can be analyzed as the digital values of a memory unit. Using higher storage values and optical encoding manipulations, one can encode and extrapolate a number of values to be decoded as different images.
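  • A purely digital analogue of this "optical memory" idea is sketched below: two 6-bit samples are packed into a single 12-bit photosite value and later unpacked as two separate images. The 6/6 split is an arbitrary illustrative choice, not the patent's optical encoding.

```python
import numpy as np

def pack_two_samples(a6, b6):
    """Pack two 6-bit samples into one 12-bit value: high 6 bits from the
    first image, low 6 bits from the second (an arbitrary illustrative split)."""
    return (a6.astype(np.uint16) << 6) | b6.astype(np.uint16)

def unpack_two_samples(v12):
    """Recover the two 6-bit samples from the 12-bit 'optical memory' value."""
    return (v12 >> 6) & 0x3F, v12 & 0x3F

img_a = np.random.randint(0, 64, (4, 4), dtype=np.uint16)  # 6-bit data
img_b = np.random.randint(0, 64, (4, 4), dtype=np.uint16)
sensor_values = pack_two_samples(img_a, img_b)             # 12 bits per photosite
rec_a, rec_b = unpack_two_samples(sensor_values)
assert np.array_equal(rec_a, img_a) and np.array_equal(rec_b, img_b)
```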
  • the set of optical paths in the formation of Fresnel and/or Kirchhoff and/or Fraunhofer diffraction is convolved and recorded on a camera image sensor and stored digitally.
  • the decoding and reconstruction is performed by numerical means based on the use of the fast Fourier transform (FFT), and may also use spatial and/or temporal filtering as well as polarization means.
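  • The following sketch illustrates, in purely digital form, how an FFT can separate multiplexed content: one image is modulated onto a spatial carrier before being summed with another, and the base-band image is approximately recovered by keeping only low horizontal frequencies in the Fourier domain. The carrier frequency and mask width are illustrative assumptions, not the patent's optical parameters.

```python
import numpy as np

def multiplex(base, second, carrier_freq=0.25):
    """Superimpose a carrier-modulated copy of `second` onto `base`, shifting
    its horizontal spectrum away from the base band (illustrative parameters)."""
    x = np.arange(base.shape[1])
    carrier = np.cos(2.0 * np.pi * carrier_freq * x)[None, :]
    return base + second * carrier

def recover_base(frame, band=0.05):
    """Approximate the base-band image by masking the FFT: only low horizontal
    spatial frequencies are kept, suppressing the carrier-modulated component."""
    spec = np.fft.fft2(frame)
    fx = np.fft.fftfreq(frame.shape[1])[None, :]
    mask = (np.abs(fx) < band).astype(float)
    return np.real(np.fft.ifft2(spec * mask))

base = np.random.rand(64, 64)
second = np.random.rand(64, 64)
approx = recover_base(multiplex(base, second))
```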
  • Input images from the digital sensor may also be in a 12-bit RAW format, and may also be taken prior to color interpolation processes (e.g. demosaicing).
  • Decoding the complex multidimensional data set is done in order to extrapolate each of the subsets of optical paths printed on the camera image sensor. Multiple data sets may be extrapolated directly from the complex multidimensional data set and/or from the extrapolated subset of optical paths. Following the decoding process is the 4D matrix reconstruction.
  • a complex multidimensional data set based on color interpolation may be extrapolated by spatial and/or temporal decoding, by digital-value interpolation as in the example of method 2, and in a 3D reconstruction process.
  • multiple data sets may be extrapolated directly from the complex multidimensional data set and/or from the extrapolated subset of optical paths (e.g. extrapolating depth maps of the input images), or used to reconstruct a unified 3D image in a 4D matrix.
  • the subset of optical paths of the complex multidimensional data set is the input for the 3D structure (depth reconstruction) placed in the 3D matrix. Depth extraction using image color information is best performed using the same color in the different images. Different colors might not extrapolate proper depth maps (even if we convert them to gray scale, for example), because different color acquisitions print different visual signatures on the image (e.g. a red table will leave low green or blue signatures, if any).
  • the green channel holds half of the sensor's original points, and since the depth map is computed on the rows of the image to check the parallax disparity between the images, we need to enable a row of green for the first camera's image I 1 (the left one) and, on the same row, a row of green for the second image I 2 , to enable depth calculation.
  • in FIG. 7B there is shown the exemplary filter described hereinabove; the first encoded row/line is denoted by 7200.
  • The outcome of reconstructing the vector color of each image for disparity purposes is shown in FIG. 7C, wherein line 1 is denoted by 7300 and line 2 is denoted by 7310.
  • the points positioned in 3D may enable us to reconstruct the green (or full RGB), as will further be explained, for rows 1, 2, 3 of images 1, 2, 3, as seen in FIG. 8.
  • RGB reconstruction for 3D point p′ is performed in the following non-limiting example: since we aligned the two images in 3D, we now have, for example, the red color of point p′ from image 1, line 1 (as explained earlier, its resolution was not damaged); we also have the blue color of the same point p′ from image 2, line 2 (the blue resolution was also not harmed); and since we also reconstructed the green in both rows, we can now interpolate and reconstruct the RGB for point p′ based on the 3D location of neighboring pixels rather than on their original 2D positions on the CCD in the first and second lines.
  • the major difference of 3D color reconstruction in the 3D matrix, with respect to 2D RAW-file demosaicing, is the 3D positioning of each point in the 3D matrix, reconstructing the 3D structure of the projection frame in the real world.
  • each point in the 3D matrix receives color from at least two different view points.
  • in line 1, point Pi receives red and green from image 1 and green from image 2
  • in the 2nd line it receives green from image 1, and green and blue from image 2
  • the ratio of 1/2 green, 1/4 red and 1/4 blue is kept for each point
  • the 3D ray intersection allows accurate positioning of the 3D color
  • the 3D demosaicing will now take place, reconstructing a 3D color point in space
  • the demosaicing is based on the 3D neighboring pixels from each image's point of view, up to a (neighborhood) threshold, fully reconstructing the image lines, up to a full array of 12 bits (or more) per point for red, a full array of 12 bits (or more) per point for blue, and the same for green (see the sketch below).
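  • A toy sketch of this 3D-neighborhood demosaicing: each reconstructed point carries a 3D position and whatever color samples it received from the different views, and missing channels are filled from neighbors that are close in 3D (within a distance threshold) rather than from 2D sensor neighbors. The data layout and the inverse-distance weighting are assumptions for illustration.

```python
import numpy as np

def demosaic_3d(points, samples, radius=0.05):
    """points:  (N, 3) reconstructed 3D positions of image points.
    samples: (N, 3) per-point R, G, B samples, NaN where a channel was not
    observed from any view. Missing channels are filled from neighbours that
    are close in 3D (within `radius`), weighted by inverse distance."""
    filled = samples.copy()
    for i in range(len(points)):
        for c in range(3):
            if np.isnan(filled[i, c]):
                dist = np.linalg.norm(points - points[i], axis=1)
                near = (dist < radius) & ~np.isnan(samples[:, c])
                if near.any():
                    w = 1.0 / (dist[near] + 1e-6)
                    filled[i, c] = np.sum(w * samples[near, c]) / np.sum(w)
    return filled

# Tiny example: three nearby 3D points, each missing a different channel.
pts = np.array([[0.0, 0.0, 1.0], [0.01, 0.0, 1.0], [0.0, 0.01, 1.0]])
cols = np.array([[200.0, np.nan, 30.0],
                 [np.nan, 120.0, 28.0],
                 [205.0, 118.0, np.nan]])
print(demosaic_3d(pts, cols))
```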
  • Distortions on the reconstructed color will be 4D filtered over space and time.
  • given an RGB 3D color for each point in the 4D matrix, we can also generate 2D images from arbitrary camera positions with depth information.
  • An image is a 2D slice of the 4D light field.
  • the previous example was compression over space; compression over time may also be achieved using said encoding in the 4D matrix, adding the temporal vector to the XYZ spatial coordinates and RGB.
  • the 4D matrix may be built only once, on the compressing side.
  • a new complex 2D function might be generated that encapsulates, for each pixel, RGB and XYZ, where the Z value might be encapsulated in the image data, for example as previously mentioned in method 2, by using higher bit values for each pixel (only one extra value beyond RGB and XY is needed: the Z), enabling the receiving side to decode the 4D image immediately with very low computation, as sketched below.
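  • One possible digital reading of that bullet: the per-pixel depth is quantized and stored alongside RGB in a single wider per-pixel word, so the receiving side only has to shift and mask to recover RGB and Z. The 40-bit layout and depth range below are assumptions, not the patent's format.

```python
import numpy as np

def pack_rgbz(rgb8, depth, z_near=0.5, z_far=10.0):
    """Pack 8-bit R, G, B plus a 16-bit quantized depth into one integer per
    pixel (the 40-bit layout and the depth range are illustrative choices)."""
    zq = np.clip((depth - z_near) / (z_far - z_near), 0.0, 1.0)
    z16 = np.round(zq * 65535.0).astype(np.int64)
    r, g, b = (rgb8[..., i].astype(np.int64) for i in range(3))
    return (z16 << 24) | (r << 16) | (g << 8) | b

def unpack_rgbz(packed, z_near=0.5, z_far=10.0):
    """Recover RGB and depth; decoding needs only shifts and masks."""
    b = (packed & 0xFF).astype(np.uint8)
    g = ((packed >> 8) & 0xFF).astype(np.uint8)
    r = ((packed >> 16) & 0xFF).astype(np.uint8)
    z16 = (packed >> 24) & 0xFFFF
    depth = z_near + (z16.astype(np.float64) / 65535.0) * (z_far - z_near)
    return np.stack([r, g, b], axis=-1), depth

# Example: 2x2 RGB image plus per-pixel depth in metres.
rgb = np.random.randint(0, 256, (2, 2, 3), dtype=np.uint8)
z = np.array([[1.0, 2.5], [4.0, 9.0]])
rec_rgb, rec_z = unpack_rgbz(pack_rgbz(rgb, z))
assert np.array_equal(rec_rgb, rgb)
```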
  • image processing means, i.e. the input is two images.
  • the image pixels are represented in their digital representation, enabling an accurate evaluation of the compression process, and also in a printed version.
  • the input is two 2D images.
  • the algorithm comprises the steps of: (1) reconstructing a 3D matrix, and (2) encoding said matrix into a complex 2D function, enabling nearly computation-free decoding on the receiving end.
  • in FIG. 9A there are shown two input images for this example; the left image is denoted by 9100 and the right image is denoted by 9200.
  • the first step is extracting a depth map.
  • the input images are each a full RGB array; accordingly, a depth map can be reconstructed, as previously explained, on all image layers:
  • the end process is the positioning of pixels in a 3D space, intersecting rays from both images.
  • in FIG. 9B there is shown a region positioned at the same depth, of the mobile phone located to the right of the white cup of coffee; the left image is denoted by 9300 and the right image is denoted by 9400.
  • a vector of pixels at the same depth, of one color, from the two images would look like this:
    R: 42 42 33 21 13 14 28 58 97 144 176
    L: 45 42 36 30 18 17 26 51 95 138 179
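  • Once the pixels are aligned by depth, the two vectors above are nearly identical, which is exactly the redundancy the encoding can exploit. A trivial sketch (an illustration, not the patent's coder) stores one vector plus the small residuals of the other:

```python
import numpy as np

right = np.array([42, 42, 33, 21, 13, 14, 28, 58, 97, 144, 176])
left  = np.array([45, 42, 36, 30, 18, 17, 26, 51, 95, 138, 179])

residual = left - right              # small values, cheap to entropy-code
reconstructed_left = right + residual
assert np.array_equal(reconstructed_left, left)
print(residual)                      # [ 3  0  3  9  5  3 -2 -7 -2 -6  3]
```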
  • Fast generation of different data and views is achieved by rendering a 2D array of image data, wherein the 2D slice of the 4D matrix represents rays through a point, properly resampled to avoid artifacts in the final image data.
  • a processing algorithm for creating holographic-stereogram and stereoscopic images from said collection of images may also be the following process, wherein said holographic-stereogram and stereoscopic image pair covers said holo-stereo lens projection plane, i.e. the POV (point of view).
  • the image captured by each mini-lens/slit is processed separately.
  • Distorted information will be removed from each image of each lens separately.
  • the images are then cropped into pre-defined sections which are required for keeping an adequate overlap between adjacent images.
  • a stereoscopic image pair is created from the group of selected images as in the following non-limiting examples:
  • a stereoscopic image pair can also be created from the group of selected images as in the following non-limiting example:
  • this new image pair is equivalent to a pair of images as if they were taken by two virtual lenses having their optical axes directed forward in the viewer's viewing direction and having horizontal disparity.
  • stereoscopic image pairs can also be created when the viewer's horizon is inclined with respect to the ground (i.e., when the viewer's eyes are not at the same height with respect to the ground).
  • the selection of the images is done by projecting the viewer's horizon on the collection of lenses, and the stereoscopic pair is generated by following the same steps.
  • the data and images may be displayed to a viewer in various formats, such as stills or video.
  • the images formed can be displayed by any of the dedicated means such as dedicated hologram display device, stereoscopic viewing, virtual reality, and so on.
  • Depth maps can be viewed on a display device suitable for 3D viewing, or can be exported to any 3D image or graphic image processing software, to be used for editing and interpolation of any kind using the extracted 3D information.
  • image processing manipulations on the images offer endless possible options for real-time manipulation based on the 3D information of the images.
  • the 3D images are a virtual world. Said virtual world is created in a designated area, where every point in the camera's POV is a 3D point in the virtual world. In said virtual world, 2D or 3D real images and computer-generated images from different sources may be merged.
  • said virtual world may be referred to as a virtual studio, for example, using the 3D information from the virtual world.
  • Manipulations such as separation between a figure and its background, based on the distance of the figure from the camera, are possible. Isolating a figure from its surroundings, we can interlace the figure into a virtual world created in a computer, as in the sketch below.
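  • A minimal sketch of this depth-based separation, assuming a per-pixel depth map from the 3D reconstruction is available: pixels closer than a chosen cut-off distance are kept as the figure and can then be composited into a virtual set. The threshold and binary mask are illustration choices.

```python
import numpy as np

def isolate_figure(image, depth, max_distance):
    """Blank everything farther than `max_distance`; return the isolated
    figure and the binary mask used for the separation."""
    mask = depth < max_distance
    figure = np.where(mask[..., None], image, np.zeros_like(image))
    return figure, mask

def composite(figure, mask, background):
    """Interlace the isolated figure into a (computer-generated) background."""
    return np.where(mask[..., None], figure, background)

# Example: keep everything nearer than 2.0 m and place it over a grey backdrop.
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
dep = np.random.uniform(0.5, 5.0, (4, 4))
fig, m = isolate_figure(img, dep, max_distance=2.0)
out = composite(fig, m, np.full_like(img, 128))
```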
  • the opposite can also be done, interlacing CGI figures or photographed images from different sources into the 3D image.
  • Holographic stereogram and stereoscopic information makes it possible to enhance the 3D sensation for the viewers, interpolating not only the 3D positioning in the image but also the 3D sensation using stereoscopic imagery.
  • the present invention is not limited with regards to the type of camera(s) used for capturing the images or sequences of images.
  • the camera(s) may be selected from any digital or analog video/still cameras, or film cameras, known in the art. If needed, non-digital data may be converted to digital using known-in-the-art techniques.
  • a holographic stereogram can also be cylindrical or spherical for a panoramic or spherical view. This is achieved by producing stereograms with a cylindrical or spherical projection plane.
  • the depth map reconstruction process computes the depth map of a projection plane captured by the images of the holographic stereogram capturing device, from multiple view points, reconstructing the 3D formation of the projection frame as a 4D matrix.
  • the collective field of vision captured by all said images covers the whole projection frame of said array of images, and wherein any point in said projection frame is captured by at least two of said images.
  • the first part of the algorithm may be considered for the case where the input images from the digital sensor are full RGB images.
  • the printing (e.g. encoding) methods on the digital sensor and reading said images (e.g. decoding) as full RGB images were described earlier.
  • Space-time depth map reconstruction adds a temporal dimension to the neighborhoods used in the matching function.
  • the computation performs matching based on oriented space-time windows that allow the matching pixels to shift linearly over time.
  • the match scores based on space-time windows are easily incorporated into existing depth map reconstruction algorithms.
  • the matching vector can be constructed from an arbitrary spatiotemporal region around the pixel in question.
  • a window of size N×M×T can be chosen, where N and M are the spatial sizes of the window and T is the extent along the time axis.
  • the optimal space-time matching window depends on the speeds with which objects in the scene move. For static scenes, a long temporal window will give optimal results. For scenes with quickly moving objects, a short temporal window is desirable to avoid distortions. When objects move at intermediate speed, it is likely that a space-time matching window with extent in both space and time will be optimal, as in the sketch below.
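  • A compact sketch of matching over an N×M×T spatio-temporal window: the cost between a pixel in the left sequence and a disparity-shifted pixel in the right sequence is the sum of squared differences over the whole window, and the best disparity minimizes that cost. The window sizes and the brute-force search are illustration choices.

```python
import numpy as np

def spacetime_cost(left, right, y, x, d, t, n=5, m=5, tw=3):
    """Sum-of-squared-differences cost over an n x m x tw space-time window
    between left[t, y, x] and the disparity-shifted right[t, y, x - d].
    `left` and `right` are (T, H, W) image stacks; the window is assumed to
    stay inside the stacks (no border handling in this sketch)."""
    hy, hx, ht = n // 2, m // 2, tw // 2
    a = left[t - ht:t + ht + 1, y - hy:y + hy + 1, x - hx:x + hx + 1]
    b = right[t - ht:t + ht + 1, y - hy:y + hy + 1, x - d - hx:x - d + hx + 1]
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def best_disparity(left, right, y, x, t, max_d=16):
    """Brute-force search for the disparity with the lowest space-time cost."""
    costs = [spacetime_cost(left, right, y, x, d, t) for d in range(max_d)]
    return int(np.argmin(costs))

# Example: two random 8-frame stacks; the query pixel is well inside the volume.
L = np.random.randint(0, 256, (8, 60, 80))
R = np.random.randint(0, 256, (8, 60, 80))
print(best_disparity(L, R, y=30, x=50, t=4))
```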
  • f is the focal length
  • B is the distance between the cameras.
  • X1, X2 are the locations of point P in image 1 and in image 2.
  • the depth Z of point P (Zp) will be: Zp = f*B/deltaX[p], where deltaX[p] = X1 - X2 is the horizontal disparity of P between the two images (see the short numeric sketch below).
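  • A short numeric sketch of this disparity-to-depth relation, with made-up focal length, baseline and disparity values:

```python
def depth_from_disparity(f, B, x1, x2):
    """Z = f * B / deltaX, with deltaX = x1 - x2 the horizontal disparity of
    the same point P between image 1 and image 2 (f in pixels, B in metres)."""
    return f * B / (x1 - x2)

# Made-up numbers: f = 800 px, B = 0.06 m, disparity = 512 - 496 = 16 px,
# giving Z = 800 * 0.06 / 16 = 3.0 metres.
print(depth_from_disparity(800, 0.06, 512, 496))
```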
  • the output of such an algorithm is a depth map, located in a 4D matrix, corresponding to the multiple images of the holographic stereogram, along the time axis.
  • the next step will be the 3D color reconstruction.
  • the following exemplary 3D color reconstruction method may be used in the case where the input multiple images are each full RGB, and where each point in the projection plane is viewed from a number of viewpoints, i.e. a finite number of different views of the scene.
  • the displayed outcome of the holo-stereography is continuous 3D color information of an object's appearance, approximated by a finite number of two-dimensional images of that object.
  • the 3D color reconstruction is the horizontal approximation of the continuous natural color captured optically in a discrete form.
  • the output images from the 4D matrix are images of natural scenes with natural lighting.
  • the reconstructed image in the 4D matrix holds full RGB color for every point in the image from every point of view. Depending on the dedicated output, one option would be to leave the color as layers of full color from every exposed point of view, outputting the image from a discrete point of view with its original color. The other option would be to re-interpolate each point into unified color information from multiple view points. This is especially important with digital sensors such as CCDs with Bayer filters, where 3/4 of the red and blue, and half of the green, at every point in the image are reconstructed from surrounding points, and thus re-demosaicing can dramatically enhance the image quality.
  • the camera's horizontal position determines the angle at which the camera's rays strike the plane of the hologram.
  • Similar points in the projection frame appear at different horizontal positions in the images, due to the disparity of the images exposed to the projection frame. Color reconstruction therefore cannot be done on points that are located at the same (X, Y) 2D coordinates in the different images.
  • the projection frame is reconstructed in a 3D matrix and similar points are identified in the different images and in the 3D matrix; each point from every image is located at the identical point's location as seen from said images.
  • demosaicing based on the 3D location of points in space interpolates the color of points based on surrounding points at the same depth, thus preventing many distortions that interpolation in 2D images suffers from; for example, instead of interpolating a pixel at the edge of a wooden table with the pixels of the wall behind it, we interpolate the wood only with points that surround it in 3D, giving higher image quality to every point in the image and enhancing resolution.
  • a demosaicing algorithm is used to interpolate a set of RGB colors for every point in the image, also enabling enhanced resolution (adding more interpolation points than in the original images), using the point's color as projected from the different viewing angles and the 3D neighbors surrounding each point, reconstructing a final image which contains full color information (RGB) at each pixel. This process may be done using existing methods to obtain better interpolation.
  • the demosaicing algorithm works like a RAW format in the sense that it contains pixel information far greater than the final outcome, targeted to reconstruct the absolute color on the one hand while preserving the color characteristics from every angle on the other, when exporting the data or reconstructing new view points that were not exposed originally.
  • the computer-generated 4D matrix of the holographic stereogram is the reconstructed outcome of the non-coherent-light digital holography apparatus (i.e. this invention).
  • Computer-generated holography may also be digitally printed using known-in-the-art computer-generated Fresnel and/or Kirchhoff and/or Fraunhofer diffraction by simulating computer-generated coherent light.
  • the process of capturing, optically encoding the complex multidimensional data set, decoding and reconstructing the 4D matrix under non-coherent illumination and digital computing may be equivalent to the process of coherent-light digital holography and/or the stereographic hologram process.
  • the optical encoding process that creates the complex multidimensional data set is equivalent to complex phase-and-amplitude digital printing and reconstruction under coherent light.
  • convolved diffracted light propagates through a particular optical system, thus succeeding in recording the complex amplitude of some wave front without beam interference.
  • the claim is that the complex amplitude can be restored under non-coherent conditions.
  • once this complex function is in computer memory, one can encode it into a CGH (computer-generated hologram).
  • This CGH may then be illuminated by a plane wave, which then propagates through the proper optical system.
  • the reconstructed image has features similar to those of an image coming from a coherently recorded hologram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Cameras In General (AREA)
  • Studio Devices (AREA)
US11/277,578 2005-10-31 2006-03-27 Apparatus method and system for imaging Abandoned US20070285554A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US11/277,578 US20070285554A1 (en) 2005-10-31 2006-03-27 Apparatus method and system for imaging
PCT/IL2006/001254 WO2007052262A2 (fr) 2005-10-31 2006-10-31 Appareil, procede et systeme d'imagerie
US12/092,220 US8462199B2 (en) 2005-10-31 2006-10-31 Apparatus method and system for imaging
US12/897,390 US8878896B2 (en) 2005-10-31 2010-10-04 Apparatus method and system for imaging
US13/734,987 US9131220B2 (en) 2005-10-31 2013-01-06 Apparatus method and system for imaging
US13/737,345 US9046962B2 (en) 2005-10-31 2013-01-09 Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US73127405P 2005-10-31 2005-10-31
US11/277,578 US20070285554A1 (en) 2005-10-31 2006-03-27 Apparatus method and system for imaging

Related Child Applications (4)

Application Number Title Priority Date Filing Date
PCT/IL2006/001254 Continuation WO2007052262A2 (fr) 2004-07-30 2006-10-31 Appareil, procede et systeme d'imagerie
US12/092,220 Continuation US8462199B2 (en) 2005-10-31 2006-10-31 Apparatus method and system for imaging
US9222008A Continuation 2005-10-31 2008-04-30
US12/897,390 Continuation US8878896B2 (en) 2005-10-31 2010-10-04 Apparatus method and system for imaging

Publications (1)

Publication Number Publication Date
US20070285554A1 true US20070285554A1 (en) 2007-12-13

Family

ID=38006288

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/277,578 Abandoned US20070285554A1 (en) 2005-10-31 2006-03-27 Apparatus method and system for imaging
US12/092,220 Active 2029-07-10 US8462199B2 (en) 2005-10-31 2006-10-31 Apparatus method and system for imaging
US12/897,390 Active 2026-09-15 US8878896B2 (en) 2005-10-31 2010-10-04 Apparatus method and system for imaging
US13/734,987 Active 2026-05-30 US9131220B2 (en) 2005-10-31 2013-01-06 Apparatus method and system for imaging

Family Applications After (3)

Application Number Title Priority Date Filing Date
US12/092,220 Active 2029-07-10 US8462199B2 (en) 2005-10-31 2006-10-31 Apparatus method and system for imaging
US12/897,390 Active 2026-09-15 US8878896B2 (en) 2005-10-31 2010-10-04 Apparatus method and system for imaging
US13/734,987 Active 2026-05-30 US9131220B2 (en) 2005-10-31 2013-01-06 Apparatus method and system for imaging

Country Status (2)

Country Link
US (4) US20070285554A1 (fr)
WO (1) WO2007052262A2 (fr)

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090066784A1 (en) * 2007-09-05 2009-03-12 Sony Corporation Image processing apparatus and method
US20090262208A1 (en) * 2008-04-21 2009-10-22 Ilia Vitsnudel Method and Apparatus for Optimizing Memory Usage in Image Processing
US20100066811A1 (en) * 2008-08-11 2010-03-18 Electronics And Telecommunications Research Institute Stereo vision system and control method thereof
US20100097443A1 (en) * 2008-10-16 2010-04-22 Peter Lablans Controller in a Camera for Creating a Panoramic Image
US20100103249A1 (en) * 2008-10-24 2010-04-29 Real D Stereoscopic image format with depth information
US20100194862A1 (en) * 2005-10-31 2010-08-05 Xtrextreme Reality Apparatus Method and System for Imaging
US20100287511A1 (en) * 2007-09-25 2010-11-11 Metaio Gmbh Method and device for illustrating a virtual object in a real environment
WO2010150177A1 (fr) * 2009-06-25 2010-12-29 Koninklijke Philips Electronics N.V. Procédé de capture d'images stéréoscopiques, système et caméra
US20110001802A1 (en) * 2009-07-03 2011-01-06 Takeshi Misawa Image display apparatus and method, as well as program
US20110129124A1 (en) * 2004-07-30 2011-06-02 Dor Givon Method circuit and system for human to machine interfacing by hand gestures
US20110164032A1 (en) * 2010-01-07 2011-07-07 Prime Sense Ltd. Three-Dimensional User Interface
US20110163948A1 (en) * 2008-09-04 2011-07-07 Dor Givon Method system and software for providing image sensor based human machine interfacing
US20110169825A1 (en) * 2008-09-30 2011-07-14 Fujifilm Corporation Three-dimensional display apparatus, method, and program
US20120033038A1 (en) * 2010-08-03 2012-02-09 Samsung Electronics Co., Ltd. Apparatus and method for generating extrapolated view
US20120075414A1 (en) * 2006-11-16 2012-03-29 Park Michael C Distributed Video Sensor Panoramic Imaging System
US20120148173A1 (en) * 2010-12-08 2012-06-14 Electronics And Telecommunications Research Institute Method and device for generating multi-viewpoint image
US20120162360A1 (en) * 2009-10-02 2012-06-28 Kabushiki Kaisha Topcon Wide-Angle Image Pickup Unit And Measuring Device
US20120194655A1 (en) * 2011-01-28 2012-08-02 Hsu-Jung Tung Display, image processing apparatus and image processing method
US20120274568A1 (en) * 2011-04-27 2012-11-01 Aptina Imaging Corporation Complete digital holographic image sensor-projector computing unit
US20120320036A1 (en) * 2011-06-17 2012-12-20 Lg Display Co., Ltd. Stereoscopic Image Display Device and Driving Method Thereof
US20130002715A1 (en) * 2011-06-28 2013-01-03 Tidman James M Image Sequence Reconstruction based on Overlapping Measurement Subsets
US20130083159A1 (en) * 2010-06-24 2013-04-04 Fujifilm Corporation Stereoscopic panoramic image synthesis device, image capturing device, stereoscopic panoramic image synthesis method, recording medium, and computer program
US20130162780A1 (en) * 2010-09-22 2013-06-27 Fujifilm Corporation Stereoscopic imaging device and shading correction method
US20130250067A1 (en) * 2010-03-29 2013-09-26 Ludwig Laxhuber Optical stereo device and autofocus method therefor
US8548258B2 (en) 2008-10-24 2013-10-01 Extreme Reality Ltd. Method system and associated modules and software components for providing image sensor based human machine interfacing
US20140028800A1 (en) * 2012-07-30 2014-01-30 Canon Kabushiki Kaisha Multispectral Binary Coded Projection
US8644376B2 (en) 2010-09-30 2014-02-04 Alcatel Lucent Apparatus and method for generating compressive measurements of video using spatial and temporal integration
US8681100B2 (en) 2004-07-30 2014-03-25 Extreme Realty Ltd. Apparatus system and method for human-machine-interface
US8702592B2 (en) 2010-09-30 2014-04-22 David Allan Langlois System and method for inhibiting injury to a patient during laparoscopic surgery
US20140111651A1 (en) * 2011-04-14 2014-04-24 Ulis Imaging system comprising a fresnel lens
US20140125587A1 (en) * 2011-01-17 2014-05-08 Mediatek Inc. Apparatuses and methods for providing a 3d man-machine interface (mmi)
US20140192868A1 (en) * 2013-01-07 2014-07-10 Qualcomm Incorporated Inter-layer reference picture generation for hls-only scalable video coding
US20140240532A1 (en) * 2013-02-27 2014-08-28 Massachusetts Institute Of Technology Methods and Apparatus for Light Field Photography
US8872762B2 (en) 2010-12-08 2014-10-28 Primesense Ltd. Three dimensional user interface cursor control
US8881051B2 (en) 2011-07-05 2014-11-04 Primesense Ltd Zoom-based gesture user interface
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
US8929456B2 (en) * 2010-09-30 2015-01-06 Alcatel Lucent Video coding using compressive measurements
US8933876B2 (en) 2010-12-13 2015-01-13 Apple Inc. Three dimensional user interface session control
US8970693B1 (en) * 2011-12-15 2015-03-03 Rawles Llc Surface modeling with structured light
US20150062308A1 (en) * 2012-06-22 2015-03-05 Nikon Corporation Image processing apparatus, image-capturing apparatus and image processing method
US9030498B2 (en) 2011-08-15 2015-05-12 Apple Inc. Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface
US9035876B2 (en) 2008-01-14 2015-05-19 Apple Inc. Three-dimensional user interface session control
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US20150156470A1 (en) * 2013-11-04 2015-06-04 Massachusetts Institute Of Technology Reducing View Transitions Artifacts In Automultiscopic Displays
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US9218063B2 (en) 2011-08-24 2015-12-22 Apple Inc. Sessionless pointing user interface
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
US20150373322A1 (en) * 2014-06-20 2015-12-24 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
US9229534B2 (en) 2012-02-28 2016-01-05 Apple Inc. Asymmetric mapping for tactile and non-tactile user interfaces
US20160021390A1 (en) * 2014-07-15 2016-01-21 Alcatel-Lucent Usa, Inc. Method and system for modifying compressive sensing block sizes for video monitoring using distance information
US9319578B2 (en) 2012-10-24 2016-04-19 Alcatel Lucent Resolution and focus enhancement
US9344736B2 (en) 2010-09-30 2016-05-17 Alcatel Lucent Systems and methods for compressive sense imaging
US9377865B2 (en) 2011-07-05 2016-06-28 Apple Inc. Zoom-based gesture user interface
US9398310B2 (en) 2011-07-14 2016-07-19 Alcatel Lucent Method and apparatus for super-resolution video coding using compressive sampling measurements
US9398288B2 (en) 2011-11-04 2016-07-19 Empire Technology Development Llc IR signal capture for images
US9459758B2 (en) 2011-07-05 2016-10-04 Apple Inc. Gesture-based interface with enhanced features
US20160309136A1 (en) * 2012-11-08 2016-10-20 Leap Motion, Inc. Three-dimensional image sensors
US20160353087A1 (en) * 2015-05-29 2016-12-01 Thomson Licensing Method for displaying a content from 4d light field data
US9563806B2 (en) 2013-12-20 2017-02-07 Alcatel Lucent Methods and apparatuses for detecting anomalies using transform based compressed sensing matrices
US9600899B2 (en) 2013-12-20 2017-03-21 Alcatel Lucent Methods and apparatuses for detecting anomalies in the compressed sensing domain
US9634690B2 (en) 2010-09-30 2017-04-25 Alcatel Lucent Method and apparatus for arbitrary resolution video coding using compressive sampling measurements
US9823126B2 (en) 2013-06-18 2017-11-21 Ramot At Tel-Aviv University Ltd. Apparatus and method for snapshot spectral imaging
US9983685B2 (en) 2011-01-17 2018-05-29 Mediatek Inc. Electronic apparatuses and methods for providing a man-machine interface (MMI)
US20180288393A1 (en) * 2015-09-30 2018-10-04 Calay Venture S.à r.l. Presence camera
US10354399B2 (en) * 2017-05-25 2019-07-16 Google Llc Multi-view back-projection to a light-field
WO2020019704A1 (fr) * 2018-07-27 2020-01-30 Oppo广东移动通信有限公司 Système de commande de projecteur de lumière structurée, et dispositif électronique
US20200037000A1 (en) * 2018-07-30 2020-01-30 Ricoh Company, Ltd. Distribution system, client terminal, and method of controlling display
US10643343B2 (en) * 2014-02-05 2020-05-05 Creaform Inc. Structured light matching of a set of curves from three cameras
CN112114507A (zh) * 2019-06-21 2020-12-22 三星电子株式会社 用于提供扩展的观看窗口的全息显示装置和方法
US20210044795A1 (en) * 2019-08-09 2021-02-11 Light Field Lab, Inc. Light Field Display System Based Digital Signage System
US11017540B2 (en) 2018-04-23 2021-05-25 Cognex Corporation Systems and methods for improved 3-d data reconstruction from stereo-temporal image sequences
US11163176B2 (en) * 2018-01-14 2021-11-02 Light Field Lab, Inc. Light field vision-correction device
US11202052B2 (en) * 2017-06-12 2021-12-14 Interdigital Ce Patent Holdings, Sas Method for displaying, on a 2D display device, a content derived from light field data
CN114173105A (zh) * 2020-09-10 2022-03-11 精工爱普生株式会社 信息生成方法、信息生成***以及记录介质
US20220094902A1 (en) * 2019-06-06 2022-03-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-channel imaging device and device having a multi-aperture imaging device
US11589034B2 (en) 2017-06-12 2023-02-21 Interdigital Madison Patent Holdings, Sas Method and apparatus for providing information to a user observing a multi view content

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0511962D0 (en) * 2005-06-14 2005-07-20 Light Blue Optics Ltd Signal processing systems
US8160395B2 (en) * 2006-11-22 2012-04-17 General Electric Company Method and apparatus for synchronizing corresponding landmarks among a plurality of images
US8224068B2 (en) * 2007-09-18 2012-07-17 University Of Kentucky Research Foundation (Ukrf) Lock and hold structured light illumination
US8786596B2 (en) * 2008-07-23 2014-07-22 Disney Enterprises, Inc. View point representation for 3-D scenes
US8330802B2 (en) * 2008-12-09 2012-12-11 Microsoft Corp. Stereo movie editing
JP5222205B2 (ja) * 2009-04-03 2013-06-26 Kddi株式会社 画像処理装置、方法及びプログラム
US20110216160A1 (en) * 2009-09-08 2011-09-08 Jean-Philippe Martin System and method for creating pseudo holographic displays on viewer position aware devices
US8730309B2 (en) 2010-02-23 2014-05-20 Microsoft Corporation Projectors and depth cameras for deviceless augmented reality and interaction
CN103154816A (zh) * 2010-07-13 2013-06-12 R·S·米尔拉伊 用于静态摄影的可变三维照相机组件
JP6000954B2 (ja) * 2010-09-20 2016-10-05 クゥアルコム・インコーポレイテッドQualcomm Incorporated クラウド支援型拡張現実のための適応可能なフレームワーク
US9857868B2 (en) 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
CN102760234B (zh) 2011-04-14 2014-08-20 财团法人工业技术研究院 深度图像采集装置、***及其方法
US8840466B2 (en) 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game
US9597587B2 (en) 2011-06-08 2017-03-21 Microsoft Technology Licensing, Llc Locational node device
BR112013032813A2 (pt) 2011-06-20 2016-08-16 Du Pont composto de fórmula, composição e método para o tratamento, controle, prevenção ou proteção dos animais contra a infecção por helmintos
US9269152B1 (en) * 2011-09-07 2016-02-23 Amazon Technologies, Inc. Object detection with distributed sensor array
US10032036B2 (en) * 2011-09-14 2018-07-24 Shahab Khan Systems and methods of multidimensional encrypted data transfer
US9251723B2 (en) * 2011-09-14 2016-02-02 Jonas Moses Systems and methods of multidimensional encrypted data transfer
RU2014126367A (ru) 2011-11-28 2016-01-27 Е.И.Дюпон Де Немур Энд Компани Производные n-(4-хинолинметил)сульфонамидов и их применение в качестве антигельминтных средств
US9137511B1 (en) * 2011-12-15 2015-09-15 Rawles Llc 3D modeling with depth camera and surface normals
US8854433B1 (en) 2012-02-03 2014-10-07 Aquifi, Inc. Method and system enabling natural user interface gestures with an electronic system
US9096920B1 (en) * 2012-03-22 2015-08-04 Google Inc. User interface method
CN103563369B (zh) * 2012-05-28 2017-03-29 松下知识产权经营株式会社 图像处理装置、摄像装置以及图像处理方法
US9111135B2 (en) 2012-06-25 2015-08-18 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera
US8934675B2 (en) 2012-06-25 2015-01-13 Aquifi, Inc. Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints
US9696427B2 (en) 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
US9129155B2 (en) 2013-01-30 2015-09-08 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map
US9092665B2 (en) 2013-01-30 2015-07-28 Aquifi, Inc Systems and methods for initializing motion tracking of human hands
US9298266B2 (en) 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9232183B2 (en) 2013-04-19 2016-01-05 At&T Intellectual Property I, Lp System and method for providing separate communication zones in a large format videoconference
US9798388B1 (en) 2013-07-31 2017-10-24 Aquifi, Inc. Vibrotactile system to augment 3D input systems
US9473708B1 (en) * 2013-08-07 2016-10-18 Google Inc. Devices and methods for an imaging system with a dual camera architecture
US9507417B2 (en) 2014-01-07 2016-11-29 Aquifi, Inc. Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9619105B1 (en) 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
CN104008571B (zh) * 2014-06-12 2017-01-18 深圳奥比中光科技有限公司 基于深度相机的人体模型获取方法及网络虚拟试衣***
US10257494B2 (en) 2014-09-22 2019-04-09 Samsung Electronics Co., Ltd. Reconstruction of three-dimensional video
US11205305B2 (en) 2014-09-22 2021-12-21 Samsung Electronics Company, Ltd. Presentation of three-dimensional video
WO2017125507A1 (fr) * 2016-01-19 2017-07-27 Zivid Labs As Unité et système d'imagerie permettant l'obtention d'une image tridimensionnelle
US10475392B2 (en) * 2016-03-07 2019-11-12 Ecole Polytechnique Federale De Lausanne (Epfl) Media item relighting technique
US10044925B2 (en) 2016-08-18 2018-08-07 Microsoft Technology Licensing, Llc Techniques for setting focus in mixed reality applications
JP6809128B2 (ja) * 2016-10-24 2021-01-06 富士通株式会社 画像処理装置、画像処理方法、および画像処理プログラム
US11200675B2 (en) * 2017-02-20 2021-12-14 Sony Corporation Image processing apparatus and image processing method
JP6785181B2 (ja) * 2017-04-12 2020-11-18 株式会社日立製作所 物体認識装置、物体認識システム、及び物体認識方法
US10529074B2 (en) 2017-09-28 2020-01-07 Samsung Electronics Co., Ltd. Camera pose and plane estimation using active markers and a dynamic vision sensor
US10839547B2 (en) 2017-09-28 2020-11-17 Samsung Electronics Co., Ltd. Camera pose determination and tracking
CN110891131A (zh) * 2018-09-10 2020-03-17 北京小米移动软件有限公司 摄像头模组、处理方法及装置、电子设备、存储介质
US11039118B2 (en) 2019-04-17 2021-06-15 XRSpace CO., LTD. Interactive image processing system using infrared cameras
US10955245B2 (en) * 2019-04-30 2021-03-23 Samsung Electronics Co., Ltd. System and method for low latency, high performance pose fusion
GB2584276B (en) * 2019-05-22 2023-06-07 Sony Interactive Entertainment Inc Capture of a three-dimensional representation of a scene
JP2022546053A (ja) * 2019-08-27 2022-11-02 カレオス 仮想ミラーシステム及び方法
US11665330B2 (en) 2021-01-27 2023-05-30 Dell Products L.P. Dynamic-baseline imaging array with real-time spatial data capture and fusion
US11562464B1 (en) * 2022-09-29 2023-01-24 Illuscio, Inc. Systems and methods for image postprocessing via viewport demosaicing


Family Cites Families (109)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4376950A (en) * 1980-09-29 1983-03-15 Ampex Corporation Three-dimensional television system using holographic techniques
US5049987A (en) * 1989-10-11 1991-09-17 Reuben Hoppenstein Method and apparatus for creating three-dimensional television or other multi-dimensional images
US5515183A (en) * 1991-08-08 1996-05-07 Citizen Watch Co., Ltd. Real-time holography system
US5691885A (en) 1992-03-17 1997-11-25 Massachusetts Institute Of Technology Three-dimensional interconnect having modules with vertical top and bottom connectors
US5745719A (en) 1995-01-19 1998-04-28 Falcon; Fernando D. Commands functions invoked from movement of a control input device
US5835133A (en) 1996-01-23 1998-11-10 Silicon Graphics, Inc. Optical system for single camera stereo video
US6115482A (en) 1996-02-13 2000-09-05 Ascent Technology, Inc. Voice-output reading system with gesture-based navigation
JP3337938B2 (ja) 1996-04-25 2002-10-28 松下電器産業株式会社 3次元骨格構造の動き送受信装置、および動き送受信方法
US5909218A (en) 1996-04-25 1999-06-01 Matsushita Electric Industrial Co., Ltd. Transmitter-receiver of three-dimensional skeleton structure motions and method thereof
US5852450A (en) 1996-07-11 1998-12-22 Lamb & Company, Inc. Method and apparatus for processing captured motion data
US5831633A (en) 1996-08-13 1998-11-03 Van Roy; Peter L. Designating, drawing and colorizing generated images by computer
JPH10188028A (ja) 1996-10-31 1998-07-21 Konami Co Ltd スケルトンによる動画像生成装置、該動画像を生成する方法、並びに該動画像を生成するプログラムを記憶した媒体
US6243106B1 (en) 1998-04-13 2001-06-05 Compaq Computer Corporation Method for figure tracking using 2-D registration and 3-D reconstruction
US6681031B2 (en) * 1998-08-10 2004-01-20 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US6529643B1 (en) 1998-12-21 2003-03-04 Xerox Corporation System for electronic compensation of beam scan trajectory distortion
US6303924B1 (en) 1998-12-21 2001-10-16 Microsoft Corporation Image sensing operator input device
DE19917660A1 (de) * 1999-04-19 2000-11-02 Deutsch Zentr Luft & Raumfahrt Verfahren und Eingabeeinrichtung zum Steuern der Lage eines in einer virtuellen Realität graphisch darzustellenden Objekts
AU5481500A (en) * 1999-06-11 2001-01-02 Emile Hendriks Acquisition of 3-d scenes with a single hand held camera
US6597801B1 (en) * 1999-09-16 2003-07-22 Hewlett-Packard Development Company L.P. Method for object registration via selection of models with dynamically ordered features
JP2001246161A (ja) 1999-12-31 2001-09-11 Square Co Ltd ジェスチャー認識技術を用いたゲーム装置およびその方法ならびにその方法を実現するプログラムを記憶した記録媒体
GB2358098A (en) 2000-01-06 2001-07-11 Sharp Kk Method of segmenting a pixelled image
EP1117072A1 (fr) 2000-01-17 2001-07-18 Koninklijke Philips Electronics N.V. Amélioration de texte
US6674877B1 (en) 2000-02-03 2004-01-06 Microsoft Corporation System and method for visually tracking occluded objects in real time
US7370983B2 (en) 2000-03-02 2008-05-13 Donnelly Corporation Interior mirror assembly with display
US6554706B2 (en) 2000-05-31 2003-04-29 Gerard Jounghyun Kim Methods and apparatus of displaying and evaluating motion data in a motion game apparatus
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US6906687B2 (en) * 2000-07-31 2005-06-14 Texas Instruments Incorporated Digital formatter for 3-dimensional display applications
JP4047575B2 (ja) 2000-11-15 2008-02-13 株式会社セガ 情報処理装置における表示物体生成方法、これを実行制御するプログラム及びこのプログラムを格納した記録媒体
IL139995A (en) * 2000-11-29 2007-07-24 Rvc Llc System and method for spherical stereoscopic photographing
US7116330B2 (en) 2001-02-28 2006-10-03 Intel Corporation Approximating motion using a three-dimensional model
US6862121B2 (en) * 2001-06-05 2005-03-01 California Institute Of Technolgy Method and apparatus for holographic recording of fast phenomena
JP4596220B2 (ja) 2001-06-26 2010-12-08 ソニー株式会社 画像処理装置および方法、記録媒体、並びにプログラム
US7680295B2 (en) 2001-09-17 2010-03-16 National Institute Of Advanced Industrial Science And Technology Hand-gesture based interface apparatus
CA2359269A1 (fr) * 2001-10-17 2003-04-17 Biodentity Systems Corporation Systeme d'imagerie utilise pour l'enregistrement d'images faciales et l'identification automatique
WO2003039698A1 (fr) 2001-11-02 2003-05-15 Atlantis Cyberspace, Inc. Systeme de jeu de realite virtuelle comportant des pseudo-commandes d'affichage 3d et une commande de mission
US20050063596A1 (en) 2001-11-23 2005-03-24 Yosef Yomdin Encoding of geometric modeled images
US6833843B2 (en) 2001-12-03 2004-12-21 Tempest Microsystems Panoramic imaging and display system with canonical magnifier
AU2002361483A1 (en) * 2002-02-06 2003-09-02 Nice Systems Ltd. System and method for video content analysis-based detection, surveillance and alarm management
US7379105B1 (en) * 2002-06-18 2008-05-27 Pixim, Inc. Multi-standard video image capture device using a single CMOS image sensor
AU2003280516A1 (en) 2002-07-01 2004-01-19 The Regents Of The University Of California Digital processing of video images
JP3866168B2 (ja) 2002-07-31 2007-01-10 独立行政法人科学技術振興機構 多重構造を用いた動作生成システム
US8013852B2 (en) 2002-08-02 2011-09-06 Honda Giken Kogyo Kabushiki Kaisha Anthropometry-based skeleton fitting
US8460103B2 (en) 2004-06-18 2013-06-11 Igt Gesture controlled casino gaming system
JP4039234B2 (ja) * 2002-12-25 2008-01-30 ソニー株式会社 固体撮像素子およびその電荷転送方法
AU2003292490A1 (en) * 2003-01-17 2004-08-13 Koninklijke Philips Electronics N.V. Full depth map acquisition
US9177387B2 (en) 2003-02-11 2015-11-03 Sony Computer Entertainment Inc. Method and apparatus for real time motion capture
US7257237B1 (en) 2003-03-07 2007-08-14 Sandia Corporation Real time markerless motion tracking using linked kinematic chains
US8745541B2 (en) * 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
AU2003289108A1 (en) 2003-04-22 2004-11-19 Hiroshi Arisawa Motion capturing method, motion capturing device, and motion capturing marker
EP1627294A2 (fr) 2003-05-01 2006-02-22 Delta Dansk Elektronik, Lys & Akustik Interface homme-machine basee sur des positions tridimensionnelles du corps humain
US7418134B2 (en) 2003-05-12 2008-08-26 Princeton University Method and apparatus for foreground segmentation of video sequences
US7831088B2 (en) 2003-06-13 2010-11-09 Georgia Tech Research Corporation Data reconstruction using directional interpolation techniques
JP2005020227A (ja) * 2003-06-25 2005-01-20 Pfu Ltd 画像圧縮装置
JP2005025415A (ja) * 2003-06-30 2005-01-27 Sony Corp 位置検出装置
US7755608B2 (en) 2004-01-23 2010-07-13 Hewlett-Packard Development Company, L.P. Systems and methods of interfacing with a machine
JP2007531113A (ja) 2004-03-23 2007-11-01 富士通株式会社 携帯装置の傾斜及び並進運動成分の識別
US20070183633A1 (en) 2004-03-24 2007-08-09 Andre Hoffmann Identification, verification, and recognition method and system
US8036494B2 (en) 2004-04-15 2011-10-11 Hewlett-Packard Development Company, L.P. Enhancing image resolution
US7308112B2 (en) * 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
US7519223B2 (en) 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
US7366278B2 (en) 2004-06-30 2008-04-29 Accuray, Inc. DRR generation using a non-linear attenuation model
US8432390B2 (en) 2004-07-30 2013-04-30 Extreme Reality Ltd Apparatus system and method for human-machine interface
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US8114172B2 (en) 2004-07-30 2012-02-14 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
GB0424030D0 (en) * 2004-10-28 2004-12-01 British Telecomm A method and system for processing video data
US7386150B2 (en) * 2004-11-12 2008-06-10 Safeview, Inc. Active subject imaging with body identification
US7903141B1 (en) * 2005-02-15 2011-03-08 Videomining Corporation Method and system for event detection by multi-scale image invariant analysis
WO2006099597A2 (fr) 2005-03-17 2006-09-21 Honda Motor Co., Ltd. Estimation de pose reposant sur l'analyse de points critiques
US7774713B2 (en) * 2005-06-28 2010-08-10 Microsoft Corporation Dynamic user experience with semantic rich objects
US20070285554A1 (en) 2005-10-31 2007-12-13 Dor Givon Apparatus method and system for imaging
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US8265349B2 (en) * 2006-02-07 2012-09-11 Qualcomm Incorporated Intra-mode region-of-interest video object segmentation
US9395905B2 (en) * 2006-04-05 2016-07-19 Synaptics Incorporated Graphical scroll wheel
US7804486B2 (en) 2006-04-06 2010-09-28 Smyth Robert W Trackball systems and methods for rotating a three-dimensional image on a computer display
JP2007302223A (ja) 2006-04-12 2007-11-22 Hitachi Ltd 車載装置の非接触入力操作装置
CN101479765B (zh) * 2006-06-23 2012-05-23 图象公司 对2d电影进行转换用于立体3d显示的方法和***
US8022935B2 (en) 2006-07-06 2011-09-20 Apple Inc. Capacitance sensing electrode with integrated I/O mechanism
US7783118B2 (en) 2006-07-13 2010-08-24 Seiko Epson Corporation Method and apparatus for determining motion in images
US7701439B2 (en) * 2006-07-13 2010-04-20 Northrop Grumman Corporation Gesture recognition simulation system and method
US7907117B2 (en) * 2006-08-08 2011-03-15 Microsoft Corporation Virtual controller for visual displays
US7936932B2 (en) * 2006-08-24 2011-05-03 Dell Products L.P. Methods and apparatus for reducing storage size
US8356254B2 (en) * 2006-10-25 2013-01-15 International Business Machines Corporation System and method for interacting with a display
US20080104547A1 (en) * 2006-10-25 2008-05-01 General Electric Company Gesture-based communications
US7885480B2 (en) 2006-10-31 2011-02-08 Mitutoyo Corporation Correlation peak finding method for image correlation displacement sensing
US8756516B2 (en) * 2006-10-31 2014-06-17 Scenera Technologies, Llc Methods, systems, and computer program products for interacting simultaneously with multiple application programs
US8793621B2 (en) 2006-11-09 2014-07-29 Navisense Method and device to control touchless recognition
US8075499B2 (en) * 2007-05-18 2011-12-13 Vaidhi Nathan Abnormal motion detector and monitor
US7916944B2 (en) 2007-01-31 2011-03-29 Fuji Xerox Co., Ltd. System and method for feature level foreground segmentation
CA2684020C (fr) 2007-04-15 2016-08-09 Extreme Reality Ltd. Systeme et procede d'interface homme-machine
WO2008134745A1 (fr) 2007-04-30 2008-11-06 Gesturetek, Inc. Thérapie mobile sur la base d'un contenu vidéo
US8432377B2 (en) * 2007-08-30 2013-04-30 Next Holdings Limited Optical touchscreen with improved illumination
US8094943B2 (en) 2007-09-27 2012-01-10 Behavioral Recognition Systems, Inc. Background-foreground module for video analysis system
US8005263B2 (en) * 2007-10-26 2011-08-23 Honda Motor Co., Ltd. Hand sign recognition using label assignment
US9451142B2 (en) * 2007-11-30 2016-09-20 Cognex Corporation Vision sensors, systems, and methods
US8107726B2 (en) 2008-06-18 2012-01-31 Samsung Electronics Co., Ltd. System and method for class-specific object segmentation of image data
US9189886B2 (en) * 2008-08-15 2015-11-17 Brown University Method and apparatus for estimating body shape
CA2735992A1 (fr) 2008-09-04 2010-03-11 Extreme Reality Ltd. Procede, systeme, modules, logiciels utilises pour fournir une interface homme-machine par capteur d'image
EP2350925A4 (fr) 2008-10-24 2012-03-21 Extreme Reality Ltd Procede, systeme et modules associes et composants logiciels pour produire une interface homme-machine basée sur un capteur d'image
US8289440B2 (en) * 2008-12-08 2012-10-16 Lytro, Inc. Light field data acquisition devices, and methods of using and manufacturing same
CA2748037C (fr) * 2009-02-17 2016-09-20 Omek Interactive, Ltd. Procede et systeme de reconnaissance de geste
US8320619B2 (en) * 2009-05-29 2012-11-27 Microsoft Corporation Systems and methods for tracking a model
US8466934B2 (en) * 2009-06-29 2013-06-18 Min Liang Tan Touchscreen interface
US8270733B2 (en) * 2009-08-31 2012-09-18 Behavioral Recognition Systems, Inc. Identifying anomalous object types during classification
KR101577106B1 (ko) 2009-09-21 2015-12-11 익스트림 리얼리티 엘티디. 인간 기계가 가전 기기와 인터페이싱하기 위한 방법, 회로, 장치 및 시스템
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US8659592B2 (en) * 2009-09-24 2014-02-25 Shenzhen Tcl New Technology Ltd 2D to 3D video conversion
US20110292036A1 (en) 2010-05-31 2011-12-01 Primesense Ltd. Depth sensor with application interface
JP2014504074A (ja) 2011-01-23 2014-02-13 エクストリーム リアリティー エルティーディー. 立体三次元イメージおよびビデオを生成する方法、システム、装置、および、関連する処理論理回路
WO2013069023A2 (fr) 2011-11-13 2013-05-16 Extreme Reality Ltd. Procédés, systèmes, appareils, circuits et code exécutable par ordinateur associé pour la caractérisation, la catégorisation, l'identification et/ou la réaction à la présence d'un sujet fondées sur la vidéo

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5130794A (en) * 1990-03-29 1992-07-14 Ritchey Kurtis J Panoramic display system
US5703704A (en) * 1992-09-30 1997-12-30 Fujitsu Limited Stereoscopic image information transmission system
US20030007680A1 (en) * 1996-07-01 2003-01-09 Katsumi Iijima Three-dimensional information processing apparatus and method
US6657670B1 (en) * 1999-03-16 2003-12-02 Teco Image Systems Co., Ltd. Diaphragm structure of digital still camera
US7123292B1 (en) * 1999-09-29 2006-10-17 Xerox Corporation Mosaicing images with an offset lens
US7061532B2 (en) * 2001-03-27 2006-06-13 Hewlett-Packard Development Company, L.P. Single sensor chip digital stereo camera

Cited By (137)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110129124A1 (en) * 2004-07-30 2011-06-02 Dor Givon Method circuit and system for human to machine interfacing by hand gestures
US8681100B2 (en) 2004-07-30 2014-03-25 Extreme Realty Ltd. Apparatus system and method for human-machine-interface
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US8462199B2 (en) 2005-10-31 2013-06-11 Extreme Reality Ltd. Apparatus method and system for imaging
US8878896B2 (en) 2005-10-31 2014-11-04 Extreme Reality Ltd. Apparatus method and system for imaging
US9131220B2 (en) 2005-10-31 2015-09-08 Extreme Reality Ltd. Apparatus method and system for imaging
US20100194862A1 (en) * 2005-10-31 2010-08-05 Xtrextreme Reality Apparatus Method and System for Imaging
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US20110080496A1 (en) * 2005-10-31 2011-04-07 Dor Givon Apparatus Method and System for Imaging
US10819954B2 (en) 2006-11-16 2020-10-27 Immersive Licensing, Inc. Distributed video sensor panoramic imaging system
US20120075414A1 (en) * 2006-11-16 2012-03-29 Park Michael C Distributed Video Sensor Panoramic Imaging System
US10375355B2 (en) * 2006-11-16 2019-08-06 Immersive Licensing, Inc. Distributed video sensor panoramic imaging system
US20090066784A1 (en) * 2007-09-05 2009-03-12 Sony Corporation Image processing apparatus and method
US8284238B2 (en) * 2007-09-05 2012-10-09 Sony Corporation Image processing apparatus and method
US20100287511A1 (en) * 2007-09-25 2010-11-11 Metaio Gmbh Method and device for illustrating a virtual object in a real environment
US9390560B2 (en) * 2007-09-25 2016-07-12 Metaio Gmbh Method and device for illustrating a virtual object in a real environment
US9035876B2 (en) 2008-01-14 2015-05-19 Apple Inc. Three-dimensional user interface session control
US8031952B2 (en) * 2008-04-21 2011-10-04 Broadcom Corporation Method and apparatus for optimizing memory usage in image processing
US20090262208A1 (en) * 2008-04-21 2009-10-22 Ilia Vitsnudel Method and Apparatus for Optimizing Memory Usage in Image Processing
US8098276B2 (en) * 2008-08-11 2012-01-17 Electronics And Telecommunications Research Institute Stereo vision system and control method thereof
US20100066811A1 (en) * 2008-08-11 2010-03-18 Electronics And Telecommunications Research Institute Stereo vision system and control method thereof
US20110163948A1 (en) * 2008-09-04 2011-07-07 Dor Givon Method system and software for providing image sensor based human machine interfacing
US20110169825A1 (en) * 2008-09-30 2011-07-14 Fujifilm Corporation Three-dimensional display apparatus, method, and program
US8199147B2 (en) * 2008-09-30 2012-06-12 Fujifilm Corporation Three-dimensional display apparatus, method, and program
US8355042B2 (en) * 2008-10-16 2013-01-15 Spatial Cam Llc Controller in a camera for creating a panoramic image
US20100097443A1 (en) * 2008-10-16 2010-04-22 Peter Lablans Controller in a Camera for Creating a Panoramic Image
US8780256B2 (en) * 2008-10-24 2014-07-15 Reald Inc. Stereoscopic image format with depth information
US8482654B2 (en) * 2008-10-24 2013-07-09 Reald Inc. Stereoscopic image format with depth information
US20100103249A1 (en) * 2008-10-24 2010-04-29 Real D Stereoscopic image format with depth information
US8548258B2 (en) 2008-10-24 2013-10-01 Extreme Reality Ltd. Method system and associated modules and software components for providing image sensor based human machine interfacing
US20130294684A1 (en) * 2008-10-24 2013-11-07 Reald Inc. Stereoscopic image format with depth information
WO2010048632A1 (fr) * 2008-10-24 2010-04-29 Real D Stereoscopic image format with depth information
RU2538937C2 (ru) * 2009-06-25 2015-01-10 Koninklijke Philips Electronics N.V. Method of recording stereo images, system and camera
US9131221B2 (en) 2009-06-25 2015-09-08 Koninklijke Philips N.V. Stereoscopic image capturing method, system and camera
WO2010150177A1 (fr) * 2009-06-25 2010-12-29 Koninklijke Philips Electronics N.V. Stereoscopic image capturing method, system and camera
US8648953B2 (en) * 2009-07-03 2014-02-11 Fujifilm Corporation Image display apparatus and method, as well as program
US20110001802A1 (en) * 2009-07-03 2011-01-06 Takeshi Misawa Image display apparatus and method, as well as program
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US20120162360A1 (en) * 2009-10-02 2012-06-28 Kabushiki Kaisha Topcon Wide-Angle Image Pickup Unit And Measuring Device
US9733080B2 (en) * 2009-10-02 2017-08-15 Kabushiki Kaisha Topcon Wide-angle image pickup unit and measuring device
US20110164032A1 (en) * 2010-01-07 2011-07-07 Prime Sense Ltd. Three-Dimensional User Interface
US20130250067A1 (en) * 2010-03-29 2013-09-26 Ludwig Laxhuber Optical stereo device and autofocus method therefor
US9479759B2 (en) * 2010-03-29 2016-10-25 Forstgarten International Holding Gmbh Optical stereo device and autofocus method therefor
US9210408B2 (en) * 2010-06-24 2015-12-08 Fujifilm Corporation Stereoscopic panoramic image synthesis device, image capturing device, stereoscopic panoramic image synthesis method, recording medium, and computer program
US20130083159A1 (en) * 2010-06-24 2013-04-04 Fujifilm Corporation Stereoscopic panoramic image synthesis device, image capturing device, stereoscopic panoramic image synthesis method, recording medium, and computer program
KR101666019B1 (ko) * 2010-08-03 2016-10-14 Samsung Electronics Co., Ltd. Apparatus and method for generating extrapolated view
KR20120012874A (ko) * 2010-08-03 2012-02-13 Samsung Electronics Co., Ltd. Apparatus and method for generating extrapolated view
US8803947B2 (en) * 2010-08-03 2014-08-12 Samsung Electronics Co., Ltd. Apparatus and method for generating extrapolated view
US20120033038A1 (en) * 2010-08-03 2012-02-09 Samsung Electronics Co., Ltd. Apparatus and method for generating extrapolated view
US9369693B2 (en) * 2010-09-22 2016-06-14 Fujifilm Corporation Stereoscopic imaging device and shading correction method
US20130162780A1 (en) * 2010-09-22 2013-06-27 Fujifilm Corporation Stereoscopic imaging device and shading correction method
US8929456B2 (en) * 2010-09-30 2015-01-06 Alcatel Lucent Video coding using compressive measurements
US8644376B2 (en) 2010-09-30 2014-02-04 Alcatel Lucent Apparatus and method for generating compressive measurements of video using spatial and temporal integration
US8702592B2 (en) 2010-09-30 2014-04-22 David Allan Langlois System and method for inhibiting injury to a patient during laparoscopic surgery
US9634690B2 (en) 2010-09-30 2017-04-25 Alcatel Lucent Method and apparatus for arbitrary resolution video coding using compressive sampling measurements
US9344736B2 (en) 2010-09-30 2016-05-17 Alcatel Lucent Systems and methods for compressive sense imaging
US9492070B2 (en) 2010-09-30 2016-11-15 David Allan Langlois System and method for inhibiting injury to a patient during laparoscopic surgery
US20120148173A1 (en) * 2010-12-08 2012-06-14 Electronics And Telecommunications Research Institute Method and device for generating multi-viewpoint image
US8731279B2 (en) * 2010-12-08 2014-05-20 Electronics And Telecommunications Research Institute Method and device for generating multi-viewpoint image
US8872762B2 (en) 2010-12-08 2014-10-28 Primesense Ltd. Three dimensional user interface cursor control
US8933876B2 (en) 2010-12-13 2015-01-13 Apple Inc. Three dimensional user interface session control
US9983685B2 (en) 2011-01-17 2018-05-29 Mediatek Inc. Electronic apparatuses and methods for providing a man-machine interface (MMI)
US9632626B2 (en) * 2011-01-17 2017-04-25 Mediatek Inc Apparatuses and methods for providing a 3D man-machine interface (MMI)
US20140125587A1 (en) * 2011-01-17 2014-05-08 Mediatek Inc. Apparatuses and methods for providing a 3d man-machine interface (mmi)
US20120194655A1 (en) * 2011-01-28 2012-08-02 Hsu-Jung Tung Display, image processing apparatus and image processing method
US9316541B2 (en) * 2011-04-14 2016-04-19 Office National D'etudes Et De Recherches Aerospatiales (Onera) Imaging system comprising a Fresnel lens
US20140111651A1 (en) * 2011-04-14 2014-04-24 Ulis Imaging system comprising a fresnel lens
US20120274568A1 (en) * 2011-04-27 2012-11-01 Aptina Imaging Corporation Complete digital holographic image sensor-projector computing unit
US8690339B2 (en) * 2011-04-27 2014-04-08 Aptina Imaging Corporation Complete digital holographic image sensor-projector computing unit having a modulator for receiving a fourier image
US8988453B2 (en) * 2011-06-17 2015-03-24 Lg Display Co., Ltd. Stereoscopic image display device and driving method thereof
US20120320036A1 (en) * 2011-06-17 2012-12-20 Lg Display Co., Ltd. Stereoscopic Image Display Device and Driving Method Thereof
US20130002715A1 (en) * 2011-06-28 2013-01-03 Tidman James M Image Sequence Reconstruction based on Overlapping Measurement Subsets
US8881051B2 (en) 2011-07-05 2014-11-04 Primesense Ltd Zoom-based gesture user interface
US9459758B2 (en) 2011-07-05 2016-10-04 Apple Inc. Gesture-based interface with enhanced features
US9377865B2 (en) 2011-07-05 2016-06-28 Apple Inc. Zoom-based gesture user interface
US9398310B2 (en) 2011-07-14 2016-07-19 Alcatel Lucent Method and apparatus for super-resolution video coding using compressive sampling measurements
US9030498B2 (en) 2011-08-15 2015-05-12 Apple Inc. Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface
US9218063B2 (en) 2011-08-24 2015-12-22 Apple Inc. Sessionless pointing user interface
US9398288B2 (en) 2011-11-04 2016-07-19 Empire Technology Development Llc IR signal capture for images
US8970693B1 (en) * 2011-12-15 2015-03-03 Rawles Llc Surface modeling with structured light
US9229534B2 (en) 2012-02-28 2016-01-05 Apple Inc. Asymmetric mapping for tactile and non-tactile user interfaces
US20150062308A1 (en) * 2012-06-22 2015-03-05 Nikon Corporation Image processing apparatus, image-capturing apparatus and image processing method
US9723288B2 (en) * 2012-06-22 2017-08-01 Nikon Corporation Image processing apparatus, image-capturing apparatus and image processing method
US9325966B2 (en) * 2012-07-30 2016-04-26 Canon Kabushiki Kaisha Depth measurement using multispectral binary coded projection and multispectral image capture
US20140028800A1 (en) * 2012-07-30 2014-01-30 Canon Kabushiki Kaisha Multispectral Binary Coded Projection
US9319578B2 (en) 2012-10-24 2016-04-19 Alcatel Lucent Resolution and focus enhancement
US20160309136A1 (en) * 2012-11-08 2016-10-20 Leap Motion, Inc. Three-dimensional image sensors
US10531069B2 (en) * 2012-11-08 2020-01-07 Ultrahaptics IP Two Limited Three-dimensional image sensors
US20190058868A1 (en) * 2012-11-08 2019-02-21 Leap Motion, Inc. Three-Dimensional Image Sensors
US9973741B2 (en) * 2012-11-08 2018-05-15 Leap Motion, Inc. Three-dimensional image sensors
US20140192868A1 (en) * 2013-01-07 2014-07-10 Qualcomm Incorporated Inter-layer reference picture generation for hls-only scalable video coding
US9270991B2 (en) * 2013-01-07 2016-02-23 Qualcomm Incorporated Inter-layer reference picture generation for HLS-only scalable video coding
US9380221B2 (en) * 2013-02-27 2016-06-28 Massachusetts Institute Of Technology Methods and apparatus for light field photography
US20140240532A1 (en) * 2013-02-27 2014-08-28 Massachusetts Institute Of Technology Methods and Apparatus for Light Field Photography
US10184830B2 (en) 2013-06-18 2019-01-22 Michael Golub Apparatus and method for snapshot spectral imaging
US9823126B2 (en) 2013-06-18 2017-11-21 Ramot At Tel-Aviv University Ltd. Apparatus and method for snapshot spectral imaging
US20150156470A1 (en) * 2013-11-04 2015-06-04 Massachusetts Institute Of Technology Reducing View Transitions Artifacts In Automultiscopic Displays
US9967538B2 (en) * 2013-11-04 2018-05-08 Massachusetts Institute Of Technology Reducing view transitions artifacts in automultiscopic displays
US9600899B2 (en) 2013-12-20 2017-03-21 Alcatel Lucent Methods and apparatuses for detecting anomalies in the compressed sensing domain
US9563806B2 (en) 2013-12-20 2017-02-07 Alcatel Lucent Methods and apparatuses for detecting anomalies using transform based compressed sensing matrices
US10643343B2 (en) * 2014-02-05 2020-05-05 Creaform Inc. Structured light matching of a set of curves from three cameras
KR20170017927A (ko) * 2014-06-20 2017-02-15 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
US20150373322A1 (en) * 2014-06-20 2015-12-24 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
CN106461783A (zh) * 2014-06-20 2017-02-22 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
US10419703B2 (en) * 2014-06-20 2019-09-17 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
KR102376285B1 (ko) * 2014-06-20 2022-03-17 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
US20160021390A1 (en) * 2014-07-15 2016-01-21 Alcatel-Lucent Usa, Inc. Method and system for modifying compressive sensing block sizes for video monitoring using distance information
US9894324B2 (en) * 2014-07-15 2018-02-13 Alcatel-Lucent Usa Inc. Method and system for modifying compressive sensing block sizes for video monitoring using distance information
US10484671B2 (en) * 2015-05-29 2019-11-19 Interdigital Ce Patent Holdings Method for displaying a content from 4D light field data
US20160353087A1 (en) * 2015-05-29 2016-12-01 Thomson Licensing Method for displaying a content from 4d light field data
US20180288393A1 (en) * 2015-09-30 2018-10-04 Calay Venture S.à r.l. Presence camera
US11196972B2 (en) * 2015-09-30 2021-12-07 Tmrw Foundation Ip S. À R.L. Presence camera
US10354399B2 (en) * 2017-05-25 2019-07-16 Google Llc Multi-view back-projection to a light-field
US11589034B2 (en) 2017-06-12 2023-02-21 Interdigital Madison Patent Holdings, Sas Method and apparatus for providing information to a user observing a multi view content
US11202052B2 (en) * 2017-06-12 2021-12-14 Interdigital Ce Patent Holdings, Sas Method for displaying, on a 2D display device, a content derived from light field data
US11789288B2 (en) 2018-01-14 2023-10-17 Light Field Lab, Inc. Light field vision-correction device
US11163176B2 (en) * 2018-01-14 2021-11-02 Light Field Lab, Inc. Light field vision-correction device
US11556015B2 (en) 2018-01-14 2023-01-17 Light Field Lab, Inc. Light field vision-correction device
US11017540B2 (en) 2018-04-23 2021-05-25 Cognex Corporation Systems and methods for improved 3-d data reconstruction from stereo-temporal image sequences
US11069074B2 (en) * 2018-04-23 2021-07-20 Cognex Corporation Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences
US11074700B2 (en) * 2018-04-23 2021-07-27 Cognex Corporation Systems, methods, and computer-readable storage media for determining saturation data for a temporal pixel
US11593954B2 (en) * 2018-04-23 2023-02-28 Cognex Corporation Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences
US20210407110A1 (en) * 2018-04-23 2021-12-30 Cognex Corporation Systems and methods for improved 3-d data reconstruction from stereo-temporal image sequences
US11115607B2 (en) 2018-07-27 2021-09-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control system for structured light projector and electronic device
WO2020019704A1 (fr) * 2018-07-27 2020-01-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control system for structured light projector, and electronic device
US20200037000A1 (en) * 2018-07-30 2020-01-30 Ricoh Company, Ltd. Distribution system, client terminal, and method of controlling display
US11057644B2 (en) * 2018-07-30 2021-07-06 Ricoh Company, Ltd. Distribution system, client terminal, and method of controlling display
US20220094902A1 (en) * 2019-06-06 2022-03-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-channel imaging device and device having a multi-aperture imaging device
CN114270815A (zh) * 2019-06-06 2022-04-01 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-channel imaging device and device having a multi-aperture imaging device
CN112114507A (zh) * 2019-06-21 2020-12-22 Samsung Electronics Co., Ltd. Holographic display device and method for providing an expanded viewing window
US20210044795A1 (en) * 2019-08-09 2021-02-11 Light Field Lab, Inc. Light Field Display System Based Digital Signage System
US11902500B2 (en) * 2019-08-09 2024-02-13 Light Field Lab, Inc. Light field display system based digital signage system
US11538134B2 (en) * 2020-09-10 2022-12-27 Seiko Epson Corporation Information generation method, information generation system, and non-transitory computer-readable storage medium storing program
CN114173105A (zh) * 2020-09-10 2022-03-11 Seiko Epson Corporation Information generation method, information generation system, and recording medium

Also Published As

Publication number Publication date
US8462199B2 (en) 2013-06-11
US20110080496A1 (en) 2011-04-07
US9131220B2 (en) 2015-09-08
US20100194862A1 (en) 2010-08-05
WO2007052262A2 (fr) 2007-05-10
US20130188024A1 (en) 2013-07-25
WO2007052262A3 (fr) 2009-04-09
US8878896B2 (en) 2014-11-04

Similar Documents

Publication Publication Date Title
US9131220B2 (en) Apparatus method and system for imaging
Venkataraman et al. Picam: An ultra-thin high performance monolithic camera array
US5530774A (en) Generation of depth image through interpolation and extrapolation of intermediate images derived from stereo image pair using disparity vector fields
Zaharia et al. Adaptive 3D-DCT compression algorithm for continuous parallax 3D integral imaging
US20060187297A1 (en) Holographic 3-d television
CN108141578B (zh) Presence camera
KR100897307B1 (ko) Method and apparatus for reproducing a three-dimensional image obtained by integral imaging using a holographic technique
Lee et al. Three-dimensional display and information processing based on integral imaging
US20150124062A1 (en) Joint View Expansion And Filtering For Automultiscopic 3D Displays
Hong et al. Three-dimensional visualization of partially occluded objects using integral imaging
KR101600681B1 (ko) Depth conversion method for three-dimensional image display in an integral imaging system
Aggoun 3D holoscopic imaging technology for real-time volume processing and display
CN110958442B (zh) Method and apparatus for processing holographic image data
Aggoun Pre-processing of integral images for 3-D displays
KR101025785B1 (ko) Three-dimensional real object imaging apparatus
Yamaguchi Ray-based and wavefront-based holographic displays for high-density light-field reproduction
KR20140037430A (ko) Vertical-rig-based high-definition digital holographic video generation system
KR101608753B1 (ko) Method and apparatus for generating three-dimensional content by capturing focus-shifted images
Yu et al. Dynamic depth of field on live video streams: A stereo solution
KR100652204B1 (ko) Stereoscopic image display method and apparatus
KR100708834B1 (ko) Stereoscopic image display system
Hamaguchi et al. Real-time view interpolation system for super multiview 3D display
KR20090091909A (ko) Apparatus for reproducing volumetric three-dimensional images using a combination of a circular mapping model and interpolation
Olsson et al. An interactive ray-tracing based simulation environment for generating integral imaging video sequences
Swash et al. Reference based holoscopic 3D camera aperture stitching for widening the overall viewing angle

Legal Events

Date Code Title Description
AS Assignment
Owner name: EXTREME REALITY LTD., ISRAEL
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GIVON, DOR;REEL/FRAME:023432/0752
Effective date: 20060424
STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION