US20100220893A1 - Method and System of Mono-View Depth Estimation - Google Patents

Method and System of Mono-View Depth Estimation

Info

Publication number
US20100220893A1
Authority
US
United States
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/396,363
Inventor
Gwo Giun Lee
Ming-Jiun Wang
Ling-Hsiu Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Himax Technologies Ltd
NCKU Research and Development Foundation
Original Assignee
Himax Technologies Ltd
NCKU Research and Development Foundation
Application filed by Himax Technologies Ltd and NCKU Research and Development Foundation
Priority to US12/396,363
Assigned to HIMAX MEDIA SOLUTIONS, INC. and NCKU RESEARCH AND DEVELOPMENT FOUNDATION (assignors: LEE, GWO GIUN; WANG, MING-JIUN; HUANG, LING-HSIU)
Assigned to HIMAX TECHNOLOGIES LIMITED (assignor: HIMAX MEDIA SOLUTIONS, INC.)
Publication of US20100220893A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30221: Sports video; Sports image

Abstract

A method and system of mono-view depth estimation are disclosed. A two-dimensional (2D) image is first segmented into a number of objects. A depth diffusion region (DDR), such as the ground or a floor, is then detected among the objects. The DDR generally includes a horizontal plane. The DDR is assigned a depth, and each object connected to the DDR is assigned depth according to the depth of the DDR at the connected site.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention generally relates to mono-view depth estimation, and more particularly to a ground model for mono-view depth estimation.
  • 2. Description of the Prior Art
  • When three-dimensional (3D) objects are mapped onto a two-dimensional (2D) image plane by perspective projection, such as in an image taken by a still camera or video captured by a video camera, a substantial amount of information, notably the 3D depth information, disappears because of the non-unique many-to-one transformation. Accordingly, an image point cannot uniquely determine its depth. Recapturing or generating the 3D depth information is thus a challenging task that is crucial in recovering a full, or at least an approximate, 3D representation.
  • In mono-view depth estimation, depth may be obtained from the monoscopic spatial and/or temporal domain. The term “monoscopic” or “mono” is used herein to refer to a characteristic in which the left and right eyes see the same perspective view of a given scene. One of the known mono-view depth estimation methods extracts the depth information from the degree of object motion, and is thus called a depth-from-motion method. An object with a higher degree of motion is assigned a smaller (or nearer) depth, and vice versa. Another conventional mono-view depth estimation method assigns larger (or farther) depth to non-focused regions such as the background, and is thus called a depth-from-focus-cue method. A further conventional mono-view depth estimation method detects the intersection of vanishing lines, or vanishing point; points approaching the vanishing point are assigned larger (or farther) depths, and vice versa.
  • As very limited information may be obtained from the monoscopic spatio-temporal domain, the conventional methods mentioned above, unfortunately, cannot handle all of the scene contents in real-world video/images. For the foregoing reason, a need has arisen to propose a novel depth estimation method for versatile mono-view video/images.
  • SUMMARY OF THE INVENTION
  • In view of the foregoing, it is an object of the present invention to provide a ground model method and system for mono-view depth estimation, capable of providing correct and versatile depth and of handling a relatively large variety of scenes whenever a depth diffusion region (DDR) is present or can be identified.
  • According to one embodiment, a two-dimensional (2D) image is first segmented into a number of objects. A DDR, such as for example the ground or a floor, is then detected among the objects. The DDR generally includes a region, or relatively planar region, that is approximately horizontal (e.g., a horizontal plane). The DDR is assigned a depth, such as, for example, a depth monotonically increasing from the bottom to the top of the DDR. An object connected to the DDR is assigned depth according to the depth of the DDR at the connected location. For example, the connected object is assigned the same depth as the DDR at the connected location.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a flow diagram demonstrating the steps of a mono-view depth estimation method based on a ground model according to one embodiment of the present invention;
  • FIG. 2 illustrates an associated block diagram of a mono-view depth estimation system according to the embodiment of the present invention; and
  • FIG. 3 shows an exemplary image, in which a golfer stands on the ground or other surface capable of serving as a depth diffusion region (DDR).
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 illustrates a flow diagram demonstrating the steps of a mono-view depth estimation method 100 based on a ground model according to one embodiment of the present invention. FIG. 2 illustrates an associated block diagram of a mono-view depth estimation system 200 according to the embodiment of the present invention.
  • In step 11, an input device 20 provides or receives one or more two-dimensional (2D) input images to be image/video processed in accordance with the embodiment of the present invention. The input device 20 may in general be an electro-optical device that maps 3D object(s) onto a 2D image plane by perspective projection. In one embodiment, the input device 20 may be a still camera that takes the 2D image, or a video camera that captures a number of image frames. The input device 20, in another embodiment, may be a pre-processing device that performs one or more digital image processing tasks, such as image enhancement, image restoration, image analysis, image compression or image synthesis. Moreover, the input device 20 may further include a storage device, such as a semiconductor memory or hard disk drive, which stores processed images from the pre-processing device. As discussed above, a relatively large amount of information, particularly the 3D depth information, is lost when 3D objects are mapped onto the 2D image plane; therefore, according to a feature of the invention, the 2D image provided by the input device 20 is subjected to image/video processing through the other blocks of the mono-view depth estimation system 200, which will be discussed below.
  • The input image/video is then processed, in step 12, by a segmentation unit 22 that partitions the input image into multiple regions, objects or segments. As used herein, the term “unit” denotes a circuit, a piece of program, or their combination. In general, the method and system of the present invention may be implemented in whole or in part using software and/or firmware, including, for example, one or more of a computer, a microprocessor, a circuit, an Application Specific Integrated Circuit (ASIC), a programmable gate array device, or other hardware. The purpose of the segmentation is to change the representation of the image into something easier to assign depth to in the later steps. Pixels in the same region have similar characteristics, such as color, intensity or texture, while pixels in adjacent regions have distinct characteristics. Step 12 may be performed using a conventional segmentation technique, or using a segmentation technique to be developed in the future.
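  • The patent leaves the choice of segmentation technique open. Purely as an illustrative sketch (an assumption, not the patent's prescribed method), step 12 could be approximated by coarse color quantization followed by connected-component labeling; the hypothetical NumPy/SciPy routine below expects an RGB image as a (height, width, 3) array:

```python
import numpy as np
from scipy import ndimage

def segment_image(rgb, levels=4):
    """Hypothetical stand-in for segmentation unit 22 (step 12): partition an
    RGB image into labeled regions via coarse color quantization followed by
    connected-component labeling."""
    # Quantize each channel to a few levels so that similar colors merge.
    q = (rgb.astype(np.float32) / 256.0 * levels).astype(np.int32)
    # Pack the quantized channels into a single per-pixel color code.
    codes = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    labels = np.zeros(codes.shape, dtype=np.int32)
    next_label = 0
    for code in np.unique(codes):
        mask = codes == code
        comp, n = ndimage.label(mask)  # split the color class into connected regions
        labels[mask] = comp[mask] + next_label
        next_label += n
    return labels  # each region ("object") carries a unique positive id
```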
  • In step 13, a depth diffusion region (DDR) is detected by a DDR detection unit 24. According to the disclosed ground model of the present embodiment, the DDR may be ground (or earth), ocean, flooring or any other region or surface that is approximately horizontal (e.g., a horizontal plane). A horizontal plane having uniform segmentation characteristics and substantial area is, according to a feature of the invention, likely to be detected as the DDR. FIG. 3 shows an exemplary image in which a golfer 30 stands on the ground (or lawn) 32 or other region (e.g., a horizontal plane or relatively horizontal surface) suitable for serving as the DDR. In this exemplary image, two objects (i.e., the ground 32 and the golfer 30) are collected through the segmentation of the previous step 12.
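  • The patent likewise does not fix a DDR detection algorithm. One plausible heuristic (again an assumption for illustration only) is to pick the largest segment that touches the bottom of the frame and covers a substantial fraction of the image, matching the intuition that the ground occupies a large, low-lying region:

```python
import numpy as np

def detect_ddr(labels, min_area_frac=0.1):
    """Hypothetical DDR detector (step 13): choose the largest region touching
    the bottom row of the frame, provided it covers a substantial area."""
    h, w = labels.shape
    best, best_area = None, 0
    for region in np.unique(labels[-1, :]):  # candidate regions on the bottom row
        area = int(np.sum(labels == region))
        if area > best_area:
            best, best_area = region, area
    if best is not None and best_area >= min_area_frac * h * w:
        return best  # label id of the DDR (e.g., the ground 32 in FIG. 3)
    return None      # no DDR found: take the "no" branch of step 14
```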
  • When a DDR is identified (i.e., the yes branch of step 14), the DDR is assigned depth in step 15 by a DDR depth assignment unit 26. The depth assignment of the DDR (for example, the ground 32) may monotonically increase from the bottom to the top. According to one feature of the invention, the depth magnitude of the DDR can be inversely proportional to the vertical coordinate of a location on the DDR. The depth assignment of the DDR may be formulated as follows:

  • Depth_DDR(y) ↑ as y ↓
  • or

  • Depth_DDR(y) = k/y
  • where k is a constant and y is the vertical image coordinate (the row index), which decreases toward the top of the frame.
  • In another embodiment, the depth assignment of the DDR may increase from the bottom to the top in a non-linear manner; for example, Depth_DDR(y) = k/y².
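  • Under this coordinate convention (row index y running from 1 at the top of the frame to h at the bottom), step 15 reduces to a per-row division. The sketch below assumes the k/y form with an arbitrary scale constant k:

```python
import numpy as np

def assign_ddr_depth(labels, ddr_label, k=256.0):
    """Assign Depth_DDR(y) = k / y over the DDR (step 15). With the row index
    y = 1 at the top and y = h at the bottom, depth increases monotonically
    from the bottom of the DDR to its top; k is an arbitrary scale constant."""
    h, w = labels.shape
    y = np.arange(1, h + 1, dtype=np.float32).reshape(h, 1)  # 1..h, avoids /0
    return np.where(labels == ddr_label, k / y, 0.0)  # (h, w) depth map
```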
  • Further, the depth of the object (or objects) connected to the DDR is assigned by the depth assignment unit 26 according to the DDR depth at the connected site. Taking the image in FIG. 3 as an example, as the golfer 30 is connected to (or standing on) the DDR at the bottom of his or her feet, the golfer 30 is assigned the same depth as the DDR 32 at the connected site, that is, at the vertical coordinate y_Obj. The depth assignment may be formulated as follows:

  • Depth_Obj = Depth_DDR(y_Obj)
  • Generally speaking, when a connected object rests or stands on the DDR (or the ground) at a connected point, the whole object is assigned the same depth as the DDR at the connected or joined point.
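  • Continuing the sketch (hypothetical throughout, with "connected" approximated by direct vertical adjacency to a DDR pixel), each connected object locates its lowest row y_Obj and copies Depth_DDR(y_Obj) across all of its pixels:

```python
import numpy as np

def assign_object_depths(labels, ddr_label, depth, k=256.0):
    """Assign each object connected to the DDR the single depth
    Depth_Obj = Depth_DDR(y_Obj), evaluated at its connected site y_Obj."""
    ddr_mask = labels == ddr_label
    # Pixels sitting directly above a DDR pixel mark their objects as connected.
    above_ddr = np.zeros_like(ddr_mask)
    above_ddr[:-1, :] = ddr_mask[1:, :]
    for obj in np.unique(labels[above_ddr & ~ddr_mask]):
        obj_mask = labels == obj
        y_obj = np.nonzero(obj_mask)[0].max() + 1  # lowest row of the object (1-based)
        depth[obj_mask] = k / y_obj  # Depth_DDR evaluated at y_Obj
    return depth
```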
  • When no DDR is identified, or when objects are not connected to the DDR (i.e., the no branch of step 14), the image or partial image is assigned depth according to one of the conventional assignment methods or a technique to be developed in the future. In the flow diagram of FIG. 1, the foreground(s) and background(s) of the non-DDR image are detected (in step 16), and corresponding depths are then assigned to the foregrounds/backgrounds (in step 17) according to the conventional method. In general, the foreground is assigned depth values smaller than those of the background. The depth obtained from step 15, alone or together with the depth obtained from step 17, is combined (in step 18) to arrive at a final depth map.
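  • A minimal merge for step 18, under the assumed rule that the ground model's depth takes precedence wherever it produced a value, is shown below; chaining the sketches (segment_image, detect_ddr, assign_ddr_depth, assign_object_depths, then combine_depth) gives a crude but complete version of the pipeline of FIG. 1:

```python
import numpy as np

def combine_depth(ddr_depth, fallback_depth):
    """Step 18 (hypothetical rule): keep the ground-model depth where defined;
    elsewhere use the conventional foreground/background estimate of step 17."""
    return np.where(ddr_depth > 0.0, ddr_depth, fallback_depth)
```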
  • An output device 28 receives the depth map information (e.g., the final depth map) from the DDR depth assignment unit 26 and provides a resulting or output image. The output device 28, in one embodiment, may be a display device for presentation or viewing of the received depth information (e.g., depth map information). The output device 28, in another embodiment, may be a storage device, such as a semiconductor memory or hard disk drive, which stores the received depth information. Moreover, the output device 28 may further and/or alternatively include a post-processing device that performs one or more of digital image processing tasks, such as image enhancement, image restoration, image analysis, image compression or image synthesis.
  • According to the embodiment discussed above, the ground model methods and systems for mono-view depth estimation are capable of providing correct and versatile depth estimates and of handling a relatively large variety of scenes whenever a DDR is present or can be determined or estimated.
  • Although specific embodiments have been illustrated and described, it will be appreciated by those skilled in the art that various modifications may be made without departing from the scope of the present invention, which is intended to be limited solely by the appended claims.

Claims (16)

1. A method of mono-view depth estimation, comprising:
segmenting a two-dimensional (2D) image into a plurality of objects;
detecting a depth diffusion region (DDR) among the objects, the DDR including a region or planar surface that is about horizontal;
assigning depth to the DDR; and
assigning depth to an object connected to the DDR.
2. The method of claim 1, wherein the DDR is a horizontal plane comprising ground, ocean or a floor.
3. The method of claim 1, wherein the depth assignment of the DDR monotonically increases from a bottom of the DDR to a top of the DDR.
4. The method of claim 3, wherein the depth magnitude of the DDR is inversely proportional to a vertical dimension of the DDR or location on the DDR.
5. The method of claim 1, wherein the depth of the connected object is assigned according to the depth of the DDR at a connected location.
6. The method of claim 5, wherein the depth of the connected object is assigned the same depth of the DDR at the connected location.
7. The method of claim 1, further comprising a step of mapping 3D objects onto a 2D image plane.
8. The method of claim 1, further comprising a step of storing or displaying the depth of the DDR and the connected object.
9. A system of mono-view depth estimation, comprising:
a segmentation unit configured to segment a two-dimensional (2D) image into a plurality of objects;
a depth diffusion region (DDR) detection unit configured to detect a DDR among the objects, the DDR including a region or plane that is about horizontal; and
a DDR depth assignment unit configured to assign depth to the DDR, and to assign depth to an object connected to the DDR.
10. The system of claim 9, wherein the DDR is a horizontal plane comprising ground, ocean or a floor.
11. The system of claim 9, wherein the depth assignment of the DDR monotonically increases from a bottom of the DDR to a top of the DDR.
12. The system of claim 11, wherein the depth magnitude of the DDR is inversely proportional to a vertical location.
13. The system of claim 9, wherein the system is configured to assign the depth of the connected object according to the depth of the DDR at a connected location.
14. The system of claim 13, wherein the depth of the connected object is assigned the same depth of the DDR at the connected location.
15. The system of claim 9, further comprising an input device configured to map 3D objects onto a 2D image plane.
16. The system of claim 9, further comprising an output device capable of storing or displaying the depth of the DDR and the connected object.

Priority Applications (1)

US12/396,363 (US20100220893A1): priority date 2009-03-02, filing date 2009-03-02, Method and System of Mono-View Depth Estimation

Publications (1)

Publication Number Publication Date
US20100220893A1 (en) 2010-09-02

Family

ID=42667106

Family Applications (1)

US12/396,363 (US20100220893A1, Abandoned): priority date 2009-03-02, filing date 2009-03-02, Method and System of Mono-View Depth Estimation

Country Status (1)

Country Link
US (1) US20100220893A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6031564A (en) * 1997-07-07 2000-02-29 Reveo, Inc. Method and apparatus for monoscopic to stereoscopic image conversion
US20100046837A1 (en) * 2006-11-21 2010-02-25 Koninklijke Philips Electronics N.V. Generation of depth map for an image
US20090196492A1 (en) * 2008-02-01 2009-08-06 Samsung Electronics Co., Ltd. Method, medium, and system generating depth map of video image

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10015478B1 (en) * 2010-06-24 2018-07-03 Steven M. Hoffberg Two dimensional to three dimensional moving image converter
US11470303B1 (en) 2010-06-24 2022-10-11 Steven M. Hoffberg Two dimensional to three dimensional moving image converter
US20130321571A1 (en) * 2011-02-23 2013-12-05 Koninklijke Philips N.V. Processing depth data of a three-dimensional scene
US9338424B2 * 2011-02-23 2016-05-10 Koninklijke Philips N.V. Processing depth data of a three-dimensional scene
US10038842B2 (en) 2011-11-01 2018-07-31 Microsoft Technology Licensing, Llc Planar panorama imagery generation
US9324184B2 (en) 2011-12-14 2016-04-26 Microsoft Technology Licensing, Llc Image three-dimensional (3D) modeling
US9406153B2 (en) 2011-12-14 2016-08-02 Microsoft Technology Licensing, Llc Point of interest (POI) data positioning in image
US10008021B2 (en) 2011-12-14 2018-06-26 Microsoft Technology Licensing, Llc Parallax compensation
US10164776B1 (en) 2013-03-14 2018-12-25 goTenna Inc. System and method for private and point-to-point communication between computing devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: NCKU RESEARCH AND DEVELOPMENT FOUNDATION, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, GWO GIUN;WANG, MING-JIUN;HUANG, LING-HSIU;SIGNING DATES FROM 20090223 TO 20090226;REEL/FRAME:022333/0736

Owner name: HIMAX MEDIA SOLUTIONS, INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, GWO GIUN;WANG, MING-JIUN;HUANG, LING-HSIU;SIGNING DATES FROM 20090223 TO 20090226;REEL/FRAME:022333/0736

AS Assignment

Owner name: HIMAX TECHNOLOGIES LIMITED, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HIMAX MEDIA SOLUTIONS, INC.;REEL/FRAME:022923/0871

Effective date: 20090703

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION